rohitksingh_work has quit [Read error: Connection reset by peer]
<GitHub52>
[artiq] jordens commented on issue #636: > We can merge address and write enable instead.... https://git.io/v1RYM
<GitHub92>
[artiq] sbourdeauducq commented on issue #636: Doing that to the channel register is difficult because DRTIO uses it to look up first the state of the remote channel, and test for underflow, FIFO full, etc.... https://git.io/v1RGd
<GitHub52>
[artiq] sbourdeauducq commented on issue #636: > using a channel-dependent bit mask.... https://git.io/v1RZ8
<GitHub41>
[artiq] jordens commented on issue #636: Does it do that channel status lookup early to be in parallel with the CPU doing the address/data writes?... https://git.io/v1RnN
<rjo>
whitequark: why are the csr.rs access functions "pub unsafe fn .." and not "pub fn ... { unsafe { ... } }"?
<rjo>
whitequark: i.e. why do you consider the unsafe-ness to be cured only so late in the stack?
<whitequark>
rjo: e.g. consider FIFO functions. by manipulating the FIFO pointer you could cause unsafety elsewhere because the safe wrapper will unexpectedly encounter something
<whitequark>
this doesn't apply to most CSRs, of course
<whitequark>
rjo: feel free to add some marking to CSRs that indicates whether writing to them is memory-safe
<whitequark>
oh hm
<whitequark>
I think all reads would be memory-safe
<whitequark>
so we can do that right away
<whitequark>
or you can also add an issue and assign it to me.
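The two styles rjo and whitequark are debating can be sketched roughly as follows. This is not the actual generated csr.rs; the register name, value, and plain-static stand-in for an MMIO address are made up for illustration:

```rust
use core::ptr;

// Stand-in for a memory-mapped CSR; the real csr.rs reads a fixed MMIO
// address. Register name and value are invented for this sketch.
static TIMER_VALUE: u32 = 42;

/// Style 1 (as in the generated csr.rs): the accessor itself is `unsafe`,
/// so every caller must argue why the access is sound.
pub unsafe fn timer_value_read() -> u32 {
    unsafe { ptr::read_volatile(&TIMER_VALUE) }
}

/// Style 2 (rjo's question): cure the unsafety inside the wrapper. This is
/// only sound if nothing behind this register can break memory safety
/// elsewhere -- whitequark's FIFO-pointer case is the counterexample.
pub fn timer_value_read_wrapped() -> u32 {
    unsafe { ptr::read_volatile(&TIMER_VALUE) }
}

fn main() {
    let a = unsafe { timer_value_read() };
    let b = timer_value_read_wrapped();
    assert_eq!(a, b);
    println!("timer = {a}");
}
```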
Gurty has quit [Ping timeout: 256 seconds]
<sb0>
they might be memory safe, but you can still crash the system by writing to many of them
<sb0>
e.g. mess with the timer, disable the DDR3 controller, ...
<sb0>
where do you draw the line?
<sb0>
you can actually write to any DDR3 location with just the CSRs
Gurty has joined #m-labs
kuldeep has quit [Ping timeout: 245 seconds]
<rjo>
afaict once an unsafe primitive is "correctly" wrapped, the unsafety is gone (in the rust sense).
<rjo>
afaict even reads are still unsafe because e.g. in the rtio_counter case a read is corrupted by a _update()
<rjo>
then i guess what you are doing right now is ok.
kuldeep has joined #m-labs
kuldeep has quit [Ping timeout: 265 seconds]
kuldeep has joined #m-labs
<GitHub32>
[artiq] jordens commented on issue #563: Funded by UMD/Britton except the IOSERDES port. https://git.io/v1REI
<whitequark>
sb0: "unsafe" is specifically for memory safety.
<whitequark>
so the DDR CSRs are definitely all unsafe but in case of timer that's not true.
rohitksingh has joined #m-labs
_florent_ has quit [Ping timeout: 260 seconds]
_florent_ has joined #m-labs
<sb0>
rjo, i'd be careful using an IRQ to mark a RTIO FIFO full condition
<sb0>
how do you know what to retry?
<sb0>
how do you know the write has been accepted and you can free your retry buffer?
<rjo>
sb0: yep. tricky. either track it on the host like for drtio and poll-update when uncertain. or wait for unasserted irq before writing/submitting.
<sb0>
maybe the write can be followed by a short read - could be as short as 1 bit to say if everything is OK or if a longer read should happen to check the status?
<sb0>
ironically, SPI has less latency than the GTX
<rjo>
but isn't that the same as the IRQ?
<rjo>
i mean. yes. absolutely duplicate the IRQ in a register that can be read.
<rjo>
or are you annoyed by the fact that this would make the IRQ meaning context-dependent?
<sb0>
we can design it in a way that a read that follows a write always correctly returns the information corresponding to the write
<sb0>
there is a single channel - SPI
<sb0>
with a IRQ line you have a race between two channels
<rjo>
can't the IRQ line indicate the channel status from X spi cycles after the channel number has been written to Y cycles before the next channel number is written?
<sb0>
yes, you can have this sort of timing constraint, and theoretically they work
<rjo>
but yes. can also be done using SPI alone. i am fine with that. except that you can't use it nicely with a CPU and listen for SAWG clipping, input overflows, etc.
<sb0>
but they're a lot more complicated than SPI alone
<sb0>
especially if $SPI_MASTER isn't good at timestamping IRQs
<sb0>
or controlling SPI timing
<sb0>
why bother dealing with a nasty race condition when you can just avoid it?
<rjo>
no need to timestamp AFAICT. but let's just expose this as a status read plus some IRQ line if people want to listen to that.
<rjo>
it's useful to not have to poll everything all the time.
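The scheme being settled on here — a write followed by a short status read, with the IRQ line merely duplicating a readable status register — might look like this on the driver side. The register layout, bit assignment, and fake-device model are hypothetical, not an actual ARTIQ interface:

```rust
// Hypothetical bit in the full status word; the 1-bit quick status after a
// write says only "accepted or not".
const STATUS_FIFO_FULL: u32 = 1 << 0;

// Simulated peripheral standing in for the SPI slave in this sketch.
struct FakeDevice {
    fifo_space: usize,
    status: u32,
}

impl FakeDevice {
    /// The short read following a write: returns true if the write was
    /// accepted, meaning the host may free its retry buffer.
    fn write(&mut self, _word: u32) -> bool {
        if self.fifo_space == 0 {
            self.status |= STATUS_FIFO_FULL;
            false
        } else {
            self.fifo_space -= 1;
            true
        }
    }

    /// The longer status read, performed only when the quick bit says
    /// "not OK" (or when the IRQ line, which mirrors it, is asserted).
    fn read_status(&self) -> u32 {
        self.status
    }
}

fn main() {
    let mut dev = FakeDevice { fifo_space: 1, status: 0 };
    assert!(dev.write(0x1234));  // accepted: retry buffer freed
    assert!(!dev.write(0x5678)); // rejected: keep the word, retry later
    assert_eq!(dev.read_status() & STATUS_FIFO_FULL, STATUS_FIFO_FULL);
    println!("status = {:#x}", dev.read_status());
}
```

Because the status read is on the same channel as the write, there is no race between channels: the read that follows a write always reflects that write, which is sb0's point about avoiding the IRQ-vs-SPI race.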
<sb0>
fine. but I'm pretty sure people using the IRQ line will write concurrency bugs more than once
<whitequark>
what would the IRQ handler do anyway?
<whitequark>
feed the FIFO?
<rjo>
the irq handler would crawl the status registers, figure out what the source is, handle it, and clear the irq.
<rjo>
as usual.
<whitequark>
yes. I mean the handle part.
<felix_>
rjo: do you mean that the license changes to "GNU Lesser General Public License"? you wrote "GNU General Public License" in the 4th paragraph of the mail
<whitequark>
since this affects how the rest of the code should be structured, to synchronize the data structures used.
<rjo>
acknowledge the clipping of a saturating adder etc
<whitequark>
we'd have to use lockfree queues...
<whitequark>
ah
<whitequark>
so just atomic flags.
<rjo>
felix_: oh. yes. will fix that.
<rjo>
whitequark: yes.
<rjo>
whitequark: anyway, in this case we are just building a "peripheral". the handling and that cpu etc. is somebody else's problem.
<whitequark>
rjo: sb0: I see no problem with that, given that in the end it just results in the flag being polled by some other code
<whitequark>
if you use Rust atomics and safe code to handle it you should not generate any concurrency bugs
<whitequark>
(although I will double-check any caveats Rust's memory model has wrt IRQs)
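The "just atomic flags" pattern being agreed on here can be sketched as follows. A thread stands in for the interrupt so the example runs on a host; on the real target the handler would run in IRQ context, and the flag name is invented:

```rust
// Sketch: the IRQ handler only records the event; the main loop polls the
// flag, acknowledges it, and does the actual handling.
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

static CLIP_PENDING: AtomicBool = AtomicBool::new(false);

// What the IRQ handler would do: set the flag and return quickly.
fn irq_handler() {
    CLIP_PENDING.store(true, Ordering::Release);
}

fn main() {
    // A thread simulates the interrupt firing.
    thread::spawn(irq_handler).join().unwrap();

    // Main loop: swap clears the flag atomically, so an IRQ arriving
    // between the check and the clear cannot be lost.
    if CLIP_PENDING.swap(false, Ordering::Acquire) {
        println!("acknowledging clipping of the saturating adder");
    }
    assert!(!CLIP_PENDING.load(Ordering::Relaxed));
}
```

Using `swap` rather than a separate load-then-store is what keeps this free of the check-then-act race that plain polling would have.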
<rjo>
whitequark: it wouldn't even go through the/our cpu. this is basically "DRTIO-over-SPI".
<whitequark>
rjo: oh, I just realized what you meant by an IRQ line.
<whitequark>
disregard what I said as I think it's irrelevant
<rjo>
whitequark: i was doing some recreational LLVM IR and OR1K ASM reading the other day. And i noticed all that array checking code that is generated on every access. isn't that slow and lots of code?
rohitksingh has quit [Quit: Leaving.]
<GitHub139>
[artiq] jordens pushed 2 new commits to phaser2: https://git.io/v10Oc
<GitHub139>
artiq/phaser2 d34084b Robert Jordens: README_PHASER: update
<GitHub139>
artiq/phaser2 5efd0fc Robert Jordens: sawg: documentation
<whitequark>
rjo: it is, sorta.
<whitequark>
I'm not sure if there's an easy way around it in general, though we may well add microoptimizations for specific cases.
<whitequark>
list comprehensions and for..in loops generally work better than explicit indexing.
<whitequark>
the enumerate() wrapper may be a substantial win
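The same trade-off can be illustrated in Rust (this mirrors the situation whitequark describes for compiled kernels; it is not ARTIQ code): explicit indexing implies a bounds check on every access, while handing the traversal to an iterator lets the compiler prove the bounds once and elide the per-element checks:

```rust
// Explicit indexing: each xs[i] carries a bounds check (the optimizer can
// often remove it in simple loops like this, but not in more complex
// indexing patterns).
fn sum_indexed(xs: &[i32]) -> i32 {
    let mut total = 0;
    let mut i = 0;
    while i < xs.len() {
        total += xs[i];
        i += 1;
    }
    total
}

// Iterator form: the iterator owns the traversal, so no per-element
// bounds check is needed -- the analogue of for..in / enumerate() in
// kernel code.
fn sum_iter(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

fn main() {
    let xs = [1, 2, 3, 4];
    assert_eq!(sum_indexed(&xs), sum_iter(&xs));
    println!("{}", sum_iter(&xs));
}
```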