sb0 changed the topic of #m-labs to: ARTIQ, Migen, MiSoC, Mixxeo & other M-Labs projects :: fka #milkymist :: Logs http://irclog.whitequark.org/m-labs
kristianpaul has joined #m-labs
<sb0> rjo, OK to split the TTL RF switch from the DDS driver?
<sb0> automatic scheduling of DDS program commands when the RF switch is off is 1) complicated 2) difficult to debug for users when it breaks 3) potentially slow
<sb0> additionally, Penning lab people want to use DDSes without RF switches, sometimes
<sb0> so there would be a dds.program(frequency, phase, ...) that does register writes + FUD @now, and RF switch is controlled with another, separate TTL
<sb0> there will also be a dds.program_time variable that can be used in delay(dds.program_time) to advance now by the exact amount of time a dds program uses on the shared bus - useful to program multiple channels at once
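The `dds.program(...)` plus `dds.program_time` idea above can be sketched in plain Python. Everything here (the `Timeline`/`Dds` classes, the timings) is a mock of the proposed kernel API for illustration, not the real ARTIQ implementation:

```python
# Mock of the proposed API: dds.program() does register writes + FUD @now,
# and delay(dds.program_time) advances `now` by the bus-occupancy of one
# programming, so multiple channels can be programmed back-to-back.

class Timeline:
    def __init__(self):
        self.now = 0.0       # the `now` cursor of the kernel
        self.events = []     # (timestamp, channel, frequency)

class Dds:
    program_time = 1.0       # assumed cost of one shared-bus program

    def __init__(self, tl, name):
        self.tl, self.name = tl, name

    def program(self, frequency):
        # register writes + FUD at the current value of `now`
        self.tl.events.append((self.tl.now, self.name, frequency))

tl = Timeline()
dds1, dds2 = Dds(tl, "dds1"), Dds(tl, "dds2")

dds1.program(100e6)
tl.now += Dds.program_time   # stands in for delay(dds.program_time)
dds2.program(200e6)

print(tl.events)             # two programs, one program_time apart
```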
<rjo> hmm
<rjo> how did you interpret the automatic scheduling of DDS program commands?
<rjo> across all ddses and arbitrary sequences of programming commands?
<sb0> when the RF switch of a channel is off, you have more freedom to do the write+FUD, and you can use that freedom to solve access conflicts to the shared DDS bus
<sb0> e.g.
<sb0> if you start a kernel with:
<sb0> with parallel: dds1.on(100*MHz); dds2.on(200*MHz)
<sb0> the smart thing to do is to schedule the first DDS program on the bus, then after this is done schedule the next one, and after that turn on the two RF switches simultaneously
<sb0> without this automatic scheduling, it would try to schedule the accesses to program the two DDSes at the same time on the shared bus
<sb0> the current system does that, using the concept of non-realtime DDS programs that are done by the CPU based on the current value of the RTIO timestamp counter
<sb0> but this only works because the CPU is syncing the FUD channel, which is slow
<rjo> afaict an implicit fud @now is problematic w.r.t. the different phase modes.
<rjo> because then you can not have two dds fud at the same time.
<rjo> how hard is that smart rescheduling of the real time dds programming? isn't it very much like instruction scheduling in a compiler?
<sb0> the problem is (again) that the flow of DDS commands is generated by an arbitrary algorithm
<sb0> so yes, the runtime can look at the utilization of the DDS bus and schedule an appropriate time, but that would be slow
<rjo> couldn't it be done by just doing dds programming "in the past" (with respect to `now()`), then fud @ now and then doing the rf switches at the correct time?
<rjo> that way multiple simultaneous programmings of ddses would just be pushed further into the past.
<sb0> yes, that would work. but the problem is determining how far you need to push a given DDS programming
<rjo> you could just revert the order -- it would not matter:
<rjo> your snippet from above would become: dds_bus.program(dds1, 100*MHz, now() - 1*dds_bus_program_time); dds_bus.program(dds2, 200*MHz, now() - 2*dds_bus_program_time); dds_bus.fud(now()); rf_switch{1,2}.whatever()
<sb0> ah, yes, this sort of manual handling is OK
<sb0> but doing this sort of thing automatically is difficult
<sb0> (in the general case)
<rjo> manual in the sense that dds{1,2}.on(*) would automatically do the above.
<sb0> how?
<sb0> that's the problem. for a given value of now(), how do you tell if there are already programmed conflicts on the shared DDS bus?
<rjo> just maintain a counter of how far back the dds bus is occupied and then schedule the programming?
<rjo> ah. no. it would need reordering.
<rjo> if you have pushed stuff up to now()-1*dt you can not push stuff for now()-2*dt anymore.
<sb0> yes, it would need reordering; additionally, there could be several values of now() to keep track of - you'll need to determine how far the DDS bus is occupied for each of them
<rjo> ok. can we have an api like dds_bus.schedule(*bus_stuff) that automatically schedules stuff in the past?
<sb0> the reordering can be dealt with using the worst case scenario "all other DDS channels will have to be reprogrammed"
<sb0> the other one is a more serious issue
<rjo> i would like to avoid having to count the number of dds programming things myself in order to do compact dds programming scheduling.
<sb0> yes, that should be doable
<rjo> afaict 60% of the use cases are completely fine with the dumb scheduling and manual handling of fud+rf_switch: just abstract them up in a single wrapper method.
<sb0> what are the other 40%?
<sb0> maybe we can have a nicer syntax with eg context managers - I'll have a look at that...
<rjo> the other 40% will benefit from being able to do compact dds programming of multiple registers/dds, by e.g. passing all the commands to the runtime at once so that the runtime can calculate the required time in the past.
<rjo> yes. or something transactional.
<rjo> with dds_bus.batch(): dds1.program(); dds2.program()
<rjo> would do a dds_bus.fud() as the "commit" on __exit__()
<sb0> yes
<rjo> and would schedule the dds programmings back-to-back in the past.
<rjo> and then having to do rf_switch() handling manually is absolutely fine.
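The transactional syntax rjo proposes could look roughly like the sketch below: a `batch()` context manager that records programmings, schedules them back-to-back in the past on `__exit__()`, and emits a single FUD at `now()` as the commit. Class and method names follow the proposal in this discussion, not a shipped ARTIQ API:

```python
from contextlib import contextmanager

class DdsBus:
    program_time = 1.0          # assumed bus occupancy per programming

    def __init__(self):
        self.now = 10.0
        self.events = []        # (timestamp, ...) committed to the timeline
        self._batch = None

    def program(self, channel, freq):
        if self._batch is not None:
            self._batch.append((channel, freq))  # defer until commit
        else:
            self.events.append((self.now, "program", channel, freq))

    @contextmanager
    def batch(self):
        assert self._batch is None, "batch-within-batch is an error"
        self._batch = []
        try:
            yield
        finally:
            pending, self._batch = self._batch, None
            # commit: schedule back-to-back before now(), FUD at now()
            for i, (channel, freq) in enumerate(reversed(pending)):
                at = self.now - (i + 1) * self.program_time
                self.events.append((at, "program", channel, freq))
            self.events.append((self.now, "fud"))

bus = DdsBus()
with bus.batch():
    bus.program("dds1", 100e6)
    bus.program("dds2", 200e6)
print(sorted(bus.events))
```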
<sb0> I think the manual rf_switch is actually desirable, at least for some Penning lab people
<rjo> i suspect their problem would be solved easily with the rf_switch being a NOP.
<rjo> i don't see the special requirement they claim to have.
<rjo> switching phase or frequency without having to switch the/a rf_switch off is generic.
<sb0> sparing RF switches by not having them on DDSes that don't need them
<sb0> yes, you could make it a NOP, but writing the experiment without using a RF switch is cleaner imo
<rjo> i had always assumed that the rf_switch-based dds api would be an extension of the rf_switch-less api.
<sb0> no, it wasn't, because of the scheduling thing with the rf switch off
<sb0> do we support dynamic numbers of programs in the batch?
<rjo> indeterminate (at py2llvm time) number of instructions? no.
<sb0> should be doable if the runtime memorizes the transactions and commits them at the end. also easy to integrate as a "generic context manager that calls the runtime" in the compiler (like it's done for the watchdog already)
<sb0> only caveat: it'll be a bit slower
<rjo> pushing them onto the batch and then assigning rtio times on commit?
<sb0> yes
<rjo> yes. that is nice for "generic scheduling in reverse"
<sb0> and the runtime would manage the batch
<sb0> syscall_batch_start, syscall_batch_add, syscall_batch_commit, etc.
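The runtime-side view sb0 lists can be sketched as flat functions, with RTIO times assigned only on commit (rjo's "pushing them onto the batch and then assigning rtio times on commit"). Names and signatures are the proposal from this conversation, mocked in plain Python, not a real ARTIQ runtime interface:

```python
# Mock runtime batch: syscall_batch_start/add/commit.
# Times are assigned at commit: back-to-back before `now`, FUD at `now`.

_batch = None

def syscall_batch_start():
    global _batch
    assert _batch is None   # batch-within-batch is an error
    _batch = []

def syscall_batch_add(channel, freq):
    _batch.append((channel, freq))  # one call per DDS program

def syscall_batch_commit(now, program_time):
    """Assign RTIO times on commit and return the resulting schedule."""
    global _batch
    pending, _batch = _batch, None
    schedule = [(now - (len(pending) - i) * program_time, ch, f)
                for i, (ch, f) in enumerate(pending)]
    schedule.append((now, "fud", None))
    return schedule

syscall_batch_start()
syscall_batch_add("dds1", 100e6)
syscall_batch_add("dds2", 200e6)
print(syscall_batch_commit(10.0, 1.0))
```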
<rjo> if all the rtio devices look alike (in wishbone space) it could be very generic.
<rjo> not "scheduling in reverse" but "rewriting the past"
<rjo> ah. you want to batch up the syscalls. with va_args?
<sb0> no, one syscall_batch_add per DDS program
<rjo> yes. but the possible syscalls do not have the same signature.
<sb0> doing it generic adds another layer of indirection (=slowness) to all RTIO commands...
<rjo> how do you extract the changes to `now` from the batch without executing it?
<rjo> why does that need a layer of indirection for all RTIO commands?
<rjo> ah. you start the transaction by saving now(), then batch up the actual rtio-like syscalls and changes to now, then roll back now() to the save now() - the batch duration, then execute the batch.
<rjo> and restoring now() comes for free.
<rjo> the problem with that is that the rtio syscall data must not depend on now() (problem for the phase modes)
<rjo> and there must only be rtio-like syscalls within the batch. e.g. communication within the batch is hard to understand for me.
<sb0> now() is used at the start of a batch, then all the DDS programs that are added to the batch are scheduled before that value of now(), and FUD at the end is scheduled at that value of now()
<sb0> if you use delay() inside a batch, it won't affect the DDS programs
<sb0> communication inside a batch is also unaffected
<sb0> and batch-within-batch is an error.
<rjo> ok. that works for the dds syscalls where now() is only relevant for the data w.r.t. FUD. and since the fud time is always known everything is fine.
<rjo> let me summarize: splitting rf_switch calls from the dds calls is ok (users can write wrapper functions). compact dds_bus scheduling will be done either through the api (a schedule() method that receives all instructions and schedules them in the past) or -- preferred -- with some transactional syntax where fud is __exit__().
<sb0> yes
kmehall has quit [Remote host closed the connection]
kmehall has joined #m-labs
<GitHub89> [pyparser] whitequark pushed 1 new commit to master: http://git.io/vJASK
<GitHub89> pyparser/master f671204 whitequark: 95% grammar coverage....
<GitHub79> [misoc] sbourdeauducq pushed 1 new commit to master: http://git.io/vJxq5
<GitHub79> misoc/master 566d973 Sebastien Bourdeauducq: README: add note about submodules
<GitHub51> [misoc] enjoy-digital pushed 1 new commit to master: http://git.io/vJxcj
<GitHub51> misoc/master d9111f6 Florent Kermarrec: litesata: fix packets figure in frontend doc
<mindrunner> what's the best way to debug my misoc modules? I can't see how to properly use verilator. It runs my design, but it seems the same as running it on the fpga; I just get the serial output.
fengling has quit [Quit: WeeChat 1.1.1]
Gurty has quit [Read error: Connection reset by peer]
antgreen has joined #m-labs
antgreen has quit [Remote host closed the connection]
<GitHub24> [pyparser] whitequark pushed 1 new commit to master: http://git.io/vJpgC
<GitHub24> pyparser/master f52cd79 whitequark: Implement true LL(k) lookahead.
Gurty has joined #m-labs
antgreen has joined #m-labs
sb0 has quit [Read error: Connection reset by peer]
sb0 has joined #m-labs
imrehg has joined #m-labs
<GitHub137> [pyparser] whitequark pushed 1 new commit to master: http://git.io/vJjGb
<GitHub137> pyparser/master 25e4d3f whitequark: 100% grammar coverage.
imrehg has quit [Remote host closed the connection]
<GitHub128> [pyparser] whitequark pushed 1 new commit to master: http://git.io/vJj6q
<GitHub128> pyparser/master bc28d95 whitequark: Ensure AST is consistent with Python's builtin ast.
<GitHub187> [pyparser] whitequark pushed 1 new commit to master: http://git.io/vJjMb
<GitHub187> pyparser/master 44644f2 whitequark: Add a way to run parser from command line: -m pyparser.parser foo.py
<GitHub90> pyparser/master 1fcdcc8 whitequark: Improve testbench.
<GitHub90> [pyparser] whitequark pushed 6 new commits to master: http://git.io/vUeLq
<GitHub90> pyparser/master ba810c5 whitequark: Fix a namespace collision.
<GitHub90> pyparser/master 5517553 whitequark: Implement from __future__ import print_function.
mumptai has joined #m-labs
mumptai has quit [Remote host closed the connection]
<rjo> sb0: i thought a bit about implementing remote import vs shared filesystem for the GUI code and number crunching on controllers.
<rjo> and having read lots of code in and around importlib, also looking at the required reloading of changed modules, change notification, modification dates, bytecode caching etc, it does look like a shared filesystem will be ok.
<rjo> we would just assume that those components that use code from the experiment repository share a filesystem. controllers that don't do experiment dependent crunching can run separate.
<rjo> but still, e.g. for the gui code carried by an experiment, there needs to be some reload() logic.
<rjo> otherwise afaict remote import would end up looking more or less like a remote (read-only) filesystem anyway.
<rjo> i did some tests on the file locking issues that i was afraid of (files not being deletable if they are open e.g.) but that has either disappeared under windows or it was never a problem...