lekernel changed the topic of #m-labs to: Mixxeo, Migen, MiSoC & other M-Labs projects :: fka #milkymist :: Logs http://irclog.whitequark.org/m-labs
<sb0> rjo, you have to determine the correct number of bytes yourself, read and buffer them, and hand them to pickle
<sb0> rjo, when reading without asyncio, you could pass the file object directly to pickle, which would call read() on it for you (and do the size determination that already exists in the pickle format, it seems)
<sb0> but since it does "file_object.read()" and not "yield from file_object.read()", you can't use asyncio...
<sb0> asyncio has this sort of integration issue with just about everything
<sb0> stekern, I just took their generated fpgatop.v and fpgatop_mem.v (or something like that), added `define SYNTHESIS on top of each, and put them into an ISE GUI project
<sb0> rjo, btw, surprisingly, pickle is slower than json
<rjo> sb0: oh. but that has nothing to do with asyncio specifically. you will see that with any asynchronous/coroutine style framework.
<rjo> sb0: none of them can allow blocking ops.
<rjo> sb0: in serializing/deserializing speed?
<sb0> it would be nice, though, if the language automatically turned functions that do blocking IO into coroutines somehow
<sb0> it would also remove all the "yield from" clutter
<sb0> yes
<rjo> sb0: yes. "yield from" is a bit ugly. but much nicer than the equivalent in twisted (e.g.).
<rjo> sb0: well. i guess you could do the usual: expose the pickle read() blocking in a thread and feed data to that thread from the event loop in asyncio-style.
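rjo's suggestion — keep pickle's blocking read() in a thread and await it from the event loop — can be sketched with `run_in_executor`. This is a minimal sketch, not the actual code discussed; the in-memory `BytesIO` stands in for whatever real file-like transport is used:

```python
# Sketch: run the blocking pickle.load() in a thread-pool executor so the
# asyncio event loop is never blocked by the synchronous read() calls that
# pickle makes on the file object.
import asyncio
import io
import pickle

async def read_pickled(fobj):
    loop = asyncio.get_running_loop()
    # pickle.load() calls fobj.read() synchronously; hand the whole
    # deserialization off to the default thread-pool executor.
    return await loop.run_in_executor(None, pickle.load, fobj)

async def main():
    # BytesIO is a stand-in for a real blocking file-like object.
    fobj = io.BytesIO(pickle.dumps({"answer": 42}))
    obj = await read_pickled(fobj)
    print(obj["answer"])

if __name__ == "__main__":
    asyncio.run(main())
```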
<sb0> yeah, meh
<rjo> sb0: i never liked pickle somehow. tried it about a hundred times for different things. but usually something else is much nicer (sqlite, numpy, json, yaml)...
<rjo> sb0: ;)
<sb0> right now what I'm doing is using readline() for each message instead, since json escapes \n
<sb0> and buffer it
<rjo> sb0: in strings, that is?
<rjo> sb0: if you pretty-print json it has lots of \n ...
<sb0> yes. and I disable the pretty-printing
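The framing sb0 describes — one compact JSON document per line, relying on `json` escaping `\n` inside strings — can be sketched like this (message contents here are made up for illustration):

```python
# Sketch of newline-delimited JSON framing: compact (non-pretty-printed)
# JSON contains no raw newlines, because "\n" inside strings is escaped,
# so a single "\n" can terminate each message and readline() can frame it.
import json

def encode_message(obj):
    # separators + no indent => one line, terminated by "\n"
    return (json.dumps(obj, separators=(",", ":")) + "\n").encode()

def decode_message(line):
    return json.loads(line.decode())

msg = {"action": "run", "file": "experiment.py", "note": "line1\nline2"}
wire = encode_message(msg)
assert wire.count(b"\n") == 1       # only the terminator, despite the \n in "note"
assert decode_message(wire) == msg  # round-trips exactly
```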
<rjo> sb0: looks like you would be better off with some datagram or messaging-style transport and not a "stream"
<sb0> I need it at two places:
<sb0> 1) communication between master and client: datagram protocol is UDP, message size is limited by MTU
<sb0> 2) communication between master and worker process: unix sockets are messy, stdout/stdin is a stream
<sb0> and modules like zeromq, multiprocessing, etc. are not asyncio-compatible
<rjo> what are worker processes? the distributed adapters between devices and the master?
<sb0> yeah, I saw it... and I'm a bit hesitant to add dependencies like this, which can end up messy, e.g. llvmpy
<rjo> sb0: "But Windows is a second-class citizen in ZeroMQ world, sorry."
<rjo> sb0: nice
<sb0> and it doesn't solve the multiprocessing problem. using the asyncio process functions + stdin/stdout as asyncio streams is not too bad
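The approach sb0 settles on — spawn the worker with asyncio's process functions and use its stdin/stdout as asyncio streams — looks roughly like this. A one-line echo script stands in for the real worker module:

```python
# Sketch: master <-> worker over stdin/stdout, spawned with
# asyncio.create_subprocess_exec. The pipes are exposed to the master as
# StreamWriter (proc.stdin) and StreamReader (proc.stdout).
import asyncio
import sys

# Stand-in worker: echoes back one line from stdin.
WORKER_CODE = "import sys; sys.stdout.write(sys.stdin.readline())"

async def main():
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", WORKER_CODE,
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE)
    proc.stdin.write(b'{"action":"run"}\n')   # one JSON message per line
    await proc.stdin.drain()
    reply = await proc.stdout.readline()
    await proc.wait()
    return reply

if __name__ == "__main__":
    print(asyncio.run(main()))
```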
<rjo> sb0: yep. maybe 0mq is in fact overkill.
<rjo> did github change syntax highlighting colors recently?
<sb0> worker process - I'd like to have the master run the experiment code in a child process, which makes it easier to implement e.g. timeouts and helps contain crashes
<sb0> especially if that worker process is going to load $DISGUSTING_BINARY_DRIVER
<sb0> the idea is to have the master handle client connections on one side, and a child process on the other - all with asyncio
<sb0> the master process would do the communication with clients, scheduling/queuing, extraction of the code from the experiment repository, and tell its worker process to run module X or Y
<rjo> sb0: yep. i called them "controllers" in the original design.
<rjo> sb0: the core device controller/worker can also be separate from the master.
<sb0> by 'separate' you mean on a different machine?
<rjo> separate process actually. but that then allows different machines.
<sb0> yes, I'm planning to do that (instead of a thread) for crash/memory leak containment
<rjo> sb0: good. i like that. the only thing i am a bit worried about is startup and shutdown of that bundle of processes potentially also across machines.
<rjo> gtg. adjust some vacuum bakeout heaters...
<sb0> different machines is a bit harder. I'm planning to check out the whole experiment git repository into some folder in /tmp (to allow experiments that are spread across several modules in several files), and I'm simply doing "asyncio.create_subprocess_exec(sys.executable, ..."
<sb0> the worker process is only started once in the best case. it stays online until it crashes or is killed.
<rjo> sb0: you will need some heartbeat protocol and a cascade of mechanisms to SIGTERM/SIGKILL processes.
<sb0> we don't necessarily need a heartbeat protocol. if the worker process doesn't reply within a reasonable time to the next "run" command, or if its results are excessively delayed, kill + rerun
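sb0's kill-and-rerun policy (no heartbeat, just a deadline on the worker's reply) can be sketched with `asyncio.wait_for`. The timings and the sleeping stand-in worker are invented for the example:

```python
# Sketch: instead of a heartbeat protocol, put a deadline on each reply
# from the worker. If it misses the deadline, kill the process and let the
# caller rerun. A sleeping python one-liner stands in for the worker.
import asyncio
import sys

async def spawn_worker(delay):
    code = f"import time; time.sleep({delay}); print('done')"
    return await asyncio.create_subprocess_exec(
        sys.executable, "-c", code,
        stdout=asyncio.subprocess.PIPE)

async def run_with_deadline(delay, timeout):
    proc = await spawn_worker(delay)
    try:
        reply = await asyncio.wait_for(proc.stdout.readline(), timeout)
        await proc.wait()
        return reply.strip().decode()
    except asyncio.TimeoutError:
        proc.kill()          # this sketch skips the SIGTERM step and kills directly
        await proc.wait()
        return "killed"

async def main():
    print(await run_with_deadline(0.0, 5.0))   # worker replies in time
    print(await run_with_deadline(30.0, 0.5))  # worker misses the deadline

if __name__ == "__main__":
    asyncio.run(main())
```

A production version would escalate SIGTERM → SIGKILL as rjo suggests, and would need a much longer (or per-experiment) deadline, per the objection about long-running experiments below.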
<sb0> what are you baking out? :)
<rjo> sb0: timeout on run() might be a bit long. especially for long running experiments (infinitely long running).
<rjo> sb0: baking one of my trap chambers.
<rjo> sb0: i forgot about the example yesterday. will do that now.