<whitequark>
rjo hit this way before the DDS merge.
<whitequark>
this became a problem with the rust runtime because rust is trying to be efficient, and doesn't just load an entire packet into a massive static buffer
<whitequark>
this apparently tickles some lwip bug.
<whitequark>
I haven't spent much time looking into it because we are changing stacks anyway
<whitequark>
anyway, what triggers the bug is large packets
<whitequark>
it doesn't actually have to be a kernel, I hit this with RPCs too
<whitequark>
but something in the RPC code makes triggering this bug less likely
<sb0>
this needs a rapid fix, so changing stacks is likely not an option
<whitequark>
flush after every 1024 bytes on the host side?
<sb0>
if the problem is large packets, a workaround that sends small packets is acceptable
<sb0>
yes
<whitequark>
okay
<sb0>
host side? what does this have to do with the better efficiency of the new runtime network code?
<whitequark>
memory efficiency
<sb0>
how does this control the size of the packets that the host sends?
<whitequark>
what does?
<sb0>
you're saying that what tickles the bug is better efficiency of the Rust packet processing and memory management on the runtime side
<whitequark>
yes
<sb0>
and that the workaround is flushing every 1024 bytes on the host side
<whitequark>
yes
<sb0>
wasn't the host already sending large packets before, with the previous C code in the runtime?
<whitequark>
it was. but the tcp_recv/tcp_recved functions were called in a different way, I suppose.
<whitequark>
I don't actually know what causes it exactly, I just know that small packets are OK, so we can switch to sending smaller packets
<whitequark>
there's probably some debug switch in lwip...
<whitequark>
I wonder if lwip is exhausting receive buffers
<sb0>
reactos is using lwip?! wow
<whitequark>
lol
<whitequark>
no wonder it barely worked the last time I tried to use it for anything
sb0 has quit [Quit: Leaving]
rohitksingh_work has joined #m-labs
sb0 has joined #m-labs
<sb0>
whitequark, do you happen to have a CA3140EZ op-amp or equivalent by any chance?
<whitequark>
sb0: definitely not
mumptai has joined #m-labs
<GitHub68>
[artiq] sbourdeauducq pushed 2 new commits to drtio: https://git.io/v1IRD
<GitHub68>
artiq/drtio c419c42 Sebastien Bourdeauducq: drtio: support for local RTIO core
<GitHub68>
artiq/drtio d37b73f Sebastien Bourdeauducq: drtio: FIFO timeout is handled in gateware + give remote side more time
<GitHub0>
[artiq] sbourdeauducq pushed 1 new commit to drtio: https://git.io/v1IRQ
<GitHub98>
[artiq] sbourdeauducq pushed 1 new commit to drtio: https://git.io/v1Iug
<GitHub98>
artiq/drtio f4c6d6e Sebastien Bourdeauducq: kc705_drtio_master: fix number of fine RTIO timestamp bits
MiW has quit [Remote host closed the connection]
FabM has quit [Ping timeout: 250 seconds]
FabM has joined #m-labs
mumptai has quit [Quit: Verlassend]
Ultrasauce has quit [Read error: Connection reset by peer]
key2 has joined #m-labs
<rjo>
sb0: you can grab the kc705 (kill my flterm) whenever you see my login sessions idle for more than a few minutes.
<rjo>
flterm (python/asyncio) behaves badly if one rips out that FD from under its nose, i.e. it eats up all memory.
Ultrasauce has joined #m-labs
rohitksingh_work has quit [Read error: Connection reset by peer]
kuldeep has quit [Ping timeout: 248 seconds]
kuldeep has joined #m-labs
sb0 has quit [Quit: Leaving]
rohitksingh has joined #m-labs
mumptai has joined #m-labs
Gurty has quit [Ping timeout: 256 seconds]
Gurty has joined #m-labs
ohama has quit [*.net *.split]
kaalikahn has quit [*.net *.split]
stekern has quit [*.net *.split]
kyak has quit [*.net *.split]
kyak has joined #m-labs
stekern has joined #m-labs
ohama has joined #m-labs
kaalikahn has joined #m-labs
rohitksingh has quit [Quit: Leaving.]
rohitksingh has joined #m-labs
Ultrasauce has quit [Ping timeout: 245 seconds]
rohitksingh has quit [Quit: Leaving.]
rohitksingh has joined #m-labs
rohitksingh has quit [Quit: Leaving.]
<GitHub26>
[artiq] r-srinivas commented on issue #626: To be more explicit, I guess if we can just check it at the start of the experiment at prepare that would be more than enough. Does scripting moninj already work?... https://git.io/v1LyQ
key2 has quit [Ping timeout: 260 seconds]
cr1901_modern has quit [Ping timeout: 268 seconds]
cr1901_modern has joined #m-labs
mumptai has quit [Quit: Verlassend]
MiW has joined #m-labs
<GitHub113>
[artiq] jordens commented on issue #626: But for that issue it would seem you want something entirely different: a more compact moninj UI. https://git.io/v1tsM
<GitHub27>
[artiq] r-srinivas commented on issue #566: This seemed to happen when I installed artiq 3.0 as well. https://git.io/v1tnE
<GitHub162>
[artiq] r-srinivas commented on issue #626: Making the gui more compact doesn't seem scalable, especially if we upgrade our backplane on the HPC side to have more ttl channels. I think the solution should be independent of the gui. If there's some way to check if a specific ttl channel were overridden that would help. It's not a high priority but it helps you avoid silly errors.... https://git.io/v1tcp
<GitHub84>
[artiq] r-srinivas opened issue #629: Selecting new applet on empty applet dock in artiq 3.0 crashes dashboard https://git.io/v1tlF
<GitHub17>
[artiq] r-srinivas opened issue #630: ERROR:dashboard:quamash.QEventLoop:Task exception was never retrieved when disabling applet on dashboard https://git.io/v1tzf