<GitHub57>
[artiq] dhslichter commented on issue #407: Definitely an improvement over the ~3.5 s previously seen with 3.0. See the post in this issue from @r-srinivas on 12/1/16. It would still be nice to understand how this compares to what you see, @whitequark, on your Linux and Windows machines. https://github.com/m-labs/artiq/issues/407#issuecomment-333848984
<GitHub166>
[artiq] jordens commented on issue #837: AFAICT this is actually "received from the MAC" (hence the printing), with the MAC being involved as well. And IIRC there were changes to the MAC too... https://github.com/m-labs/artiq/issues/837#issuecomment-333855224
<GitHub103>
[smoltcp] whitequark opened issue #47: Convert EthernetInterface to use a builder pattern https://git.io/vdWKa
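A builder would replace a positional constructor with chained setters and a single construction point. A minimal, self-contained sketch of the pattern the issue proposes; the `Iface`/`IfaceBuilder` types, fields, and method names here are illustrative stand-ins, not smoltcp's actual API:
```rust
// Sketch of the builder pattern; all names are hypothetical stand-ins.
#[derive(Debug)]
struct Iface {
    hw_addr: [u8; 6],
    mtu: usize,
    ip_addrs: Vec<[u8; 4]>,
}

struct IfaceBuilder {
    hw_addr: [u8; 6],
    mtu: usize,
    ip_addrs: Vec<[u8; 4]>,
}

impl IfaceBuilder {
    fn new(hw_addr: [u8; 6]) -> IfaceBuilder {
        // Only the mandatory field is a constructor argument; everything
        // else gets a default that chained setters can override.
        IfaceBuilder { hw_addr, mtu: 1500, ip_addrs: Vec::new() }
    }

    // Setters take `self` by value and return it, so calls chain.
    fn mtu(mut self, mtu: usize) -> IfaceBuilder {
        self.mtu = mtu;
        self
    }

    fn ip_addr(mut self, addr: [u8; 4]) -> IfaceBuilder {
        self.ip_addrs.push(addr);
        self
    }

    // `finalize` is the only place an Iface is constructed, so invariants
    // can be checked once, here.
    fn finalize(self) -> Iface {
        Iface { hw_addr: self.hw_addr, mtu: self.mtu, ip_addrs: self.ip_addrs }
    }
}

fn main() {
    let iface = IfaceBuilder::new([0x02, 0x00, 0x00, 0x00, 0x00, 0x01])
        .mtu(1500)
        .ip_addr([192, 168, 69, 1])
        .finalize();
    println!("{:?}", iface);
}
```
The payoff is that new optional settings can be added later without breaking every existing call site of the constructor.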
<GitHub114>
[smoltcp] whitequark opened issue #48: Implement a macro for eas{y,ier} creation of interface and socket set on bare-metal https://git.io/vdW6u
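On bare-metal there is no allocator, so the backing storage has to be declared separately and then borrowed by the socket set; the macro's job is to hide that wiring. A self-contained sketch of the idea, where `SocketSet` is a simplified stand-in type rather than smoltcp's:
```rust
// Sketch of a convenience macro for issue #48; types are stand-ins.
struct SocketSet<'a> {
    slots: &'a mut [Option<u32>],
}

impl<'a> SocketSet<'a> {
    fn new(slots: &'a mut [Option<u32>]) -> SocketSet<'a> {
        SocketSet { slots }
    }

    // Place a socket (here just a u32 handle) into the first free slot.
    fn add(&mut self, socket: u32) -> Option<usize> {
        for (i, slot) in self.slots.iter_mut().enumerate() {
            if slot.is_none() {
                *slot = Some(socket);
                return Some(i);
            }
        }
        None
    }
}

// The caller names both bindings so the storage outlives the set; the
// macro only hides the boilerplate of wiring them together.
macro_rules! socket_set {
    ($set:ident, $storage:ident, $cap:expr) => {
        let mut $storage: [Option<u32>; $cap] = [None; $cap];
        let mut $set = SocketSet::new(&mut $storage);
    };
}

fn main() {
    socket_set!(sockets, storage, 8);
    assert_eq!(sockets.add(42), Some(0));
    assert_eq!(sockets.add(43), Some(1));
}
```
Expanding to statements (rather than an expression) is what lets the fixed-size array live in the caller's scope, which is the crux of making this work without heap allocation.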
rohitksingh1 has joined #m-labs
rohitksingh has quit [Read error: Connection reset by peer]
<GitHub149>
[smoltcp] whitequark pushed 3 new commits to master: https://git.io/vdWXx
<GitHub59>
[artiq] r-srinivas commented on issue #407: > @r-srinivas OK, this is definitely the AV in NIST. But that's not the whole story. Even with the AV slowdown it should give you not much more than 300ms (I estimate maybe 100ms more taken by stripping, not accounted for in perf_embedding) delay between pulses, whereas you observe 500-700ms. Something else is afoot.... https://github.com/m-labs/artiq/issues/407#issuecomment-333880534
<GitHub32>
[artiq] whitequark commented on issue #407: @r-srinivas Can you please recheck with the AV off? It's clear that performance is not going to be acceptable with it on anyway, so all the speculation on exactly how much delay it adds is unhelpful. https://github.com/m-labs/artiq/issues/407#issuecomment-333933940
<GitHub138>
[artiq] whitequark commented on issue #837: @jordens I have an idea. Could this be related with the rate at which the MAC can fill the RX buffers versus the speed at which smoltcp can empty them? Maybe at 100M our four buffers *just happen* to be *just enough* for decent performance, and at 1000M they can be filled in less time than smoltcp can process even a single packet. (This didn't happen with lwip because lwip never achieved the 2.3 MB/s tr
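A rough sanity check of this hypothesis, assuming full-size 1500-byte frames and the four RX buffers mentioned above; smoltcp's real per-packet cost is not measured here:
```rust
// Back-of-envelope arrival-rate arithmetic for the buffer-starvation
// hypothesis; assumes 1500-byte frames and four RX buffers.
fn main() {
    let frame_bits = 1500.0 * 8.0;
    for &(name, rate_bps) in &[("100M", 100e6), ("1000M", 1e9)] {
        // Time for the MAC to receive one full-size frame.
        let frame_us = frame_bits / rate_bps * 1e6;
        // Time until all four RX buffers are full if none are drained.
        let window_us = 4.0 * frame_us;
        println!(
            "{}: one frame every {:.0} us, all 4 buffers full after {:.0} us",
            name, frame_us, window_us
        );
    }
}
```
This prints ~120 µs/frame (480 µs window) at 100M versus ~12 µs/frame (48 µs window) at 1000M, so any per-packet processing cost between those two windows would make four buffers "just enough" at 100M and starved at 1000M, consistent with the hypothesis.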
<whitequark>
sb0: looks like get_constants() gatherer in misoc has never worked
<whitequark>
oh nvm
<whitequark>
I misunderstood how it works
rohitksingh1 has quit [Quit: Leaving.]
<GitHub56>
[misoc] whitequark pushed 1 new commit to master: https://git.io/vdlsn
<GitHub79>
[smoltcp] podhrmic commented on issue #46: Here is the backtrace - it starts at: `let _poll_at = iface.poll(&mut sockets, timestamp).expect("poll error");`... https://git.io/vdl4E
<GitHub115>
[smoltcp] whitequark commented on issue #46: It's normal for `poll` to return errors. It will do so to indicate various boundary conditions, e.g. transmit buffers being exhausted, malformed packets, or (as we have here) unknown packets. The TCP/IP RFCs indicate that such conditions should be logged for debugging, so I provide a facility for logging them. None of these errors are fatal, so if you don't care about logging you should just ignore them
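In other words, the `.expect("poll error")` in the backtrace above turns a routine boundary condition into a panic. A minimal sketch of the log-and-continue handling described here; the `poll` stub and `Error` variants are stand-ins, not smoltcp's actual types or signature:
```rust
// Sketch: treat poll errors as log-and-continue, not fatal.
#[allow(dead_code)]
#[derive(Debug)]
enum Error {
    Unrecognized, // e.g. a packet for a protocol we don't handle
    Exhausted,    // e.g. transmit buffers currently full
}

// Stand-in for an interface poll; pretend every third poll sees an
// unrecognized packet.
fn poll(step: u32) -> Result<(), Error> {
    if step % 3 == 0 { Err(Error::Unrecognized) } else { Ok(()) }
}

fn main() {
    for step in 0..6 {
        match poll(step) {
            Ok(()) => {}
            // Boundary conditions are worth logging for debugging but
            // are not fatal: keep polling.
            Err(e) => println!("poll error (ignored): {:?}", e),
        }
    }
}
```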
<bb-m-labs>
build #824 of artiq-board is complete: Failure [failed conda_build] Build details are at http://buildbot.m-labs.hk/builders/artiq-board/builds/824 blamelist: whitequark <whitequark@whitequark.org>, Sebastien Bourdeauducq <sb@m-labs.hk>
<bb-m-labs>
build #1711 of artiq is complete: Failure [failed] Build details are at http://buildbot.m-labs.hk/builders/artiq/builds/1711 blamelist: whitequark <whitequark@whitequark.org>, Sebastien Bourdeauducq <sb@m-labs.hk>