sb0 changed the topic of #m-labs to: ARTIQ, Migen, MiSoC, Mixxeo & other M-Labs projects :: fka #milkymist :: Logs http://irclog.whitequark.org/m-labs
<GitHub70> [artiq] sbourdeauducq commented on issue #640: > I have set the lowest priority on my long (main) experiment, and the calibration experiments still never run.... https://git.io/v1ipu
cr1901 has joined #m-labs
cr1901 has quit [Client Quit]
rohitksingh_work has joined #m-labs
rohitksingh_work has quit [Client Quit]
sb0 has quit [Quit: Leaving]
sb0 has joined #m-labs
<sb0> rjo, you are using kc705-phaser elsewhere
<sb0> can we call it kc705_phaser.py?
<sb0> btw README_PHASER still says to use the phaser branch
<GitHub155> [artiq] sbourdeauducq pushed 2 new commits to master: https://git.io/v1P3x
<GitHub155> artiq/master 527757b Sebastien Bourdeauducq: kc705_drtio: use ad9154_fmc_ebz
<GitHub155> artiq/master 3b5abae Sebastien Bourdeauducq: drtio: fix clock domain conflict
sb0 has quit [Ping timeout: 250 seconds]
<bb-m-labs> build #269 of artiq-board is complete: Success [build successful] Build details are at http://buildbot.m-labs.hk/builders/artiq-board/builds/269
sb0 has joined #m-labs
<bb-m-labs> build #1166 of artiq is complete: Failure [failed python_unittest_1] Build details are at http://buildbot.m-labs.hk/builders/artiq/builds/1166 blamelist: Sebastien Bourdeauducq <sb@m-labs.hk>
hobbes- has quit [Ping timeout: 246 seconds]
hobbes- has joined #m-labs
<rjo> sb0: that's the bitstream name.
<rjo> sb0: sure. feel free to go ahead and change both.
<rjo> sb0: also, the drtio/dma changes broke startup kernels. they just freeze (afaict after completion).
<sb0> is that why you disabled dma in your bitstream?
<sb0> does that work around the problem?
<sb0> and sigh, I'm so annoyed by bugs like that
<sb0> and drtio is full of them, btw
<rjo> i disabled dma because of address conflicts
<rjo> it doesn't work around that problem.
<rjo> using the startup kernel as a regular kernel does.
<rjo> why is drtio full of bugs? tricky design? not transparent enough? corner cases?
<sb0> not transparent enough
<sb0> stuff works fine in simulation and doesn't on board
<sb0> and the others, yes, but mostly obscure failures and tedious debugging
<sb0> e.g. right now I'm running a PRBS test, and everything works fine, but when I send payload data stuff gets corrupted in all sorts of random ways
<sb0> and simulating the same shows no issue
<sb0> this SMA cable issue (transceiver only works when improperly driven single-ended) is like the cherry on top
<sb0> rjo, whitequark, the comms CPU is now responsible for resetting the (D)RTIO core. I suppose this is why it broke.
<sb0> the startup kernel would make the reset request and the comms CPU would fail to reset it
<sb0> as it may not be looking into the mailbox properly at that stage
<whitequark> sb0: can you elaborate?
<whitequark> this is in master branch, or phaser?
<sb0> master branch
<sb0> I have added a mailbox message type that means "reset the (D)RTIO cores"
<sb0> since the DRTIO reset is closely coupled to the link management, this should move to the comms CPU
<whitequark> that shouldn't be the case (failing to reset it), hm
<whitequark> flash_kernel_worker calls process_kern_message
<whitequark> which is where you correctly put that code
<whitequark> we are talking about rtio_mgt::init_core();, right?
<sb0> yes
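For context, a minimal sketch of the control flow being debugged here. The names (Message, Mailbox, process_kern_message, rtio_init) are placeholders loosely echoing the ones mentioned above, not the actual ARTIQ runtime API: the kernel CPU posts a "reset the (D)RTIO cores" request into the shared mailbox, and the comms CPU is supposed to service it; if the comms CPU does not poll the mailbox while a startup kernel runs, the kernel blocks on the reply forever, which would explain the observed freeze.

```rust
// Placeholder names only -- this mirrors the discussed control flow, not the
// real ARTIQ runtime code.

enum Message {
    RtioInitRequest, // "reset the (D)RTIO cores"
}

struct Mailbox {
    slot: Option<Message>, // stands in for the kernel/comms shared-memory mailbox
}

fn rtio_init() {
    // the real runtime would poke the RTIO/DRTIO reset CSRs here
    println!("(D)RTIO cores reset");
}

// The comms CPU must run this for every kernel message, including while a
// startup kernel executes; if it does not look into the mailbox at that
// stage, the kernel waits on the reply forever.
fn process_kern_message(mailbox: &mut Mailbox) {
    if let Some(Message::RtioInitRequest) = mailbox.slot.take() {
        rtio_init();
    }
}

fn main() {
    let mut mailbox = Mailbox {
        slot: Some(Message::RtioInitRequest),
    };
    process_kern_message(&mut mailbox);
}
```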
<whitequark> hm
<sb0> I'm still using the kc705s for ~15min
<whitequark> ok
<rjo> sb0: that was my suspicion. but i am not even resetting the rtio core anymore in the startup_kernel.
<sb0> mh
<sb0> okay, done with the kc705s
<sb0> the only positive thing I can say about the aux packet system is it sends K characters
<sb0> ffs
<rjo> sb0: i confirmed that even an empty startup kernel hangs.
<sb0> that doesn't do RTIO operations?
sb0 has quit [Quit: Leaving]
<rjo> sb0: yes. empty run().
sb0 has joined #m-labs
<GitHub83> [smoltcp] whitequark pushed 1 new commit to master: https://git.io/v1PHX
<GitHub83> smoltcp/master 7e45f2d whitequark: README: clarify.
sandeepkr has joined #m-labs
<whitequark> this is what slip should have used instead of that bullshit escaping...
<whitequark> but too late now
<rjo> whitequark: that's just (a special kind of) escaping the frame delimiter.
<whitequark> rjo: I meant I specifically dislike the kind of escaping SLIP uses.
<rjo> whitequark: yes. that's weird escaping.
<rjo> whitequark: but COBS seems to want to know packet lengths too early.
<whitequark> rjo: you need that to checksum the IP header anyway
<rjo> whitequark: but it looks like we could do slip transparently in gateware...
<whitequark> slip in gateware seems a bit of a waste of time, imo
<rjo> whitequark: sure. just in case one would want that, i have the escaper and the unescaper for "regular" escaping.
<whitequark> rjo: oh, if you already have it, that's a different question
<whitequark> if you can add that to the pipistrello gateware i'll be grateful
<rjo> whitequark: well. it's not exactly SLIP escaping. and AFAICT we would have to battle the regular console output if we were to route the packets through gateware again.
<whitequark> rjo: can't we simply put the escaper into outgoing uart path?
<rjo> whitequark: i don't know whether the interface is really what you'd want. the unescaper looks like this: https://github.com/m-labs/pdq2/blob/master/gateware/escape.py
<whitequark> oh, this is what you mean
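For reference, the SLIP escaping under discussion is the RFC 1055 scheme; a minimal Rust sketch of an encoder, illustrative only and unrelated to the pdq2 gateware linked above:

```rust
// RFC 1055 SLIP framing: the in-band escaping being criticized above.
const END: u8 = 0xC0; // frame delimiter
const ESC: u8 = 0xDB; // escape character
const ESC_END: u8 = 0xDC; // escaped form of END
const ESC_ESC: u8 = 0xDD; // escaped form of ESC

fn slip_encode(payload: &[u8]) -> Vec<u8> {
    let mut out = vec![END];
    for &byte in payload {
        match byte {
            END => out.extend_from_slice(&[ESC, ESC_END]),
            ESC => out.extend_from_slice(&[ESC, ESC_ESC]),
            _ => out.push(byte),
        }
    }
    out.push(END);
    out
}

fn main() {
    // a 0xC0 inside the payload expands to two bytes, so the frame length is
    // data-dependent -- unlike COBS, where the overhead is bounded but the
    // encoder needs lookahead to the next zero byte before it can emit a code.
    println!("{:02X?}", slip_encode(&[0x01, 0xC0, 0x02]));
}
```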
<whitequark> sb0: wow, the guy with better LLVM bindings has already delivered
<whitequark> his approach is remarkably impressive
<rjo> whitequark: what better llvm bindings? llvmlite replacement?
<rjo> sb0: i want to move the latency compensation into the rtio phy interface. it's getting way too messy if i do it manually using delay lines in the data path.
<rjo> sb0: i'll only permit positive delays so that the guard cycle stuff is still valid.
<whitequark> rjo: yes. simply bind llvm-c, like you are properly supposed to do. none of this text generation bullshit, which is explicitly frowned upon by the llvm devs
<whitequark> and which takes like 1/3 of our compile time to generate.
<whitequark> actually, I wonder if this is the part that's slow on windows, windows has a way worse allocator
<whitequark> so python might be spending time in that
<rjo> whitequark: that's what the old llvm bindings did iirc.
<whitequark> rjo: nope. they bound C++ using a custom binding "framework" that was very slow and very hard to maintain.
<rjo> whitequark: i.e. the reason why the continuum guys started llvmlite was the instability of llvm-c and pyllvm lagging behind.
<rjo> ah.
<rjo> but then i also remember that llvm-c was incomplete w.r.t. llvm-c++
<whitequark> llvm-c used to be *more* stable in sense of API evolution than it is now, actually
<whitequark> but it's a good thing that it is less stable now because now it evolves like people actually want it to
<whitequark> (and I participate in that process)
<whitequark> llvm-c is indeed incomplete wrt llvm-c++, but not wrt llvmlite (with very minor exceptions)
<whitequark> llvm-c provides way more capability than llvmlite, i.e. you can read the IR (which the author of the new binding wants)
<rjo> then i wonder why the continuum guys chose to write llvmlite
<whitequark> as you know I am not very fond of decisions of continuum in general, so my opinion here is easy to predict
<whitequark> that said there is one partly good reason to generate text; when you use llvm-c the error checking is asserts inside llvm itself, which crash your process and are only enabled in debug mode
<rjo> you would be wrong to make that a general approach.
<whitequark> what do you mean?
<rjo> i suspect you are not aware of the full scope of what continuum is doing.
<whitequark> maybe. but so far none of their decisions in the small are encouraging me to inquire into that.
<rjo> and for conda, yes, it has more rough edges than it should have. but without it we would be off way worse.
<whitequark> it's not about it having rough edges, it's about it having rough edges in a world where they could have simply copied any decent package manager that already exists
<whitequark> instead of painfully and slowly reinventing one
<whitequark> it's the same worse-is-better approach as with llvmlite, causing just barely not enough pain to make people start working on an improvement
<rjo> whitequark: it's not just package management.
mumptai has joined #m-labs
<whitequark> I actually wonder again how hard it would be to use conda's meta.yaml files to build packages with something saner
<rjo> whitequark: and the fact that Python has "issues" w.r.t. package management is much older than conda.
<rjo> this is not worse-is-better. this is not-perfect-is-better-than-nothing.
<whitequark> I don't see how Python not having a package manager itself harms conda in any way, it is probably making its job easier
<whitequark> ocaml didn't have a package manager before opam too
<whitequark> and yes. conda is better than nothing. my complaint is that there is no reason conda ought to be as bad as it is.
<rjo> it's also building, distributing packages, transitioning from older package formats, multiple platforms and OS.
<whitequark> I know.
<whitequark> none of that is unique to conda
<whitequark> since i've mentioned opam: opam does all of that, although in a different way, mostly because ocaml didn't have first-class windows support nor was there much desire to have it.
<rjo> but the number of package managers that work well for "everything" (i.e. not just python, but qt, R, gcc, llvm), on OSX, Linux, Windows is how large?
<whitequark> I disagree that this requires special complex support from package manager
<whitequark> case in point: https://github.com/whitequark/opam-cross-android cross-compiles not just ocaml packages but a multitude of their dependencies as well, with no explicit support from opam, and using an approach very similar to what conda does
<rjo> well for one the package manager can't assume that everything is a python package.
<rjo> like all other package formats for python do.
<whitequark> sure.
<rjo> so is there a package manager that handles windows well?
<whitequark> yypkg comes to mind
<whitequark> and yes, sure, opam
<whitequark> (the main problem with opam is that historically all of the build recipes depend on posix. but the package manager itself, not so.)
<whitequark> but I think we've already discussed this and decided that replacing conda is a non-starter because people specifically want to use conda.
<rjo> and continuum is big. the guys working on conda are a bit too lenient on code quality IMHO as well. but extrapolation from those people does not seem warranted.
<whitequark> in case of llvmlite I don't extrapolate, I've looked at their code and talked with them enough
<rjo> yep. it's also integration and available packages.
<whitequark> the code quality in llvmlite isn't bad.
<whitequark> but the architectural decisions are.
<rjo> but i would also not extrapolate from llvmlite+conda to all the other things.
<whitequark> okay, that is a fair point and I should stop that.
<rjo> ;)
<rjo> back to that other llvm-c-py thing. got any pointers?
<whitequark> forwarded the email to you.
<rjo> ah. nice. metaprogramming again. why do i have the feeling that FFI always involves way too much copy-pasting of original header files.
<rjo> for cython there is an entire family of tools that try to solve the analogous problem there. https://github.com/cython/cython/wiki/AutoPxd
<whitequark> right. the ruby bindings use ffi_gen, which is another similar tool.
<whitequark> cython involves running a C compiler, right?
<whitequark> this cffi solution doesn't need any C compiler as I believe what happens is it parses the declarations and then uses libffi.
<whitequark> or at least that's what it should do, I haven't verified that it doesn't use the fallback yet.
<rjo> yes. but you tend to run the cython+C compiler at build time. then you get a python .so that wraps the target.
<whitequark> I feel like this will be more fragile, e.g. what about Windows?
<whitequark> but overall I do not have a strong opinion on this
<rjo> if you can compile python modules on windows then this is the same.
rohitksingh has joined #m-labs
<whitequark> ok
<GitHub178> [smoltcp] whitequark pushed 1 new commit to master: https://git.io/v1X0A
<GitHub178> smoltcp/master be4ea0a whitequark: Respond with ICMP echo request data in echo reply.
<rjo> whitequark: is that stack from the person you mentioned a while ago or did you start it?
<whitequark> rjo: I started it.
<whitequark> the reason is threefold: a) if I screwed up the lwip interface in a way I didn't notice there is no guarantee the same won't happen with Brian's stack (or picotcp); b) Brian is not being very fast about relicensing it, and I cannot push his code still; c) licensing
<whitequark> I'd like to have TCP working by tomorrow evening
<whitequark> without reordering or a client; just a server that receives in-order segments and drops everything else
<whitequark> still will be an improvement over the current state of lwip. then that can be evolved when needed.
<whitequark> rjo: I've devised a really elegant way to buffer packets that meshes really well with Rust's ownership mechanisms, so for now I'm fond of the idea of writing this stack from scratch.
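The buffering scheme itself is not spelled out in the log; as a purely illustrative sketch of how packet storage and Rust ownership can mesh (not the actual smoltcp design), handing out a mutable slice lets the borrow checker enforce exclusive access to the frame being filled in:

```rust
// Illustrative only: not the smoltcp buffering scheme, just the general idea
// of letting the borrow checker police access to packet storage.
struct PacketBuffer {
    storage: Vec<u8>,
}

impl PacketBuffer {
    fn new(size: usize) -> PacketBuffer {
        PacketBuffer { storage: vec![0; size] }
    }

    // The caller gets an exclusive, bounds-checked window into the buffer;
    // while that slice is alive, nothing else can touch the storage.
    fn enqueue(&mut self, len: usize) -> Option<&mut [u8]> {
        if len <= self.storage.len() {
            Some(&mut self.storage[..len])
        } else {
            None
        }
    }
}

fn main() {
    let mut buffer = PacketBuffer::new(1536);
    if let Some(frame) = buffer.enqueue(64) {
        frame[0] = 0x45; // build the packet in place, no copying
    }
}
```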
<whitequark> the agreement with sb0 was that if I don't produce it very rapidly then instead I'll hook up picotcp.
<whitequark> let's see.
<GitHub32> [smoltcp] whitequark pushed 1 new commit to master: https://git.io/v1XaG
<GitHub32> smoltcp/master 8a3dee0 whitequark: Simplify checksum computation.
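The checksum in question is presumably the standard RFC 1071 Internet checksum used throughout TCP/IP; a minimal sketch of the algorithm (not the smoltcp code):

```rust
// RFC 1071 Internet checksum: sum 16-bit big-endian words into a 32-bit
// accumulator, fold the carries back in, and take the one's complement.
fn internet_checksum(data: &[u8]) -> u16 {
    let mut sum: u32 = 0;
    for chunk in data.chunks(2) {
        let word = if chunk.len() == 2 {
            u16::from(chunk[0]) << 8 | u16::from(chunk[1])
        } else {
            u16::from(chunk[0]) << 8 // odd trailing byte is zero-padded
        };
        sum += u32::from(word);
    }
    while sum >> 16 != 0 {
        sum = (sum & 0xFFFF) + (sum >> 16); // end-around carry
    }
    !(sum as u16)
}

fn main() {
    println!("{:04X}", internet_checksum(&[0x45, 0x00, 0x00, 0x1C]));
}
```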
<whitequark> honestly I had no idea how much of a mess TCP/IP is. not necessarily hard to implement especially if you can afford to be noncompliant, just... so many bizarre choices
<whitequark> why have fragmentation both at TCP and IP layer? better yet, why keep that in IPv6?!
<whitequark> then there's this whole talk about how TCP/IP lost a layer, which makes all the more sense the longer I look at it
<rjo> we can't really use picotcp without losing most of our user base.
<rjo> or without a lot of refactoring and splitting of artiq
<whitequark> well, sb0 was saying different things about it...
<rjo> well at least ipv6 doesn't fragment en-route.
<whitequark> why is gplv3 a problem anyway? aren't we doing science?
<whitequark> i thought you were supposed to publish the code.
<whitequark> re fragmenting en-route: I suppose that would make me happy if I was writing a router
<rjo> from yours and my personal ethics perspective yes.
<rjo> but that's not the world.
<whitequark> okay. i'm, well, fine with non-gpl3 too, since my personal view of copyright is that copyright doesn't exist.
<rjo> if -- by becoming derivative works -- experiments are GPLv3, then a recipient/collaborator of a scientist receiving the code could wander off with it/redistribute it.
<rjo> you would need to clarify "exist" for me.
<whitequark> "doesn't exist" as in "some people are delusional enough to consider intellectual property a natural right, and since they threaten me with state violence, I sometimes have to treat their delusions seriously"
<whitequark> all of my new code will be released under a 0-clause BSD, since that's OSI-approved, and that communicates my intent well to anyone who does care about copyright
<rjo> ok. that's fine for your code. it unfortunately generates little incentive/pressure for the untypical open source contributor (e.g. scientist) to become one.
<whitequark> that's true. but i'm not sure how much that pressure helps in any case.
<whitequark> all of my experience around people who normally use proprietary software demonstrates that (L)GPL is usually worked around, not accepted
<rjo> how's that?
<whitequark> what I mean is that, when encountering a crucial building block that's (L)GPL, usually people do not seriously consider the option of opening their codebase up more, but rather either compartmentalize it or migrate to a different building block.
<whitequark> I haven't personally seen nor heard of a case where such "opening up" happened
<whitequark> and for some reason I've been involved in a fair number of discussions on using such (L)GPL building blocks.
<whitequark> this is a set of anecdotes, but nevertheless.
<whitequark> libreadline and gcc are Stallman's poster children for "make it GPL-only and they will open up codebases", but of course all that happened is LLVM and libedit
<whitequark> ... conversely, LLVM has a *very* good motivator for submitting your code upstream: if you don't then you will spend the rest of your life refactoring pointers into references and vice versa.
<rjo> IME it is much more leading by example. at least in physics i can see it working to my steadily increasing satisfaction.
<whitequark> (I mean tracking upstream changes, of course.)
<whitequark> yes. leading by example works. do you really need the GPL stick to do that?
<whitequark> (or even further, does the GPL stick help, or alienate people who then feel forced to release code, and don't even begin to use the software?)
<whitequark> anyway I don't really want to start another BSD vs GPL holy war, I don't have a horse in this race
<whitequark> let me finish the stack instead.
<rjo> the alienation is a purely educational issue IMO/E. and people seem to be well able to understand that it is less of a stick and much more of a guarantee and generous contract.
<rjo> ;) sounds good.
<whitequark> ack.
<whitequark> I suppose scientists would be less inclined to see it as a stick compared to businessmen, for whom GPL rarely has any perceived benefit
<whitequark> so that makes sense.
<rjo> right.
ohama has quit [Ping timeout: 264 seconds]
ohama has joined #m-labs
rohitksingh has quit [Quit: Leaving.]
<GitHub34> [artiq] dleibrandt commented on issue #640: In the example below, ExpA is the "main" experiment, and ExpB is the "calibration" experiment. ExpA is submitted first with `Priority=0`. ExpB is submitted second with `Priority=1` and `Flushing=True`. Exp never runs.... https://git.io/v113a
kuldeep_ has joined #m-labs
kuldeep_ has quit [Read error: Connection reset by peer]
mumptai has quit [Quit: Verlassend]