sb0 changed the topic of #m-labs to: ARTIQ, Migen, MiSoC, Mixxeo & other M-Labs projects :: fka #milkymist :: Logs http://irclog.whitequark.org/m-labs
sandeepkr__ has joined #m-labs
sandeepkr_ has quit [Ping timeout: 252 seconds]
kuldeep has quit [Ping timeout: 265 seconds]
kuldeep has joined #m-labs
<whitequark> bb-m-labs: force build --props=package=rustc conda-lin64
<bb-m-labs> build forced [ETA 45m21s]
<bb-m-labs> I'll give a shout when the build finishes
rohitksingh_work has joined #m-labs
<bb-m-labs> build #227 of conda-lin64 is complete: Success [build successful] Build details are at http://buildbot.m-labs.hk/builders/conda-lin64/builds/227
<whitequark> bb-m-labs: force build --props=package=rust-core-or1k conda-lin64
<bb-m-labs> build forced [ETA 52m31s]
<bb-m-labs> I'll give a shout when the build finishes
<bb-m-labs> build #228 of conda-lin64 is complete: Success [build successful] Build details are at http://buildbot.m-labs.hk/builders/conda-lin64/builds/228
<whitequark> bb-m-labs: force build artiq
<bb-m-labs> build forced [ETA 31m57s]
<bb-m-labs> I'll give a shout when the build finishes
<bb-m-labs> build #106 of artiq-board is complete: Success [build successful] Build details are at http://buildbot.m-labs.hk/builders/artiq-board/builds/106
<bb-m-labs> build #997 of artiq is complete: Failure [failed] Build details are at http://buildbot.m-labs.hk/builders/artiq/builds/997
rohitksingh_wor1 has joined #m-labs
rohitksingh_work has quit [Ping timeout: 264 seconds]
<GitHub174> [artiq] whitequark pushed 2 new commits to master: https://git.io/vPlkb
<GitHub174> artiq/master 8be60cc whitequark: runtime: fix KERNELCPU_LAST_ADDRESS after layout change.
<GitHub174> artiq/master 4f11b07 whitequark: runtime: remove useless handshaking in analyzer.
kuldeep has quit [Ping timeout: 272 seconds]
sandeepkr__ has quit [Ping timeout: 272 seconds]
<bb-m-labs> build #107 of artiq-board is complete: Success [build successful] Build details are at http://buildbot.m-labs.hk/builders/artiq-board/builds/107
<bb-m-labs> build #998 of artiq is complete: Failure [failed] Build details are at http://buildbot.m-labs.hk/builders/artiq/builds/998 blamelist: whitequark <whitequark@whitequark.org>
sandeepkr has joined #m-labs
sandeepkr has quit [Max SendQ exceeded]
kuldeep has joined #m-labs
<GitHub172> [artiq] enjoy-digital pushed 2 new commits to phaser: https://git.io/vPltp
<GitHub172> artiq/phaser e998a98 Florent Kermarrec: phaser/startup: use get_configuration_checksum()
<GitHub172> artiq/phaser b02a723 Florent Kermarrec: phaser: use 125MHz refclk for jesd
<_florent_> rjo: ^ can you test that? I think the 500 MHz refclk is too high: I tested with a 500 MHz refclk on my board and got the same behaviour you had yesterday.
<whitequark> sb0: I don't understand
<whitequark> it looks like runtime.rs isn't being installed into site-packages
<whitequark> but runtime is
<whitequark> sb0: is there some magic setuptools incantation? but even if so, I can't find its application to runtime...
<whitequark> oh, MANIFEST
<GitHub93> [artiq] whitequark pushed 2 new commits to master: https://git.io/vPlOr
<GitHub93> artiq/master 8eeb6ea whitequark: packaging: include runtime.rs in MANIFEST.
<GitHub93> artiq/master ef10344 whitequark: runtime: rewrite isr() in Rust.
<bb-m-labs> build #108 of artiq-board is complete: Exception [exception interrupted] Build details are at http://buildbot.m-labs.hk/builders/artiq-board/builds/108 blamelist: whitequark <whitequark@whitequark.org>
<bb-m-labs> build #999 of artiq is complete: Failure [failed] Build details are at http://buildbot.m-labs.hk/builders/artiq/builds/999 blamelist: whitequark <whitequark@whitequark.org>
Gurty has quit [Ping timeout: 240 seconds]
<sb0> is the lattice linux toolchain still completely unusable garbage?
<sb0> last time I touched it (2008) it required some outdated linux kernel that didn't support SATA hard disks
mumptai has quit [Remote host closed the connection]
<sb0> rhel 6, hm, doesn't look completely horrible
<GitHub19> [conda-recipes] whitequark pushed 1 new commit to master: https://github.com/m-labs/conda-recipes/commit/e120ecf5191ccc731c979127e7896e798710d5b5
<GitHub19> conda-recipes/master e120ecf whitequark: binutils-or1k-linux: fix patch.
<whitequark> bb-m-labs: force build --props=package=binutils-or1k-linux conda-lin64
<bb-m-labs> build forced [ETA 28m22s]
<bb-m-labs> I'll give a shout when the build finishes
<bb-m-labs> build #229 of conda-lin64 is complete: Success [build successful] Build details are at http://buildbot.m-labs.hk/builders/conda-lin64/builds/229
<whitequark> bb-m-labs: force build artiq
<bb-m-labs> build forced [ETA 31m57s]
<bb-m-labs> I'll give a shout when the build finishes
<whitequark> rjo: ping
<bb-m-labs> build #109 of artiq-board is complete: Success [build successful] Build details are at http://buildbot.m-labs.hk/builders/artiq-board/builds/109
<bb-m-labs> build #1000 of artiq is complete: Failure [failed] Build details are at http://buildbot.m-labs.hk/builders/artiq/builds/1000
<whitequark> bb-m-labs: force build artiq
<bb-m-labs> build forced [ETA 31m57s]
<bb-m-labs> I'll give a shout when the build finishes
<bb-m-labs> build #110 of artiq-board is complete: Success [build successful] Build details are at http://buildbot.m-labs.hk/builders/artiq-board/builds/110
<bb-m-labs> build #1001 of artiq is complete: Failure [failed] Build details are at http://buildbot.m-labs.hk/builders/artiq/builds/1001
<whitequark> what the fuck is the problem here exactly?
<whitequark> where do these pings go?
<whitequark> the board comes up...
bb-m-labs has quit [Quit: buildmaster reconfigured: bot disconnecting]
<GitHub10> [buildbot-config] whitequark pushed 1 new commit to master: https://github.com/m-labs/buildbot-config/commit/4730b1492796c7e40b46aaf26128fbcd366123c1
<GitHub10> buildbot-config/master 4730b14 whitequark: radically increase amount of pings
bb-m-labs has joined #m-labs
<whitequark> bb-m-labs: force build artiq
<bb-m-labs> build #1002 forced
<bb-m-labs> I'll give a shout when the build finishes
<bb-m-labs> build #111 of artiq-board is complete: Success [build successful] Build details are at http://buildbot.m-labs.hk/builders/artiq-board/builds/111
<rjo> _florent_: too high for what?
<rjo> _florent_: i'll adapt it.
<bb-m-labs> build #1002 of artiq is complete: Failure [failed python_unittest_1] Build details are at http://buildbot.m-labs.hk/builders/artiq/builds/1002
<rjo> whitequark: pong
<_florent_> rjo: I'm not sure about the limitation, but I did a test and had the same behaviour you had yesterday.
<_florent_> rjo: it can be worth trying with 125 MHz: if it's better, we'll investigate the higher frequencies; if not, the problem is elsewhere.
<rjo> _florent_: will try. it just needs a bit more adapting.
<_florent_> rjo: ok thanks.
<GitHub145> [artiq] jordens pushed 1 new commit to phaser: https://git.io/vPlgl
<GitHub145> artiq/phaser 09434ec Robert Jordens: phaser: also adapt rtio_crg
<rjo> _florent_: how did you drive the ad9516?
<_florent_> 500 MHz clock from the kc705
<rjo> derived from the 200 MHz or 156 MHz oscillator?
<_florent_> 200
<GitHub109> [artiq] jordens pushed 1 new commit to phaser: https://git.io/vPl2r
<GitHub109> artiq/phaser c846e75 Robert Jordens: phaser: fix startup_kernel/ceil
<GitHub151> [artiq] jordens pushed 1 new commit to phaser: https://git.io/vPlaT
<GitHub151> artiq/phaser 9b860b2 Robert Jordens: phaser: fix rtio pll inputs
<whitequark> rjo: about the background RPCs.
<whitequark> so right now the code that traverses the (possibly rather deep) tree of pointers that is the RPC arguments is on the comms CPU side
<whitequark> the way #551 is phrased implies that you want these to be moved to the kernel CPU side
<whitequark> I can do that but then #551 will wait until ksupport is moved to Rust too
<whitequark> there is another problem here, which is synchronization
<whitequark> I currently have no idea how to implement a FIFO in a non-cache-coherent AMP system, where the reader doesn't block the writer
<whitequark> this sounds hard and error-prone.
<rjo> is your question whether the issue implies that the serialization should be moved from the kernel cpu side to the comms cpu side?
<whitequark> I think moving serialization to the kernel CPU side would be troublesome, yes.
<whitequark> so I am asking whether it's in the spec.
kuldeep_ has joined #m-labs
<rjo> well. the spec may have been affected by physicist fantasies.
<whitequark> if we move the FIFO to a dedicated hardware buffer then that becomes a question of rust on kernel cpu
<whitequark> which is not hard but will only take a bit of time.
kuldeep has quit [Ping timeout: 272 seconds]
<whitequark> but I'm not sure whether that's realistic to implement
<rjo> making #551 dependent on rust would not be a problem afaict.
<rjo> just to check: we are talking about kernel-to-host RPCs.
<whitequark> correct.
<whitequark> can we move the FIFO to a hardware buffer then? how would that work?
<whitequark> basically what I am looking for is not dealing with the caches
<rjo> i am not entirely certain i know how the rpc through mailbox stuff works currently. you are saying currently there is one pointer coming through the mailbox and then the comms cpu serializes everything?
<whitequark> rpc through mailbox works as follows.
<whitequark> the mailbox is a peripheral that's just one 32-bit register. it's in an uncached area.
<whitequark> both before setting it, and after reading it on the other side, all L1 caches are purged
<rjo> sidenote: afaics serialization is actually not a very hungry thing. and not the thing we need to optimize here.
<whitequark> and yes, serialization happens on the comms cpu
<whitequark> hm.
<rjo> but what gets passed?
<whitequark> optimizing serialization is an independent problem
<rjo> how does the comms cpu know what and how to serialize?
<rjo> magic format strings?
<whitequark> to my understanding the reason for serializing on kernel CPU is having latency bounded just by the serialization
<whitequark> what gets passed: a struct with RPC number, RPC "tag" and a pointer to an array of arguments
<whitequark> the tag is a serialization of the complete type of the arguments.
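A minimal sketch of the message whitequark describes, assuming a flat little-endian layout of three 32-bit fields; the function name, field names, and packing are illustrative, not the runtime's actual ABI:

    import struct

    def pack_rpc_request(service: int, tag_ptr: int, args_ptr: int) -> bytes:
        # service: the RPC number; tag_ptr: address of the type-tag string;
        # args_ptr: address of the argument array on the kernel CPU side
        return struct.pack("<III", service, tag_ptr, args_ptr)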
<rjo> yes. that's correct. serialization on the kernel cpu also (to me at least) would make the fifo simpler ("tcp wire format") and allow the kernel cpu to continue its work once serialization is done, without bothering about caches and dirty data.
<whitequark> right now serialization is not exceptionally fast in the details (i.e. not microoptimized), but it doesn't have inefficiencies in the large (e.g. it doesn't allocate or traverse anything supralinearly)
<sb0> whitequark, if you have two pointers in the mailbox instead of just one, I think you can easily make a FIFO with storage in RAM.
<whitequark> rjo: you can't not bother about caches
<whitequark> at least, you have to flush the cache after serializing every RPC
<rjo> yes.
<whitequark> how does that make the FIFO simpler?
<rjo> or have some non-cached inter-cpu DMA arena.
<whitequark> non-cached arena would mean that every write to that arena does a roundtrip to SDRAM, right?
<whitequark> that sounds very bad
<rjo> then the comms cpu would not have to bother with the inner structure of the rpc.
<whitequark> it makes no difference who bothers with the inner structure of the RPC
<whitequark> well, complexity-wise
<whitequark> it will be the same Rust code but running on a different core
<rjo> but only once you have serialized, the kernel cpu can mess with that original data again.
<sb0> whitequark, the FIFO is only wanted for kernel CPU -> comms CPU
<sb0> the FIFO would contain messages that are all of a certain size, say, 1KB
<rjo> i.e. background_rpc(array); array[7] = 9;
<whitequark> rjo: or even returning from the current function, because the array might have been allocated in the current frame.
<sb0> serialization fitting into one message is a condition for using background RPCs. otherwise it falls back to a blocking behavior.
<rjo> yes. isn't this a reason to do the serialization on the kernel cpu?
<sb0> then the "mailbox" simply contains produce/consume pointers/indices that address that message FIFO.
<sb0> write to the FIFO: fill in empty message slot in SDRAM (note that the cache is write-through), then increment produce pointer
<whitequark> yes, with this restriction it is not hard to implement
<sb0> read from the FIFO: invalidate caches, process messages, increment consume pointer
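A host-Python model of the FIFO sb0 sketches, assuming 1 KB slots with a single producer (kernel CPU) and single consumer (comms CPU), and treating the produce/consume indices as the two values exchanged through the mailbox; the cache operations are reduced to comments and none of these names are the actual runtime's:

    MESSAGE_SIZE = 1024
    N_SLOTS = 16

    class MessageFifo:
        def __init__(self):
            self.slots = [bytes(MESSAGE_SIZE)] * N_SLOTS
            self.produce = 0  # advanced only by the kernel CPU
            self.consume = 0  # advanced only by the comms CPU

        def try_write(self, message: bytes) -> bool:
            # kernel CPU side; the data cache is write-through, so the slot
            # contents reach SDRAM before the index update publishes them
            if len(message) > MESSAGE_SIZE:
                return False  # doesn't fit: fall back to a blocking RPC
            if (self.produce + 1) % N_SLOTS == self.consume:
                return False  # FIFO full: caller waits or falls back
            self.slots[self.produce] = message
            self.produce = (self.produce + 1) % N_SLOTS
            return True

        def try_read(self):
            # comms CPU side; on real hardware, invalidate L1 here
            # before touching the slot contents
            if self.consume == self.produce:
                return None
            message = self.slots[self.consume]
            self.consume = (self.consume + 1) % N_SLOTS
            return message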
<whitequark> then that is only blocked on Rust
<sb0> when serialization doesn't fit, put a pointer to the serialization into a message, and wait for an ack from the comms CPU.
<whitequark> naw
<whitequark> just fall back to the non-background RPC path
<sb0> which is what it does
<whitequark> I would serialize directly into the message slot
<sb0> but how do you know in advance if it fits?
<whitequark> I wouldn't
<whitequark> I'll optimistically assume that it does and bail out if it doesn't
<whitequark> with your scheme I will need some sort of temporary buffer for serialization
<whitequark> a) how large?
<whitequark> there is currently no limit on RPC size
<rjo> even serializing twice for those cases would be fine by me.
<whitequark> you can just as well transmit the entire main RAM, because the comms CPU doesn't store the serialized data anywhere; it transmits it directly
<whitequark> b) that means one extra copy on the fast path
<whitequark> which seems weird.
<rjo> but still, also in the fallback case the kernel cpu should not have to wait for the rpc return from the host.
<whitequark> sure
<whitequark> why would it
<whitequark> there's no return
<whitequark> we will need some sort of performance counter that tells you when too many background RPCs fall back.
<rjo> yes. so the only features that we would need are the RAM message slots + a ringbuffer for their handles. and rpcs without return.
<rjo> or maybe the rpcs could be "partial" and then assembled on the host.
<whitequark> "partial" ?
<rjo> then the parts would always fit.
<rjo> just fragment them.
<rjo> without the comms cpu knowing about it.
<whitequark> that's extremely complex
<rjo> start serializing until you have filled a fragment, send, start the next fragment. have the host assemble the fragments into one rpc.
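A sketch of the fragmentation rjo proposes here (which whitequark considers too complex), assuming fixed-size fragments with a one-byte partial/final marker so the host can reassemble them into one RPC; the names and framing are invented for illustration:

    FRAGMENT_SIZE = 1024

    def fragment_rpc(payload: bytes):
        # yield the serialized RPC as fixed-size fragments; all but the
        # last are marked partial ("P"), the last is marked final ("F")
        offset = 0
        while True:
            chunk = payload[offset:offset + FRAGMENT_SIZE]
            offset += FRAGMENT_SIZE
            last = offset >= len(payload)
            yield (b"F" if last else b"P") + chunk
            if last:
                break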
<whitequark> I also need to put the final length of transfer somewhere, handle the case where the RPC is larger than the entire buffer, ...
<sb0> as I understand, background RPCs would be used essentially to transmit small amounts of data each time
<rjo> sb0: why? i would use them to transmit large amounts as well.
<whitequark> rjo: that doesn't bring you any benefit, assuming the large amount >> fifo size
<whitequark> you'll just block on fifo instead of blocking on comms CPU request
<sb0> in what case? why not transmit your large buffer with several smaller calls?
<rjo> whitequark: can't you just do that when you hand the fragment over to the comms cpu? fill a fragment, hand "partial RPC" message over to the comms cpu, fill next fragment, hand over "full RPC" message.
<rjo> when the buffer is full you have to stall anyway.
<whitequark> rjo: i hate the word "just".
<rjo> whitequark: ha.
<whitequark> no, i can't "just" do that. there's a zillion edge cases to handle
<rjo> ok. without the "just".
<sb0> also, the current RPC is particularly inefficient with small data, but with large data it gets better
<rjo> whitequark: fine. i'll leave it to you ;)
<whitequark> can i write a fragmentation engine for RPCs? sure. is it worth the hassle? I really doubt it
<whitequark> and it will take a lot of time for sure
<rjo> well one thing this would help with is that the comms cpu could be made busy with txing while the kernel cpu could be busy with serializing the next fragment.
<rjo> but anyway. i am happy to leave it to you.
<whitequark> if we would go the route that you want then I think we should just ditch RPCs in the main TCP stream entirely
<whitequark> add another channel that's dedicated to kernel CPU, have it send UDP datagrams with one fragment per datagram
<whitequark> otherwise we have a whole lot of coordination overhead (TCP overhead plus core communication overhead) for no good reason
<whitequark> so that would be a (logical) FIFO between the host machine and the kernel CPU
<whitequark> ok, then I will start converting ksupport to Rust on Monday
<GitHub132> [artiq] jordens pushed 1 new commit to phaser: https://git.io/vPlr7
<GitHub132> artiq/phaser cfd2fe8 Robert Jordens: phaser: fix fpga deviceclock divider
<rjo> _florent_, larsc, sb0: success!
<rjo> now "just" need to test with actual data.
<_florent_> rjo: great :) are you also checking the outputs?
key2 has joined #m-labs
<rjo> whitequark: ack. but. udp sounds unreliable to me.
<whitequark> hence a logical FIFO.
<rjo> whitequark: ack.
<rjo> _florent_: coming.
<whitequark> an ack from the host would be advancing the consumer pointer. etc.
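A purely illustrative sketch of that ack scheme (note sb0 rejects the UDP idea just below), with no real ARTIQ API: the device keeps producer/consumer indices, and an ack from the host simply advances the consumer pointer:

    RING_SIZE = 64

    class LogicalFifo:
        def __init__(self):
            self.producer = 0  # index of the next fragment to send
            self.consumer = 0  # highest index acknowledged by the host

        def may_send(self) -> bool:
            # stall only when the window of unacknowledged fragments is full
            return self.producer - self.consumer < RING_SIZE

        def on_send(self):
            self.producer += 1

        def on_ack(self, acked: int):
            # acks may arrive out of order or duplicated; never move back
            self.consumer = max(self.consumer, acked)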
<_florent_> ok
<whitequark> there is actually one bad part, which is the lack of UDP tunneling in ssh.
<whitequark> so I wouldn't be able to develop on my laptop, which is aggravating.
<rjo> yep. that's a shame. that ssh doesn't do udp socks.
<whitequark> there's socat etc but the process is already very fiddly.
<sb0> nah, no UDP
<rjo> socat is really nifty. i use it to exfiltrate multicast hd tv...
<sb0> rjo, cool! :)
shuffle2 has quit [Ping timeout: 244 seconds]
<whitequark> hmm, one test fails.
<whitequark> self.assertLess(self.dataset_mgr.get("rpc_time_mean"), 15*ms)
<whitequark> AssertionError: 0.046450169200000009 not less than 0.015
<whitequark> that's quite interesting.
<rjo> over ssh?
<whitequark> no
<whitequark> that's on the buildbot, in low latency mode
<whitequark> I think the problem is I'm calling into lwip too much instead of buffering.
<whitequark> I assumed that when I am using the |MORE flag, it will implement Nagle properly
<rjo> but 30ms is a lot.
<whitequark> i.e. just buffer the writes somewhere.
<whitequark> oh, hm: TCP_WRITE_FLAG_MORE (0x02) for TCP connection, PSH flag will not be set on last segment sent
<whitequark> what does that do anyway?..
<rjo> aaaaah. i think i crashed the scope.
<whitequark> you did. I've rebooted it.
<rjo> whitequark: oh, you are there?
<whitequark> yes.
<whitequark> I need to disassemble one of my HPLC pumps and take photos of the part that's broken
<rjo> could you just describe whether there was anything interesting in the traces? or was it all boring?
<whitequark> there wasn't anything at all on the screen.
<whitequark> and it was in the "WAIT" state
<rjo> whitequark: ok. if you have a minute, could you fiddle with it and see whether there is something coming out of the DACs? should be a ~100 MHz oscillatory stuff at maybe a volt amplitude.
<whitequark> flat line on both channels
<whitequark> or do you mean I should plug it somewhere else?
<rjo> any of j2 j4 j5 j17 on the ad9154-fmx-ebz
<whitequark> ch1 is connected to j4 and there's nothing on it
<whitequark> sb0: by the way. do you know where the blue screwdriver is in the lab?
<whitequark> or any screwdriver
<rjo> whitequark: can i bother you with another reboot of the scope?
<whitequark> rjo: sec
<whitequark> done
<rjo> whitequark: thx
rohitksingh_wor1 has quit [Read error: Connection reset by peer]
<sb0> whitequark, the translucent one? in one of the top drawers in the cabinet near the argon bottle
<whitequark> thx
<whitequark> sb0: wow, what an inefficient use of space
<whitequark> oh well.
<rjo> _florent_: the SYNC machinery is busy all the time. does it get out of that state for you?
<rjo> larsc: is it a problem if sysref has 50% duty cycle? (and not something much smaller like in your drawing)?
<_florent_> rjo: I'm going to look at that
<larsc> rjo: the rising edge is what matters
<GitHub134> [artiq] jordens pushed 1 new commit to phaser: https://git.io/vPlQt
<GitHub134> artiq/phaser 72932fc Robert Jordens: phaser: fix sysref for 250 MHz sample rate
<rjo> larsc: ok. even though it says "The SYSREF± signal is an active high signal sampled by the device clock rising edge"
<larsc> ideally you'd align the different sysref signals so that all deviceclocks sample the rising edge at the same time
<rjo> _florent_: ^ could you double check that patch above?
<larsc> rjo: it will sample it and look for the low->high transistion
<rjo> larsc: ack.
<larsc> 50/50 is what you'd normally do in periodic mode
<_florent_> larsc: is there an easy way to know that the dac is receiving data correctly?
<_florent_> rjo: ok I'll look at that
<larsc> _florent_: prbs maybe
<_florent_> larsc: prbs is fine, but I just want to be sure that it receives data correctly after ILAS
<larsc> "transport layer testing"
<_florent_> ah yes sorry, I read that some time ago...
<_florent_> rjo: I'm going to do that test
<rjo> _florent_: i could imagine that the sync_busy stuff is preventing data from getting through.
<larsc> if sync is getting re-asserted that means the ILAS did not work
<rjo> does it say anywhere when exactly the normal datapath is activated? i'd guess it deactivates the datapath for lots of things...
sb0 has quit [Quit: Leaving]
rohitksingh has joined #m-labs
<_florent_> larsc: it seems SYNC is kept de-asserted
<larsc> always?
<larsc> like what waveform do you see on sync
<_florent_> I've only looked with the analyzer inside the fpga, going to connect it to the scope
<larsc> when you say it is always de-asserted, do you mean always low, or always high
<rjo> yeah. that's a real brainfart to have an active-low "inverted" differential signal...
<_florent_> larsc: low at startup, then high after CGS
<larsc> _florent_: ok, that's good
<larsc> that's the expected behaviour
<_florent_> ok
<larsc> that also means ILAS is good, otherwise it would go low again
<_florent_> ok thanks
<_florent_> rjo: what do you think of these new GTXTransmitter parameters:
sb0 has joined #m-labs
<_florent_> we just have to pass a GTXChannelPLL or GTXQuadPLL module to it
<_florent_> for ChannelPLL, we will have one for each Transmitter
<_florent_> for the QuadPLL, one for 4 Transmitters
<sb0> whitequark, speaking of efficient use of space, can you unpack/assemble/fill the other cabinet that's been on the floor for a while?
<rjo> _florent_: that's good. isn't there a way to make "local" clock domains so that cd_name can go as well?
<rjo> _florent_: afaict the pll should be decoupled from the phy (as you do). because of the "quad" thing.
<_florent_> rjo: in fact I have this cd_name since I need to retrieve clock domains that were defined in the transmitter inside the core
<_florent_> rjo: but maybe you see a better solution for that
<whitequark> sb0: tomorrow, yes
<sb0> _florent_, doesn't ClockDomainsRenamer() take care of that?
<sb0> iirc it renames everything
<_florent_> sb0/rjo: I need the name of the clock domain defined by the phy here for example: https://github.com/m-labs/jesd204b/blob/master/jesd204b/core.py#L52
<sb0> doesn't the default clock domain conflict resolution rule work in your case?
<whitequark> rjo: 27ms out of those 56ms were spent printing the logs.
hozer has quit [*.net *.split]
<whitequark> rjo: but I'm not sure where 15ms more went.
<_florent_> sb0: I'm not sure, IIRC it returned phy.gtx.cd_tx.name as "tx" for all phys
<_florent_> sb0: but I can retry
<sb0> you can also not define clock domains in the transceiver, and simply output a signal
<sb0> then only define the clock domains when you put the transceivers together. but maybe CD resolution already does the right thing by default.
<sb0> (or can be made to)
<GitHub88> [artiq] whitequark pushed 2 new commits to master: https://git.io/vPlNi
<GitHub88> artiq/master 9c33947 whitequark: runtime: cap log level at debug.
<GitHub88> artiq/master 4d790b4 whitequark: runtime: discard unnecessary sections.
<_florent_> sb0: I'll have another look at that
<sb0> e.g. if you have submodules that contain the elastic buffer clocked on one side by "gtx_clock" and on the other by "user_clock", the ClockDomain "gtx_clock", and the transceiver
<sb0> then CD resolution will rename all those gtx_clock's. the only downside is you cannot use them outside the submodules.
<sb0> but you can use "user_clock" just fine, since it doesn't have a ClockDomain in the submodules.
<_florent_> ok thanks, I'm trying to get the QPLL working now, I'll look at that after
<sb0> this system btw does not force you to put everything (transceiver/elastic buffer/etc.) into one single submodule; the CD renaming happens at the first module that has submodules defining the same clock domain
hozer has joined #m-labs
<sb0> _florent_, why is the "produce square wave" signal 2 bits wide and called "pattern_config"?
<sb0> what other patterns are you planning to have?
<sb0> does GTXChannelPLL really need to be a class?
<sb0> it seems OOP just makes things worse here. you'd be better off with pure function return values.
<_florent_> sb0: yes, I'm not sure I'll add other patterns; I'll probably rename this
<_florent_> sb0: for the GTXChannelPLL, it's to be similar to GTXQuadPLL
<sb0> hm, ok, but you should probably use functions there too
<_florent_> but I need at least a GTXQuadPLL for the GTXE2_COMMON instance
<sb0> or, you might want to use a class as a "bag of related pure functions"
<sb0> but I don't see why you'd pass what should be function return values using object attributes
FabM has quit [Quit: ChatZilla 0.9.92 [Firefox 45.3.0/20160802213348]]
<sb0> is the compute_* family of functions called outside the constructor?
<_florent_> GTXChannelPLL and GTXQuadPLL also have refclk/lock signals that are connected differently to the GTXE2_CHANNEL depending on whether it's a CPLL or QPLL
mumptai has joined #m-labs
<sb0> yes I see that
<sb0> I'm just trying to avoid a code style where there is unwarranted state and side effects
<_florent_> yes no problem, if you see a better solution I'm ok to implement it
<sb0> I'd make compute_* static methods
<sb0> n1/n2/m become simply local variables of compute_config, and arguments of compute_freq/compute_linerate when they need them
<bb-m-labs> build #112 of artiq-board is complete: Success [build successful] Build details are at http://buildbot.m-labs.hk/builders/artiq-board/builds/112
<sb0> compute_config can return freq, linerate, {"n1": n1, "d2": d2, ...}
<sb0> then you simply do self.freq, self.linerate, pll_settings = compute_config(...) in __init__
<sb0> pll_settings can be a local variable afaict, not sure about the other two
<bb-m-labs> build #1003 of artiq is complete: Failure [failed python_unittest_1] Build details are at http://buildbot.m-labs.hk/builders/artiq/builds/1003 blamelist: whitequark <whitequark@whitequark.org>
<sb0> if a class method doesn't really need self, like compute*, you should make it @staticmethod and it can be called externally as GTXChannelPLL.compute_* without altering the state of any object
<_florent_> ok, but are you ok with keeping the two GTXChannelPLL and GTXQuadPLL classes?
<_florent_> or do you want to do that in GTXTransmitter?
<_florent_> bbl
<sb0> those two classes are fine afaict
<sb0> like I said, using classes as bags of related pure functions is fine
<sb0> what I was complaining about was the excessive state and side effects
<_florent_> ok no problem, I'll change that then
<sb0> thanks!
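A sketch of the shape sb0 is suggesting, with compute_config as a pure staticmethod that returns everything it computes and an __init__ that only stores the results; the divider ranges and VCO limits are from memory of the 7-series GTX CPLL and should be treated as illustrative:

    class GTXChannelPLL:
        @staticmethod
        def compute_config(refclk_freq, linerate):
            # pure function: no object state, no side effects
            for n1 in (4, 5):
                for n2 in (1, 2, 3, 4, 5):
                    for m in (1, 2):
                        vco_freq = refclk_freq * n1 * n2 / m
                        if not 1.6e9 <= vco_freq <= 3.3e9:
                            continue
                        for d in (1, 2, 4, 8, 16):
                            if 2 * vco_freq / d == linerate:
                                return vco_freq, linerate, \
                                    {"n1": n1, "n2": n2, "m": m, "d": d}
            raise ValueError("no CPLL config found")

        def __init__(self, refclk_freq, linerate):
            # __init__ just stores what the pure function returns
            self.vco_freq, self.linerate, self.config = \
                self.compute_config(refclk_freq, linerate)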
<rjo> _florent_: so does SYNC_BUSY stay high for you as well?
<_florent_> rjo: it seems yes
<rjo> _florent_: and how is SYNC_LOCK?
<rjo> _florent_: in the datasheet it says SYNC_LOCK low is required to proceed.
<rjo> _florent_: table 22 footnote 2
<_florent_> SYNC_LOCK is high here
<larsc> I think that SYNC stuff is not for the SYNC signal, but for SYSREF
<larsc> and only in oneshot sysref mode
<rjo> sure.
<rjo> but we are using oneshot.
<larsc> ah, ok.
mumptai has quit [Remote host closed the connection]
<GitHub196> [artiq] jordens pushed 4 new commits to phaser: https://git.io/vP8Zi
<GitHub196> artiq/phaser 4e60a6a Robert Jordens: phaser: tweak sawg example
<GitHub196> artiq/phaser 1157a3a Robert Jordens: ad9514_status: more info
<GitHub196> artiq/phaser 89a30b6 Robert Jordens: phaser: error on startup kernel
ohama has joined #m-labs
key2 has quit [Quit: Page closed]
sandeepkr has joined #m-labs
rohitksingh has quit [Quit: Leaving.]