<mog>
sb0, it just says "latest". is there a known stable version, or what is the project currently built on?
<sb0>
I have 4.9.0 installed here
fengling has quit [Quit: WeeChat 1.0]
<mog>
i just haven't used lm32 stuff before and was looking for something i could build cross tools for, and then a hello world i could run on my spartan 6
<mog>
sb0, version 2.25 of binutils?
<sb0>
2.23.2
<sb0>
but you can try 2.25
<mog>
i want to try things that are known good, as i'm sure i will have my own problems
<mog>
and which version of newlib?
<mog>
hmm ill just try newest of everything i guess
<sb0>
I don't use newlib
<GitHub189>
[artiq] sbourdeauducq pushed 5 new commits to master: http://git.io/bRfm
<GitHub189>
artiq/master f848a7a Robert Jordens: gitignore coverage
<GitHub189>
artiq/master 0a91f86 Robert Jordens: add .travis.yml
<GitHub189>
artiq/master 695aa95 Robert Jordens: README.rst: add travis badge
<sb0>
rjo, why use nist-ionstorage/llvmlite.git instead of numba/llvmlite.git?
<sb0>
in travis
balrog has quit [Ping timeout: 264 seconds]
balrog has joined #m-labs
mumptai has joined #m-labs
<rjo>
simpler way to manage the patches.
travis-ci has joined #m-labs
<travis-ci>
nist-ionstorage/artiq#85 (master - ebf5699 : Sebastien Bourdeauducq): The build has errored.
<rjo>
sb0: imho pandas (or at least pytables) will be nicer than h5py
aeris has joined #m-labs
<sb0>
rjo, there's only one patch
<sb0>
rjo, why exactly?
sturmflut-work has quit [Ping timeout: 276 seconds]
jpeg has left #m-labs ["See ya"]
sturmflut-work has joined #m-labs
<rjo>
there is the CXX patch
FabM has quit [Quit: ChatZilla 0.9.91.1 [Firefox 35.0.1/20150122214805]]
<sb0>
ah, travis-specific.
<sb0>
I guess it should be mergeable
<rjo>
pandas is higher level and people use it to read the data. writing with it as well is thus more symmetric.
<sb0>
pandas won't integrate with sync_struct, which is needed to send the results to the gui in real time
<sb0>
do you see problems with keeping the results in memory as python lists and then dumping them to hdf5 at the end of each experiment (for which h5py is the most straightforward tool)?
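For illustration, a minimal sketch of the approach sb0 describes (result names and values hypothetical):

    import h5py

    # results accumulated in memory as plain Python lists during the run
    results = {
        "timestamps": [0.0, 1.5, 3.0],
        "photon_counts": [12, 7, 19],
    }

    # dumped to HDF5 at the end of the experiment; h5py converts the lists to arrays
    with h5py.File("results.h5", "w") as f:
        for name, values in results.items():
            f.create_dataset(name, data=values)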
<sb0>
let me write a short note to the numba folks re. the CXX patch
<rjo>
i don't see the difference between pandas and h5py WRT sync_struct.
<rjo>
that's fine. distributed synchronized n-dim arrays are a bigger problem.
<sb0>
there is none. but if we have python lists, the slimmest way to write them to hdf5 files is h5py, not pandas. we don't need all the extra features.
<sb0>
for n-dimensional arrays, sync_struct can handle nested lists
<rjo>
more precisely: i assume that the data will not be more than 1M values if flattened. and i would guess sync_struct is ok for that.
<sb0>
also: only the real-time results will go through sync_struct.
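Conceptually, something like the following (a sketch only, not the actual sync_struct API; the import path and mutation semantics are assumptions):

    from artiq.protocols.sync_struct import Notifier  # module path assumed

    # nested lists stand in for n-dim arrays and pass through unchanged
    realtime_results = Notifier({"scan": [[0, 12], [1, 7]]})
    realtime_results["scan"].append([2, 19])  # mutation forwarded live to GUI subscribers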
FabM has joined #m-labs
<rjo>
even currently, i would guess that _result_dict_to_hdf5() would be shorter with pandas.
<sb0>
the non-realtime ones are encapsulated in Notifiers as well, only to have the same API for all of them, regardless of what the get_realtime_result() function returns
<rjo>
if you don't like pandas, then let's look at pytables.
<rjo>
because that is the layer under pandas and is as slim as h5py.
<sb0>
what pandas or pytables feature better solves the problem of writing python lists to a hdf5 file?
<rjo>
no. that is orthogonal. they prefer numpy arrays.
<rjo>
but hdf5 can become complicated and i have seen valid hdf5 files (generated with h5py) which were not readable by pytables and thus pandas.
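For comparison, a sketch of the pandas route over the same hypothetical dict (one line, but it goes through pytables underneath and wants equal-length columns):

    import pandas as pd

    results = {"timestamps": [0.0, 1.5, 3.0], "photon_counts": [12, 7, 19]}
    pd.DataFrame(results).to_hdf("results.h5", key="results")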
<rjo>
re complicated experiments:
<rjo>
one aspect of complicated is big. i will ask somebody who has been doing "big" experiments to do that. basically a large table of gates.
<sb0>
what's a table of gates?
<rjo>
a long specifically designed list of gates to test gate imperfections
<sb0>
did you send it already, or did they do it spontaneously?
<rjo>
spontaneous i presume
<rjo>
yes. a few thousand gates so far.
<sb0>
how much information goes into one list entry?
<sb0>
ie how long is the description of one gate operation?
<rjo>
if they are smart, a gate can be encoded in about one byte.
<rjo>
but that is precisely the question: how readable is the code, how much overhead does the encoding add at compilation, and does it collide or play well with function inlining
<sb0>
ok. even with the current inefficient implementation of list constants, that shouldn't be slow...
<sb0>
(right now it alloca's the list and does a bunch of stores from immediates)
<sb0>
the inline stuff might be a source of issues, yes
<rjo>
it would be nice if that list could end up in rodata at the end.
<sb0>
that's feasible, but requires detecting if the list is indeed read-only, and processing rodata relocations in the linker
<sb0>
and if not read-only, alloca it + memcpy from rodata
<rjo>
yes.
<rjo>
is there no bss init code currently?
<sb0>
there is no bss at all
<sb0>
everything is done with alloca
<rjo>
let's see whether the current implementation is good enough for a few thousand gates.
<sb0>
and lists with all elements having the same value are initialized with a small loop. which you could call a bss init code...
<rjo>
for override mode, the non-rt cpu (let's call it the comms-cpu) circumvents the rtio-cpu?
<rjo>
are they masters on the same wishbone bus?
<sb0>
i.e. if you declare the list as [<value> for _ in range(<len>)], it does the init loop. [a, b, c, d, ...] does a series of stores from immediate values.
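For illustration, the two kernel-side list forms being contrasted (a sketch; the lowering described happens in the compiler, not in user code):

    n = 1000
    table_a = [0 for _ in range(n)]  # all elements equal: lowered to a small init loop
    table_b = [3, 1, 4, 1, 5, 9]     # distinct values: lowered to a series of stores from immediates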
<sb0>
the main rtio bus will be connected directly to the rtio-cpu. I'm planning to add another concurrent debug bus on the RTIO core.
<rjo>
the first shot will be a copy-pasted list [a,b,c...]. then later, they might figure out that it is smart to generate the list algorithmically.
<sb0>
so using the debug features won't cause underflows
<sb0>
you can generate the list algorithmically and put it into an object attribute or pass it as parameter to a kernel
<sb0>
the inline transform deals with that, and will turn it into x = [a,b,c...] in the AST
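A sketch of that algorithmic route (the one-byte gate encoding and all names are hypothetical):

    # hypothetical encoding: high nibble = gate type, low nibble = target qubit
    def encode(gate_type, qubit):
        return (gate_type << 4) | qubit

    class GateSequence:
        def __init__(self):
            # generated on the host; a few thousand entries
            self.gates = [encode(g % 4, g % 3) for g in range(5000)]

        def run(self):  # would be a kernel in the real system
            # the inline transform turns self.gates into a literal x = [a, b, c, ...] in the AST
            for g in self.gates:
                gate_type, qubit = g >> 4, g & 0xF
                # ... drive the corresponding RTIO channels ...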
<rjo>
i will leave it to the quantum ii guys to figure that out.
<rjo>
hmm. i can see the rtio-cpu stalling the comms-cpu through the debug bus...
<sb0>
how so?
<rjo>
what kind of debug bus are we talking about?
<sb0>
the debug bus would be an independent bus that allows, through polling: reading the current state of the rtio outputs, setting one output to a fixed value (GPIO-style, non-RT), and similar things for channels that have a DDS driver in them instead of a TTL one
<rjo>
_real_ debug bus like boundary scan?
<sb0>
the two CPUs won't interfere with each other
<sb0>
yes, something like that
<rjo>
or just wishbone master and slave?
<sb0>
well it doesn't matter if it's serial or parallel
<rjo>
but a bus that accesses the registers and the address space of the rtio-core.
<sb0>
no, it accesses independent debug registers
<sb0>
that can override the behavior from the main register on a per-channel basis
<rjo>
then how does the comms-cpu abort a tight loop in the rtio-cpu?
<sb0>
and retrieve the last bits of info that have been sent to each channel via the main registers
<sb0>
easy. the rtio-cpu reset line will be a gpio of the comm-cpu.
<rjo>
ok. the comms-cpu preps the code in some shared address space, deasserts resets, handles requests from rtio-cpu through some kind of mailbox?
<sb0>
yes
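A conceptual sketch of that sequence from the comm-cpu side (all names hypothetical; the real firmware would be C):

    KERNEL_LOAD_ADDR = 0x40020000  # hypothetical shared-SDRAM region

    def run_kernel(hal, kernel_binary):
        # hal: hypothetical hardware-access helper (GPIO, memory, mailbox)
        hal.assert_reset("rtio_cpu")                       # hold the rtio-cpu in reset via its GPIO line
        hal.write_memory(KERNEL_LOAD_ADDR, kernel_binary)  # prep the code in the shared address space
        hal.deassert_reset("rtio_cpu")                     # rtio-cpu boots and runs the kernel
        while True:
            msg = hal.mailbox_receive()                    # poll the mailbox for requests
            if msg.kind == "finished":
                return
            hal.handle_request(msg)                        # e.g. forward an RPC to the host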
<sb0>
the only way the two CPUs may interfere real-time wise is when accessing the SDRAM
<rjo>
and accesses the gpio and dds rtios through some bypass bus.
<sb0>
yes
<rjo>
ok. so they can share an address space and even the wishbone bus to the rtio devices.
<sb0>
they'd share a restricted SDRAM address space and the mailbox, that's all
<sb0>
this way even a miscompiled kernel won't crash the comm-cpu
<rjo>
are the gpio rtios wishbone slaves on two busses then? on the rtio-cpu and on the comms-cpu for the override registers?
<sb0>
what's a "gpio rtio"?
<rjo>
the gateware.
<sb0>
the rtio-cpu doesn't access the debug registers
<rjo>
but they are in the same address space as the fifo registers?
<sb0>
the rtio core has two wishbone slave interfaces: one with the main control registers (to push events etc.) connected to the rtio-cpu, the other one with the debug registers connected to the comm-cpu
<rjo>
ok. that's what i asked. two slaves.
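In migen terms, a sketch of that interface split (class and signal names hypothetical; the API of the migen wishbone module of that era assumed):

    from migen.fhdl.std import Module
    from migen.bus import wishbone

    class RTIOCore(Module):
        def __init__(self):
            # main control registers (pushing events etc.), mastered by the rtio-cpu
            self.main_bus = wishbone.Interface()
            # independent debug/override registers, mastered by the comm-cpu
            self.debug_bus = wishbone.Interface()
            # ... per-channel override and readback logic would go here ...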
<rjo>
remind me. was the reason for the dual-cpu the override, or crash-containment/comms asynchronicity?
<sb0>
all 3 reasons
<sb0>
plus abort capability
mumptai has quit [Quit: Verlassend]
<rjo>
i still think for overrides and status monitoring, compile-time filtering and readback after kernel return would have been sufficient.
<sb0>
it's not easier
<rjo>
and for comms-async and abort, some timer thing and polling would have been ok.
<rjo>
ok. then i am happy.
<rjo>
do you want the passphrase for the xilinx image or do you want to generate and host your own?
<sb0>
and the dual-cpu is more straightforward and robust anyway. debugging things like intermittent stack corruption when generating an abort from an ISR is a PITA
<rjo>
the only wrinkle with that setup is that pull requests cannot be auto-built, because that could leak the secret...
<rjo>
yes. but once you have built such a mini-rtos and debugged these things, it is done.
<sb0>
I'm fine keeping it on the ethz server
<rjo>
also i could imagine some funny bugs cropping up when implementing your own SMP with shared sdram.
<sb0>
there's no S
<rjo>
even better.
<sb0>
with this architecture, there are only two funny things: 1) cache coherency when using the mailbox 2) realtime breakage due to the shared sdram bus
<rjo>
they will have independent i/d caches, right?
<rjo>
or are you talking l2?
<sb0>
the two-CPU approach makes #2 better, not worse, since the ISRs from ethernet also use sdram bandwidth *and* stop the CPU core
<sb0>
s/stop/use
<rjo>
yes. if your multi-master sdram controller is good enough, this is the perfect showcase.
<sb0>
m1 does more extreme sdram multi-mastering than that, with cpu+framebuffer+2 graphics acceleration cores+video input
<rjo>
the data rate might be higher, but the random access-ness of two cpus stomping across the address space is a different kind of fun.
<rjo>
the m1 setup looks like your usual well constrained and organized DMA machinery.
<sb0>
well, if we run into severe sdram access conflict problems, considering the amount of on-chip BRAM, there's an easy exit route...
<sb0>
but I guess that's unlikely anyway
<rjo>
do you want to give me git access to m-labs/artiq? i promise to keep out of the master branch. but i can then keep my branches auto-building and keep managing the auto-builder.
<sb0>
the comm-cpu firmware would definitely fit in BRAM. we can even XIP+cache from the flash if it doesn't.
<rjo>
yeah. do you think we can keep both implementations (the single-cpu for ppro, and the big one for bigger fpgas) alive?
<sb0>
yeah
<rjo>
ok. that's nice.
<rjo>
ppro is still a good starter-kit for artiq.
<sb0>
it just takes a bit of code organization to share what can be shared (linker, rtio syscalls, etc.)
<sb0>
then there will be coredevice.comm_serial and coredevice.comm_ethernet on the PC side
<sb0>
rjo, just modify the travis stuff directly in master
<sb0>
I don't want to mess with branches
<sb0>
what exactly is the autobuilder about? i.e. where does it compile?
Bertl is now known as Bertl_zZ
<rjo>
sb0: it compiles gateware, bios, runtime, on travis-ci
<rjo>
runs the tests
<sb0>
rjo, yes, they have independent i/d caches. no l2 (or a shared l2, which we then wouldn't have to deal with). icache is cleared on cpu reset; dcache has to be cleared or bypassed when transferring mailbox messages.
<rjo>
reports coverage results and build results.
<sb0>
ah. do you need it for ci to work?
<rjo>
the mailbox would be in a non-cached part of the address space, i presume.
<rjo>
need what?
<sb0>
the autobuild script
<sb0>
you can bypass the dcache on a per-access basis by setting bit 31 of the address
<rjo>
that is what i meant.
<sb0>
but if the message is large, better clear the cache and leave it enabled to take advantage of bursts to fetch from sdram
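A sketch of the per-access bypass (read_word and MAILBOX_ADDR are hypothetical):

    UNCACHED_ALIAS = 1 << 31  # bit 31 selects the uncached alias of the same location

    def uncached(addr):
        return addr | UNCACHED_ALIAS

    # small mailbox message: read through the uncached alias, skipping the dcache
    msg = read_word(uncached(MAILBOX_ADDR))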
<rjo>
do i need the autobuild script for ci to work? < don't understand that.
<sb0>
do you need those files, hooks/post-receive and requirements.txt, which are currently not merged, for travis to work?
<rjo>
i think there are no hooks or requirements.txt anymore.
<mog>
sorry for the stupid question again, sb0. i built binutils and gcc, and when i try to make something i get: fatal error: stdint.h: No such file or directory, #include_next <stdint.h>
<GitHub22>
[artiq] sbourdeauducq pushed 3 new commits to master: http://git.io/bEku
<sb0>
rjo, maybe we can skip the bitstream build and just check that the verilog generation completes?
<GitHub44>
[artiq] sbourdeauducq pushed 1 new commit to master: http://git.io/bumm
<GitHub44>
artiq/master 2f06574 Sebastien Bourdeauducq: ddb: controller support
kugg has joined #m-labs
balrog has quit [Ping timeout: 250 seconds]
balrog has joined #m-labs
Bertl_zZ is now known as Bertl
bliss-sid_ has joined #m-labs
bliss-sid_ has quit [Quit: Page closed]
mumptai has joined #m-labs
sturmflut has joined #m-labs
<sturmflut>
sb0: I think there might be a second problem with the flash in my tablet. Using Ubuntu kernel 3.16.0-23-generic it is quite stable, but at some point the filesystem tries to write to some of the sectors at the end of the storage, which fails. I have seen this before and the sector numbers seem to be the same in each case.
<sturmflut>
sb0: I'll do a badblocks check and enable read-write this time
<sturmflut>
sb0: I only did a read-only badblocks check until now
Bertl is now known as Bertl_zZ
sh4rm4 has joined #m-labs
sh4rm4 has quit [Remote host closed the connection]