<sb0>
_florent_, why not import target.xxx in those external designs?
<_florent_>
that's what I wanted to do
<_florent_>
but with sys.path.insert(1, os.path.abspath(args.external)) it seems you cannot do that
<_florent_>
with sys.path.insert(0, os.path.abspath(args.external)) it was ok
<sb0>
?
<_florent_>
do you remember why you used sys.path.insert(1, os.path.abspath(args.external)) and not sys.path.insert(0, os.path.abspath(args.external))?
<sb0>
if you really have to commit hacks/workarounds into misoc, there should be comments about them
<sb0>
0 should be fine.
<_florent_>
ok, then I'm going to use that
<_florent_>
thanks
<sb0>
I guess that when you use 1, it finds the misoc "targets" directory, not the external one
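A minimal sketch of the path-ordering point above, assuming a hypothetical external directory and using only the standard library (this is not the actual make.py code): inserting at index 0 puts the external directory ahead of misoc's own location on sys.path, so a "targets" package there is found first, whereas at index 1 whatever already sits at position 0 can still shadow it.

    import importlib.util
    import os
    import sys

    # Stand-in for args.external in make.py (hypothetical path).
    external = os.path.abspath("external_designs")

    # Index 0: searched before misoc's own directory, so an external
    # "targets" package shadows the built-in one. With index 1, the entry
    # already at position 0 (e.g. the script's directory) wins instead.
    sys.path.insert(0, external)

    spec = importlib.util.find_spec("targets")
    print("targets resolves to:", spec.origin if spec else "not found")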
<sb0>
I think that eventually make.py should die and targets should invoke the build process themselves...
<sb0>
which will make it a bit easier to have misoc as a regular python lib
<_florent_>
yes, it's difficult to have a generic make.py
<GitHub170>
[misoc] enjoy-digital force-pushed master from 9b64eb2 to 6c13879: http://git.io/LjONPA
<GitHub170>
misoc/master 6c13879 Florent Kermarrec: make.py: use sys.path.insert(0...) to allow external designs to have specific targets derived from a base target
ysionneau has quit [*.net *.split]
folkert has quit [*.net *.split]
felix_ has quit [*.net *.split]
MiW has quit [*.net *.split]
ysionneau has joined #m-labs
felix_ has joined #m-labs
MiW has joined #m-labs
<sb0>
_florent_, does artiq still compile after this change?
<sb0>
and artiq is actually a good example of an "external design with specific targets derived from a base target" that worked before your commit ...
<_florent_>
I tested with artiq and it seems to be fine
<sb0>
other controller parameters such as device S/N are simply a controller-specific command line argument
<GitHub152>
[artiq] whitequark force-pushed new-py2llvm from 7687dae to ebe243f: http://git.io/vmI6O
<GitHub152>
artiq/new-py2llvm ebe243f whitequark: Add printing of SSA functions.
<cr1901_modern>
whitequark: Since, AFAIK, Python doesn't have an official grammar, how do you know when your parser is correct/will accept all valid Python programs?
tija has quit [Quit: Connection closed for inactivity]
<whitequark>
it has
<cr1901_modern>
Oh? Is there a parse.yy somewhere like Ruby has?
<cr1901_modern>
Oh, well if that's the case, I guess verifying that your code is correct is as "simple" as using Coq and/or throwing a bunch of test programs at it lol
ylamarre has quit [Quit: ylamarre]
<whitequark>
I don't verify. my grammar is directly based on the reference one
<whitequark>
the places where it does differ, of which there are like two, are checked manually
<cr1901_modern>
Do I understand this correctly? Pyparser- converts to a full/abstract syntax tree. py2llvm- does the context-sensitive semantic analysis, and converts the tree to LLVM-IR
<whitequark>
yes
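As an analogy for that split, using only the standard library ast module rather than the actual pyparser/py2llvm APIs: parsing produces a syntax tree that carries no type information, and semantic analysis / lowering to LLVM IR are later passes that walk that tree.

    import ast

    # Parsing: pure syntax, no types, no code generation yet.
    tree = ast.parse("x = 1 + 2")
    print(ast.dump(tree))

    # A later, separate pass would analyze and lower nodes like this one;
    # here we just locate them to illustrate the two-phase structure.
    for node in ast.walk(tree):
        if isinstance(node, ast.BinOp):
            print("binary operation found:", type(node.op).__name__)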
<cr1901_modern>
I didn't know until recently that the division between parsing and semantic analysis is somewhat arbitrary, and that most new langs try to be yacc-friendly
<whitequark>
it's not, if your language is context-free
<whitequark>
C/C++ are not
<cr1901_modern>
Well, I was certainly wrong to claim that yacc "can't parse a grammar" for any real lang, b/c I was thinking in terms of C/C++.
<whitequark>
for that matter, clang has semantic analysis as a separate step
<cr1901_modern>
Idk if I understood the wikipedia page for the typedef-identifier problem, but Clang redefines the C grammar to MAKE it context-free?
<whitequark>
it produces ambiguous AST nodes
<cr1901_modern>
In other words, it defers the parsing, saying "I can't handle this right now, but the tree structure won't change, just the node contents, so handle it later"?
<whitequark>
it defers construction of an unambiguous AST
balrog has quit [Ping timeout: 240 seconds]
<rjo>
sb0: yes. in that case the unittests for the controllers should actually start the controller process and talk to the devices through that process.
balrog has joined #m-labs
ylamarre has joined #m-labs
<GitHub43>
[artiq] sbourdeauducq pushed 4 new commits to master: http://git.io/vmL6c
<GitHub43>
artiq/master 8b02b58 Sebastien Bourdeauducq: sync_struct/Notifier: do not pass root param to publish
<whitequark>
sb0: rational math would make it slow?..
<sb0>
yes, because timestamp() returned a rational that the compiler did not optimize well
<whitequark>
that sounds fixable, but I guess not necessary anyway
<sb0>
there's another reason not to use rationals: they overflow
<sb0>
and we want the time counter continuously running, so timestamp() may return a very large value
<whitequark>
I see
<rjo>
again a request: i'd like to push my openocd patches through their queue. _anyone_ willing to do a bit of simple code review? http://openocd.zylin.com/#/q/owner:jordens%2540gmail.com+status:open
<sb0>
(that's also why there is no timestamp() that returns a float in the current API, because it may cause losses of precision)
<whitequark>
2^52 seconds?
<whitequark>
er, milliseconds, I guess
<sb0>
nano
<whitequark>
oh
<whitequark>
oh, right, quantum physics
<sb0>
or even less, if we ever run the DDSes at 3.5GHz
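Rough arithmetic behind the precision concern (taking 2**53 as the range over which a float64 still represents every integer exactly; whitequark's 2**52 is the same order of magnitude, which is all that matters here):

    # A free-running nanosecond counter exceeds exact float64 range after 2**53 ns.
    limit = 2 ** 53
    print(limit / 1e9 / 86400, "days at 1 ns resolution")       # ~104 days

    # At a 3.5 GHz DDS clock the counter ticks ~3.5x faster, so exactness
    # is lost correspondingly sooner.
    print(limit / 3.5e9 / 86400, "days at 3.5 GHz resolution")  # ~30 days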
<cr1901_modern>
Are the timestamps generated using a "number of clocks elapsed since power on" register as a reference?
<sb0>
yes
<sb0>
(since recently)
<cr1901_modern>
What was used beforehand?
<sb0>
at experiment startup
<cr1901_modern>
Ahh, an external reference (brain's not working right now).
<sb0>
no, the time counter was reset at the start of every experiment
sb0_ has joined #m-labs
sb0 has quit [Read error: Connection reset by peer]
<sb0_>
ysionneau, any progress with the SERDES TTL?
<rjo>
it would be sweet if we could set the counter relative to epoch (best effort ~ms). then we get absolute timestamping for free.
<cr1901_modern>
rjo: Before I forget, thanks for the DAC code. I just relearned how Delta-Sigma modulation worked yesterday; this saves me some time for what I want to do when my chips come in.
<rjo>
yes. it's nice. and you can clock it at >200MHz IIRC
<cr1901_modern>
Won't need to go that high most likely. Although I'm most likely gonna have to sync it to an external clock :/
<rjo>
with the second order DS you need to be careful about the validity range but it improves the spectrum.
<rjo>
but the faster you clock the lower the noise after the lowpass.
<rjo>
even if you clock it externally, you can multiply the clock up a few times.
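A minimal software model of the first-order case being discussed (a sketch of the principle only, not the actual FPGA/Migen DAC code): the accumulator's carry bit is the 1-bit output, and low-pass filtering that bitstream recovers the input value, with higher clock rates pushing the quantization noise further above the filter cutoff.

    def first_order_delta_sigma(samples, bits=8):
        """Model a first-order delta-sigma modulator for unsigned inputs."""
        acc = 0
        full_scale = 1 << bits
        out = []
        for s in samples:              # s in range(0, 2**bits)
            acc += s
            carry = acc >= full_scale  # carry-out is the 1-bit DAC output
            if carry:
                acc -= full_scale
            out.append(int(carry))
        return out

    # A constant half-scale input produces a ~50% duty-cycle bitstream.
    bitstream = first_order_delta_sigma([128] * 16)
    print(bitstream, "average:", sum(bitstream) / len(bitstream))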
<cr1901_modern>
Ahh, right, my FPGA has some DCMs
<cr1901_modern>
I unfortunately got interested in old arcade FM synths again, so I bought one in IC form. It requires a special DAC. >>
<cr1901_modern>
I decided to not only implement the PC->synth interface on the FPGA, but the DAC as well, since I know the serial format that gets fed into the DAC.
<sb0_>
rjo, i would make Platform a module-level class that takes ios as arguments, and instantiate it in make_platform
<sb0_>
by "ios", I mean e.g. the pins variable
<rjo>
ah. bscan_spi. then you can not easily inherit from the different toolchains xilinx/altera. if that comes up later.
<rjo>
factory functions!
<rjo>
but you could have different base platforms again.
<sb0_>
yes, factory functions can get messy
<sb0_>
things like bitgen_opts are not portable anyway
<rjo>
true.
<sb0_>
so you'd need to redefine a large part of the class for each toolchain anyway...
<sb0_>
Misc("PULLUP") is also non portable
<sb0_>
though it could be made portable. most FPGAs have the pullup option, I believe
<rjo>
ack. if you could post the comments and add +1 all around to assure the openocd overlords that this is good code, that would be awesome ;)
<whitequark>
rjo: i don't understand how that code review process works
<whitequark>
a contribution from someone they don't know should be independently confirmed by someone else they don't know and who is not necessarily familiar with openocd codebase? wat?
<rjo>
you push patches, people brabble about them, you change, rebase etc, some +1, then if somebody of the core does +2, it gets cherry-picked.
<rjo>
the +2 comes from whatever they consider the core team i guess.
<whitequark>
then ping the core team?
<rjo>
i don't think that accelerates things. they have that on the mailing list and on their dashboard.
<whitequark>
I don't see how review by e.g. me adds any value
<rjo>
i think what helps is that if they have no idea about fpgas (e.g.) and somebody reviews it for them, that lends a bit of credibility to the patch.
<cr1901_modern>
So basically, openocd is a meritocracy?
<rjo>
well yes. plus a hint of ochlocracy
<whitequark>
hahaha
<cr1901_modern>
"hint" lol (I just learned a new word). OpenOCD has Pi support built-in. I was highly considering using it. Not anymore if that's how their submission process works.
<sb0_>
pi support?
<sb0_>
bitbang on the parallel port?
<sb0_>
of the broadcom devboard?
<cr1901_modern>
yes, essentially XD
<cr1901_modern>
(is "broadcom devboard" your new pejorative?)
<sb0_>
just calling it what it is.
<cr1901_modern>
But yes, bitbang on the parallel port is built-in, only on Linux unfortunately, and I run BSD on my Pi b/c reasons. >>
<sb0_>
even though they are now famous for their open hardware work, broadcom has still not answered my request for the docs of the gps/bt/nfc chip inside my tablet
<cr1901_modern>
Since BSD doesn't enable /dev/mem by default, I'd have to alter the Pi support and/or add GPIO ioctl() support
<cr1901_modern>
I found this room from whitequark's article on the Pi. I didn't realize how "closed" Broadcom really is until I started searching around.
<cr1901_modern>
No pinouts for the main CPU, no schematic, closed gfx, difficult-to-read manual (guess that's not unique)
<sb0_>
video decoder with hardware-enforced license keys... lol
<whitequark>
sb0_: don't forget the CPU not powerful enough to run some kind of decent decoder
<cr1901_modern>
Tbh, the reason I was looking up info on the CPU was b/c I was curious how much it factored into the final cost
<cr1901_modern>
I can't even get that information!
<whitequark>
well, you can send an RFQ to broadcom
<whitequark>
tell them you have about 100,000 devices to make
<cr1901_modern>
They won't respond to me.
<whitequark>
well, if you're being honest, of course
<cr1901_modern>
That's a blatant lie (though I guess they don't know that)
<whitequark>
duh
<cr1901_modern>
Sorry, I'm a bad liar :P
<whitequark>
also works for samples and occasionally docs
<cr1901_modern>
I've basically come to the conclusion that without Eben Upton's (or similar) connections, a $35 SBC wouldn't be possible.
<cr1901_modern>
(I'll keep that in mind when asking companies for quotes: found my own company solely for that)
<whitequark>
obviously
<sb0_>
cr1901_modern, you can also find lots of cheap SBCs in Shenzhen
<whitequark>
broadcom with broadcom hat is not going to demand full cost from broadcom with rpihat
<cr1901_modern>
sb0: I recall there's a market for old router internals repurposed as SBCs?
<sb0_>
I guess they would make decent rpi competitors if one could get the western idiots to talk about them
<cr1901_modern>
Well, it wasn't obvious to me XD. I was (am) naive, and was hoping the situation wasn't as depressing as it is
<cr1901_modern>
Tangent: I wish ppl would report more on the fact that Chip SBC kickstarter is extremely misleading.
<cr1901_modern>
(Well, at least it's open source. But it's not $9 like the tagline claims)
<ysionneau>
sb0_: I started the refactor, but then I stopped to have a look at the llvm_or1k issue; that is sorted out now (I'll push it tomorrow), so I will resume the serdes refactor tomorrow
ylamarre has quit [Ping timeout: 244 seconds]
<ysionneau>
(and I started to work on addressing the points of the python patch review from Victor Stinner)
ylamarre has joined #m-labs
<GitHub85>
[artiq] fallen pushed 1 new commit to master: http://git.io/vmtKe
<GitHub85>
artiq/master 90ba9f7 Yann Sionneau: llvmlite: rename our package to be llvmlite_or1k to avoid collision with llvmlite package needed for numba
<whitequark>
ysionneau: can't you just build host along with or1k?
<whitequark>
it does not feel right to build two copies of LLVM where one would work
<ysionneau>
hummm, and then make our own (or1k-enabled) copy satisfy, for instance, numba's dependency on llvmlite?
<ysionneau>
that would mean naming our own package llvmlite also, but telling artiq's meta.yaml that we need our own particular llvmlite package from our own anaconda repo
<ysionneau>
(binstar has been renamed anaconda)
<ysionneau>
don't know if one can put the anaconda repo in the dependency
<whitequark>
ysionneau: yes, that was the idea
<whitequark>
I mean, if there's a conflict, that ought to work
<whitequark>
and you should see the 8GB webkit one
<ysionneau>
so, I think I will just host those packages on my dedicated server, and not on binstar, to save some space
<ysionneau>
since no user should need them
<whitequark>
llvmlite seems... odd
<ysionneau>
(I'm talking about the llvmdev)
<whitequark>
-64 and -32 clearly use different flags
<whitequark>
somewhere
<whitequark>
actually I doubt the -64 one even works
<whitequark>
no part of LLVM will fit into 200kb
ylamarre1 has joined #m-labs
<ysionneau>
maybe I did something wrong here
ylamarre has quit [Read error: No route to host]
<whitequark>
yes
<ysionneau>
ah I think I built it against the old llvmdev package (with shared libs)
<whitequark>
you built LLVM with shared libs
<whitequark>
NEEDED libLLVMCore.so
<ysionneau>
yes
<ysionneau>
let's rebuild it with the new llvmdev package
<ysionneau>
my computer is dying manipulating this 1.5 GB file...
<whitequark>
the last person I heard of who built webkit with LTO had to use 64GB of RAM, and it took ~eight hours
<ysionneau>
wow ...
<whitequark>
you can't link it at all on a 32-bit machine because the symbols don't fit into the address space
<ysionneau>
I'm building on a Core2Duo, 4 GB of RAM ...
<ysionneau>
the linker actually got killed once by the OOM killer because I had firefox running and a make -j4 of llvm ...
mumptai has quit [Ping timeout: 264 seconds]
<ysionneau>
I had to rebuild everything from scratch (conda \o/)
<whitequark>
oh yeah, that eight hours? it was done on an overclocked 12-core i7
<ysionneau>
omg
<ysionneau>
argh, the build is still picking up some shared libs I must have somewhere
<ysionneau>
calling it a day, will re-generate that tomorrow
<rjo>
rumor has it that the gentoo users trying to build their entire system with -O99 and LTO are still out there. but there have been no life signs from them in years.
<whitequark>
-O99 doesn't make sense. the largest level is -O4, which is just -O3 with LTO, which is just -O2 with vectorize and LTO
<whitequark>
the ways different compilers handle -On with n>4 can be quite bizarre
<rjo>
yes. that was one of the jokes in that statement.
<whitequark>
some clamp it at 4, some just use the first symbol, some ignore
<whitequark>
I've seen code in the wild which actually broke because it specified -O10
<whitequark>
and some compiler I used treated that as -O1
<cr1901_modern>
I'm beginning to think computers were a mistake with those webkit tweets
<rjo>
libreoffice builds for debian take ~15h and 20GB disk. that is just with -O2.
<whitequark>
I remember being on gentoo and building libreoffice, yeah
<whitequark>
~8h on atmel 300+ but that was circa 2009
<whitequark>
*3000+
<whitequark>
er
<whitequark>
athlon
<rjo>
avr!
<whitequark>
you may laugh, but someone ran linux on an atmega328 or similar
<whitequark>
by implementing an ARM emulator on it, naturally