<unixb0y>
Does someone here actually know what the progress on the SymbiFlow project is like, overall? 10%? 50%? 90%?
<unixb0y>
Would be great to know!
<unixb0y>
Stuck with Vivado for my Nexys 4 DDR
unixb0y has quit []
m_t has quit [Quit: Leaving]
genii has quit [Quit: BEERTIME !]
azonenberg_work has quit [Ping timeout: 260 seconds]
<digshadow>
Ultrasauce: we got it mostly sorted out, although I think there was a minor error at the end of the commit
<mithro>
btw -- I don't check this channel unless people say my name :-)
<digshadow>
then we'll need a codename when we are talking about you :)
dingbat has quit [Quit: Connection closed for inactivity]
soylentyellow has quit [Read error: Connection reset by peer]
soylentyellow has joined ##openfpga
azonenberg_work has joined ##openfpga
Dolu has quit [Ping timeout: 256 seconds]
<qu1j0t3>
m1thro
Dolu has joined ##openfpga
Dolu has quit [Ping timeout: 260 seconds]
azonenberg_work has quit [Ping timeout: 240 seconds]
soylentyellow has quit [Ping timeout: 276 seconds]
Dolu has joined ##openfpga
Xark has quit [Ping timeout: 252 seconds]
Xark has joined ##openfpga
digshadow-w has joined ##openfpga
soylentyellow has joined ##openfpga
pie__ has quit [Ping timeout: 248 seconds]
digshadow has quit [Ping timeout: 255 seconds]
GenTooMan has quit [Quit: Leaving]
rohitksingh_work has joined ##openfpga
digshadow has joined ##openfpga
digshadow has quit [Ping timeout: 268 seconds]
Bike has quit [Quit: Lost terminal]
digshadow has joined ##openfpga
Dolu has quit [Ping timeout: 240 seconds]
soylentyellow has quit [Ping timeout: 240 seconds]
Dolu has joined ##openfpga
tinyfpga has joined ##openfpga
<tinyfpga>
Howdy
soylentyellow has joined ##openfpga
<digshadow>
hey tinyfpga
<digshadow>
I heard you might stop by to chat with mithro, kc8apf, and me in the near future
<tinyfpga>
yup :)
<tinyfpga>
but maybe in meat-space too
<tinyfpga>
happen to be heading their way on Thursday
<digshadow>
very nice
<mithro>
hey tinyfpga
<azonenberg>
o/
pie__ has joined ##openfpga
<tinyfpga>
Howdy!
<tinyfpga>
\o/
<tinyfpga>
mithro, are you actively working on project x-ray? I’m trying to figure out how it documents the FPGA routing fabric
<mithro>
tinyfpga: Kinda - I'm mostly working on getting the x-ray stuff into Verilog to Routing
<tinyfpga>
I’m working on documenting the ECP5 bitstream. The primitives are relatively easy since they can be instantiated directly. But I don’t yet understand the fuzzer strategy in project x-ray for the routing
<tinyfpga>
Will you be around on Thursday to meet up?
<mithro>
tinyfpga: Yes
<tinyfpga>
Great
<mithro>
tinyfpga: I know more about how the output database looks rather than how the fuzzers created it
<tinyfpga>
Ok
<tinyfpga>
I also want to generate an output database in the same format
<tinyfpga>
I love the HTML tables
<tinyfpga>
Made it really easy to understand what’s going on
<mithro>
tinyfpga: The general idea is that they produce a lot of designs, ask the tool how it routed them, and then look for correlations between the bit patterns and the routes
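A minimal sketch of that correlation idea, making no claims about prjxray's actual fuzzer code: the designs, feature names, and bit coordinates below are made up purely to show the set-intersection step.

```python
# Toy illustration of the fuzzer correlation approach described above.
# Each "specimen" is one compiled design: the routing features the vendor
# tool reported using, plus the bits set in the resulting bitstream.
# All names and values here are hypothetical.

specimens = [
    # (features used in this design, bits set in its bitstream)
    ({"span4_h_r_1->local_g0_2"}, {(12, 3), (12, 5)}),
    ({"span4_h_r_1->local_g0_2", "span12_v_b_0->local_g1_4"}, {(12, 3), (12, 5), (14, 7)}),
    ({"span12_v_b_0->local_g1_4"}, {(14, 7)}),
]

all_features = set().union(*(f for f, _ in specimens))
all_bits = set().union(*(b for _, b in specimens))

candidates = {}
for feature in all_features:
    # Keep only bits that are set in every design using the feature
    # and clear in every design that does not use it.
    always_set = set(all_bits)
    never_set = set(all_bits)
    for features, bits in specimens:
        if feature in features:
            always_set &= bits
        else:
            never_set -= bits
    candidates[feature] = always_set & never_set

for feature, bits in sorted(candidates.items()):
    print(feature, "->", sorted(bits))
```

With enough specimens the ambiguous bits shake out and each routing mux setting maps to a small, stable bit set, which is what the HTML tables then visualize.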
azonenberg_work has joined ##openfpga
<tinyfpga>
In fact it made the routing muxes really obvious with the group-based highlighting
<tinyfpga>
gotchya
<tinyfpga>
I’ll have to see if the Lattice tools provide similar capabilities
<mithro>
tinyfpga: the "brains" behind prjxray is really Clifford -- he can probably give you some potential ideas on how to figure that out
<mithro>
tinyfpga: When you get to the stage of wanting to do PnR and stuff, I can help you get it into a format that Verilog to Routing can use
<mithro>
tinyfpga: anyway, I'm heading out
<cr1901_modern>
Everybody is in California...
<digshadow>
tinyfpga: are you posting your docs?
<tinyfpga>
later mithro
<tinyfpga>
digshadow: I will be, just started a couple days ago though :)
pie__ has quit [Ping timeout: 255 seconds]
<digshadow>
cool
<rqou>
hey, does icestorm not support ice40 LM-series?
<rqou>
also, wtf is ultra/ultralite?
<rqou>
the product lines are so asymmetric too
<rqou>
digshadow: if i buy a whole pile of ice40 parts and do the wet lab processing, can you do high-res scans?
pie__ has joined ##openfpga
<digshadow>
rqou: sure
* digshadow
doing wet processing right now
<rqou>
O_o
<rqou>
i thought that was on hold forever?
<digshadow>
no
<digshadow>
I just don't like doing it :P
<digshadow>
also while I was moving I couldn't
<digshadow>
now that I have a house it's not as big of a deal
<rqou>
yeah, wtf is ice40 LM?
pie__ has quit [Ping timeout: 248 seconds]
soylentyellow has quit [Read error: Connection reset by peer]
<rqou>
digshadow: do wlcsp parts need any special "decap" procedures?
<digshadow>
wafer scale?
<digshadow>
Hmm don't do a lot of those
<digshadow>
but nothing comes to mind
<digshadow>
er, chip scale
<rqou>
apparently they have some kind of polymer dielectric layer
<azonenberg>
its just polyimide afaik
<digshadow>
you are using sulfuric acid?
<azonenberg>
for the fanout
<rqou>
i also have RFNA now
<rqou>
hrm, afaict wlcsp dielectric polymers are various organic materials (including possibly polyimide) so i think RFNA+sulfuric should remove them
<rqou>
eduardo__: ok, i have purchased those parts
<eduardo__>
rqou. Cool. Thank you.
<rqou>
i've also purchased a small number of _all_ ice40 parts (HX/LP/LM/Ultra/UltraPlus/UltraLite) so we can get a full set of images
<azonenberg>
:D
<eduardo__>
:D Will present the die images at the Lattice training I will give in April in Italy.
<digshadow>
rqou: be aware that h2so4 or rfna by itself may be sufficient
<digshadow>
combined that mixture is extremely reactive
<digshadow>
and may explode on contact with many organics
m_t has joined ##openfpga
pie_ has joined ##openfpga
pie__ has joined ##openfpga
pie_ has quit [Read error: Connection reset by peer]
eduardo_ has joined ##openfpga
eduardo__ has quit [Ping timeout: 268 seconds]
futarisIRCcloud has quit [Quit: Connection closed for inactivity]
unixb0y has joined ##openfpga
<unixb0y>
hi whats up?
pie__ has quit [Ping timeout: 248 seconds]
Dolu has quit [Ping timeout: 256 seconds]
<unixb0y>
@digshadow you there?
xdeller has quit [Read error: Connection reset by peer]
xdeller_ has joined ##openfpga
pie__ has joined ##openfpga
pie__ has quit [Ping timeout: 256 seconds]
pie_ has joined ##openfpga
futarisIRCcloud has joined ##openfpga
rohitksingh_work has quit [Read error: Connection reset by peer]
rohitksingh has joined ##openfpga
Ultrasauce has quit [Ping timeout: 252 seconds]
<unixb0y>
What's up everybody?
<jn>
not much
<unixb0y>
ok :D
genii has joined ##openfpga
Dolu has joined ##openfpga
ym has quit [Ping timeout: 265 seconds]
ym has joined ##openfpga
m_t has quit [Quit: Leaving]
RaivisR has quit [Read error: Connection reset by peer]
<mithro>
unixb0y: morning!
<mithro>
unixb0y: Thanks for the pull request!
<mithro>
unixb0y: Merged!
<mithro>
unixb0y: Would be awesome if you could go and fix the lists in the readme too
daveshah has joined ##openfpga
<mithro>
daveshah: Evening!
<unixb0y>
Hi @mithro, that's great!
<mithro>
daveshah: So, I see you are working on finishing my ice40 for vpr stuff!
<mithro>
daveshah: That is pretty awesome
<mithro>
daveshah: How much did I explain about the current state of symbiflow-arch-defs for ice40 at 34c3?
<daveshah>
mithro: I think you went through the arch and pb_type stuff, but not so much on the rr_graph
<mithro>
daveshah: Did I go through the idea about eventually generating the pb_type stuff from the sim.v files?
<daveshah>
mithro: yes, I think so
<mithro>
daveshah: That doesn't happen yet
<unixb0y>
mithro: Which lists do you mean? Lists in general starting with dashes?
<mithro>
unixb0y: Yeah
<unixb0y>
mithro: I can do it later or tomorrow! Do you want the regular bulleted lists?
<balrog>
daveshah: you now maintain arachne-pnr or are you moving to vpr for it? :)
<unixb0y>
mithro: Oh yes now I see it on GitHub.
<daveshah>
balrog: arachne-pnr will always be kept
<daveshah>
balrog: because it will almost certainly be fastest and easiest for small designs and education, so Clifford and I will keep maintaining it
<unixb0y>
mithro: I know Markdown, but thanks anyway for always writing how to do the MD formatting :D
<daveshah>
balrog: but we also want to make vpr for ice40 as part of the general SymbiFlow project and because it should give much better output quality for timing-constrained designs
<daveshah>
balrog: I've been looking at that, it's an annoying one because it requires quite a few changes throughout arachne. But I'll try and fix it one day
<daveshah>
mithro: is it worth doing the pb_type.xml from sim.v before doing the rr_graph.xml stuff do you think?
<daveshah>
mithro: I'm still trying to get my head around what is supposed to be done in the arch, and what has to be overridden in the rr_graph
<mithro>
daveshah: So arch should have all the real tile configuration and logic
<daveshah>
mithro: ok, that makes sense. in the end we will have no routing in arch?
<mithro>
daveshah: And a "virtual" connection infrastructure which is somewhat similar to the real structure (i.e. length-4 wires, length-12 wires, etc.)
<daveshah>
mithro: ok, that makes sense
<daveshah>
mithro: we will presumably need to find a way to get that into the verilog then...
<mithro>
daveshah: We then override the virtual connection infrastructure with the real connectivity
<daveshah>
mithro: on the subject of the connections, do you know anything about how the ptc numbering is supposed to work?
<mithro>
daveshah: One part I'm struggling with is how much of the routing infrastructure should be inside the tiles and how much should be part of the "switching / routing" level
<daveshah>
mithro: yes, I can see that being something that needs to be worked out. particularly as different tile types have to work together
RaivisR has joined ##openfpga
<mithro>
daveshah: vpr comes from the original idea of each tile having a fully connected crossbar in front of the inputs to the logic
<mithro>
daveshah: They call it the "Classic Soft Logic Block Tutorial"
<daveshah>
mithro: thanks for the link
<mithro>
daveshah: It took me a while to realize, but the behaviour in the ice40 where you route from the wires onto the "logic tracks" and then onto the logic is actually very much emulating the "fully populated crossbar" found in that diagram
<daveshah>
mithro: yes, exactly, that makes sense
<daveshah>
mithro: although there are some oddities, like there being slightly too few tracks in some tricky cases (that actually break arachne-pnr)
<mithro>
daveshah: The part where it gets a bit weird is that the routing onto the "logic tracks" can *also* be used to connect the wires together -- which vpr does in a thing it calls a "switchbox"
<mithro>
daveshah: at some point we need to refactor that code to have a generic python library for doing rr_graph generation
<daveshah>
mithro: Yes, that definitely makes sense as we have more architectures
<daveshah>
mithro: And presumably ties into having more generic database formats too in the long run
<mithro>
daveshah: Maybe
<mithro>
daveshah: I haven't seen the ptc problems -- what is the best way to repro that?
<daveshah>
mithro: It happens with the iCE40 rr_graph.xml at the moment
<daveshah>
mithro: basically because the indexes aren't strictly consecutive starting from 0, because some nodes have different types/numbers of nets
<daveshah>
*not nodes, tiles
<daveshah>
mithro: and VPR only sizes the indices vector by the number of nodes
<daveshah>
mithro: so if your PTC numbers don't follow that sequence, then you exceed the bounds of the vector
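A rough sketch of the kind of check that catches this, assuming the standard rr_graph.xml node/loc layout; it is illustrative only and not VPR's actual validation logic.

```python
# Sanity check for the PTC-out-of-range problem described above:
# per location, VPR sizes its lookup vector by the number of nodes there,
# so any ptc >= that count will run off the end of the vector.
import sys
import xml.etree.ElementTree as ET
from collections import defaultdict

# SOURCE/SINK share the class ptc space, IPIN/OPIN share the pin ptc space.
PTC_SPACE = {"SOURCE": "class", "SINK": "class", "IPIN": "pin", "OPIN": "pin"}

rr_graph = ET.parse(sys.argv[1])
ptcs_at = defaultdict(list)

for node in rr_graph.find("rr_nodes"):
    space = PTC_SPACE.get(node.get("type"))
    if space is None:
        continue  # skip CHANX/CHANY for this check
    loc = node.find("loc")
    key = (loc.get("xlow"), loc.get("ylow"), space)
    ptcs_at[key].append(int(loc.get("ptc")))

for key, ptcs in sorted(ptcs_at.items()):
    if max(ptcs) >= len(ptcs):
        print("suspicious ptc numbering at", key, sorted(ptcs))
```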
<mithro>
daveshah: BTW The idea is very much to take the existing rr_graph.xml that vtr generates and patch the rr_nodes and rr_edges bits?
<daveshah>
mithro: I've just improved it from corrupting random memory and segfaulting later on to a more palatable assert failure for debugging
<mithro>
daveshah: :-P
<daveshah>
mithro: ah, that makes more sense, I didn't realise that
<daveshah>
mithro: I did feel there was too much duplication atm between the arch and the rr_graph
<mithro>
daveshah: well, eventually we want vpr to generate the correct rr_graph directly - but for now we just want to use this patching method
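A hedged sketch of that patching flow, assuming the rr_graph.xml edge format (src_node/sink_node/switch_id); the icebox_edges() helper is hypothetical and stands in for whatever produces the real device's connectivity.

```python
# Very rough sketch of the patching method described above: let VPR write
# out its own rr_graph.xml, splice in the real device's edges, then feed
# the result back with --read_rr_graph.
import xml.etree.ElementTree as ET

def patch_rr_graph(vpr_rr_graph_xml, real_edges, out_xml):
    tree = ET.parse(vpr_rr_graph_xml)
    root = tree.getroot()

    rr_edges = root.find("rr_edges")
    rr_edges.clear()  # drop VPR's generic routing edges

    # real_edges yields (src_node_id, sink_node_id, switch_id) triples
    # describing the actual chip connectivity.
    for src_id, sink_id, switch_id in real_edges:
        ET.SubElement(rr_edges, "edge", {
            "src_node": str(src_id),
            "sink_node": str(sink_id),
            "switch_id": str(switch_id),
        })

    tree.write(out_xml)

# patch_rr_graph("rr_graph.vpr.xml", icebox_edges(), "rr_graph.patched.xml")
```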
<daveshah>
mithro: that makes sense
<daveshah>
mithro: but I imagine there's a lot to do before that, I think things like the floating point fractions at the moment make that feel a bit "ugly"
<mithro>
daveshah: That all goes away with my direct connect idea
unixb0y has quit [Remote host closed the connection]
<mithro>
daveshah: btw do you have any connections with universities / students? TimVideos and the FOSSi Foundation both got into GSoC and we both would be more than happy to have students work on symbiflow / vtr / yosys projects
unixb0y has joined ##openfpga
<daveshah>
mithro: I'm at Imperial, but for whatever reason GSoC isn't that popular here, most students I know prefer to do placements at companies
<daveshah>
mithro: but if anyone asks or I think of anyone I'll point them in that direction
<mithro>
People don't really ask; you kind of have to go out and recruit them :-)
<daveshah>
mithro: Unfortunately I think it's a bit late now :( most people in my year are about to do a 6 month industrial placement (April-October)
<daveshah>
mithro: In my case with Clifford :)
unixb0y has quit [Ping timeout: 268 seconds]
<mithro>
daveshah: on the sim.v to pb_type.xml conversion - I think it is a "nice to have" / "low priority" until we can do some non-trivial pnr with vpr
<daveshah>
mithro: OK. I think first steps should be getting the pb_type.xml reasonable, then go back to getting the rr_graph working.
<mithro>
daveshah: if you want to take a crack at it, I'm more than happy for you to do it (if you're stuck with the rr_graph stuff, for example)
<mithro>
I think the pb_type files should be mostly good?
<mithro>
I know the sim.v files need work
<mithro>
Especially on the Artix-7 side
<daveshah>
mithro: I'm not sure, I think there were some issues to do with connections between the IO and logic tiles. But I was starting to think that maybe that could only be done in the rr_graphs
<mithro>
daveshah: btw can you give me some instructions to replicate where you are at with the ice40 stuff?
<daveshah>
mithro: At the moment the "make ff.disp" manages placement, but not routing, due to some missing connections still I think
<mithro>
Just want to make sure that I have the exact same state when I run some tests :-)
<daveshah>
mithro: If I use this command to load the rr_graph.xml, that's when it fails due to the PTC numbering issue
<daveshah>
make ff.disp VPR_ARGS="--read_rr_graph ~/symbiflow/symbiflow-arch-defs/utils/rr_graph.xml --route_chan_width 32" DEVICE=HX1K
<mithro>
daveshah: you should try "make ff.echo"
<mithro>
daveshah: actually one thing which would be really helpful would be being able to generate the rr_graph only for a small region of the iCE40
<mithro>
I've found that super helpful for the Artix-7
<daveshah>
mithro: That's a good idea. I've had to "upgrade" from the HX0K to the HX1K, so I could use a real device
<mithro>
Working on a 4*4 tile FPGA is much easier :-P
<daveshah>
mithro: The other option is to use the 384, which might be small enough but is also a real device...
<cr1901_modern>
There's an HX0K ._.?
<daveshah>
cr1901_modern: No, it's a fake FPGA for VPR testing
<cr1901_modern>
ahhh
<daveshah>
cr1901_modern: the smallest actual iCE40 is the LP384
<cr1901_modern>
Well an FPGA with 0 logic elements would be interesting. If not at all useful.
<daveshah>
Not useless
<daveshah>
It could be used as an IO crossbar switch, albeit not a very easily switchable one
<cr1901_modern>
you'd have to rewrite the bitstream each time to switch it, correct?
<daveshah>
yep
<sorear>
lp384 completely lacks RAMs, which could be interesting
<daveshah>
well, it simplifies the architecture for VPR if you only have 2/3 tile types (depending on whether you treat top and side io as different)
<mithro>
daveshah: Which revisions are you on and do I need to pull anything from you?
<daveshah>
mithro: Everything I've talked about is what you merged in the arch-defs
<daveshah>
mithro: In VPR I added some error checking which was merged a few minutes ago, and I'd suggest you pull to make debugging rr_graphs easier
<mithro>
okay
<mithro>
daveshah: checking now
<mithro>
daveshah: How did you generate the rr_graph file?
daveshah has quit [Read error: Connection reset by peer]
daveshah_ is now known as daveshah
<mithro>
daveshah: Just pushing a bunch of changes
<daveshah>
Brilliant :-)
user10032 has joined ##openfpga
unixb0y has joined ##openfpga
<azonenberg>
Sooo if anybody is interested
<azonenberg>
i'm going to be trying to implement vu+ support in libjtaghal shortly
<azonenberg>
Looking at the BSDLs it appears that the boundary scan register runs down the SLRs
<azonenberg>
and each SLR die is essentially daisy chained
<azonenberg>
so the IDCODE instruction for a device with three SLRs is actually going to be three copies of the single-device IDCODE instruction concatenated
<azonenberg>
and most instructions are broadcast like that
<azonenberg>
Some are unicasted to specific dies, for example CFG_IN has one instruction for each of SLR0/SLR1/SLR2
<azonenberg>
Those have some kind of nop instruction for the unselected SLRs and then the normal CFG_IN instruction for that active SLR
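A small sketch of what that looks like from software, assuming one 32-bit IDCODE per SLR shifted out back-to-back; the bit ordering (SLR closest to TDO first) and the example capture value are assumptions, not taken from the BSDL.

```python
# Slice a concatenated IDCODE DR capture from a multi-SLR device into
# per-SLR 32-bit IDCODEs and decode the standard JTAG fields.

def split_slr_idcodes(dr_bits: int, num_slrs: int):
    """Split a (num_slrs * 32)-bit DR capture into per-SLR IDCODEs."""
    return [(dr_bits >> (32 * i)) & 0xFFFFFFFF for i in range(num_slrs)]

def decode_idcode(idcode: int):
    """Standard IDCODE layout: [31:28] version, [27:12] part,
    [11:1] manufacturer, [0] marker bit (must be 1)."""
    return {
        "version": (idcode >> 28) & 0xF,
        "part": (idcode >> 12) & 0xFFFF,
        "manufacturer": (idcode >> 1) & 0x7FF,
        "marker": idcode & 1,
    }

# Made-up 96-bit capture from a hypothetical 3-SLR device:
capture = 0x14B79093_14B79093_14B79093
for n, idcode in enumerate(split_slr_idcodes(capture, 3)):
    print(f"SLR chain position {n}: {idcode:#010x}", decode_idcode(idcode))
```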
unixb0y has quit [Ping timeout: 240 seconds]
digshadow has quit [Ping timeout: 256 seconds]
unixb0y has joined ##openfpga
unixb0y has quit [Ping timeout: 248 seconds]
<azonenberg>
Interestingly enough the slice ordering for three-SLR devices seems to be {SLR1 SLR0 SLR2}
<kc8apf>
azonenberg: that all makes sense given the multi-die on interposer approach they talk about in their marketing material
<sorear>
huh, I thought they were physically larger
<rqou>
it turns out that large dies tank yields
<rqou>
i think TSVs will make for some really fun technologies (and even more fun RE challenges)
m_t has joined ##openfpga
<daveshah>
rqou: very interesting to see you plan to do an iCE40 decap
<rqou>
i mean, you asked :P
<daveshah>
Yeah, I'm really curious as to how the physical layout compares to the bitstream and logical layout
<sorear>
rqou: i saw a photo once that made it look like the VU9P was a foot across [which I rationalized as 1000s of SLRs and unusually good yield on the interposer], but I guess it was very bad perspective since the datasheet says 47mm…
unixb0y has joined ##openfpga
<rqou>
yeah, interposer yields aren't _that_ good
<rqou>
i think the current limit is 4 SLRs? (azonenberg?)
<rqou>
daveshah: another interesting thing would be extracting the NVM contents that lattice claims are super secure
<rqou>
i don't know if that's easily achievable or not though
<rqou>
although i would laugh super hard if the nvm is a separate die bonded to the main fpga (cough cough spartan 3an)
<daveshah>
rqou: yeah I'm pretty sure they have a software readout that has to be locked, so it's almost guaranteed not to be secure
<daveshah>
wouldn't be surprised if the old uv to erase lock bit trick works
<rqou>
lattice claims the nvm is not floating gate technology
unixb0y has quit [Ping timeout: 255 seconds]
<daveshah>
interesting
<daveshah>
yet they also claim no observable difference with any kind of microscopy
<daveshah>
they expose the nvcm power supply (vpp2v5) separately
<daveshah>
maybe some kind of glitch attack as it checks the lock bits is possible
<rqou>
I'm going to laugh if it's a completely separate die that can just be bonded out
<daveshah>
the "programming interface similar to a 25 SPI PROM" is a bit suspicious
<daveshah>
but I find it hard to believe they have two dice in the tiny WLCSP packages...
<rqou>
hmm that's true
unixb0y has joined ##openfpga
<azonenberg>
kc8apf: well what i meant was
<azonenberg>
my original thought was that the slrs would be chained and have their own tap
<azonenberg>
that doesn't seem to be the case
<azonenberg>
they're kinda in parallel
<azonenberg>
rqou: lol i am not decapping a vu+
<azonenberg>
i have short term access to one that i can play with
<kc8apf>
right. that makes sense to me. Some of the materials talk about how the global clocks can be tied across the SLRs. Implied a bus-like architecture.
<azonenberg>
But it's not my board and i'm not going to decap somebody else's $10k chip :p
<kc8apf>
somewhat an extension of how they construct an SLR out of rows that are paralleled
<azonenberg>
basically what it seems to do is have three tap controllers run in parallel vs daisy chained
<azonenberg>
Then the boundary scan register runs in kind of a zigzag fashion across the interposer between the SLRs
unixb0y has quit [Ping timeout: 256 seconds]
<kc8apf>
yep. that's very similar to how they construct an SLR die internally
<azonenberg>
well the confusing bit is
<azonenberg>
logically bottom = slr0, top = slrN
<rqou>
daveshah: yeah, I've seen that. not sure how much is marketing fluff
<azonenberg>
but in the bscan register they have 0 in the middle, then 1 at left and 2 at right
<azonenberg>
So it's not a nice straight line like you'd expect
<kc8apf>
config frames are sent out parallel to each row
<daveshah>
rqou: hard to say. they seem to be more concerned with people reading the memory cells directly rather than bypassing/clearing the lock bit, which would be my concern
<kc8apf>
within an SLR, they divide it in half around the global clock buffers. I wonder if the interposer has a similar center-line
<rqou>
e.g. i think the nvcm probably has a serial interface internally and a serious attacker can just use a fib and get access to the clk/dout wires
<kc8apf>
rows within an SLR are numbered outward from the centerline with a separate bit for top vs bottom half
<azonenberg>
kc8apf: yeah
<azonenberg>
Looking at the vu9p BSDL
<daveshah>
rqou: suspiciously the same configuration time/configuration frequency relationship is given in the datasheet for configuration using both external SPI flash and NVCM, so I think you're right
<daveshah>
mithro: it reminds me of some of the normalisation challenges in the iCE40 stuff - the logic, DSP and ipconnect tiles all have the same routing configuration bits, but they call the nets different things - until you use Clifford's normalisation function - so the raw databases for each tile look very different
<daveshah>
mithro: afaik we don't have to worry about any of that for VPR stuff though as it's all dealt with and tidied up by icebox
<azonenberg>
rqou: yeah i would love to have a fully working open coolrunner toolchain
<azonenberg>
if you get the 32a working i'll have a good excuse to start working on the bigger parts
<sorear>
is the ultimate goal "all commercially available fpgas and cplds"? :p
<rqou>
no
<rqou>
e.g. I'm not going to work on cplds with product term allocators anytime soon
<azonenberg>
sorear: I mean that would be a good long term goal but shorter term we have to pick our battles
<azonenberg>
the very long term goal would be to crowdfund a fully open FPGA family from scratch
<azonenberg>
like. the rtl and gds are on github
<rqou>
btw interestingly GAL support actually looks easier than CPLD support
<rqou>
because GALs have so _few_ features
<azonenberg>
Thats why i picked coolrunner
<azonenberg>
very orthogonal and fairly simple
<rqou>
Coolrunner-II is ok
<rqou>
it has rough edges still
<rqou>
but it's much better than the older devices with product term allocators
<azonenberg>
Yes agreed
<azonenberg>
Thats why i picked it :p
unixb0y has joined ##openfpga
unixb0y has quit [Ping timeout: 265 seconds]
pie_ has quit [Ping timeout: 276 seconds]
<mithro>
daveshah: So you get to the routing stage and get an error like "Message: This circuit requires a channel width above 1000, probably is not going to route." ?
unixb0y has joined ##openfpga
<daveshah>
mithro: yes that's the status so far
<mithro>
daveshah: so, the first thing is to fix things up so routing works without your custom rr_graph
<daveshah>
mithro: yes, I didn't realise that was the plan at first. I'll look into it
<daveshah>
mithro: I think it's to do with missing routing between IO and logic tiles
<mithro>
Cannot route from PIO.o_neigh_op_bnl[1] (rr_node: 4973 type: SOURCE ptc: 33 xlow: 3 ylow: 4) to PLB.o_neigh_op_rgt[2] (rr_node: 4003 type: SINK ptc: 19 xlow: 3 ylow: 3) to -- no possible path
<mithro>
Failed to route connection from 'di' to 'my_ff' for net 'di'
unixb0y has quit [Ping timeout: 260 seconds]
<mithro>
anyway lunch time for me
<mithro>
daveshah: I actually think we should go back to before I added all the interconnect into the tile itself
<mithro>
I'll look at it later tonight and then should have something before you get up
<daveshah>
OK, that works well. I think I might have some time tomorrow morning to look at it. Hopefully I didn't break anything in the attempt to get the rr_graph.xml working
pie_ has joined ##openfpga
unixb0y has joined ##openfpga
unixb0y has quit [Ping timeout: 256 seconds]
user10033 has joined ##openfpga
user10032 has quit [Ping timeout: 248 seconds]
kem_ has joined ##openfpga
unixb0y has joined ##openfpga
unixb0y has quit [Remote host closed the connection]
unixb0y has joined ##openfpga
soylentyellow has joined ##openfpga
pie__ has joined ##openfpga
pie_ has quit [Ping timeout: 248 seconds]
mumptai has joined ##openfpga
user10033 has quit [Quit: Leaving]
asdfa has quit [Quit: Page closed]
<rqou>
"And how do you find such a matrix? Well, I just use MATLAB"
<rqou>
hashtag Berkeley i guess? :P
<rqou>
cc awygle
* azonenberg
is starting to play with Octave for some DSP filter design
<rqou>
why not scipy?
<rqou>
although imho scipy is lacking in "controls" type tools
<rqou>
seems to be ok for "signals" stuff
<rqou>
oh right you don't like sneklang :P
<azonenberg>
Danger noodle
<azonenberg>
No boops for you
<azonenberg>
Bad snek no curly braces
<rqou>
your $WIFE needs to boop your snoot :P
<Zorix>
lol
<azonenberg>
She does that far too often already. When i'm trying to work :p
<rqou>
lolol
<azonenberg>
also i'm not actually doing any filtering of real data in octave
<azonenberg>
I just want to plot frequency response, design filters, etc then export FIR/IIR coefficients to use elsewhere
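For the scipy route rqou suggests, a minimal sketch of that same workflow (design a filter, look at its frequency response, export coefficients); the sample rate, cutoff, and tap count are arbitrary placeholders.

```python
# Design a filter, plot its frequency response, export coefficients
# for use elsewhere (e.g. an FPGA FIR core). Values are placeholders.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 48e3        # sample rate, Hz
cutoff = 6e3     # low-pass cutoff, Hz
taps = signal.firwin(63, cutoff, fs=fs)   # 63-tap FIR low-pass

w, h = signal.freqz(taps, worN=2048, fs=fs)
plt.plot(w, 20 * np.log10(np.maximum(np.abs(h), 1e-12)))
plt.xlabel("Frequency (Hz)")
plt.ylabel("Magnitude (dB)")
plt.title("FIR low-pass response")
plt.show()

np.savetxt("fir_taps.txt", taps)
```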