<pie_> need less state? just recalculate everything all the time \o/
<pie_> wooo derived state \o/
<jn__> pie_: now add a transparent state cache
<pie_> jn__, coming full circle lol
<lain> it's states all the way down
<lain> the encoding used by PSE's energy meters is weiiiiiiird
<pie_> politics, in my engineering? it's more likely than you think!
<lain> I need to write a decoder to confirm it obeys the patents, but this is a very fascinating departure from typical DSSS modulation
<jn__> pie_: :D
<pie_> jn__, i havent finished reading the article, but software-engineering-wise that might actually kind of make sense
<pie_> if you decrease the amount of state, then you have fewer possible inconsistent states and you're less likely to have bugs
<pie_> the only reason you're not always recalculating everything is that it's bad for performance
<lain> is that like how decreasing the number of lines in a program decreases the number of bugs (on average)? :D
<pie_> so if doing something like that would still increase simplicity i think it might not actually be a bad idea
<pie_> ( qu1j0t3 ^ ? )
<lain> although I guess it's also proportional to the cleverness per line
<pie_> assuming your caching is bug free :p
<lain> which in some programs is so high as to raise the bugs per line well above 1
<lain> :P
<pie_> lain, lol
* lain glares at anyone who ever put an assignment in an if() statement in C/C++
<pie_> case in point <lain> the encoding used by PSE's energy meters is weiiiiiiird
* lain has done this, and is full of regret
<pie_> lain, well, k&r even encourages that with the string processing examples :p
<lain> d'oh
<pie_> re: transparent cache: something something memoization
<pie_> yet another reason functional programming is great \o/ :p
<pie_> (cant really memoize if you dont have pure functions because the output depends on more than just the function parameters, yeah?)
* pie_ thinks he noobed this before in python ^
<jn__> rust kind of has a sane version of assignment-inside-if() with if let
<rqou> I've done `if ((ret = foobar()))` many times
<rqou> am i evil?
<qu1j0t3> pie_: I'm pretty convinced parsimony is a helpful engineering principle yeah
<rqou> also, azonenberg: I've found a huge quality problem with the xc2par flow for now
<rqou> we're still not aggressive enough at inferring TFFs
<rqou> apparently that ends up saving a ton of space
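A rough sketch of what "inferring TFFs" means here, using a hypothetical netlist representation (not xc2par's actual internals): a D flip-flop whose input is its own output XORed with something can be rewritten as a T flip-flop, which moves the XOR out of the logic fabric and is where the space saving comes from.

```python
# Hypothetical netlist dicts, for illustration only -- not xc2par's real IR.
# A DFF computing q <= q ^ t is equivalent to a TFF toggled by t.

def try_infer_tff(dff):
    """Return the toggle signal if this DFF matches the q <= q ^ t pattern, else None."""
    d = dff["d_driver"]                       # the cell driving the D input
    if d.get("op") != "xor" or len(d["inputs"]) != 2:
        return None
    a, b = d["inputs"]
    q = dff["q_net"]                          # the flop's own output net
    if a == q:
        return b                              # q <= q ^ b  ->  toggle on b
    if b == q:
        return a                              # q <= a ^ q  ->  toggle on a
    return None

# Example: a flop whose D input is (q XOR enable) becomes a TFF toggled by "enable".
example = {"q_net": "q0", "d_driver": {"op": "xor", "inputs": ["q0", "enable"]}}
assert try_infer_tff(example) == "enable"
```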
<lain> rqou: that's about the only thing I consider relatively sane, re: assignment in if()
<lain> anything more complex than assigning and checking a retval is a recipe for disaster imo
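For what it's worth, Python later grew a scoped form of the same idiom via the walrus operator (3.8+); a minimal sketch of the assign-and-check-a-retval pattern lain considers acceptable, with foobar() as a made-up stand-in:

```python
def foobar():
    """Made-up stand-in for a call returning 0 on success, nonzero error code on failure."""
    return 0

# Python 3.8+ analogue of C's `if ((ret = foobar()))`: the `:=` is visually
# distinct from `==`, which sidesteps the classic typo the extra parens guard against in C.
if (ret := foobar()):
    print(f"foobar failed with code {ret}")
```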
<whitequark> pie_: you can totally memoize pure functions in python
<whitequark> you can't do it *automatically* but memoization is applicable everywhere. c++ uses it a lot
<rqou> yeah, I've done it many times with a magic decorator
<whitequark> also even haskell has unsafePerformIO :p
<rqou> (Berkeley really likes this trick)
<pie_> whitequark, i mean i did it wrong
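A minimal version of that "magic decorator", assuming the wrapped function is pure and takes hashable positional arguments; it is also essentially the "transparent state cache" joked about at the start, and functools.lru_cache in the standard library does the same job:

```python
import functools

def memoize(fn):
    """Cache results by positional arguments; only transparent if fn is pure."""
    cache = {}

    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]

    return wrapper

@memoize
def slow_square(x):
    return x * x          # stand-in for an expensive pure computation

assert slow_square(12) == 144      # computed once...
assert slow_square(12) == 144      # ...then served from the cache

# Standard-library equivalent: decorate with @functools.lru_cache(maxsize=None)
```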
<Bike> i remember seeing a cool memoization thing for fibonacci, and then they dropped it and did linear algebra instead and it was faster and took no memory
<rqou> heh I've seen that trick
<qu1j0t3> it's in SICP
<qu1j0t3> Bike: non-naive Fib is O(1) space anyway. the algebra trick just takes time complexity to O(log n) instead of O(n). iirc
<rqou> ok, maybe that's why I've seen it
<Bike> sounds right
<rqou> Berkeley's curriculum is SICP-derived
<Bike> since you do the binary method on the exponentiation of the matrix or something
<cr1901_modern> qu1j0t3: non-naive Fib is O(1) time too
<qu1j0t3> yes
azonenberg_work has joined ##openfpga
<cr1901_modern> there's a closed-form solution involving phi
<qu1j0t3> heh
<qu1j0t3> yes, but i mean specifically the recurrence impl.
<Bike> it's closed form but you still have to exponentiate and now there's floats
<Bike> i forget the O for addition chain exponentiation. is there one even
<qu1j0t3> i refer to that as O(n)
<Bike> maybe just log with a smaller constant
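A sketch of the trick being described, via the fast-doubling identities that fall out of squaring the [[1,1],[1,0]] matrix: O(log n) big-integer multiplications, exact results, no floats.

```python
def fib(n):
    """Fast-doubling Fibonacci: O(log n) multiplications instead of n additions."""
    def doubling(k):
        # Returns (F(k), F(k+1)) using F(2m) = F(m)*(2F(m+1) - F(m))
        # and F(2m+1) = F(m)^2 + F(m+1)^2.
        if k == 0:
            return (0, 1)
        a, b = doubling(k // 2)
        c = a * (2 * b - a)
        d = a * a + b * b
        return (d, c + d) if k & 1 else (c, d)
    return doubling(n)[0]

assert [fib(i) for i in range(10)] == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```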
<rqou> oh hey awygle, did you take the "old-school bh version" of 61A or the sneklang version?
* oeuf wakes up
<oeuf> floats!
oeuf is now known as egg|z|egg
<rqou> lol
* rqou pets egg|z|egg
<qu1j0t3> if an egg|z|egg floats, it's, ... well, it's not good.
<Bike> i know finding an optimal addition chain is np complete or at least a generalized version is np complete so Who Knows, but fuck that part
<Bike> mhm looked up a paper and it goes right into genetic algorithms
<qu1j0t3> lol
<rqou> <egg|z|egg> is a float of 0.9999999998 due to rounding bugs :P
<Bike> oh i just realized that's a bra ket joke
<egg|z|egg> nah it's just that there is a smol probability that I'm not actually asleep
<egg|z|egg> (don't look at the fact that I'm typing you'll collapse me)
<Bike> i wonder if tetration is as bullshit to optimize
<Bike> i guess it is since in general you get ackermann which is bullshit
<azonenberg_work> awygle: sooo it looks like doing this switch with an xc7k70t would be really tight
<azonenberg_work> you'd need super tiny queues
<azonenberg_work> Here's some tentative architectural numbers
<azonenberg_work> 1G ports: small input queue (4x small BRAM / 8KB) is 5.4 MTU-sized frames
<azonenberg_work> that should be plenty since they get emptied fast
<azonenberg_work> 10G ports: large input queue (16x small BRAM / 32 KB) is 21.8 MTU-sized frames
<azonenberg_work> on the output side, 10G ports get 2 BRAMs (basically enough to buffer a frame or two and maybe do really basic qos, but they empty at line rate so should never fill up)
<azonenberg_work> 1G exit queues are larger, 8 BRAMs (10.9 frames) since we can push data to them at 10 Gbps
<azonenberg_work> and we dont want them to overflow too easily
<azonenberg_work> I'm gonna do some simulations later on to adjust those queue numbers
<azonenberg_work> But this comes out to 160 BRAMs for input queues and 200 for exit
<azonenberg_work> The xc7k70t only has 270 blocks and that's before adding in the 64 BRAMs for the mac table
<azonenberg_work> the xc7k160t, with 650 blocks, would still be 65% full
<azonenberg_work> I'll try and see if i can shrink the queues a bit once i've finished figuring out how to handle broad/multicasts
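A back-of-the-envelope check of those numbers, assuming 1500-byte MTU frames, 2 KB of data per "small" 18 Kb BRAM, and (purely as an illustration, not a stated spec) a 24x 1G + 4x 10G port mix, which happens to reproduce the quoted 160/200 totals:

```python
MTU = 1500                      # assumed MTU-sized frame, in bytes
SMALL_BRAM_BYTES = 2 * 1024     # one 18 Kb block holds 2 KB of data

def frames(brams):
    """How many MTU-sized frames fit in a queue built from `brams` small BRAMs."""
    return brams * SMALL_BRAM_BYTES / MTU

print(frames(4))    # 1G input queue:  ~5.46 frames (quoted as 5.4)
print(frames(16))   # 10G input queue: ~21.8 frames
print(frames(8))    # 1G exit queue:   ~10.9 frames

# Hypothetical port mix, used only to sanity-check the totals:
ports_1g, ports_10g = 24, 4
input_brams = ports_1g * 4 + ports_10g * 16     # = 160
exit_brams  = ports_1g * 8 + ports_10g * 2      # = 200
total = input_brams + exit_brams + 64           # + MAC table = 424
print(total, total / 650)   # 424 blocks, ~65% of an xc7k160t; well over the xc7k70t's 270
```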
azonenberg_work has quit [Ping timeout: 256 seconds]
azonenberg_work has joined ##openfpga
GenTooMan has quit [Quit: Leaving]
digshadow has quit [Ping timeout: 260 seconds]
<rqou> now i understand why vpr has all of the random weird features it has
bitd has joined ##openfpga
<rqou> ping azonenberg awygle
digshadow has joined ##openfpga
<pie_> rqou, tl;dr? :p
<rqou> ok, i figured out what the (major) difference is between max ii vs max v
<rqou> max v has distributed ram
<rqou> pie_: why don't you read it and then explain it to me? :P
<rqou> pie_: read section 4.1 and explain it to me pl0x? :P
<rqou> but overall i'm really surprised how much of fpga reverse engineering is actually OSINT
<rqou> not hammering things with cpu cores
<pie_> now that you mention it
<pie_> i have run across stuff like this before but it never actually hit me that this would help with RE
<pie_> well, at the least i wasnt doing re at the time so :D
<pie_> i wonder if removing the huge unnecessary ass-brand on my jeans with an xacto would reveal some weird pattern under it
<pie_> i paid for your damn jeans so dont make me an advertisement (ironically, people wear some things for the brand so...?)
<pie_> well here goes nothing
<pie_> err..offtopic :P
<rqou> meh, it's still hacking :P
<pie_> also, apparent success.
<rqou> man, this architecture seems quite easy to fuzz so far
<azonenberg> rqou: yes there is a lot of OSINT involved
<azonenberg> but thats true of almost everything, not just fpga re
<azonenberg> OSINT is a lot more powerful than people give it credit for
<pie_> well, if you know where to look
<rqou> i love the "every lut has exactly the same inputs" feature
<rqou> *cough* *cough* ice40
<pie_> githubs ceo resigned at some point?
<pie_> wut?
<pie_> also
<pie_> http://www.businessinsider.com/2-billion-startup-github-could-be-for-sale-microsoft-2018-5?IR=T "Microsoft has been talking about buying GitHub, a startup at the center of the software world last valued at $2 billion"
<pie_> YOU BETTER FUCKING NOT
<rqou> oh what
<rqou> this differs from the paper
<rqou> the way the local lines are divided
<rqou> oh i see a possible reason
<azonenberg> github is a startup?
<azonenberg> also, if microsoft buys them i'll probably move back to self-hosting :p
<daveshah> Always have kept a gitlab server going partly for that reason
<azonenberg> daveshah: i moved to self hosting after google code kicked the bucket
<azonenberg> vowed to never use a third party host again
<azonenberg> then kinda got pushed into using github by external forces
<azonenberg> (also i had been using redmine internally which was a pain to maintain)
<daveshah> I started using GitLab myself because a VPS was cheaper than a subscription to any service (considering other stuff like web hosting and email too) and that actually mattered when I was younger
<daveshah> It worked very well for a few group projects
<azonenberg> my nonpublic stuff is mostly hosted internally
<azonenberg> with no web presence whatsoever
<azonenberg> just git clone file:///nfs4/share/repos/project/
<daveshah> Yeah, I've never worked on anything private enough to worry about that. Either Github/GitLab private project or the aforementioned server
<daveshah> Which is now actually a Hetzner dedi
<azonenberg> i also have a bunch of vps's i plan to shutter once i'm settled at the new place
<azonenberg> i'm hosting in house on a bare metal system
<azonenberg> email needs to migrate off godaddy, i havent figured out a plan for that
<azonenberg> i've been using them way too long and just havent had the time to sit down and do something about it
<azonenberg> The only infrastructure i plan to keep offsite is a dedicated system somewhere for offsite backup
<azonenberg> although i may change hosts for that
<rqou> azonenberg: so, every lut input has _18_ choices
<rqou> any guesses as to how the bits are encoded?
<rqou> (no, i haven't started this part of the fuzzing yet)
<azonenberg> Sixteen routing channels plus constant 0/1
<rqou> what
<rqou> no
<azonenberg> 18 routing channels?
<rqou> yeah
<azonenberg> Plus 0/1? or are those not available in fabric
<rqou> not available
<rqou> why do you need that? it's a lut, silly
<rqou> 36 local tracks (26 inputs + 10 feedback) where each input can get 18
<rqou> *each lut input
<rqou> but inputs a/b/c/d have a different selection of 18
<azonenberg> one hot makes the most sense
<azonenberg> but i guess we'll find out soon
<rqou> that'd be huge
<daveshah> I think some kind of dual one-hot is quite common
<daveshah> That's what the ECP5 looks like
<rqou> 18*4*40=2880 out of 7168 bits per column
<azonenberg> I could be wrong
<rqou> dual one-hot?
<azonenberg> but i thought that xilinx had one bit per pip
<azonenberg> and it was just a question of which bit = which pip
<daveshah> rqou: two one hot muxes cascaded
<azonenberg> textbook one hot
<azonenberg> oh, that would make sense
<rqou> ah ok
<azonenberg> i think coolrunner might do that in the larger parts
<azonenberg> possibly even three
<rqou> i've never heard that called "dual one-hot"
<rqou> i would have expected that to mean one-cold
<rqou> hardest problems: naming things
<rqou> :P
<daveshah> Lattice actually kindly included a circuit diagram of that in a presentation they gave at a university
<daveshah> Going back to the OSINT point
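A sketch of the trade-off being discussed, in configuration bits per LUT input; the 6x3 split for the cascaded case is just an assumed illustration, not the fuzzed encoding:

```python
import math

SOURCES = 18                              # routing choices per LUT input, per the fuzzing above

flat_one_hot = SOURCES                    # one bit per pip: 18 bits
groups, per_group = 6, 3                  # assumed split for the two cascaded stages
cascaded = groups + per_group             # 6-bit one-hot stage + 3-bit one-hot stage = 9 bits
binary = math.ceil(math.log2(SOURCES))    # 5 bits, but needs full decode logic in silicon

per_column_flat = flat_one_hot * 4 * 40   # rqou's 18*4*40 count: 2880 of the 7168 column bits

def decode_cascaded(stage1, stage2):
    """Decode two cascaded one-hot selects into a source index (hypothetical bit layout)."""
    return stage1.index(1) * per_group + stage2.index(1)

print(flat_one_hot, cascaded, binary, per_column_flat)   # 18 9 5 2880
print(decode_cascaded([0, 0, 1, 0, 0, 0], [0, 1, 0]))    # group 2, wire 1 -> source 7
```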
<rqou> how many bits is that PLC2 tile?
<daveshah> Also confirmed for me that everything was muxes and not bidirectional
<azonenberg> fpgas havent had bidirs in ages
<daveshah> rqou: about 1200 off the top of my head
<rqou> um... ice40 does
<daveshah> yep
<azonenberg> internally?
<daveshah> But I think the ice40 is just odd
<daveshah> azonenberg: bidir routing
<daveshah> Not switchable tristates at runtime
<daveshah> Those died around Virtex II
<azonenberg> I mean, in many fpgas you can have multiple pips driving one line
<azonenberg> But each line is still unidirectional
<azonenberg> you just have multiple drivers
<daveshah> azonenberg: I don't think the ECP5 even has that
<daveshah> Whereas the ice40 does, and it has switch transistors between lines too
<azonenberg> Still have to re-length-match the diff pairs
<azonenberg> And finish laying out the fanout of the backplane connector
<azonenberg> and those few passives and sensors you see on the left side
<azonenberg> Putting the connector up top gets it away from the SMPS and also means that the backplane can be a little shorter
<pie_> so have yall been documenting where you got all your re info from :p
<pie_> for posterity
<rqou> i have
<rqou> everybody else seems to love their secrets
<azonenberg> rqou: well, there is a benefit to it
<azonenberg> if source X is later determined to be illegal
<azonenberg> its harder to sue because they have to prove you used it
<azonenberg> But if you told everyone where the info came from, and that source later turns out to be dirty...
<azonenberg> otoh if you are *certain* the source is clean, like my silicon RE work on coolrunner
<azonenberg> Then documenting it up the wazoo can be used as a defense
<azonenberg> So it goes both ways
<rqou> except for how we're now stuck thanks to you not having enough sem time
<azonenberg> get a sem? :P
<azonenberg> Or come back and help me finish the house :p
<rqou> or just pay some "cheap labor of dubious immigration status"? :P :P :P
<pie_> all those people on twitter with SEMs up the wazoo ;_;
<rqou> azonenberg: ready to do inspection yet btw?
<rqou> on your house
<azonenberg> electrical?
<azonenberg> No, the dumpster/fireplace thing derailed us a bit
<azonenberg> prob gonna take the rest of the weekend
<rqou> huh, for some reason in quartus the _synthesis_ stage is slow as shit
<rqou> everything else is acceptably fast
StCypher has quit [Ping timeout: 255 seconds]
<rqou> i'll be so glad to get to the fuzzing that doesn't require invoking quartus_map
m_t has joined ##openfpga
<daveshah> rqou: ultimately for the ECP5 I ended up doing post-PAR fuzzing which was ideal
<rqou> how?
<daveshah> tinyfpga found tools included that convert the post-PAR database to and from a text format (NCL)
<daveshah> those tools were also documented in that university presentation
<rqou> huh
<rqou> i'm not aware of an equivalent for quartus
<daveshah> that also shows how the two-one-hot MUX is constructed
<rqou> wait, this is for machxo?
<daveshah> rqou: presentation is for machxo, high level arch is basically the same
<rqou> interesting
<daveshah> the other nice thing about the Lattice post-PAR files is they allow incomplete routing
<rqou> i thought machxo was a "CPLD"?
<rqou> incomplete?
<daveshah> no machxo is an FPGA with flash
<daveshah> incomplete - you can have a design with just the mux you want to fuzz
<rqou> huh
<rqou> no drc?
<daveshah> no need to worry about anything else or generating a complete path
<daveshah> you can bypass drc
<daveshah> unfortunately, timing analysis requires a complete path
<daveshah> so for that a bit more work will be needed
<rqou> wtf this ppt is super confusing
<rqou> max v seems much simpler
<rqou> (but it also only has one type of span)
<daveshah> the diagram on p21 matches the ecp5 exactly as far as I know
<daveshah> the ecp5 is still a lot simpler than the ice40 imo
<rqou> hmm
<daveshah> the ice40 is ironically one of the most complicated fpgas routing wise that is out there
<rqou> i find the ice40 pretty straightforward now
<daveshah> but the ice40 PLBs are simple because no DisRAM etc
<daveshah> most FPGAs these days don't bother with local tracks, or bidirectional switches, for example
<rqou> huh really?
<daveshah> the ice40 also has way more wires per tile than any other fpga I know
<rqou> er, 32 local tracks?
<rqou> plus span wires
<daveshah> I mean counting the spans too
<daveshah> they have a lot of spans, also the r_v_b adjacent spans
<daveshah> and the span wires stop in every tile, unlike ecp5 for example
<rqou> so max v definitely has local tracks
<rqou> not sure exactly how span wires work
<rqou> but it only has span4s since the grid is so small
<rqou> oh i think i see what it's going for
<rqou> the big switch box eliminates the need for local tracks
<daveshah> yes
<daveshah> there are still loopback connections and 8 span0 wires which are effectively local tracks if needed
<rqou> max v is also a pretty old architecture
<rqou> so maybe that's why it still has spans
<daveshah> do you know if it has bidirectional switches yet?
<rqou> meaning?
<rqou> like the ice40 routing bits?
<daveshah> yes
<daveshah> whereas the ecp5 only has unidirectional muxes
<rqou> i actually don't know at this point
<daveshah> it's probably something to spend a bit more osint on
<daveshah> i found some debug output of diamond that made it clear
<daveshah> but often its mentioned in app note or presentations etc
<rqou> the doc i linked earlier seems to imply no
<rqou> but it doesn't state explicitly
<rqou> it spends a lot of time talking about the advantages of using direct-drive muxes
<rqou> hmm also, the internal codename of max _ii_ seems to be "tsunami"
pie__ has joined ##openfpga
pie_ has quit [Ping timeout: 240 seconds]
bitd has quit [Remote host closed the connection]
<rqou> https://twitter.com/k8em0/status/1003017810258219008 <--, so, #opencatgirls when? :P :P :P
<pie__> anyone know if its possible to have a user xkb extensions keyboard in the home directory
<rqou> i know some of those words :P
<pie__> *keyboard config
m_t has quit [Quit: Leaving]
gnufan has joined ##openfpga