<kc8apf> Lustre is similar but can only do a filesystem
<kc8apf> I wanted a mix of CephFS, iSCSI volumes, and S3
<azonenberg_work> yeah all i need is filesystems for now
<azonenberg_work> and i can't see needing anything else any time soon
<azonenberg_work> Right now i'm running NFS just fine, but i dont like the protocol and i have a SPOF in the server
<kc8apf> for example, if a kubernetes job asks for storage, a Ceph volume is automatically provisioned
<kc8apf> I don't remember if Lustre does multi-master
<kc8apf> Ceph relies on a quorum model
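A toy sketch of the majority-quorum rule kc8apf alludes to (an assumed simplification; real Ceph monitors run a Paxos variant, but the availability condition is the same strict majority):

```python
def has_quorum(monitors_up: int, monitors_total: int) -> bool:
    """Strict majority rule: more than half the monitors must be reachable."""
    return monitors_up > monitors_total // 2

# With 5 monitors, 3 survivors keep the cluster writable; 2 do not.
print(has_quorum(3, 5))  # True
print(has_quorum(2, 5))  # False
```

This is why monitor counts are kept odd: a 4-node set tolerates no more failures than a 3-node set.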
<azonenberg_work> well i guess that is something to look into once i'm done with the initial lab buildout
<azonenberg_work> Short term my current nas is sufficient to get me up and running
<azonenberg_work> And i should probably have walls and power and a floor before i do too much more...
<openfpga-github> [Glasgow] whitequark pushed 1 new commit to master: https://github.com/whitequark/Glasgow/commit/60cf959986bf7df92035a69de5b800376645e0a0
<openfpga-github> Glasgow/master 60cf959 whitequark: applet.jtag.pinout: also probe TRST# if pulldowns are detected....
<whitequark> ugh NFS
<travis-ci> whitequark/Glasgow#87 (master - 60cf959 : whitequark): The build has errored.
<Bob_Dole> what's it take to make a pci host controller? can a risc-v and pci host controller fit on the ecp5 comfortably?
<Bob_Dole> pci is something I just want.
<whitequark> pci or pcie?
<Bob_Dole> pci, because a bridge chip is an option.
<Bob_Dole> pcie would be nice but a bridge chip is an option.
<Bob_Dole> (if needed at all.)
<whitequark> pci isn't really complex at all
<SolraBizna> plain old PCI is probably easier to implement from scratch than DDR4
<Bob_Dole> I thought it wasn't, thought an ice40 could implement it.
<whitequark> yeah
<whitequark> isn't it just address, data, strobes
<SolraBizna> plus a few interrupt lines and some control signals
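A purely illustrative cycle-by-cycle trace of the phases being described (simplified: single read, no wait states, no DEVSEL# timing; active-low signals shown as 0 = asserted, per PCI convention):

```python
def pci_read_transaction(addr: int, data_words: list[int]) -> list[dict]:
    """Toy trace of a PCI burst memory read on the multiplexed AD bus."""
    cycles = []
    # Address phase: initiator asserts FRAME# and drives address on AD,
    # with the bus command (0110 = Memory Read) on C/BE#.
    cycles.append({"FRAME#": 0, "AD": addr, "C/BE#": 0x6,
                   "IRDY#": 1, "TRDY#": 1})
    # Data phases: AD now carries data, C/BE# carries byte enables
    # (0x0 = all four bytes enabled). A word transfers on each clock
    # where IRDY# and TRDY# are both asserted. FRAME# is deasserted
    # at the start of the final data phase.
    for i, word in enumerate(data_words):
        last = i == len(data_words) - 1
        cycles.append({"FRAME#": 1 if last else 0, "AD": word, "C/BE#": 0x0,
                       "IRDY#": 0, "TRDY#": 0})
    return cycles

trace = pci_read_transaction(0x8000_0000, [0xDEAD_BEEF, 0xCAFE_F00D])
print(len(trace))  # 3 cycles: one address phase + two data phases
```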
<Bob_Dole> but pci+risc-v+some sort of Memory Controller
<sorear> we know pcie on ecp5 is a thing bc lattice offers a core for it
<Bob_Dole> soft core yeah, I saw that
<sorear> idk if there’s a usable open pcie
<whitequark> litepcie? :P
<Bob_Dole> but I kinda want something that I have a chance in hell of getting SolraBizna to design. >.>
<Bob_Dole> and fit it all together logically.
<Bob_Dole> and I solder
<sorear> litepcie is neat
<sorear> so uhhhhhhhh
<sorear> how many person-years to an open tb endpoint
<Bob_Dole> tuberculosis?
<pie___> pci over tuberculosis
<SolraBizna> sourcing tuberculosis bacteria that are rated for operation at 33MHz is... difficult
<pie___> watchlist++
<zkms> i'm vaccinated against tuberculosis, can't say the same for thunderbolt.
<SolraBizna> the fastest I've seen were 18μHz
<SolraBizna> I'm sure Moore's Law will fix this eventually
<sorear> tb3 is extremely cursed but laptop usable Pcie on a fpga board would be fun maybe
<zkms> thats what m.2 is for ;p
<sorear> Then I’d need an external drive to boot from:p
<sorear> Also that imposes dimensional constraints
<sorear> litepcie seems to have a lot of hardcoded 32s
<SolraBizna> the datasheet says an iCE40-LP1k bitstream image is 32303 bytes long, but the .bin file I get from icepack is 32220 bytes long
<SolraBizna> why the discrepancy?
<sorear> the "bitstream" is a packet format which can omit or reorder packets in some cases
<sorear> i'm not familiar with the details but it's possible icestorm handles the packets slightly differently from icecube
<SolraBizna> hm...
<sorear> it is not the case that byte 3456 of the bitstream has an a priori determinable meaning, because you have to parse the packet structure
<SolraBizna> so, I should still be able to just plop that .bin onto my EEPROM and have it work
<sorear> should be
<SolraBizna> guess I'll find out in 5 weeks!
<SolraBizna> (this is why I normally prefer working in software...)
<SolraBizna> (this and because I'm insanely poor)
<rqou> SolraBizna: are you using arachne? there is a known 'feature' where it generates a broken comment packet
<SolraBizna> I am
<Bob_Dole> nextpnr is the future
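For reference, an iCE40 .bin is a packet stream: anything before the 0x7EAA997E sync preamble (such as a tool's comment block) is skipped by the config logic, which is one way two valid images for the same part can differ in length. A minimal sketch that locates the preamble (synthetic data, not a real bitstream):

```python
ICE40_PREAMBLE = bytes([0x7E, 0xAA, 0x99, 0x7E])

def find_preamble(bitstream: bytes) -> int:
    """Return the offset of the iCE40 sync preamble, or -1 if absent.

    Everything before it (e.g. a comment header) is ignored by the
    device, so differing header lengths don't affect configuration.
    """
    return bitstream.find(ICE40_PREAMBLE)

# Synthetic example: a 12-byte comment header followed by the preamble.
fake = b"\xff\x00a comment\x00" + ICE40_PREAMBLE + b"\x00" * 16
print(find_preamble(fake))  # 12
```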
<SolraBizna> should I really have a decoupling capacitor for *every* positive/negative pair of *every* complex IC?
<sorear> you mean differential I/Os?
<whitequark> probably power
<whitequark> SolraBizna: it is not necessary to have a decoupling cap for every Vcc pin, it is just a safe guideline
<sorear> yeah but power/ground doesn't come in pairs
<whitequark> sometimes it does
<SolraBizna> coincidentally, it has on every IC in my design that I've considered needing a decoupling cap for
<whitequark> SolraBizna: usually you'd place a footprint on every vcc/gnd pair
<whitequark> and then if you don't need them all, you don't populate
<sorear> current thought: given a $5 fpga with 99 GND and 42 total VCC, is it possible to use without spending well over $5 in passives
<sorear> vcc+vccaux+vccio
<whitequark> sure
<whitequark> you won't even be able to fit them meaningfully
<whitequark> follow the mfgr guidelines
<SolraBizna> half of my PCB is going to end up being just footprints for caps
<sensille> and when the fpga ends up only needing 100mA, what should all those caps be good for?
<sorear> well if I'm using ~half of the 197 user I/Os as 800 MT/s DDR outputs, there'll probably be quite a bit of noise current from that
<TD-Linux> the point of decoupling caps is to be close
<TD-Linux> if you pack so many in that some get pushed further away, the far ones are useless
<SolraBizna> Since I have dedicated power and ground planes, do I need a dedicated trace from each end of the decoupling cap to the corresponding pins on the IC, or is it enough to connect things to the planes (as long as the vias are close)?
<SolraBizna> (Having dedicated planes is something very new to me)
<azonenberg_work> SolraBizna: i generally run vias directly from the bga dogbone to the plane
<azonenberg_work> Then i put the cap tangent to the vias on the back of the PCB
<azonenberg_work> forming a []B shape
<SolraBizna> oh... right... because once I have a via, I have a via
<azonenberg_work> o[----]o mounting of caps has higher inductance which hurts high frequency performance
<azonenberg_work> 8[]8 is even better but is overkill for most applications
<azonenberg_work> []8 is good enough most of the time
<azonenberg_work> if you have two sets of vias just use two caps :p
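The inductance penalty azonenberg_work describes can be quantified with the usual series-RLC capacitor model (the ESR and ESL figures below are assumed round values, not measurements of any particular part):

```python
import math

def cap_impedance(f_hz: float, c_f: float, esl_h: float, esr_ohm: float) -> float:
    """|Z| of a real capacitor modeled as series R, L, C."""
    w = 2 * math.pi * f_hz
    return math.hypot(esr_ohm, w * esl_h - 1 / (w * c_f))

C, ESR = 100e-9, 0.01          # 100 nF 0402, assumed 10 mohm ESR
for esl in (0.5e-9, 2.0e-9):   # tight "[]B" mount vs long "o[----]o" traces
    z = cap_impedance(100e6, C, esl, ESR)
    print(f"ESL {esl*1e9:.1f} nH -> |Z| at 100 MHz = {z:.2f} ohm")
```

Above self-resonance the capacitance barely matters: quadrupling the mounting inductance roughly quadruples the impedance at 100 MHz, which is why the via geometry dominates.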
<azonenberg_work> What FPGA are you using btw?
<SolraBizna> an ICE40 LP1k for this test board
<azonenberg_work> And what are you doing that needs so much ddr io?
<SolraBizna> it's not DDR IO, it was supply
<azonenberg_work> oh wait that was sorear
<azonenberg_work> sorry
<azonenberg_work> SolraBizna: anyway, in general you're best off having only the smallest (0402 or similar) caps under the fpga
<SolraBizna> (I was wondering the same thing about sorear's project though)
<azonenberg_work> the 0603-esque stuff is targeting lower frequency ranges so it can be moved further away
<azonenberg_work> Typically i put them very close to but not under the fpga
<azonenberg_work> then bigger caps can go almost anywhere
<SolraBizna> so, aiming to have a tiny cap for each supply pin, a not-as-tiny cap for each IC, and a big cap for the board is a good way to go?
<sorear> azonenberg_work: still going through the details on "how to get as much bandwidth as possible between N ecp5s in close proximity"
<azonenberg_work> sorear: what are you using the cluster for?
<azonenberg_work> SolraBizna: It depends on the fpga, read decoupling recommendations if the vendor has them
<azonenberg_work> Xilinx has optimized decoupling recommendations that don't require a cap on every pin
<sorear> weird hpc ideas
<azonenberg_work> sorear: Lol
<azonenberg_work> Any particular problem domains?
<sorear> cryptographic mostly
<azonenberg_work> machine learning? /me ducks incoming hype storm
<azonenberg_work> ooh rsa factorization?
<sorear> computing GB-sized FFTs over GF(2^255), etc
<azonenberg_work> Gigabyte sized FFTs?
<sorear> yes.
* azonenberg_work tries to think of what that's good for
<azonenberg_work> is that for ECC stuff?
<azonenberg_work> my ECC-fu is weak
<sorear> non-ECC zero-knowledge stuff
<azonenberg_work> Either way sounds interesting
<azonenberg_work> i would love to have somebody make a Deep Crack equivalent that can factorize rsa keys
<sorear> but at this point it's more of a "motivating example" than a "design target"
<azonenberg_work> How far do you think we are from a public break of rsa-1024?
<azonenberg_work> or a 1024-bit DH group precomputation?
<sorear> i don't think this machine will be the most cost-effective way to attack rsa
<azonenberg_work> (I assume TLAs have been doing it for years but it's never been publicly demoed)
<sorear> for attacking RSA with state of the art algorithms, you want something with a lot of ~200 bit adders/multipliers, which you can build with FPGAs but GPUs are probably a better bet
<azonenberg_work> Hmm
<azonenberg_work> so is this just for tinkering then?
<azonenberg_work> or did you have a problem in mind that FPGA would be effective for
<sorear> see above "non-ECC zero-knowledge stuff"
<sorear> i have no idea why you brought up rsa
<azonenberg_work> i thought i remembered there being fun FFT based algorithms for breaking RSA
<azonenberg_work> but that's way beyond my level of cryptographic knowledge
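Arithmetic in a binary field like the GF(2^255) sorear mentions boils down to carryless multiplication followed by reduction modulo a field polynomial. A toy sketch at GF(2^8), using the AES polynomial 0x11B as an assumed stand-in (GF(2^255) works the same way with a degree-255 polynomial):

```python
def gf_mul(a: int, b: int, poly: int = 0x11B, width: int = 8) -> int:
    """Multiply in GF(2^width): carryless product, then polynomial reduction."""
    # Carryless (XOR-accumulate) multiplication -- what CLMUL computes.
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    # Reduce modulo the field polynomial, high bits first.
    for bit in range(prod.bit_length() - 1, width - 1, -1):
        if prod >> bit & 1:
            prod ^= poly << (bit - width)
    return prod

print(hex(gf_mul(0x02, 0x80)))  # 0x1b: x * x^7 = x^8 = x^4+x^3+x+1 mod poly
```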
* azonenberg_work will laugh if this thing ends up just being used to mind bitcoins
<azonenberg_work> mine*
<SolraBizna> now I'm just trying to figure out what's what in "[]B"
<azonenberg_work> SolraBizna: Vias to the side of the cap footprints
<sorear> i mean it won't *just* be used to mine bitcoins but if I build it that's probably what it will be doing when I run out of project ideas
<azonenberg_work> [] is cap, B/8 is the vias
<SolraBizna> [ and ] are the ends?
<azonenberg_work> illustrates better than i can do in ascii art
<sorear> assuming there is at least one coin where doing so is marginally profitable (the machine exists, but it needs to be powered)
<azonenberg_work> sorear: the only time i've ever mined anything was dogecoins, and it was just to stress-test a flaky machine
<azonenberg_work> SolraBizna: I normally do option C
<SolraBizna> [ and ] are the sides
<azonenberg_work> Yeah
<SolraBizna> got it now
<azonenberg_work> except i make it even closer, so the via disks are tangent to the cap pads
<azonenberg_work> then the trace just fills in the gaps
<sorear> (heating a house with a mining rig uses ~2.5x as much primary energy as heating a house with oil or a heat pump, as a consequence of Carnot, it ain't free)
<azonenberg_work> sorear: it beats a resistive heater, though
<azonenberg_work> If you live in a location that has cheap electricity and is too cold to use a heat pump
<azonenberg_work> some folks in Scandinavia have done that iirc
<sorear> the specific house that I am in is heated by an oil burner, which turns 100% of the energy content of the oil into house heat
<azonenberg_work> not 100%
<azonenberg_work> Some is lost out the chimney
<azonenberg_work> But a high fraction
<azonenberg_work> Combustion heat is only 100% efficient transfer in a closed system where you don't vent the exhaust anywhere
<SolraBizna> breathing the exhaust would increase the efficiency of the heater
<azonenberg_work> SolraBizna: exactly
<azonenberg_work> You obviously run the exhaust through a heat exchanger but you cant get 100% of the combustion energy absorbed
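sorear's ~2.5x figure falls out of back-of-envelope numbers (all assumed round values: 40% grid generation efficiency, a COP-3 heat pump, an 85% oil burner to account for the flue loss just discussed):

```python
# Primary energy consumed per joule of heat delivered into the house.
grid_eff, heatpump_cop, oil_eff = 0.40, 3.0, 0.85

mining_rig = 1 / grid_eff                  # resistive load: 1 J heat per 1 J electricity
heat_pump  = 1 / (grid_eff * heatpump_cop) # COP multiplies the delivered heat
oil_burner = 1 / oil_eff

print(f"mining rig : {mining_rig:.2f} J primary per J heat")
print(f"heat pump  : {heat_pump:.2f} J primary per J heat")
print(f"oil burner : {oil_burner:.2f} J primary per J heat")
```

With these assumptions the rig needs about 2.5 J of primary energy per joule of heat, versus roughly 1.2 for oil and 0.8 for a heat pump.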
<SolraBizna> I can't believe I resisted HDLs for so long
<SolraBizna> I blame video games
<SolraBizna> When Bob_Dole dragged me kicking and screaming into the world of FPGAs, I seriously considered manually working out the logic
<azonenberg_work> lolol
<azonenberg_work> meanwhile here i am thinking of doing pcb design in HDL
<azonenberg_work> So i dont have to ever see a schematic again
<SolraBizna> dooo eeet
<azonenberg_work> structural verilog description of a PCB (including generate loops, etc)
<azonenberg_work> synthesized to a kicad netlist
<azonenberg_work> import to pcbnew and go to town
<azonenberg_work> and it's nontrivial when it comes time to do things like figure out refdes for PCB elements
<azonenberg_work> since long hierarchical hdl instance names dont map well to silkscreen
<azonenberg_work> I have done small scale PoC's
<azonenberg_work> i designed a verilog IP for a LTC3374-based buck converter
<SolraBizna> use sorear's giant ECP5 array to run a machine learning algorithm to make better names
<azonenberg_work> :p
<azonenberg_work> And i actually made a pic12 based board using an early draft of the flow
<azonenberg_work> ERC/DRC is nontrivial too
<azonenberg_work> i wanted to add a lot of metadata to the component designs, but it would have massively increased complexity of creating a part
<azonenberg_work> Things like doing Vih/Vil sanity checks on all digital connections
<azonenberg_work> Making sure Vdd for a part is within safe limits
<azonenberg_work> Doing all of the engineering for that was a pain and i just didnt have the time with all the other stuff on my plate
<sorear> finally found lattice TN1068
<azonenberg_work> sorear: so i dont entirely agree with that
<azonenberg_work> in particular modern MLCCs are such that for almost all frequency bands
<azonenberg_work> you are better off using a larger cap in a given package size
<azonenberg_work> typically 0.47 uF vs 0.1 for 0402, and 4.7 uF vs 1 uF for 0603
<SolraBizna> the research I found said that you should use the largest cap that fits the footprint and there's no advantage to smaller ones / putting multiple different ones right next to each other
<azonenberg_work> (that note is from 2004, over the past 14 years capacitor design has come quite a long way)
<azonenberg_work> SolraBizna: correct
<azonenberg_work> keep in mind voltage derating though
<azonenberg_work> a super high cap in a small footprint may not buy you anything under DC bias
<azonenberg_work> My last research indicated 0.47 and 4.7 X*R were the sweet spots
<azonenberg_work> for typical FPGA power rails
<azonenberg_work> The xilinx decoupling guidelines are well written and reasoned
<azonenberg_work> (Just don't pull exact numbers of caps out for other chips obviously)
<sorear> device if built would have roughly 500K supply pins to decouple, so minimizing the total cost of capacitors is a consideration
<SolraBizna> o_o
<azonenberg_work> sorear: how many ecp5s are you planning to use?
<azonenberg_work> and how many logic cells each?
<azonenberg_work> And have you thought about physical form factor yet?
<sorear> nominally 10,000 x 25K each ($50k in FPGA parts)
<sorear> current thought on physical form factor is "2-3 square meters of PCB, split between [TBD] boards in a roughly cubical box"
<sorear> since people don't make 3 square meter PCBs, splitting is necessary, but the details are mostly tbd
<azonenberg_work> I would rack mount it personally
<azonenberg_work> also do you have $50K to spend on this? :p
<azonenberg_work> Also, what does the price per LUT come out to for the FPGAs?
<sorear> $.0002/LUT4
<azonenberg_work> So 500 LUT4/$?
<sorear> 5000
<azonenberg_work> oops, missed a zero
<sorear> 250M total, for about the BOM cost of the biggest us+ qty 1
<azonenberg_work> comparing... xc7a100t is $109 on digikey, 15850 slices of 4 LUT6s or 101,440 logic cells by Xilinx's marketing numbers
<azonenberg_work> Which comes out to about 1000 LUT4/$
<sorear> for a fair comparison, count unofficial capacity, because I am
<azonenberg_work> you mean using fused chips to full capacity?
<sorear> yes
<azonenberg_work> The xc7a75t is $92.61 for 101440 LCs or 1095 LC/$
<azonenberg_work> xc7a15t is $27.93 for 52160 LCs or 1867 LC/$
<azonenberg_work> But you also have to consider the xilinx parts probably clock faster and have more block ram etc
<azonenberg_work> also i doubt a 7a100t needs four times the caps of a 25k ecp5
<azonenberg_work> and certainly not 4x the PCB real estate
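The per-dollar figures quoted above, reproduced (prices are the qty-1 Digikey numbers from the discussion, and the logic-cell counts include the unofficial fused capacity, per sorear's ground rule):

```python
# (price in USD, LUT4-equivalent logic cells) -- values quoted in the log
parts = {
    "ECP5 cluster node": (5.00,  25_000),
    "xc7a75t":           (92.61, 101_440),
    "xc7a15t":           (27.93, 52_160),
}
for name, (price, lut4) in parts.items():
    print(f"{name:18s} {int(lut4 / price):5d} LUT4/$")
```

So the cheap ECP5s come out roughly 3-5x ahead per LUT, before accounting for clock speed, block RAM, or the per-chip decoupling and board area overhead.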
<sensille> "fused" chips?
<azonenberg_work> unless f/oss tools NOW (vs soon) are a priority, 7 series is probably worth considering on that metric alone
<sorear> right, clock is a complication i know about but haven't attempted to control for in any way
<azonenberg_work> sorear: i would just make all the links source synchronous
<azonenberg_work> dont even attempt a global clock
<azonenberg_work> put oscillators and buffers every few fpgas
<sorear> it's much less interesting for xilinx because *other people have done xilinx*
<azonenberg_work> sorear: also consider that no matter what fpga you use
<azonenberg_work> if you are buying five-digit volumes the price will come waaaay down
<azonenberg_work> So 10K $5 chips may cost you $15K or something, not $50K
<SolraBizna> making a high-speed clock sync across a 1.5x1.5x1.5 cube would be ... hard
<azonenberg_work> Also what network topology did you have in mind?
<azonenberg_work> Also consider thermal dissipation... I would not build it as a cube
<sorear> azonenberg_work: nearest neighbors only
<azonenberg_work> sure but how many dimensions?
<azonenberg_work> 2D? 3D? 4D?
<azonenberg_work> My recommendation would be vertically mounted blades in rack mounted modules of some sort
<sensille> are you still pondering the 10k-chip-array?
<azonenberg_work> a fan tray every couple units of blades
<sensille> what is the application?
<SolraBizna> immerse the whole thing
<azonenberg_work> i designed a smaller scale version of this (two backplanes side by side in 3U with a 48->12V DC power supply, two ethernet switches, two management cards, and 16 compute nodes)
<sorear> sorry, how is the backplane oriented relative to the rack?
<azonenberg_work> sorear: normal of the backplane points to the front of the rack
<azonenberg_work> compute blades are vertical and plug into the backplane
<azonenberg_work> 21 blades in 3U
<sorear> so the backplane is a skinny rectangle
<azonenberg_work> Yeah
<azonenberg_work> 3U x 160mm eurocard form factor
<azonenberg_work> My design called for one VRM blade and two 10-card backplanes side by side
<azonenberg_work> just so the pcbs would be smaller and easier to work with
<azonenberg_work> each backplane had 8 compute nodes, a management card, and 13-port 1/10G ethernet switch (9 gig ports to the other cards on the backplane, one 10G front panel port, and three 1G front panel ports because i had transceivers left over)
<azonenberg_work> i did most of the pcb for that and finished the backplane pcb design but never made either of them
<azonenberg_work> i did some mechanical mockups of the architecture though to confirm the things fit
<sorear> sensille: yes; mostly a design exercise, a bit of "I would use this for fiddling with algorithms"
<sensille> i imagine it will be very hard to beat a big xilinx device with this, really needs to be a specialized algorithm
<sorear> there was a specific thing I was fiddling with last year that doesn't fit in internal memory on an xcvu9p and is badly bottlenecked on I/O if you try to do it with DDR4 (and is similarly bottlenecked on CLMUL units on SKL)
<sensille> so you want tons of DDR3 instead?
<sorear> yes
<azonenberg_work> doesnt fit in a vu9p??
<sensille> like... monero?
<azonenberg_work> oh dear
<sorear> makes 3 passes over about 128GB of data
<sensille> linearly?
<sorear> the vu9p only has 90MB of SRAM
<sorear> not quite, but close to
<sensille> rmw cycles?
<sorear> yes
<sensille> or reading from one DRAM and writing to another?
<azonenberg_work> hbm?
<sensille> keeping the flow in one direction would be great
<azonenberg_work> also predictable
<azonenberg_work> optimized ddr controller that prefetches
<azonenberg_work> or hard to tell in advance?
<sensille> or read a good chunk into an internal cache and alternate big read/write chunks
<azonenberg_work> that too
<sensille> and do it fast enough so you don't need refresh cycles :)
<sorear> don't make me remember the details of the memory access pattern
<azonenberg_work> sorear: lol well that is very important if you're memory bound
<azonenberg_work> something like, say, doing convolutions on a large 2D array is an optimal case that's very easy to tweak things for (I did a lot of GPU tuning for such things)
<travis-ci> whitequark/Glasgow#88 (master - 60cf959 : whitequark): The build has errored.
<sorear> azonenberg_work: i trust myself that I spent a lot of time optimizing the memory access pattern and couldn't get it under a minute with the four x72 DDR4 interfaces on aws f1, I'm only asking that you do the same
<sorear> i think the problem might have been that the FFT has a working set larger than the 90MB internal memory (it's the Gao-Mateer "additive FFT" over a finite field, which works with some but not all of the standard FFT cache optimization techniques)
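sorear's bandwidth bound can be sanity-checked with assumed round figures (four x72 DDR4-2133 channels on F1, 64 data bits per channel after ECC, and read-modify-write passes so each byte crosses the bus twice):

```python
# Back-of-envelope: ideal runtime of 3 RMW passes over 128 GB on AWS F1.
channels, data_bytes, transfer_rate = 4, 8, 2.133e9   # 64-bit payload, MT/s
peak_bw = channels * data_bytes * transfer_rate        # bytes/s
passes, working_set = 3, 128e9
traffic = passes * working_set * 2                     # RMW: read + write

print(f"peak bandwidth : {peak_bw/1e9:.0f} GB/s")
print(f"total traffic  : {traffic/1e9:.0f} GB")
print(f"ideal runtime  : {traffic/peak_bw:.0f} s at 100% efficiency")
```

With ~68 GB/s of peak bandwidth the floor is around 11 seconds, so a minute-plus run implies well under 20% effective bus utilization, which is plausible for the strided access of a large FFT.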
<sorear> anyway I would like to stress that THIS IS NOT A YAK SHAVE
<sorear> i'm designing it because it's there, the fact that my project last year would have used it is non-causal
<sensille> (sorry)
<azonenberg_work> sorear: clearly you need an XCVU9000P
<azonenberg_work> with 90 GB of block RAM
<sorear> unfortunately I do not have $50MM
<azonenberg_work> The die is round so it fits in a 12" wafer and is in FFG65536 package
<azonenberg_work> i dont even want to speculate what yield on a die like that would be like :p
<azonenberg_work> oh and you better have half a petabyte of RAM to run P&R for it...
<sorear> you're obviously not using this for mass production, so you just accept that each die is unique and ship it with a p&r database
<azonenberg_work> lolol
<sorear> (@jangray posted a weirdly perspective photo of an xcvu9p which caused me to spend most of 2016 thinking it *was* a wafer-sized chip and interposers were magic)
<azonenberg_work> where?
* azonenberg_work isnt in the mood to search his entire tweet stream
<azonenberg_work> also fwiw if you were going to make such a big chip
<azonenberg_work> what you'd probably do is fill like a 12" wafer with a giant interposer
<azonenberg_work> Then put known-good xcvu+ logic dies onto it
<azonenberg_work> That way your P&R db only has to handle the occasional SLL that doesn't work
<azonenberg_work> And routing within each xcvu+ module is normal
<sorear> ok my timing is a bit off
<sorear> i have handled the board on the lower left
<sorear> on the top board, the heatsink and fan look by perspective to be about a square foot
<azonenberg_work> The VCU118 heatsink is large
<azonenberg_work> But it isn't that big
<azonenberg_work> it looks to be about the size of the pcie x16 connector?
<azonenberg_work> Which is 89 mm according to the pcie spec
<azonenberg_work> or a 3.5 x 3.5 inch heatsink
<azonenberg_work> roughly "beefy x86 CPU" sized heatsink iirc
<azonenberg_work> i've been around vcu118s but dont have one in front of me right now
<azonenberg_work> atm i'm working on a puny little ac701 :p
<whitequark> azonenberg_work: how do you feel about 3.5" floppies
<whitequark> i wonder how much you can stuff on one with say 128b/130b instead of the braindead MFM encoding and also using some proper ECC
<whitequark> unfortunately shingled recording isn't going to happen because of erase heads...
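A quick check of how much raw capacity the standard 1.44 MB format leaves on the table, using the well-known 500 kbit/s MFM data rate and 300 RPM spindle (this doesn't model the media's flux-transition limit, which is what a denser code like 128b/130b would ultimately run into):

```python
# 3.5" HD floppy: 2 heads, 80 cylinders, 18 x 512-byte sectors per track.
bits_per_track = 500_000 * (60 / 300)          # one revolution = 0.2 s
raw_bytes = 2 * 80 * bits_per_track / 8        # the "2 MB unformatted" figure
formatted = 2 * 80 * 18 * 512                  # 1,474,560 bytes = "1.44 MB"

print(f"raw      : {raw_bytes/1e6:.2f} MB")
print(f"formatted: {formatted/1e6:.2f} MB")
print(f"overhead : {1 - formatted/raw_bytes:.0%} lost to gaps/sync/headers/CRC")
```

So about a quarter of the MFM-coded capacity goes to format overhead before any change of line code; better ECC could reclaim some of that even without touching the encoding.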
<gruetzkopf> ooh
<gruetzkopf> scaling that to LS120 disks..
<azonenberg_work> whitequark: havent touched one in years and wouldn't miss it :p
<azonenberg_work> the access speed would still be super slow because the RPM is necessarily low
<azonenberg_work> due to mechanical issues
<azonenberg_work> whitequark: that being said i would laugh if you tried to write a custom FPGA-based floppy drive controller using modern tech
<whitequark> azonenberg_work: guess what am i doing right now
<whitequark> azonenberg_work: this thing uses
<whitequark> actual TTL logic
<whitequark> as in
<whitequark> pullups and open drain......
<azonenberg_work> wait
<azonenberg_work> you're implementing ECC and 128/130 in discrete ttl logic??
<azonenberg_work> how big is this board gonna be???
<whitequark> no i mean
<whitequark> the floppy interface
<whitequark> it's WEIRD
<whitequark> all signals are active low and open drain and have massive pullups and sink capability
<openfpga-github> [Glasgow] whitequark pushed 2 new commits to master: https://github.com/whitequark/Glasgow/compare/60cf959986bf...e82ecabb6219
<openfpga-github> Glasgow/master e82ecab whitequark: access: allow hinting reads for dramatically improved performance....
<openfpga-github> Glasgow/master aa80b03 whitequark: gateware.fx2: replace "non-streaming" FIFOs with "auto-flush" FIFOs....
<travis-ci> whitequark/Glasgow#89 (master - e82ecab : whitequark): The build has errored.
<Bob_Dole> http://miaowgpu.org/ subset of the GCN ISA, looks like in verilog, but xilinx centric. how pissy would AMD get if you made an actual gpu with it?
<sorear> probably less so than if you did the same thing with mali
<Bob_Dole> I suppose. if it's just being used for very-low-performance-embedded type things.. it wouldn't be competing with AMD's products, but doing the same thing with mali would compete with arm's
<sorear> my impression of miaow is that it's fairly low on productization
<Bob_Dole> just not a lot of options for doing what I want: pair some gpu without an NDA to a risc-v core, and that probably means some DIY solution but doing that without being able to have a lot of reuse of some other thing is probably not going to be viable as a project
<Bob_Dole> just enough to be able to run, say, MATE on it, without being horrifyingly sluggish
<sorear> ah, I was mixing up miaow with one of the other projects, miaow seems a bit more mature but not "drop it in to your project" ready
<Bob_Dole> yeah. it's got the graphics stuff stripped out, and it's xilinx centric
<Bob_Dole> and is meant for a narrow range of figuring out compute stuff.. BUT, that it has a lot done means it has some advantages to starting from scratch
<Bob_Dole> (I think.)
<sorear> what's the target environment anyway
<Bob_Dole> kind of where smarttops were at.
<Bob_Dole> I'm a bit worried by how prevalent javascript has gotten vs my last trials of lower speed cpus for that kinda role. A 400mhz UltraSPARC IIi was fast enough, with only the 8MB Rage II+DVD only supporting 800x600, and video playback being untenable being the major drawbacks for me on it then
<sorear> i have a big advantage here in that i can't stand video
<Bob_Dole> I rarely watch it
<Bob_Dole> but I had had a pentium mmx run video smoother than that system somehow..
<Bob_Dole> my pentium mmx is now Gone, parents lost the thinkpad, so I can't test that anymore, but I somehow image the UltraSPARC is a better example of how a RISC-V would turn out.
<Bob_Dole> s/image/imagine/
<Bob_Dole> but having an x86 coprocessor for handling shit that doesn't work right is An Idea.
<Bob_Dole> I should buy a new Super Skt7 mobo and see if I can't find my 500mhz K6-2 for more Testing.
<SolraBizna> sometimes I want to make a GPU
<SolraBizna> then I remember that I don't really understand how modern GPUs work, and that I'm bad at math
<SolraBizna> then I don't want to anymore
<Bob_Dole> hi
<Bob_Dole> SolraBizna, look at the price of Socket 7 motherboards. look at that datasheets exist for socket7 cpus, and then at the performance of K6-2s. I think there might be a Product there.. if 66mhz is something you might consider touching.
<Bob_Dole> and I do some market-research first
<SolraBizna> it's a little bit too AC for me
<Bob_Dole> ...50?
<SolraBizna> technically the NTSC stuff was too AC for me, honestly not sure how I muddled through it
<Bob_Dole> well, here, you have many Smart people. and you have my money.
<sorear> azonenberg_work: it occurred to me last night that while $needsaname isn't the most useful for sieving, it's great for block Lanczos, could do the matrix step of the current open RSA record (768-bit) in a couple days
<azonenberg_work> sorear: nice
<azonenberg_work> serious q btw... do you have any plans to actually build the thing? :p
<azonenberg_work> budget wise i mean
<sorear> if the project gets that far along and I find work, the budget is serious
<balrog> what's keeping rsa-1024 from being publicly cracked sooner? :)
<sorear> moore's law and the size of academic budgets
<sorear> the sha1 break was a bit of an anomaly, that would have been about $1M without google's subsidy
<sorear> google *probably could* demonstrate a rsa1024 factorization now but they haven't, why?