<whitequark> awygle: I'm having a *really* hard time driving a HD44780 here
<whitequark> none of it makes any sense
<whitequark> 400us timings work, 600us don't, 800us work
<whitequark> strongly suspecting signal integrity
<whitequark> especially so because I seem to have setup/hold violations even though I have a factor of 1000 headroom over the datasheet
<whitequark> I add a space and all subsequent output becomes fucked
<openfpga-github> [Glasgow] whitequark pushed 3 new commits to master: https://github.com/whitequark/Glasgow/compare/adac2ce5972c...f5dfbfaac502
<openfpga-github> Glasgow/master f5dfbfa whitequark: Improve program-ice40 applet description.
<openfpga-github> Glasgow/master d5c1156 whitequark: In program-ice40 applet, use math.ceil to calculate timings....
<openfpga-github> Glasgow/master 5b5a7c3 whitequark: Implement default GlasgowApplet.{add_arguments,build,run}.
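(A minimal sketch of the ceil-based cycle calculation the d5c1156 commit above refers to; the sys_clk_hz parameter and the 30 MHz default are my assumptions for illustration, not the actual Glasgow applet code. Rounding up matters because truncation can shave a cycle off an already-marginal delay.)

```python
# Sketch: converting HD44780 delay requirements into FPGA clock cycles,
# rounding up so a marginal delay is never shortened by truncation.
# sys_clk_hz is a hypothetical parameter name, not the real applet's.
import math

def us_to_cycles(delay_us, sys_clk_hz=30e6):
    """Number of clock cycles needed to wait at least delay_us microseconds."""
    return math.ceil(delay_us * 1e-6 * sys_clk_hz)

# e.g. the ~37 us execution time of most HD44780 commands:
print(us_to_cycles(37))   # 1110 cycles at 30 MHz
```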
<openfpga-github> [Glasgow] whitequark pushed 2 new commits to master: https://github.com/whitequark/Glasgow/compare/f5dfbfaac502...b1e301a2a2f4
<openfpga-github> Glasgow/master b1e301a whitequark: Fix IOPort.__getitem__ invocation without explicit stop index.
<openfpga-github> Glasgow/master d163614 whitequark: Reduce on-FPGA FIFO depth to 128 to improve timing closure....
<openfpga-github> [Glasgow] whitequark pushed 1 new commit to master: https://github.com/whitequark/Glasgow/commit/3a542a36791459cef5d47efad015c922194e97be
<openfpga-github> Glasgow/master 3a542a3 whitequark: Add hd44780 applet.
<openfpga-github> [Glasgow] whitequark opened issue #56: Implement a self-test routine checking for solder bridges https://github.com/whitequark/Glasgow/issues/56
<openfpga-github> [Glasgow] whitequark opened issue #57: Allow retargeting applets to different ports and pins https://github.com/whitequark/Glasgow/issues/57
<openfpga-github> [Glasgow] whitequark opened issue #58: Rework arbiter so that applets can request and configure FIFOs https://github.com/whitequark/Glasgow/issues/58
<rqou> ugh fucking abc is fucking broken fucking again
Bike has quit [Quit: Lost terminal]
<rqou> oh right, now i remember why it's force-compiled as C++
<rqou> ucb plz 2 pay someone to apply software engineering to abc kthx
<rqou> so yeah, building abc as "C or C++? why not both?!" doesn't work anymore
<rqou> at least on win64
<rqou> oh what
<rqou> that didn't fix the problem?
<cr1901_modern> >ugh fucking abc is fucking broken fucking again
<cr1901_modern> On Windows only, no doubt?
<cr1901_modern> abc_broke++;
<rqou> yeah idk why
<rqou> some issue with C vs C++ symbols
<rqou> i don't want to debug this since it's not my fault :P
<cr1901_modern> Well, appveyor claims it's passing
<rqou> well, ¯\_(ツ)_/¯
<rqou> their build must be different somehow
rohitksingh has quit [Quit: Leaving.]
<rqou> god yosys's makefile is a giant mess
<rqou> (mostly surrounding abc and windows)
X-Scale has quit [Quit: HydraIRC -> http://www.hydrairc.com <- Po-ta-to, boil em, mash em, stick em in a stew.]
<rqou> anyways, someone with sufficient clout really needs to force alanmi to just migrate abc to c++11 or whatever
<rqou> or someone just needs to write a brand new aig/mig tool (cc awygle? :P )
<eduardo_> rqou: can the two of us have a conference call in the next two days?
<rqou> er, about what?
<eduardo_> open source place and route software
<rqou> hmm, ok
<rqou> email me? rqou@berkeley.edu
<eduardo_> would now be suitable too?
<rqou> no, it's pretty late already and I'm tired
<eduardo_> ok.
<rqou> better is probably some time in the morning my time (so evening your time)
<rqou> 11 am my time tomorrow? (so 20:00 your time?)
<rqou> aka in ~11 hours
<eduardo_> sent you two proposals by email.
<cr1901_modern> https://twitter.com/cr1901/status/997732683022323712 I guess I shouldn't be surprised, but... don't xfer files between two machines on wifi or this happens?
<sorear> Or what happens? idgi
<cr1901_modern> The two machines interfere w/ each other and my xfer speed drops to 1MBps between the two machines
<whitequark> awygle: I'm now completely certain it's SI.
<whitequark> making the bus longer makes the display glitch less.
<whitequark> and in fact, depending on which bus lines I make longer, I can tell which ones stop interfering with each other by the erroneous command it executes
<Ultrasauce> sounds like a job for series resistors? make that breadboard even messier
<whitequark> yep
<pie_> just hypothetically, how much sense could it make to stick to foss and use MOAR ICE40s vs switching to closed source with a bigger fpga
<pie_> hm. i guess that would give harder timing constraints
<pie_> and more routing bottlenecks
<jn__> i don't think automatic multi-chip P&R is currently supported, at all
<jn__> which means: (a) keep your design small enough for one ice40, or (b) partition your design explicitly and deal with inter-chip busses, or (c) automatic multi-chip P&R sounds like an interesting research topic
<jn__> or (d) wait for 7-series tools :)
* awygle emerges from his long slumber
<awygle> whitequark: yeah that's SI all right
rohitksingh has quit [Quit: Leaving.]
* pie_ plays windows startup sound for awygle
user10032 has joined ##openfpga
m_t has joined ##openfpga
<rqou> ping daveshah
<daveshah> rqou: hey
<rqou> in the ice40, do the output wires not show up as neighbor inputs in adjacent IO tiles?
<rqou> *output wires of IO tiles
<rqou> they appear to only show up as neighbor wires in the adjacent logic tiles?
<rqou> this is for my ongoing attempt to see if i can cram rgmii into an ice40
<rqou> i wanted to cram a clk pin into a certain io tile and then the rgmii pins into the adjacent io tiles
<rqou> but it seems there's no way to route the clock signal like that
<daveshah> I think they do appear as logic_op_xxx
<rqou> it only has logic_op_{bnr,rgt,tnr}
<rqou> no way to get the outputs from io tile 0,19 or 0,17
<daveshah> ah, sorry didn't read your question
<daveshah> properly
<rqou> so if i want to play with very aggressive optimizations the clock inputs and data inputs need to be 4 tiles away?
<rqou> route via a span4?
<daveshah> yeah
<daveshah> that's what you'd have to do
<rqou> wow so many gotchas
<daveshah> although I think the span4s might stop in the middle too
<daveshah> I'm not sure of the exact routability though
<rqou> stop in the middle?
<daveshah> be accessible in the tiles they go through
<daveshah> not just the tiles 4 apart
<daveshah> given all the caveats with connectability, might be worth just writing a python script to check
<rqou> ooh right
<rqou> i forgot about that
<daveshah> I don't actually know the routing of the IO tiles that well
<rqou> the way clifford describes the span wires always confuses me
<rqou> because you can actually "get on/off the span wire" in the middle
<daveshah> yeah
<rqou> not just in the cells where they terminate
<daveshah> yep. the ECP5 is different for example, the span wires in that connect to the ends and the centre only, not any other
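(The "python script to check" mentioned above could look roughly like the sketch below. It assumes the icebox module from Project IceStorm and its iceconfig().setup_empty_1k() / tile_db(x, y) interface and entry layout as I remember them, and the tile coordinates are just a guess from the conversation; verify against icebox.py before trusting the output.)

```python
# Rough sketch: list which source/destination wire pairs a tile's routing
# and buffer muxes expose, filtered by a substring such as "neigh_op" or
# "span4". The icebox API details used here are assumptions from memory.
import icebox

ic = icebox.iceconfig()
ic.setup_empty_1k()            # hx1k; use setup_empty_8k() for the hx8k, etc.

def reachable_pairs(x, y, needle):
    pairs = set()
    for entry in ic.tile_db(x, y):
        # buffer/routing entries look like [config_bits, kind, src, dst]
        if len(entry) >= 4 and entry[1] in ("buffer", "routing"):
            src, dst = entry[2], entry[3]
            if needle in src or needle in dst:
                pairs.add((src, dst))
    return sorted(pairs)

# e.g. what an edge IO tile can reach via neighbour tracks:
for src, dst in reachable_pairs(0, 18, "neigh_op"):
    print(src, "->", dst)
```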
<rqou> i'm increasingly curious how the ice40 works at a silicon level
* rqou looks around for people pointing fingers at me
* awygle j'accuse
rohitksingh has joined ##openfpga
<bitd> Absolutely breaking my head over what approach is best in general for place-and-route.
<rqou> hmm daveshah: do you know if arachne-pnr knows about neighbor tracks and prioritizes them?
<rqou> first of all, are they faster than spanNs?
<daveshah> rqou: it knows about them, because they're in the db but I don't think it specifically prioritises them
<rqou> bitd: try simulated annealing :P :P :P
<bitd> I've got some handle on most of the papers in the wiki, but it still feels like yet another attempt this way
<rqou> bitd: try getting a PhD :P
<bitd> You guys mind if I take a different run at it?
<rqou> what do you mean?
<bitd> I've got some friends at the university who are working on CGRA scheduling, placing and routing.
<bitd> 'Cause it's just a forest of information, I think it's best to get some expert advice.
<bitd> Not trying to sound like a critic here, but it feels like there are a lot of unexplored options.
<rqou> there probably are
<bitd> :)
<rqou> er, daveshah: where do i find delay information for neighbor tracks?
<daveshah> rqou: delays are based on cells rather than wires specifically, if you look in the timing data there are hopefully some neighbour-related buffer cells
<rqou> yeah, i don't see any :P
<rqou> is nobody else trying to aggressively push the timing like this?
<bitd> awygle did most of this research, correct?
<daveshah> rqou: no
<awygle> bitd: in the sense of finding a bunch of papers, yes
<daveshah> icetime only has the information from iCEcube to work from anyway, and that used pretty crappy STA
<daveshah> no load model or anything
<bitd> Did you have full academic access?
<rqou> use sci-hub? :P
<awygle> But I had a very specific focus at the time. I was looking for parallel approaches without regard to QoR
<rqou> otherwise i still have a way to get UCB's access
<awygle> bitd: I had IEEE access
<bitd> I might just be misunderstanding the approach of openfpga, but in my mind, a good placement and routing algorithm is at the heart of all of it.
<rqou> yeah, we're working on it, slowly
<rqou> i just finished the "acquire another fancy piece of paper" thing which frees up a lot of time
<bitd> Same >.>
<rqou> O_o another "academic" here :P
<bitd> Just an MSc.
<rqou> yeah same
<q3k> you and your fancy credentials
<rqou> not azonenberg
<rqou> except mine technically isn't an MSc
<rqou> q3k: no PhD like azonenberg? :P
<q3k> i have the self-appointed 'hippie dropout' degree
<bitd> EE with a specialisation in Systems on Chip.
<awygle> bitd: it's a chicken and egg problem but with N chickens and M eggs
<awygle> The ice40 is so small that SA is fine
<awygle> But why reverse a bigger FPGA if you can't place and route it?
<awygle> And there's a whole ecosystem of things we don't have yet
<awygle> We are inevitably going to build a bunch of terrible tools before we figure out what good ones look like
<awygle> But then people want to fix the terrible tools, and that soaks up time
<egg|zzz|egg> awygle: chicken and me?
<awygle> So.... Yknow
<awygle> egg|zzz|egg: your irc alerts must be exhausting
<bitd> I understand awygle :)
<egg|zzz|egg> awygle: eh, "double" and "integration" have far more false positives than egg
<awygle> Integration is the one I was trying to remember
<rqou> oh btw egg|zzz|egg since you're here can i annoy you with the question that digshadow tried to ask you a few months(?) ago?
<egg|zzz|egg> rqou: the optimization question? I'm incompetent, forward to bofh_
<rqou> not exactly an optimization question
<egg|zzz|egg> if you have more enumerative or eggstremal combinatorics questions throw them my way though
<rqou> i'll try to ask it again and you can tell me if i have to ask bofh_? :P
<egg|zzz|egg> You probably have to ask bofh_, but you can ask it to bofh_ again :D
<rqou> anyways, the question as i finally understand it (after asking digshadow irl) is:
<rqou> given a not-necessarily-square matrix A known symbolically
<rqou> we want to find x such that Ax >= b where b is a set of measured numbers
<rqou> obviously this can be solved by just setting x to infinity, so...
<rqou> we try to add some kind of (what AI/ML people would call) regularization term
<rqou> i.e. find the x with the smallest |x| such that Ax >= b
<rqou> but "somehow" adding this constraint does not appear to be good enough when A is singular
<rqou> so the idea shifted to: "can we somehow do a change of variables and get an A' and x' such that A' is 'less' singular?"
<rqou> so then the question seems to have shifted to "how can we determine such a change of variables?"
<rqou> afaict we want "express the null space of A as a set of basis vectors that are somehow in 'a nice form' whatever that means (i'm not sure yet)"
<rqou> egg|zzz|egg: so, are we going about it in the right way or are we totally lost?
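(One standard way to pose the problem as stated above — Ax >= b taken entrywise, with the smallest |x| — is as a plain LP with auxiliary variables t bounding |x| elementwise. The sketch below uses scipy.optimize.linprog purely as my choice of solver, and the toy A and b are made up; it is not a claim about what digshadow's code actually does.)

```python
# Sketch: minimize ||x||_1 subject to A @ x >= b (entrywise), posed as an LP
# via auxiliary variables t with -t <= x <= t.
import numpy as np
from scipy.optimize import linprog

def min_l1_subject_to(A, b):
    m, n = A.shape
    # decision vector z = [x (n entries), t (n entries)]; objective sum(t)
    c = np.concatenate([np.zeros(n), np.ones(n)])
    # A @ x >= b                      ->  -A @ x <= -b
    # x - t <= 0  and  -x - t <= 0    (i.e. |x_i| <= t_i)
    A_ub = np.block([
        [-A,          np.zeros((m, n))],
        [np.eye(n),   -np.eye(n)],
        [-np.eye(n),  -np.eye(n)],
    ])
    b_ub = np.concatenate([-b, np.zeros(2 * n)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n] if res.success else None

# toy data, not from the real problem:
A = np.array([[1, 1, 0], [0, 1, 1]], dtype=float)
b = np.array([2.0, 3.0])
print(min_l1_subject_to(A, b))
```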
<egg|zzz|egg> rqou: this is either optimization or linear algebra, either way ask bofh_
<rqou> lol ok
<rqou> hopefully at least this explanation was more helpful?
<rqou> for "wtf are we trying to do"
<egg|zzz|egg> also Ax >= b irks me, that's not something you can do to vectors >_>
<egg|zzz|egg> I assume it means entrywise
<rqou> oh right
<rqou> um
<rqou> yeah
<egg|zzz|egg> but at that point you lost a lot of nice linear algebra
<rqou> i know
<rqou> the matrices aren't symmetric either so it's not an LMI
<egg|zzz|egg> I thought some things needed to be integers last time, did that vanish?
<rqou> the matrix A contains only integers
<rqou> so the 'nice way' of expressing basis vectors should preferably only contain integers
<rqou> but x, b are not integers
<bitd> Sounds like a linear programming problem to me.
<rqou> hmm i don't see how?
<rqou> oh yeah, i mean, i guess
<rqou> afaik the problem is the "'somehow' minimizing |x| isn't sufficient" part
<bitd> The problem definition is still a bit hazy to me >.>
<rqou> also egg|zzz|egg coordinate-free is great until you actually need to turn squiggles on paper into code in the computer :P :P :P
<rqou> bitd: that's because there are two problems
<bitd> One is the definition, the other the problem? >.>
<egg|zzz|egg> rqou: nah, if you want to stay sane while giving it to the computer you *must* know what is sensible and what isn't
<rqou> the original problem is "find x such that Ax >= b with 'appropriate' regularization on x"
<rqou> attempting to solve that gave a new problem
<rqou> "we do not seem to be able to apply 'appropriate' regularization when A is singular"
<bitd> Ah right.
<egg|zzz|egg> rqou: see the current discussion in #kspacademia on trying to figure out wtf the moment of inertia is on a space equipped with an inner product but no orientation
<rqou> the attempt to solve _this_ problem is "can we do a change of variables to get A' and x' that 'work better'?"
<rqou> and the attempt to solve this is "can we express the null space of A as a set of 'nice' basis vectors"
<egg|zzz|egg> rqou: but again you want that to be all integers
<rqou> yeah, the problem is that we aren't really sure what "nice" should mean
<bitd> Nor appropriate.
<egg|zzz|egg> but they're not a field so you're *really* not dealing with vector spaces here
<bitd> Which is where the problem starts.
<egg|zzz|egg> at best a module
<egg|zzz|egg> and then you have your dreadful >=
<rqou> yeah i know
<rqou> the problem isn't super well posed is it? :P
<egg|zzz|egg> rqou: just because you can write one bit of it as a matrix multiplication doesn't make linear algebra applicable to the rest
<bitd> Well thats no issue, talking about it clarifies it.
<egg|zzz|egg> as bitd said, this is probably some sort of linear programming thing, where I'm incompetent; maybe bofh_ can help
<rqou> the final part of the problem "can we express the null space of A as a set of 'nice' basis vectors" is actually still useful without the first inequality part
<rqou> because hopefully we can define 'nice' in a way that is useful for "how do we gather more data?"
<rqou> gathering more data then adds more rows to A and b and hopefully makes A less singular
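(For the "nice, all-integer basis of the null space" piece, one concrete option is to take sympy's exact rational nullspace and clear denominators to get primitive integer vectors. Whether those are "nice" in the sense that helps decide which extra measurements to take is exactly the open question; sympy and the toy matrix are my choices for illustration.)

```python
# Sketch: an integer basis for the null space of an integer matrix A, by
# taking sympy's exact rational nullspace and clearing denominators, then
# dividing out the gcd so each basis vector is primitive.
from math import gcd
from functools import reduce
from sympy import Matrix

def _lcm(a, b):
    return a * b // gcd(a, b)

def integer_nullspace(A_rows):
    A = Matrix(A_rows)
    basis = []
    for v in A.nullspace():                             # exact rational entries
        denom = reduce(_lcm, (int(e.q) for e in v), 1)  # clear denominators
        w = [int(e * denom) for e in v]
        g = reduce(gcd, w, 0) or 1                      # make the vector primitive
        basis.append([c // g for c in w])
    return basis

# toy example: a singular integer matrix
print(integer_nullspace([[1, 2, 3],
                         [2, 4, 6]]))   # e.g. [[-2, 1, 0], [-3, 0, 1]]
```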
<egg|zzz|egg> rqou: so this seems to be https://en.wikipedia.org/wiki/Covering_problems
<rqou> so unlike the traditional LP forms we don't have c^Tx
<rqou> we have x^Tx
<rqou> oh wait
<rqou> maybe we do have c^Tx
<rqou> idk, i didn't actually work on this
<rqou> if we have c^Tx then i can totally see how that can have problems
<egg|zzz|egg> x^Tx is the (squared) 2 norm
<rqou> ok yes you can probably use the 1 norm
<rqou> since it's finite-dim
<rqou> wait, i'm not using a matrix norm
<rqou> this is just a vector norm so whatever
<egg|zzz|egg> yeah
<egg|zzz|egg> but still
<egg|zzz|egg> the 1 and uniform norms are your friends
<egg|zzz|egg> the others less so
<rqou> i'll keep that in mind for when i need to do "real" linear algebra
* rqou mumbles "all norms are equivalent in finite dimensions"
<egg|zzz|egg> all norms are equivalent, some norms are more equivalent than others (cc bofh_)
<egg|zzz|egg> rqou: so then you have a covering ILP problem, have fun
* egg|zzz|egg incompetent here
<rqou> um, not ILP?
<rqou> x is not restricted to integers
<egg|zzz|egg> but a is?
<rqou> the matrix A is
<rqou> but that's known ahead of time anyways
<egg|zzz|egg> somewhat-I LP :-p
<rqou> there are only restrictions for computing "nice" basis vectors of the null space
egg|zzz|egg is now known as egg|nomz|egg
<bofh_> 19:22:41 < rqou> we have x^Tx
<bofh_> then you have a second-order cone problem, which is somewhat more involved
<rqou> i actually have no idea what was being used here
<rqou> i'm just trying to convert in and out of math-speak :P
<rqou> (not trying to shit on digshadow)
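(And for the x^Tx / 2-norm objective bofh_ is pointing at, the second-order cone version is easy to hand to a modelling layer. cvxpy below is just my pick for illustration, with made-up toy data; it is not a claim about the tooling that was actually used.)

```python
# Sketch: the second-order cone version -- minimize ||x||_2 subject to
# A @ x >= b (entrywise).
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])

x = cp.Variable(A.shape[1])
prob = cp.Problem(cp.Minimize(cp.norm(x, 2)), [A @ x >= b])
prob.solve()
print(prob.status, x.value)
```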
pie__ has joined ##openfpga
pie_ has quit [Ping timeout: 256 seconds]
<bitd> Any tips on where to learn about SOCP bofh_ ?
* egg|nomz|egg gives bofh_ a conifer
<bitd> Oh already found something good :)
<bitd> Maybe not the best thing to try and learn after an all nighter.
<whitequark> !wpn egg
<whitequark> oops
rohitksingh has joined ##openfpga
<bitd> Right, need my REM cycles, too much neural metabolic waste. More on placement and routing tomorrow :)
<bitd> gnight =)
bitd has quit [Quit: Leaving]
<whitequark> awygle: nope
<whitequark> series r didn't help at all
<whitequark> awygle: any ideas how to proceed?
<egg|nomz|egg> !wpn whitequark :-p
egg|nomz|egg is now known as egg|zzz|egg
<whitequark> awygle: oh fuck's sake
<whitequark> 200R, not 20R, is what I've got
<rqou> what are you fixing?
<rqou> signal integrity issues?
<whitequark> yes
<whitequark> well, I think these are signal integrity issues
<rqou> wtf how is tinyfpga so popular?
<rqou> mad marketing skillz
<tinyfpga> XD
<tinyfpga> my voice is gone
* jn__ thought tinyfpga was a product, for a moment, and tried to come up with an explanation
<rqou> well, there is also a product
<jn__> i.e. a small fpga devboard or something
<whitequark> also?
<jn__> ah, one of those people whose nick is their project's name
<rqou> yeah
<rqou> i guess i really don't understand how the "maker movement" works
<rqou> hence why nobody seems to care about xc2par
<jn__> my initial guess was "probably a combination of (a) the name being almost a generic term, (b) cheap"
<rqou> meanwhile xc2par is free :P
<rqou> i just want people to report bugs :P :P
<awygle> there's a car in the parking lot of my building with the license plate "homura"
<rqou> nice
<awygle> I should really watch rebellion at some point
<awygle> whitequark: well 200r isn't ideal but should have dropped the edge rates way down. you say it didn't improve things?
<rqou> daveshah still awake?
<whitequark> awygle: it's broken in a more mysterious way now
<whitequark> so, the read/write and command/data pins are real close to the clock
<whitequark> and i'm observing lotta crosstalk on those
<whitequark> i'm starting to suspect that there were good reasons those IDC cables have ground every other pin
<whitequark> you know the IDE ones
<whitequark> awygle: also the resitsors are on a breadboard
<whitequark> if I'm understanding things correctly, then the resistor is going to be the place with impedance mismatch
<whitequark> so everything before it still goes to shit
<whitequark> awygle: it looks like basically the root cause is that I'm using the 4-bit mode of the display
<whitequark> so if it ever misreads one extra nibble the rest of the communication is wasted
<whitequark> all future ones anyway
<whitequark> this is consistent with garbled characters
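(The 4-bit failure mode described above is a framing problem: every byte crosses the bus as two E-clocked nibbles, high half first, so a single spurious nibble shifts the pairing of everything that follows by half a byte. A toy illustration, with made-up helper names:)

```python
# Toy illustration of the HD44780 4-bit framing problem: each byte is sent
# as two nibbles (high first), so a single spurious/extra nibble permanently
# shifts the pairing of all later nibbles.
def to_nibbles(data):
    out = []
    for byte in data:
        out += [byte >> 4, byte & 0xF]
    return out

def from_nibbles(nibbles):
    # pair nibbles back up the way the controller would latch them
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[0::2], nibbles[1::2]))

msg = b"HELLO"
good = to_nibbles(msg)
bad = [0x0] + good          # one extra nibble latched due to a glitch
print(from_nibbles(good))   # b'HELLO'
print(from_nibbles(bad))    # garbage from here on out
```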
<awygle> hmm couple things here then. breadboard makes it hard to generalize the results. There can be almost arbitrarily bad SI in the breadboard
<whitequark> that's true but I'm not sure if I have a real good way to insert those resistors
<whitequark> put them in a cable inline close to glasgow?
<awygle> I think I'm going to try to strip the trace, if there's room. Might not be though
<awygle> Seems quite tight
<whitequark> yeah...
<awygle> Was the breadboard involved before the resistors?
<whitequark> nope