clifford changed the topic of #yosys to: Yosys Open SYnthesis Suite: http://www.clifford.at/yosys/ -- Channel Logs: https://irclog.whitequark.org/yosys
<promach> ZipCPU: sorry to repeat the question. What do you think about http://www.clifford.at/papers/2016/yosys-synth-formal/slides.pdf#page=28 ?
<ZipCPU> promach: Okay, I'm looking at the slide ... what's your question?
<promach> "flatten"
<ZipCPU> Yes, I see that, so ... what's your question?
<promach> ZipCPU: I think I should just try it in my makefile
<promach> but how do I reference those internal signals ?
<promach> use submodule_name.internal_signal_name ?
<promach> okay
<promach> ZipCPU: thanks
<ZipCPU> As for the "sat" command, it's a strict subset of what can be done with yosys-smtbmc or SymbiYosys, so ... not all that valuable anymore. The other two commands do more/better.
<ZipCPU> promach: The "flatten" command is important for yosys's internal command flow, but doesn't really add any new or different user features.
<promach> ok
<shapr> This channel got quiet since I took a few months away
<shapr> howdy ZipCPU, how's life?
<ZipCPU> Hello!
<ZipCPU> Life is good.
<ZipCPU> The blog has been ... well received.
<shapr> hurrah!
<ZipCPU> It's also helped to make and open opportunities.
<shapr> ZipCPU: I'm glad to hear that!
<shapr> Do you have more patreon supporters now?
<ZipCPU> Still not that many ... just 11 still.
<ZipCPU> I'm too proud to beg for support. ;)
<shapr> that's up from the last number I heard
<ZipCPU> I did lose a sponsor last month. Some one I'd enjoyed having as part of the "ZipCPU family" ;)
<shapr> things change
<ZipCPU> Yes, they do, in so many ways as well.
<ZipCPU> I think you'll enjoy the top-10 post, though.
<ZipCPU> Those are all good articles, in case you've missed any of them.
<ZipCPU> Some are truly cutting edge as well--such as the formal methods ones.
<ZipCPU> Indeed, my experience with formal has been *so* successful, that I usually formally prove components before ever simulating them now.
<shapr> wow!
<ZipCPU> Rather than simulating first and then proving.
<shapr> I've been distracted from my hardware projects
<shapr> I built the third ergodox keyboard kit I had lying in a box
<gurki> sounds like a hard thing to do for non-trival designs
<ZipCPU> Good ... then I don't have to worry about not posting as much as I've wanted to. ;)
<ZipCPU> I've been doing it for ... *lots* of things.
<shapr> Although I bought 100 time separator nixie dots
<ZipCPU> I think the most complicated thing I've used it for is block exponents.
<shapr> so I need to get onto the vacuum circuitry project
<gurki> well i suppose its easy to prove that some verilog does what you expected it to do by the nature of how verilog works
<gurki> oh. read expected it to do in a way "what its meant to do"
<gurki> "what you expected it to do" is and can be a whole different story
<shapr> I'm a big fan of formal verification, even though "programming language with a type system" is considered academic nonsense in much of my workplace.
<gurki> which imho is the problem about formal verification.
<gurki> does this make any sense? :S
<ZipCPU> gurki: Not yet, can you try again? I am interested.
<shapr> ZipCPU: did you make a blog about whether the ZipCPU is vulnerable to meltdown/spectre? :-D
<ZipCPU> The ZipCPU has no speculative execution. Therefore it is not vulnerable to meltdown or spectre.
<shapr> yeah, just thought it'd be amusing
<ZipCPU> Even with the MMU added in (current work in progress, although it passes FV), it still wouldn't have that vulnerability.
<ZipCPU> So, again, my most complicated design so far takes groups of N samples, coming from M A/D's, and calculates a block exponent for each A/D's channel.
<ZipCPU> The samples are then shifted so as to have a smaller bit width, and the shift amount is done by block.
<shapr> ZipCPU: "This figure hows a “slow timer” in its top line" is that word 'hows' supposed to be 'has' ? (from the debouncing post)
<ZipCPU> That's required having a memory N deep, checking for the largest exponent across all N items, then adding the "solved" exponent back once the next group of N items starts, etc.
<ZipCPU> shapr: Let me pull that up and take a look
<shapr> ok, thanks
<ZipCPU> Ouch, it's hard to believe I misspelled an item *that* badly. ;)
<gurki> ZipCPU: im a bit cautious when ppl state "i can prove this does what i think it does", because you a) usually just prove that it does what your verilog stated, which is a whole different thing from "i think it does x" and b) the verification tends to go to sh... once you have multiple components interacting, possibly without a complete model for its behaviour
<ZipCPU> "This figure shows a slow timer in its top line"
<gurki> eg. a chip. which might be influenced by a gurki talking nearby
<shapr> oh!
<gurki> my point being that this verification has a tendency to be meaningless in the real world
<shapr> I strongly disagree with that
<ZipCPU> gurki: Well, okay, I understand your point, but can I give you a few examples of some things I've found encouraging?
<gurki> i like ppl who strongly disagree with my opinions as i might learn sth from them
<gurki> shapr: can you elaborate? :)
<ZipCPU> Oh, and ... one side point ... for the most part, I've only used the formal verification on small components.
<shapr> sure, though I can't use FPGA anything for my discussion
<ZipCPU> Your comment about using it on a whole design is well taken.
<shapr> My background/interest is in using Haskell (functional programming plus *strong* type system) to build safer/saner software.
<gurki> ZipCPU: sure
<ZipCPU> Example 1) A FIFO. How easy can a FIFO be? I've built several. However, FV discovered a circumstance where I might "read" and "write" from an empty FIFO, and thus get invalid data out of it.
<ZipCPU> This was something none of my test benches ever found, primarily because I just wasn't creative enough to try that combination.
<shapr> hm, I have a bunch of hand-wavy thoughts .. I'd start by saying that software testing is 95% confirming that expected behavior works correctly.
<ZipCPU> Is it a whole system? No, but the subsystem now works better as a result.
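ZipCPU's FIFO anecdote (a simultaneous read-and-write on an empty FIFO returning invalid data, never hit by a directed test bench) can be illustrated with a minimal sketch. This is a hypothetical toy model, not ZipCPU's actual RTL: an exhaustive walk over every read/write command sequence, in the spirit of a bounded model check, finds the corner case a hand-written test bench may never exercise.

```python
from itertools import product

class TinyFifo:
    """Toy FIFO model (hypothetical): depth-2, with no underflow guard."""
    def __init__(self, depth=2):
        self.depth = depth
        self.data = []

    def step(self, wr, rd, value):
        # A buggy design might honor a read even when empty,
        # returning garbage instead of stalling.
        out = None
        if rd:
            out = self.data.pop(0) if self.data else "GARBAGE"
        if wr and len(self.data) < self.depth:
            self.data.append(value)
        return out

def exhaustive_check(depth=2, steps=3):
    """Try every (write, read) command sequence of length `steps`."""
    violations = []
    for cmds in product([(w, r) for w in (0, 1) for r in (0, 1)], repeat=steps):
        fifo = TinyFifo(depth)
        for t, (wr, rd) in enumerate(cmds):
            if fifo.step(wr, rd, value=t) == "GARBAGE":
                violations.append(cmds)
                break
    return violations

# Among the violations is the exact case from the conversation:
# asserting write and read together on cycle 0, while the FIFO is empty.
bad = exhaustive_check()
```

A formal tool does the same enumeration symbolically rather than by brute force, which is why it finds combinations nobody was "creative enough" to write a test for.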
<shapr> The last 5% is usually tests that check a bug that was reported from the users.
<shapr> In my experience, the closer software gets to formal verification, the more whole classes of bugs are removed.
<shapr> Fuzzing just wouldn't work against a formally verified piece of software.
<ZipCPU> Example 2) A prefetch module. When building my prefetch, the FV approach tested all kinds of unexpected, and perhaps even unpredictable combinations of resets and cache clearing.
<ZipCPU> The result found several bugs in code I thought was working.
<ZipCPU> (BTW, if you ever wonder how *bad* a bug can be, I recently posted on the ugliest bug I had to solve.)
<shapr> even if it didn't do exactly what you thought you told it to do, you still would still drastically reduce the bug count
<shapr> ooh, which post was that?
<shapr> oh, got it
<gurki> hm. well i guess the interesting part about this discussion would be "how big a part can be formally proven successfully?"
<ZipCPU> shapr: With a software background such as yours, you will love it.
<shapr> "these patters are enforced" -> patterns
<ZipCPU> gurki: Yes. That's a good question.
<shapr> ZipCPU: do you want me to report typos?
<gurki> your examples strike me as useful ones for dev
<ZipCPU> I was about to give an example of an SDRAM controller --- since I just finished proving that one recently, *and* found a bug that had been lurking within it for years.
<ZipCPU> shapr: (sniff) Sure.
<gurki> oh i would certainly call sdram controller too big
<ZipCPU> No, seriously, shapr, go ahead.
<shapr> heh, ok :-)
<ZipCPU> gurki: I'm thinking of trying to apply a DDR3 SDRAM controller to formal methods as well.
<ZipCPU> The neat thing about FV in those cases is that the chips you are working with have *very* defined interfaces that *have* to be met.
<gurki> how do you model your ram chips' behaviour?
<ZipCPU> FV is therefore both possible and profitable.
<shapr> gurki: multithreaded code in C, etc is not compositional. I prefer to use STM in Haskell because it was proven to be compositional.
<gurki> ZipCPU: problem is that ive worked too long in asic design to trust these models
<shapr> You can stuff one chunk of thread-using C into another, and be nearly guaranteed to learn new things about deadlock/livelock/etc
<ZipCPU> That models an SDRAM chip for me.
<gurki> has haskell left the region of being so slow that it cannot beat singlethreaded c code even when multithreading?
<shapr> So formal verification at the library level saves many hours of my life.
<gurki> sounds like a flame, isnt one. genuinely interested
<shapr> gurki: nope, Haskell is at fastest about 200% execution time of the equivalent single threaded C code
<shapr> but for developer time, Haskell is a huge benefit
<ZipCPU> shapr: You mean ... haskell takes 2x the time, so its 50% slower? But yet yields better and more reliable code?
<shapr> I (or my employer) can buy more/faster computers, but my lifetime is not so easily extensible.
<gurki> ZipCPU: im confused not to find some chip producers lib or sth :S
<ZipCPU> gurki: Heheh.
<shapr> ZipCPU: yup, tuned/optimized Haskell is about 200% the execution cost of C
<ZipCPU> Ok, I think I get it.
<shapr> thoughtpolice likely knows more about that than I do.
<gurki> ZipCPU: which kind of means youre simulating some theoretical chip, which is far from optimal :/
<gurki> because optimal models are almost always far from reality
<ZipCPU> gurki: You are making a fair point.
<shapr> and yes, Haskell yields much more reliable code than C
<gurki> theoretical models*
<shapr> if I lived anywhere nearby I'd show up at a coffee shop and give you an intro
<ZipCPU> If I understand it correctly, you are arguing that just because my code works with my contrived model, doesn't mean that it will work with the actual hardware model in real life.
<gurki> ZipCPU: yes
<ZipCPU> shapr: You live South of me, otherwise I might take you up on the offer--if for nothing else than to have a chance to meet you. ;)
<shapr> gurki: My anecdotal evidence says that the decreased development/maintenance costs of Haskell are worth the increased execution time over C
<gurki> shapr: depends, like all things do
<ZipCPU> gurki: Ok, let me through another monkey wrench into the discussion--even with my "simulator" based code, I *still* had a bug in my controller that I didn't find until using formal methods.
<gurki> shapr: there is no way to deal with the slow-ness of haskell once you entered hpc
<shapr> gurki: how so?
<ZipCPU> shapr: It might be fun to see and discuss that evidence, if we ever do end up in the covfefe shop together.
<gurki> shapr: besides, a skilled c programmer is likely to write equally good/readable/whatever code as some skilled haskell programmer
* ZipCPU leans back, pulls his beard, and waits to watch the fireworks.
<shapr> ZipCPU: You're in Virginia, is that right?
<shapr> haha
<ZipCPU> shapr: Yes.
<gurki> shapr: because - given your numbers stated above - i would need 1280 instead of 64 cluster nodes for my recent simulations
<gurki> shapr: and then i would find out that the networking overhead is so insane that i cannot do any computations
<ZipCPU> gurki: ?? How do you get those numbers?
<ZipCPU> I didn't follow that part.
<gurki> ZipCPU: he stated that multithreaded haskell is 200% of singlethreaded c
<gurki> i cannot vouch for the exact numbers, theyre ofc guesstimations
<shapr> gurki: I'd argue that it's easier and safer to make sweeping changes to data structures and abstractions in Haskell, and that increase in the speed of change is a net benefit
<gurki> but be it 500 or 1000
<ZipCPU> 64*200% = 128, not 1280.
<gurki> my point stands
<gurki> ZipCPU: ive been calculating with the 20 cpu cores per node
<shapr> gurki: would a good counterpoint be links to CloudHaskell and descriptions of its popularity? :-)
<shapr> Some years ago Credit Suisse was using Haskell on a massive cluster for purposes of simulating the global economy
<ZipCPU> Meh ... global warming was the in thing some years ago, a very popular theory, ... we could use some global warming around here right now.
<gurki> shapr: theres a mpi interface for python. i find this equally "useful"
<ZipCPU> Point: popularity doesn't equate to correct or truth.
<shapr> true that
<gurki> ^
<shapr> gurki: my statement was that single threaded Haskell is 200% of single threaded C
<gurki> ofc, when you know how to use a hammer, everything looks like a nail
<shapr> but I would bet multithreaded Haskell has almost the same performance and is *much* easier to write
<ZipCPU> gurki: +1
<gurki> shapr: no.
<gurki> i have to very strongly disagree with this
<gurki> for things not trivial to parallelize haskell is a royal pita compared to the mpi/omp approach used in hpc
<ZipCPU> gurki has the ball at 1st and 10, looks like he may have made a new first down, the judges are coming out to measure.
<gurki> ZipCPU: :D
<ZipCPU> gurki: So, here's the rest of the story with the block exponent example I gave earlier.
<shapr> gurki: for something that is easy to parallelize, I would always bet on having multicore-friendly Haskell code sooner than the same C code
<ZipCPU> When I placed it on the hardware, it still had a problem and didn't work. I'd been lazy, and so I hadn't used a simulator yet. I needed to then get the simulator up and running in order to find the bug.
<shapr> I could point to some of my advent of code solutions from last year, where I use parMap instead of map, and was able to use all the cores on my laptop.
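shapr's point is that swapping `map` for `parMap` is a one-word change in Haskell. A rough Python analogue (a hypothetical sketch, using a stdlib thread pool; note CPython threads won't give Haskell's CPU speedup, the point here is only the shape of the API) looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

def collatz_steps(n):
    """Toy workload: count Collatz steps to reach 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

def par_map(fn, xs, workers=4):
    """Rough analogue of Haskell's `parMap`: same result as map(),
    but items are evaluated concurrently across a worker pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, xs))

serial = list(map(collatz_steps, range(1, 100)))
parallel = par_map(collatz_steps, range(1, 100))
assert serial == parallel  # parallelism changes scheduling, not semantics
```

The design point under debate: in both languages the parallel version must agree with the serial one; the disagreement is only about how much developer effort that guarantee costs.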
<gurki> shapr: if its easy to parallelize i can just write some mpi wrapper and be done in 5 minutes
<ZipCPU> It turned out to be a subtlety that ... I had neither modeled nor expected, associated with resetting one component but not another.
<gurki> literally
<gurki> i do this quite often
<gurki> i have a feeling you compare some skilled haskell guy with some noobish c guy
<shapr> gurki: I'd argue that businesses are more likely to have multicore systems instead of an MPI setup.
<gurki> shapr: eh?
<gurki> shapr: you can use mpi on anything, including some raspi
<gurki> you dont need infiniband n cluster stuff for it
<ZipCPU> Hmm ... sounds like there's been a turnover, shapr's pushing the ball downfield.
<shapr> I haven't worked anywhere that used MPI, or knew anything about it.
<shapr> though I agree it's the best tool in the hpc world
<gurki> shapr: well. 99% of tools in my field do
<shapr> RDMA with infiniband is cool stuff
<gurki> which is why it is interesting to talk to ppl with a different background
<gurki> :)
<shapr> I still have some infiniband hardware lying around
<shapr> from my bladecenter experiment
* ZipCPU searches for "MPI": meeting professionals international, migration policy institute, magnetic particle imaging, mannose phosphate isomerase, ...
<shapr> message passing interface
<ZipCPU> Oh, thanks
<gurki> ZipCPU: the thing for parallelization youd use once openmp isnt enough
* ZipCPU thought he saw a flag on the play, but it seems to have been put back in the ref's pocket.
<shapr> gurki: I agree that it's always possible to write C code that accomplishes the same task sooner than Haskell code on the same system.
<shapr> But I also argue I can get Haskell code that accomplishes that same task on all the cores on the box in a much smaller amount of time.
<ZipCPU> Oooh, a punt, and a turnover.
<ZipCPU> Let's see if gurki signals a fair catch ...
<gurki> shapr: well. like i stated. you seem to be more proficient in haskell than in c.
<shapr> gurki: I think the biggest difference in priorities between you and me, is that I put more weight into developer time required
<gurki> so you compare some good haskell guy with a not-so-good c guy
<shapr> and it sounds like you put more weight into execution time than developer time
<gurki> shapr: i actually think that a good c programmer is faster or at least about as fast as you developing your haskell stuff in c
<ZipCPU> Guys, can I split the question into two parts? HPC and non-HPC? Would the answer be different for each part?
<ZipCPU> For example, I have a timesheet app that could just as easily run 2x slower, and no one would notice.
<gurki> ZipCPU: for non hpc this discussion would go to "who gives a sh..."
<ZipCPU> There is a lot of software out there like that.
<shapr> heh
<gurki> because nobody cares whether your code runs 0.1 or 0.2 seconds
<gurki> and you just use the lang youre familiar with
<ZipCPU> gurki: Yes, but I do care about how much time I spend building it, and how much time it takes me to get it to work.
<gurki> well. shapr thinks hes faster in haskell, i think im faster in c
<awygle> Looks like I'm missing an interesting discussion
<shapr> howdy awygle !
<gurki> ZipCPU: looks like "depends which lang youre more familiar with"
<gurki> ;)
<shapr> I like the crutch of the strong type system in Haskell
<ZipCPU> gurki: +1
<gurki> oh i did not say there arent good things about haskell, dont get me wrong on that one
<shapr> sometimes I'm tired or under-caffeinated, and the type system catches a bunch of problems that happen.
<awygle> HPC is interesting because it is very atypical
<awygle> The security aspects of strong type systems are likely to be much less relevant
<shapr> I also don't know how much refactoring happens in HPC
<gurki> shapr: a LOT
<shapr> gurki: do you write code that you need to reuse or change later?
<awygle> And the "execution time vs dev time" ROI is likewise very skewed from traditional
<gurki> shapr: "can we do this a bit faster pls?"
<awygle> Since a single run is likely to be many hours for complex simulation
<gurki> ^
<shapr> yeah, I can see where dev time is not the constraint for HPC
<awygle> Saving a single op from a tight loop potentially saves you hours
<awygle> Summed over the life of the program of course
<ZipCPU> gurki: You mentioned your background was ASICs, right?
<shapr> gurki: ok, at this point I'd argue that a clash-lang or Ivory approach is how I would use Haskell to build HPC code
<shapr> gurki: do you have a blog or twitter account?
<gurki> ZipCPU: somewhere in between asics and hpc
<gurki> ZipCPU: because you have to run your simulations somewhere :P
<awygle> On the other hand if you make it easier for non-mpi-expert programmers to get engaged, then you may open up new classes of problem domains
<shapr> I like finding interesting people :-)
<ZipCPU> gurki: My own background is (currently) FPGA's. However, I heard at one point from an ASIC developer how he spent hours and hours (days?) simulating his components with random values through his designs to know if they worked.
<awygle> So *shrug* as usual I end up Switzerland
<ZipCPU> gurki: Does this describe your need for HPC at all?
<shapr> Zurich?
<gurki> shapr: unfortunately not, though i kind of feel like i should start one one day
<shapr> gurki: have you seen clash-lang?
<shapr> gurki: please do!
<gurki> shapr: i only know clash in a sense of "haskell -> fpga"
<shapr> right!
<gurki> i didnt know theres some hpc part
<gurki> ah :)
<gurki> i had some real problems using clash, but i have to agree that they might have been my lack of haskell skill for a good part
<shapr> Haskell is used to great advantage in hard realtime systems (embedded systems in garbage trucks!) where the C source is generated from Haskell data structures.
<shapr> I wouldn't use the Haskell runtime in hard or even soft realtime systems.
<gurki> ZipCPU: this kind of covers some of it, yes
<ZipCPU> Ok, well ... what if you had a way of doing that for components, and instead of being random about it you could be 1) more thorough, and 2) more educated about the tests?
<ravenexp> shapr: what 'bout rust?
<ravenexp> it isn't haskell vs. C anymore
* awygle waves his rust flag
<shapr> I haven't dug into Rust yet, though a coworker is starting a lunch time class on that next week.
<gurki> ravenexp: i guesstimate that rust will have no real influence there for a long time
<ravenexp> hell, even ada is an improvement over C
<shapr> I still wish Rust had, or could have typeclasses
<gurki> given that even fortran is still a major player there
<awygle> I wonder if rust has an MPI library that's good
<gurki> *in hpc - ofc
<gurki> non hpc is a whole different world, concept and approaches
<awygle> I wish I could do RDMA from a GPU to another GPU over commodity Ethernet
<gurki> awygle: rdma directly from gpu to gpu exists
<gurki> awygle: some strange implementation of rdma over ethernet exists
<gurki> you could try
<awygle> gurki: only on infiniband and iwarp :-(
<gurki> :P
<shapr> I recently saw RDMA over Ethernet for main memory, dunno about GPU
<gurki> you could buy some used infiniband cards for 20 bucks each or sth
<awygle> shapr: I'd be interested in a pointer if that isn't IB or iWarp
<shapr> yeah, a nearest neighbor infiniband network is easy to setup
<gurki> keep in mind that "old and rusty" in infiniband still is 40gbs
<gurki> *g*
<shapr> with your ~10ns latency guarantee
<awygle> gurki: that significantly raises the deployment barrier for the hypothetical application. I'd like to drop it into existing corporate networks
<shapr> yeah, it was fun to play with infiniband at home
<gurki> awygle: oh well
<ZipCPU> ravenexp: Having worked in both Ada and C, I would disagree. I like C better.
<awygle> Currently living with many copies and increased latency and going through main memory. Oh well.
<shapr> I couldn't afford the mesh in the middle, so a flat neighborhood network was the solution
<gurki> awygle: i found it to be surprisingly good when used with 10gbe cards
<awygle> gurki: yeah Ethernet is fine, I just know it could be *better* :-P
<gurki> awygle: the latency improvement of 10gbe is _huge_ so if youre at 1gbe atm you should try
<shapr> Is there a thunderbolt 10Gig card?
<gurki> isnt thunderbolt pcie-something?
<gurki> x4?
<shapr> yeah
<gurki> so you can just plug in some 10gbe pcie-card
<gurki> (via one or two converters, ofc)
<gurki> but as ppl already do this with gpus ...
<shapr> now I have to go look up the throughput of a pcie lane to see if four is 10GB
<gurki> :D
<awygle> shapr: more than
<awygle> If not 1.1
<awygle> 2.0 is 16Gb, 3.0 is almost 32Gb
<shapr> oh good
<awygle> at x4
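awygle's numbers follow from the per-lane arithmetic: signaling rate times encoding efficiency (8b/10b for PCIe 1.x/2.0, 128b/130b for 3.0). A quick sketch of that calculation:

```python
# Per-lane PCIe throughput: signaling rate (GT/s) times encoding efficiency.
# Gen 1/2 use 8b/10b encoding (80% efficient); Gen 3 uses 128b/130b (~98.5%).
GENS = {
    "1.0": (2.5, 8 / 10),
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
}

def throughput_gbps(gen, lanes):
    rate, eff = GENS[gen]
    return rate * eff * lanes

# An x4 link: Gen 2 gives 16 Gb/s, Gen 3 just under 32 Gb/s --
# both comfortably above 10GbE line rate.
x4_gen2 = throughput_gbps("2.0", 4)
x4_gen3 = throughput_gbps("3.0", 4)
```

So even a Gen 1.1 x4 link (2.5 GT/s * 0.8 * 4 = 8 Gb/s) falls just short, while anything Gen 2 or later clears 10 Gb/s easily, which matches "more than, if not 1.1".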
* awygle has just started working on an open source 10Gbe NIC
<gurki> awygle: theres been some talk at the ccc about writing drivers for them. you might be interested
<gurki> :)
<awygle> gurki: certainly!
<ravenexp> ZipCPU: doesn't spark do for ada what yosys-smtbmc does for verilog?
<ravenexp> I thought you liked FV stuff
<shapr> ouch, between $500 and $700 for a thunderbolt to 10Gbe
<ZipCPU> I'm not familiar with spark, and (I'll admit) my Ada background is about 20yrs old.
<gurki> shapr: id guesstimate you can do it for 200-300
<awygle> shapr: how much for TB to PCIe?
<shapr> ah! good point
<ravenexp> ZipCPU: the good stuff started appearing in Ada 2012
<ZipCPU> I've heard that Ada started going OO. That would be a good improvement from my experiences in 199x.
<ravenexp> OO in ada is an old hat
<ravenexp> formal contracts are new
<shapr> awygle: can't find anything cheaper than $256 on newegg
<gurki> shapr: well. some used 10gbe card is 50 bucks
<shapr> but that's a good suggestion
<shapr> gurki: good point
<ZipCPU> ravenexp: Really? That sounds intriguing.
<shapr> gurki: please start a blog or twitter account
<ravenexp> some guys from AdaCore were pushing it rather hard on /r/programming
<shapr> gurki: and one day we'll have some fun contests solving coding challenges with Haskell vs C and see who gets it done sooner :-D
<ZipCPU> ravenexp: The one reddit I'm not reading. :)
<gurki> shapr: this might actually be fun :D
<shapr> gurki: kanske svensk?
<shapr> ok, probably not Swedish
<gurki> shapr: ingen
<shapr> haha
<gurki> only swedish i can do is via google tho :(
<shapr> heh, no worries
<shapr> I picked up Swedish when I lived there
<shapr> ZipCPU: next time I'm in Virginia, I'll buy you coffee
<ZipCPU> I'll look forward to the opportunity to get together.
* ZipCPU doesn't actually drink coffee, but might be partial to a soda.
<shapr> works for me
<cr1901_modern> sodaware- if my software helped you, but me a soda
<ZipCPU> [sp]
<ZipCPU> Dreamware - What I want someone to sell me.
<ZipCPU> Vaporware - what most research money ends up buying.
<shapr> true that
<ZipCPU> Ok, question for all ... does anyone here have beginner experience with induction?
<ZipCPU> I mean, I've been doing it for quite some time, but I'm wondering if there's someone with fresh experiences of their struggles that can help me formulate ideas for explaining the concept.
<awygle> ZipCPU: I would put myself in that category, more or less
<cr1901_modern> I haven't done much of it since the SPI project b/c I'm spread thin as it is
<ZipCPU> Have you tried induction at all?
<cr1901_modern> I remember the basics of course
* ZipCPU is supposed to be writing a course, with a chapter on the topic.
<ZipCPU> I need to figure out how to explain it simply.
<ZipCPU> awygle: Have you tried it at all?
<ZipCPU> If so, can you tell me about struggles you may have had with it?
<ZipCPU> Let's see ... my list of common FV problems: 1) the initial conditions aren't satisfied, and the tools offer you no help.
<ZipCPU> 2) Nothing satisfies your conditions--again the tools offer no help. (This is an assumption err, not an assertion error)
<ZipCPU> 3) The data after the clock is irrelevant for an always @(posedge clk) if (condition) assert(statement)
<ZipCPU> Hmm ... not enough for a new post yet.
<shapr> ZipCPU: I enjoy inductive proofs, and most of my Haskell functions end up inductive
<shapr> length [] = 0
<shapr> length (x:xs) = 1 + length xs
<gurki> eh? isnt that recursion?
<shapr> sure, but can't it also be induction?
<shapr> base case and inductive case?
<gurki> point.
<awygle> ZipCPU: yes, I have tried it. I can do so this evening - rather not try to have that conversation in "compiling" moments at work
<ZipCPU> No, I don't think so.
<ZipCPU> awygle: Ok, holler at me later then. I'd love to have a conversation with someone still "new" to the concept, to help me get my class notes in order.
<awygle> ZipCPU: will do
<ZipCPU> shapr: Induction requires an if. IF (true for zero), and IF (true for N -> true for N+1) THEN true.
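ZipCPU's two IFs are the standard induction schema, written out:

```latex
\underbrace{P(0)}_{\text{base case}} \;\wedge\;
\underbrace{\big(\forall n,\; P(n) \Rightarrow P(n+1)\big)}_{\text{inductive step}}
\;\Longrightarrow\; \forall n,\; P(n)
```

Both conditions are hypotheses: establishing either one alone proves nothing, which is the "if" shapr's recursive definition doesn't carry on its own.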
<shapr> hm, ok
<ZipCPU> I think I'd agree with gurki on his statement, an assignment in that fashion is recursion not induction.
<ZipCPU> The interesting thing, though, is that this doesn't describe FV induction very well.
<ZipCPU> FV induction is more like, IF (true for clocks 1:N, assuming assumptions and checking assertions) and IF (true for M:(M+N), while assuming assumptions and assertions, implies true for M+N+1) THEN code is formally verified.
<ZipCPU> cr1901_modern helped me understand that better ... (Thanks)
<awygle> ZipCPU: why do you think that those two descriptions of induction are different?
<ZipCPU> Good question. In the first case, you only examine one step and assume the properties are true for that step, and then prove they are true for the next.
<ZipCPU> But in the FV case, you assume N steps are valid, starting with absolutely random values for all clocked registers and inputs.
<ZipCPU> You then trim possibilities down by the assertions AND assumptions over those N steps.
<awygle> I basically see what we do wtih Yosys smtbmc as somewhere between "simple induction" and "strong induction" from my high school math classes
* ZipCPU looks up "strong induction"
<awygle> You must assume N previous states are correct. It is convenient to make N as small as possible because it makes the computational proof cheaper
<ZipCPU> Ok, yes, this analogy seems to fit.
<thoughtpolice> Well, it's arguable that shapr's definition is indeed inductive, but it depends on how you look at it, I guess. Haskellers look at it differently: Every inductive type (such as a list) gives rise to a generalized notion of induction on that type. And every program is a proof, albeit in an inconsistent logic. So you could make the case length is a kind of inductive proof that a list can have a notion of length.
<ZipCPU> Yes, thoughtpolice, you may well be right in this.
<awygle> "simple induction" is N=1, "strong induction" is N=N (and probably not amenable to automated proof?), and FV induction is N=(as small a number as we can get away with)
<thoughtpolice> Traditional induction is 'just' structural induction where the structure is a Peano natural, or whatever. And In a sense the pattern match is doing an 'if' but it's a bit different: you're defining both the base inductive case (if zero), and the follow up case (N+1) in terms of a smaller case, just like normal induction. e.g. if you think of a list 'xs' as N+1, you can split the list into its head and tail -- the number 'N+1'
<thoughtpolice> and the list 'x:xs' ("x on the front of list, xs") are the same
<awygle> Then BMC is your inductive base case.
<ZipCPU> Yes, the BMC is the inductive base case.
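The two halves of the exchange above (BMC as base case, N consecutive assumed-good cycles as inductive step) can be sketched concretely. This is a hypothetical toy design, not a real yosys flow: a 3-bit register that counts 0..5 and wraps, with the assertion that it never reaches 7. The assertion is true but not 1-inductive, because induction starts from *arbitrary* register values; it becomes provable at depth 2.

```python
def step(c):
    """Hypothetical 3-bit register: count 0..5, then wrap to 0."""
    return 0 if c == 5 else (c + 1) % 8

def prop(c):
    return c != 7  # the assertion we want to prove

def bmc(depth):
    """Base case: from reset (c=0), the assertion holds for `depth` steps."""
    c = 0
    for _ in range(depth + 1):
        if not prop(c):
            return False
        c = step(c)
    return True

def k_induction_step(k):
    """Inductive step: start from *arbitrary* register values, assume the
    assertion held for k consecutive steps, and check step k+1."""
    for c0 in range(8):                       # all 3-bit start values
        cs = [c0]
        for _ in range(k):
            cs.append(step(cs[-1]))
        if all(prop(c) for c in cs[:-1]) and not prop(cs[-1]):
            return False                      # induction fails at this depth
    return True
```

At k=1 the solver can place the design in the unreachable state 6, whose successor is 7: a spurious counterexample. At k=2 no assumed-good prefix can end in 6 (nothing steps to 6), so the proof closes, matching awygle's "as small a number as we can get away with".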
<ravenexp> a really beginner question: how do I know the assertions I've formulated are sufficient?
<ZipCPU> ravenexp: Yes.
<awygle> Every time I try to explain a CS concept I imagine people like thoughtpolice, who actually know the underlying theory, twitching in agony at my sloppy metaphors
<cr1901_modern> Nah, thoughtpolice isn't arrogant like that
<awygle> ravenexp: "sufficient" for what, is the real question
<ZipCPU> Maybe not, but I can understand being young/new/etc on a channel and wanting to stand out, but not wanting to be wrong.
<thoughtpolice> Oh, I don't know much beyond what I've taught myself and have metaphors for, trust me. My theory is not rigorous at all. :) Functional programming just gave me a good toolbox for thinking about things like that (and, coincidentally, hardware!)
<ZipCPU> Let's come back to ravenexp's comment.
<ZipCPU> How do you know the asserts/assumptions are *sufficient* to prove your design works? It's a good question.
<awygle> assert(true) is sufficient to prove that true is true, but probably not very useful. You have to define properties before you can prove them
<ravenexp> awygle: for the proof to be useful in practice, for finding all bugs, not just some of them
<cr1901_modern> I'm mulling over something @erincandescent told me... in complex designs, one method to reduce the burden is to assert _one_ property is true without proving the other asserts simultaneously.
<cr1901_modern> After you prove that assert holds, you can use a script to convert that assert to an assume for the rest of your proof
<awygle> This is why bugs remaining after FV are likely to be bugs in the spec or your understanding of the spec
<cr1901_modern> My question is- what are valid properties (asserts/assumes) you can add, delete, or modify in between those steps
<awygle> cr1901_modern: ooooo that's interesting
<cr1901_modern> Clearly, modifying an existing assume is bad
<cr1901_modern> (you render your whole proof invalid for the property you just proved if you modified an existing assume)
<awygle> Right
<cr1901_modern> Adding an assume is probably okay
<awygle> Hmmm yes, that should be a strictly smaller state space
<ravenexp> the specs are not usually formulated in terms of assertions of some properties
<awygle> Similarly I'd say you can't remove asserts
<ravenexp> I have to do it myself
<cr1901_modern> Deleting an assume is _also_ probably okay
<cr1901_modern> wait, no it's not
<awygle> ravenexp: yup. Bummer, isn't it?
<ravenexp> how do I know I should stop inventing asserts that I think might be useful?
<ZipCPU> Wait, adding an assume is okay?
<cr1901_modern> Sure, you've already proven the property holds without the assume
<ZipCPU> Adding invalid assumptions is how I normally end up with non-working proofs.
<ZipCPU> Ok
<ZipCPU> I get it now.
<cr1901_modern> so adding an assume just further reduces the state space of a property you've already proven to be true
<ZipCPU> Does it speed up induction?
<cr1901_modern> I don't know, but from what you've told me, riscv_formal uses tricks like this
<ZipCPU> Oooohhh ... okay, that sounds like a good question for the master then.
<awygle> It should _not slow down_ induction
<awygle> It _may_ speed it up. IIUC.
<cr1901_modern> Okay my big question is: If I have property A, and property B I want to assert true. Is it valid to assume(A) while asserting(B), and then after asserting "B" holds,
<cr1901_modern> then assert(A) while assuming(B), and get the correct answer.
<cr1901_modern> ^ thoughtpolice
<awygle> cr1901_modern: how does one prove a single property without proving the others in a way which takes less time than proving all of them?
<cr1901_modern> awygle: That's something I should probably ask Erin
<cr1901_modern> I don't know
<ZipCPU> Let's try to express this more formally.
<cr1901_modern> Just it was suggested to me
<ZipCPU> If assert(A) is true, then assume(A) shouldn't change anything.
<ZipCPU> If (A) then assert(B) passes, then if (A) assume(B) shouldn't change anything.
<ZipCPU> Hmm ... I'm not sure this is what you want.
<awygle> ravenexp: basically, you don't know. This was my biggest difficulty with getting started in the field
* cr1901_modern has to go eat, then will mull this over
<awygle> You have to make a decision that you've specified all the important properties of the system
<ZipCPU> So, here's what cr1901_modern is missing: the more assertions/assumptions you have, the faster induction will go.
<awygle> If you specify too tightly you end up specifying your verilog
<awygle> If you specify too loosely then you have holes in your spec
<ZipCPU> So, if your design verifies with N properties, it should verify in fewer steps with N+1 properties---assuming the new ones are independent.
<ZipCPU> awygle: Yes.
<ZipCPU> But there's also the problem of computation power to be dealt with.
<awygle> ZipCPU: they may not be, though. That was my train of thought too.
<cr1901_modern> Ahh independent. Like probability :D
<cr1901_modern> I wonder if analogous
<awygle> I would love to hear from cr1901_modern's contact about when that's a useful technique
<ravenexp> awygle: what's wrong with using verilog as the spec, and just proving equivalence?
<ZipCPU> ravenexp, awygle: Care for an example?
<awygle> ravenexp: well, you're just as likely to mess up your asserts as you were the verilog, at that point.
<ZipCPU> Example: A delay. I just recently wrote about creating a delay from memory elements. I then created a proof that did the same thing with a giant shift register.
<thoughtpolice> cr1901_modern: Yes, that's legal to do, you're essentially just assuming a hypothesis during the assert() step. (You often do this in theorem provers too, where you may make something easier to prove by splitting it into two proofs, where one assumes something and proves something else, based on that assumption -- and the other is proof of that assumption).
<ZipCPU> I then proved that the two were equivalent.
<ravenexp> I thought you could use yosys to prove that one sequential circuit is equivalent to another
<ZipCPU> Yes, you can.
<ZipCPU> (See example above)
<ravenexp> so I'd write something dumb and slow, but obviously correct
<ZipCPU> Yes!
<ravenexp> and use it as "the spec"
<ZipCPU> Example 2: A "counter-is-zero" flag can be used to take counter comparisons out of state machine logic. You could assert (counter-is-zero == (counter == 0)), to know if your counter-is-zero value was properly set.
<cr1901_modern> thoughtpolice: My question is, if I prove "B" holds by assuming "A" holds, what if I prove "A" doesn't hold when I assume "B" holds? It's good that the prover caught it, but...
<cr1901_modern> there is no risk I will miss any part of the state space due to a pathological interaction?
<cr1901_modern> (I'm guessing no. If either of them don't hold, then my design is wrong anyway, so there's no point in checking that part of the state space anyway)
<ZipCPU> cr1901_modern: If you prove (!A)||(B), that doesn't mean (!B)||A will be true.
<cr1901_modern> ZipCPU: (!A)||B is "A implies B", right? In the first step of my proof, I didn't prove A implies B, I don't think?
<cr1901_modern> did I?
<awygle> If A->B and B->~A? Is that what you're saying?
<cr1901_modern> Well that's easy enough to run in z3 or do it by hand :P
<ZipCPU> Didn't you start out by assuming (A) and then proving (B)? This is the same as (A) implies (B), or if (A) then (B)
<thoughtpolice> Assumption is just implication, "Assuming X, Y is True" is the same as "X -> Y"
<cr1901_modern> Oh I... didn't know that
<awygle> If you prove B under the assumption of A, you've exactly shown that A implies B. If there are no other assumptions.
<thoughtpolice> So yes, you're asking A -> B && B -> !A as awygle said
<awygle> For assumptions ABC and asserts DEF, you're proving (A^B^C) -> (D^E^F)
<awygle> Right?
<ZipCPU> Yes.
<ZipCPU> Or, equivalently, (! (ABC)) || (DEF) (Removing the ^ as implied, and using || for logic OR)
<awygle> Are those equivalent?
<cr1901_modern> Why isn't it "A -> B && B -> A"?
<ZipCPU> A->B can be equivalently written as (!A)||(B)
<cr1901_modern> I think that's what ZipCPU actually said
<ZipCPU> No, no, ... if A -> B, then the only other conclusion you can easily draw is that !B -> !A
<ZipCPU> A -> B is true means that either A isn't true (i.e. the if's predicate doesn't match), or B (the consequence) must be true.
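[The two implication identities ZipCPU states above (material implication and the contrapositive) can be checked exhaustively over booleans. This is a minimal sketch in plain Python; the `implies` helper is hypothetical, not part of any tool discussed in the channel.]

```python
from itertools import product

# Hypothetical helper: material implication over plain Python booleans.
def implies(p, q):
    return (not p) or q

# Exhaustively check the two identities from the discussion:
#   A -> B  ==  (!A) || B        (material implication)
#   A -> B  ==  (!B) -> (!A)     (contrapositive)
for a, b in product([False, True], repeat=2):
    assert implies(a, b) == ((not a) or b)
    assert implies(a, b) == implies(not b, not a)
print("both identities hold for all boolean inputs")
```

[With only two variables, four assignments settle the question; the same brute-force style scales to the larger claims later in the conversation.]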
<awygle> Ah, so it can. Wonder if I learned that at some point.
<cr1901_modern> Okay I figured out how to formulate my question. I'm prob missing something obvious, but...
<cr1901_modern> Suppose I prove A->B, then I prove B->A >>
<ZipCPU> Ok, so (A)==(B), go on.
<cr1901_modern> tangent: (3:39:49 PM) awygle: If A->B and B->~A? Is that what you're saying? Did you mean "A -> B && B -> A"?
<awygle> cr1901_modern: no, earlier you said "and then I prove A isn't true when B is true"
<cr1901_modern> awygle: Oh right, I was sloppy sorry
<awygle> Or something to that effect
<cr1901_modern> You're 100% right, I was being sloppy with what I'm asking
<cr1901_modern> (tangent end)
<awygle> That's OK, I just found code where a coworker hard coded "2" instead of querying a value, so any sloppiness was minor by comparison
<cr1901_modern> ZipCPU: Wait... (A->B && B->A) == (A == B)?!
<awygle> I would have said A<->B but yes
<cr1901_modern> I screwed up my explanation again
<ZipCPU> cr1901_modern, well, if (A) and (B) are booleans, then yes.
<ZipCPU> If they are more complicated, then by the time you have converted them to booleans, then yes.
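[The surprise at L551 ("(A->B && B->A) == (A == B)?!") is also a four-case check: for booleans, mutual implication is exactly the biconditional A<->B, which is equality. A minimal sketch, reusing a hypothetical `implies` helper:]

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Check: (A -> B) && (B -> A) is the same as A == B for booleans,
# i.e. mutual implication is the biconditional A <-> B.
for a, b in product([False, True], repeat=2):
    assert (implies(a, b) and implies(b, a)) == (a == b)
print("mutual implication coincides with equality on booleans")
```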
<cr1901_modern> Let's back up for a second. When I say A->B, that is shorthand for "I'm doing a two step proof. I want to prove properties A and B hold. First I will assume A holds and then prove B holds"
<cr1901_modern> And then when I say B<-A, that is shorthand for "I've proven that B holds if I assume A holds. Now I will assume B holds and prove that A holds"
<ZipCPU> No, I don't think that's what you want.
<cr1901_modern> thoughtpolice: How is this proof legal if this ultimately "devolves" to A==B?
<ZipCPU> I think what you want to do is prove if (AB) -> C, and you are wondering if you can prove (A)->B first.
<ZipCPU> Or did I get that right?
<awygle> What you want is to prove X->AB. First you prove X->A. Then you prove XA->B
<awygle> Which... I think works
<awygle> But anyway that's what you're describing
<cr1901_modern> This is what I want to do
<ZipCPU> Let's see ... X->AB is equivalent to (!X)||(AB), thus (!X)||(A) must be true.
<cr1901_modern> what did you think I was describing here?
<cr1901_modern> Asking sincerely
<cr1901_modern> B/c I think I did a poor job explaining
<ZipCPU> cr1901_modern: I think I (and others) tried to help and just made the situation more confusing. Do you think we are on track yet?
<awygle> Okay here's a full layout of my understanding
<cr1901_modern> Well, I don't know where the "C" in (AB)->C came from, nor the "X" in X->AB came from
<cr1901_modern> My mental model was basically "can I prove A and B separately, by assuming the other is true, and get the same result as if I proved them both to hold simultaneously"
<cr1901_modern> There's no "X" or "C" in my hypothetical
<ZipCPU> If you can prove (A) is true, then (A). If you can prove (B) is true, then (B). You might also have (A)->(B), but that's not the same as proving (A)&&(B)
<awygle> "I want to prove some stuff (A and B) under some assumptions (X). I will first prove one thing (A) under those assumptions (X, so X->A). Then I'll assume the thing I've proved (A) as well as my earlier assumptions (X), and prove the other stuff (B, so XA->B). Thus, I've proven my original assertions under my original assumptions (X->AB) "
<awygle> Is that correct?
<cr1901_modern> No that wasn't what I had in mind, but it's probably the question I _should've_ asked
<awygle> Oh so you were asking if XA->B and XB->A also gives X->AB.
<cr1901_modern> No, where is the "X" coming from?
<cr1901_modern> I never once mentioned "X" in my problem statement
<cr1901_modern> it may exist, but I don't care about it for the purpose of my example
<awygle> X is the initial assumptions
<awygle> It might be 1 (that is to say, there are no initial assumptions)
<ZipCPU> Hmm ... I might rephrase the question this way ...
<awygle> That would be a hell of a proof tho lol
dys has joined #yosys
<ZipCPU> If I want to prove that my code implies (A&B) hold, then having proved my code implies (A), I should be able to turn off that part of the proof and prove instead that my code implies (B).
<ZipCPU> The only problem is ... you still need to constrain your registers during induction.
<cr1901_modern> Nevermind I need a break from this lol
<awygle> This is the least phone friendly conversation I've ever had on phone Skype
<awygle> Err, irc. Same thing.
<ZipCPU> Ever played checkers?
<ZipCPU> The point is, this illustrates something about induction.
<awygle> Those are some terrible checkers players
<awygle> I am also off to lunch
<cr1901_modern> awygle: Okay, you're right. https://irclog.whitequark.org/yosys/2018-01-05#20973848; This is what I was _trying_ to describe
<cr1901_modern> and for my own peace of mind, I just proved it in z3
<ZipCPU> Sigh. smt2 remains incomprehensible to me.
<cr1901_modern> I also think I did it wrong
<awygle> cr1901_modern: so XA->B and XB->C does give X->AB? that's actually huge
<cr1901_modern> awygle: Don't get your hopes up yet. I screwed up somewhere
<awygle> On an essentially unrelated note, I am really enjoying this conversation.
<ZipCPU> (!X)||(!A)||(B), (!X)||(!B)||(C) -> (!X)||( (!A)||(B) )&&( (!B)||(C) ), yeah, okay, I can see it ... that proves (X)->AB.
<cr1901_modern> Right, that's b/c z3 proves that there exists _any_ input that satisfies the asserts
<cr1901_modern> I need to prove that _all_ inputs satisfy the asserts
<ZipCPU> Heh, okay, here's one to add to the discussion, though. I just proved a design with assert(A) and assert(B) in it. Now that I know my code makes (A) true, I removed assert(A) from the design. The induction engine failed.
<ZipCPU> This is stepsize dependent, though, I'm sure.
<cr1901_modern> This is the correct formula: https://rise4fun.com/Z3/C1cx
<cr1901_modern> thoughtpolice: Not sure how your advice holds in light of this formula :(
<cr1901_modern> (The whole thing should be unsat to prove that all inputs satisfy the equation)
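[cr1901_modern's "the whole thing should be unsat" is the standard validity trick: a formula holds for all inputs exactly when its negation has no satisfying assignment. The plain-Python analogue below mirrors what z3 does (a sketch only; the `valid` helper is hypothetical, and a real solver avoids the exponential enumeration):]

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def valid(formula, nvars):
    """A formula is valid iff its negation is unsatisfiable:
    no assignment makes `not formula(...)` true."""
    return not any(not formula(*vs)
                   for vs in product([False, True], repeat=nvars))

# The contrapositive law is valid: its negation has no model.
assert valid(lambda a, b: implies(a, b) == implies(not b, not a), 2)
# The law of excluded middle is valid too.
assert valid(lambda a: a or not a, 1)
print("negation-unsat check behaves as expected")
```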
<awygle> I want so badly to work some of this out on paper. I am in danger of drawing diagrams in Teriyaki sauce on my lunch with my fork.
<cr1901_modern> awygle: Just use z3. It's not worth it to do it on paper
<awygle> cr1901_modern: I've never used Z3. Although it looks lisp-y enough that I should be able to pick it up.
pie_ has joined #yosys
pie___ has quit [Ping timeout: 240 seconds]
MrBismuth has joined #yosys
MrBusiness has quit [Ping timeout: 265 seconds]
pie_ has quit [Remote host closed the connection]
pie_ has joined #yosys
dys has quit [Ping timeout: 268 seconds]
cr1901_modern has quit [Read error: Connection reset by peer]
m_t has quit [Quit: Leaving]