<cr1901_modern>
Oh, I just learned Verilog has a division operator
<whitequark>
yes. but division is not very useful. at least I imagine it translates into something really obnoxious.
<whitequark>
whereas x.eq(x % 3) should just translate into a comparator and a mux
rohitksingh_work has quit [Read error: Connection reset by peer]
<cr1901_modern>
I guess it would be a LUT-based divider. I wouldn't want to try to reduce a naive combinational divider
<cr1901_modern>
And fair enough. I usually associate division and modulus together
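A minimal sketch of the point above, assuming Migen and made-up module names (not from the log): a modulo-3 counter written with %, next to the comparator-and-mux form it should reduce to.

    from migen import Module, Signal, If

    class Mod3Counter(Module):
        def __init__(self):
            self.count = Signal(2)
            # written with %, like x.eq(x % 3); whether this becomes a comparator
            # and a mux is up to the synthesizer
            self.sync += self.count.eq((self.count + 1) % 3)

    class Mod3CounterExplicit(Module):
        def __init__(self):
            self.count = Signal(2)
            # the comparator-and-mux form written out by hand
            self.sync += If(self.count == 2,
                self.count.eq(0)
            ).Else(
                self.count.eq(self.count + 1)
            )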
<sb0>
do synthesizers grok %?
<whitequark>
sb0: sure
<whitequark>
now I'm wondering what / would translate into
<sb0>
did you test ise/vivado?
<whitequark>
only yosys...
<whitequark>
looks like ise only does power-of-two moduli
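A hedged aside on why power-of-two moduli are the easy case (names made up, not from the log): they reduce to keeping the low bits, so no divider is involved.

    from migen import Module, Signal

    class Mod8(Module):
        def __init__(self):
            self.x = Signal(8)
            self.r = Signal(3)
            # x % 8 keeps only the low three bits, i.e. the same as self.x[0:3]
            self.comb += self.r.eq(self.x % 8)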
<sb0>
well, yosys isn't really usable for a >25MHz project, is it?
<whitequark>
you can feed it to xilinx par
<whitequark>
btw, clifford is now reversing the series 7 bitstream himself. though it will take a while
<cr1901_modern>
sb0: Be nice. I can synthesize 260 MHz signals on IceStorm FPGAs :P
<sb0>
also you didn't implement % in the simulator
<whitequark>
cr1901_modern: yeah, and one flipflop goes as far as 400MHz
<cr1901_modern>
Not sure I can coerce the PLL into generating something that high, but 260 is above the 240 the datasheet says is the max IIRC
<whitequark>
probably not but you can feed an external signal
<GitHub189>
[migen] whitequark pushed 1 new commit to master: https://git.io/vP5bf
<GitHub189>
migen/master b7b92d4 whitequark: sim: add support for %.
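A hedged sketch of how that simulator support for % might be exercised; the module and testbench below are illustrative, not taken from the commit.

    from migen import Module, Signal
    from migen.sim import run_simulation

    class Mod3(Module):
        def __init__(self):
            self.i = Signal(8)
            self.o = Signal(8)
            self.comb += self.o.eq(self.i % 3)

    def testbench(dut):
        for v in range(16):
            yield dut.i.eq(v)
            yield                          # advance one cycle so the write takes effect
            assert (yield dut.o) == v % 3

    dut = Mod3()
    run_simulation(dut, testbench(dut))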
<whitequark>
if my experience with gp4 is any indication then it will work in some fashion at least
<sb0>
bah, don't the IOs have a 250MHz limit?
<cr1901_modern>
sb0: That may be where I got 240 from, but I measured with a frequency counter on my minispartan; 260 MHz is doable. I think the datasheets are conservative.
<sb0>
it's frustrating how slow FPGA IOs are in general, even on xilinx; for example, GPUs have 5Gbps on every pin to drive GDDR5
<cr1901_modern>
you sure that's not a limit of FPGA technology :)? (I am guessing the FPGAs that GPU designers use are $1000+ each)
<whitequark>
the GPUs simulated on an FPGA are just slow
<sb0>
no
<sb0>
and xilinx can do fast IOs (badly), in the transceivers
<sb0>
and we also have $1k+ FPGAs on the next ARTIQ hardware, and they are still slow
<cr1901_modern>
what causes the bad performance? The inherent lowpass effect?
<sb0>
no market drive to implement the fancier IO standards, i suppose
<GitHub174>
[migen] whitequark pushed 1 new commit to master: https://git.io/vP5b5
<cr1901_modern>
sb0: Just throwing this out there, but would you accept a Migen-based RISCV core (yes, pipelined) into MiSoC? I don't plan on doing one right now, but it might be a fun project (against my better judgment).
<sb0>
if it's within 10% of the lm32/mor1kx performance, yes
<sb0>
and well-tested
<cr1901_modern>
Re 1st point: Hah
<cr1901_modern>
Re 2nd point: If I undertook this project I would learn how to use yosys' SAT solver
<cr1901_modern>
to prove that "certain states can't occur"
<cr1901_modern>
Then if those invariants are violated it's "not my problem" :)
<whitequark>
formal verification of a practical CPU core is a nontrivial endeavour
fengling has quit [Ping timeout: 268 seconds]
<whitequark>
to my horror, apparently even ARM doesn't do that
<cr1901_modern>
I'm aware, not even Intel does. Certainly not "the whole core".
<whitequark>
no, I mean ARM doesn't use formal verification at all or almost at all
<sb0>
Intel chips are full of bugs and this keeps getting worse, so obviously they don't
<cr1901_modern>
Ahhh. There was a talk from someone at Intel last year (I'll see if I can find the person)
<cr1901_modern>
They keep being asked "why not do formal verification", and the answer they gave is "not practical except for critical parts"
<cr1901_modern>
so a lot of stuff "just works" (doesn't work)
<sb0>
they'd do better to spend resources on verification instead of on garbage like IME
<cr1901_modern>
I'm fascinated by the IME CPU, but that doesn't change that it's being used for a bad purpose
<whitequark>
IME is probably just some 386 core
<sb0>
nah they use some obscure ISA
<cr1901_modern>
The ARC processor
<cr1901_modern>
it's the successor to the Super FX, arguably the first chip to be called a GPU
<sb0>
ARC
<sb0>
yes
<cr1901_modern>
sb0: FWIW, I don't really have an allegiance among RISC-V, LM32, and OR1K; it's just that RISC-V seems to be winning. All things being equal, I'd rather see LM32 win, have ysionneau's MMU used more widely, and perhaps get a 64-bit version
fengling has joined #m-labs
<whitequark>
lm32 doesn't have a decent LLVM backend
<whitequark>
who even needs such a CPU
<cr1901_modern>
someone could add it. And it has a GCC port (though I haven't tried compiling it in a year or so now).
<whitequark>
well better get onto that
<cr1901_modern>
Prob still works with GCC 6. Idk how that compares to LLVM now
<whitequark>
roughly on-par
<cr1901_modern>
Well, I don't claim to be an expert on this, but seems like GCC devs are digging their own grave then :(.
<sb0>
no RISC-V CPU is usable even after all these years since the project started
<cr1901_modern>
sb0: "winning" == "It's the one everyone talks about and has the most popularity"
<cr1901_modern>
I barely hear anything about or1k on Twitter if it's not m-labs related. I used to hear more about LM32 wrt it being "a fully open CPU" and whitequark's "Why the RPi is unsuitable for education" article but not so much anymore.
<sb0>
this speaks against twitter more than against or1k. so those people are only discussing CPUs that don't work?
<cr1901_modern>
Seems so. I mean it's easier to talk than it is to do. I should know :P.
<cr1901_modern>
And I don't remember who told me this (clifford?) but or1k implementations are getting RISC-V opcode-decoding compatibility as well
<cr1901_modern>
Prob akin to Centaur's dual x86/ARM core that reuses functional units for both
<sb0>
yes, the RISC-V instruction formats look good
<sb0>
it's basically an improved LM32
<cr1901_modern>
Just LM32 has cleaner source ;)
<sb0>
but everything else RISC-V ranges from shitty to underwhelming
<sb0>
so yeah, fap fap fap david patterson, fap fap fap berkeley, but where are the results?
<cr1901_modern>
well, RISC-V specifies an instruction set and what the core should do. Not what the impls can do.
<cr1901_modern>
Is the spec shitty? Or just the impls?
<sb0>
as I said the spec is good
<cr1901_modern>
Well, RISC-V, or1k, and LM32 all read to me as MIPS-likes. Of course, only the latter two have impls that approach that :P
<cr1901_modern>
But the big 2 vendors' CPUs aren't MIPS-like pipelines. In fact, idk wtf they'd be classified as; their block diagrams don't resemble anything I've seen in other pipelined CPUs. I don't even know the pipeline stages of a modern Intel CPU.
<sb0>
if you want to make a decent RISC-V, take the lm32 source and modify the instruction decoder
<sb0>
they're close enough that it should be feasible
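To make the "modify the instruction decoder" idea concrete, a hedged sketch (illustrative, not lm32 code) of slicing the fixed RV32I base-format fields in Migen; a converted decoder would dispatch on these fields instead of the LM32 opcode layout.

    from migen import Module, Signal

    class RV32Fields(Module):
        def __init__(self):
            insn = self.insn = Signal(32)
            self.opcode = Signal(7)
            self.rd     = Signal(5)
            self.funct3 = Signal(3)
            self.rs1    = Signal(5)
            self.rs2    = Signal(5)
            self.funct7 = Signal(7)
            # field positions from the RV32I base encoding
            self.comb += [
                self.opcode.eq(insn[0:7]),
                self.rd.eq(insn[7:12]),
                self.funct3.eq(insn[12:15]),
                self.rs1.eq(insn[15:20]),
                self.rs2.eq(insn[20:25]),
                self.funct7.eq(insn[25:32]),
            ]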
<key2>
well I managed to do it by giving myself the wishbone bus that CSRBank is going to use, and checking there when I strobe in a read, but I am not sure it's the way to go
<whitequark>
why do you want that anyway?
<key2>
to implement a 16550 UART
<key2>
according to the spec, you just need to read the CSR, and every time you read it, it gives you the next value in the FIFO
<cr1901_modern>
CSR doesn't have a read strobe by design
<cr1901_modern>
However, _florent_'s UART core in misoc manages to get around this somehow; it has read/write FIFOs
<whitequark>
it uses the weird FIFO library for that, which isn't tied to CSRs
<cr1901_modern>
Oh I guess then it uses CS and ~WR as a read strobe
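A hedged sketch of the bus-snooping approach key2 describes; the class name, the FIFO depth, and the assumption that the bus looks like a wishbone Interface (cyc/stb/ack/we/adr) are all illustrative, not misoc code.

    from migen import Module, Signal
    from migen.genlib.fifo import SyncFIFO

    class RxDataRead(Module):
        def __init__(self, bus, rx_data_adr):
            self.submodules.fifo = fifo = SyncFIFO(width=8, depth=16)
            read_strobe = Signal()
            self.comb += [
                # a read cycle of the RX data address acts as the pop strobe
                read_strobe.eq(bus.cyc & bus.stb & bus.ack & ~bus.we &
                               (bus.adr == rx_data_adr)),
                fifo.re.eq(read_strobe & fifo.readable),
            ]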
<sb0>
why do you bother with a 16550?
<sb0>
is it 1987?
<cr1901_modern>
Well, TIL it *WAS* invented in 1987
<whitequark>
sb0: why is m-labs.hk hosted on infrastructure you cannot fix when it breaks, anyway?
<key2>
because most OSes already have a 16550 UART driver
<whitequark>
you'll spend more time writing a proper 16550 than you will spend writing a trivial UART driver (or more likely, copying it from somewhere)
<key2>
yeah maybe
fengling has joined #m-labs
mumptai has joined #m-labs
fengling has quit [Ping timeout: 268 seconds]
bcdonovan has joined #m-labs
<bcdonovan>
Is anyone available for some MiSoC questions?
<whitequark>
sure
<bcdonovan>
I am attempting to port MiSoC onto the ac701 board
<bcdonovan>
I've made some progress. I have user_leds and user_dips switches working.
<bcdonovan>
I have MiSoC built with an integrated rom, which produces junk output on the serial port
<bcdonovan>
Any suggestions for tools for debugging the running build?
<whitequark>
that sounds like an incorrectly specified sysclk frequency
<bcdonovan>
I'm basing my port on the kc705 target which has the same 200 MHz system clock
<whitequark>
I would hook up an LA and then see how long the start bit is
<whitequark>
or a scope
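A hedged back-of-the-envelope example of what that measurement buys you (numbers hypothetical): the start-bit width gives the real baud rate, and its ratio to the expected baud is roughly the factor by which the declared sysclk was off.

    expected_baud = 115200
    start_bit_us = 17.4                     # hypothetical width measured on the LA/scope
    actual_baud = 1e6 / start_bit_us        # about 57.5 kBd here
    error_ratio = actual_baud / expected_baud
    print(actual_baud, error_ratio)         # ~0.5 would mean the real sysclk is half the declared clk_freq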
<bcdonovan>
A place to start, thank you. Are there any limitations with the lm32 toolchain with regard to the gcc version? Right now I'm using 4.9.4
<whitequark>
I don't use lm32
<whitequark>
(or gcc for that matter...)
<bcdonovan>
or1k with llvm then?
<whitequark>
yup
<bcdonovan>
what do you use as a simulator?
<whitequark>
I don't simulate
<whitequark>
that said qemu should work
<bcdonovan>
hmm ok I'll look into that
<bcdonovan>
Do you use anything for on-chip debug?
<whitequark>
sorta
<bcdonovan>
you were right on the clock, serial looks good now.
<whitequark>
what did you change?
<bcdonovan>
I was passing a bad clock frequency (clk_freq) to SoCCore
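For reference, a hedged sketch of where that parameter lives; the module path, class usage, and frequency below are assumptions modelled on the kc705-style targets, not the actual ac701 port. The UART divisor is derived from clk_freq, so it has to match what the clock generator really produces.

    from misoc.integration.soc_core import SoCCore

    class BaseSoC(SoCCore):
        def __init__(self, platform, **kwargs):
            clk_freq = int(125e6)   # must match the sys clock the CRG actually generates
            SoCCore.__init__(self, platform, clk_freq,
                             integrated_rom_size=0x8000,
                             **kwargs)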
fengling has joined #m-labs
fengling has quit [Ping timeout: 268 seconds]
<key2>
qemu wouldn't work
<key2>
as the drivers target the milkymist hardware
<key2>
unless someone did a qemu port for misoc?
<whitequark>
you could debug target-independent code there
<bcdonovan>
what about simulating the gateware?
<whitequark>
you can use the migen simulator, but that doesn't currently support simulating verilog parts (the mor1kx or lm32 core)
<whitequark>
you can also compile everything and then simulate with iverilog, which requires a little manual work to make the testbench and provide clocking