<GitHub73>
artiq/master 7a10cb8 Sebastien Bourdeauducq: mc: use pc_rpc
<ysionneau>
sb0: basically if I understood correctly, the GENSDRPHY connects real pads (pads.a, pads.cke, pads.ba, etc.) to dfi.p0.address, dfi.p0.bank, etc., then I should directly drive dfi.p0.* from my sdram controller?
<sb0>
yes
<ysionneau>
ok
<sb0>
note that on multiphase PHYs (like for DDR, DDR2, DDR3) you should look at rdphase, wrphase, rdcmdphase, wrcmdphase attributes that tell you on which phase to send a read command, a write command, another command while reading, another command while writing
<ysionneau>
so I can rewrite the ppro.py target to derive from GenSoC instead of SDRAMSoC?
<sb0>
no, you should modify sdramsoc so that the sdram controller is selectable via parameter
<ysionneau>
since SDRAMSoC seems to be very tied to the lasmicon etc architecture
<ysionneau>
ok
<sb0>
then you can split __init__ into _init_lasmi and _init_wb
<sb0>
and the right function is called depending on parameter
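A minimal sketch of the split being suggested, assuming a constructor parameter (the parameter name `ramcon_type`, the class body, and the string values are illustrative assumptions; only the `_init_lasmi`/`_init_wb` names come from the conversation):

```python
# Hypothetical sketch: SDRAMSoC dispatching to one of two controller
# initializers based on a constructor parameter.
class SDRAMSoC:
    def __init__(self, ramcon_type="lasmicon"):
        self.controller = None
        if ramcon_type == "lasmicon":
            self._init_lasmi()
        elif ramcon_type == "wishbone":
            self._init_wb()
        else:
            raise ValueError("unknown ramcon_type: " + ramcon_type)

    def _init_lasmi(self):
        # would instantiate the LASMI-based controller here
        self.controller = "lasmicon"

    def _init_wb(self):
        # would instantiate the simpler Wishbone-facing controller here
        self.controller = "wishbone"
```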
<sb0>
since your PHY only sends one SDRAM command per FPGA cycle, you can ignore rdcmdphase/wrcmdphase
<ysionneau>
humm haven't looked at DDR/DDR2/DDR3 stuff yet
<sb0>
you can just send everything on rdphase, except write commands
<ysionneau>
I just finished having a look at the SDRAM datasheet (the micron one of the ppro)
<sb0>
DDR* won't make any control algorithm difference for you
<ysionneau>
what's the idea behind those *phase signals? or maybe I should read about DDR* stuff
<sb0>
you just need to keep in mind that the PHYs will push multiple commands into a single cycle, and that the PHY has requirements on which phases the read and writes commands are sent
<sb0>
the PHY "phase" is not DDR specific, it's just that the current DDR PHYs happen to use them
<sb0>
a PHY that has phases will send several commands into the SDRAM during a single FPGA cycle (using a clock multiplier)
<ysionneau>
duplicating the clock signal but with a different phase?
<ysionneau>
then you get 2 rising edges during the same "system cycle time"?
<ysionneau>
two or more for instance
<sb0>
and the "phase" is just on which multiplied clock cycle within the system clock cycle a particular command is sent
<sb0>
e.g. if the SDRAM clock is 4x the system clock, there are 4 phases
<ysionneau>
ah ok, it's not just a phase shift, the clock is really multiplied
<ysionneau>
ok
<sb0>
your PHY generates max. 1 command per system clock cycle
<sb0>
you can, in most cases, use any phase for this
<ysionneau>
so all the phases will receive the same command
<sb0>
NO, that would send the SDRAM the same command multiple times
<sb0>
you send the command to one phase and NOP the other ones
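The rule just stated can be modeled in a few lines of plain Python (not migen; the `"NOP"` string stands in for the real DFI no-operation command encoding):

```python
NOP = "NOP"  # placeholder for a DFI no-operation command encoding

def drive_phases(command, phase, nphases):
    """One system clock cycle's worth of per-phase commands: the real
    command on the chosen phase, NOP on every other phase."""
    return [command if p == phase else NOP for p in range(nphases)]

# a read sent on phase 0 of a 4-phase PHY
print(drive_phases("READ", 0, 4))  # ['READ', 'NOP', 'NOP', 'NOP']
```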
<ysionneau>
right ok
<sb0>
the read/write phase requirements come from the realigning of the data burst to the system clock cycle
<sb0>
e.g. on a SDR PHY that has 2x clock multiplication
<sb0>
you have 2 phases
<sb0>
the BIOS will automatically program burst length 2
<sb0>
so that a burst has exactly the duration of a system clock cycle
<sb0>
but then, if you send e.g. a read command on the wrong phase
<sb0>
then you will get half the data in one system clock cycle, and half the data in the next
<sb0>
which is pretty bad :)
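The alignment argument checks out with a few lines of arithmetic: a burst issued on phase p occupies multiplied-clock slots p..p+L-1, and system cycle k covers slots k*nphases..(k+1)*nphases-1. (Constant CAS latency is ignored here, since it delays every slot equally without changing the alignment.)

```python
def words_per_system_cycle(nphases, cmd_phase, burst_len):
    """Count how many data words of a burst land in each system clock
    cycle, indexing multiplied-clock slots from the command phase."""
    counts = {}
    for i in range(burst_len):
        cycle = (cmd_phase + i) // nphases
        counts[cycle] = counts.get(cycle, 0) + 1
    return counts

# 2x clock, burst length 2: a read on phase 0 keeps the whole burst in
# one system cycle; a read on phase 1 splits it across two.
print(words_per_system_cycle(2, 0, 2))  # {0: 2}
print(words_per_system_cycle(2, 1, 2))  # {0: 1, 1: 1}
```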
<ysionneau>
yep understood
<ysionneau>
my controller will be running at system clock, right?
<sb0>
yes
<sb0>
the high speed parts are in the PHY
<ysionneau>
how can I receive the data burst (length 2) then if I'm running twice as slow as the phy?
<sb0>
at some point we'll probably want to use 2x clock multiplication on the ppro too
<ysionneau>
the phy is deserializing and buffering it?
<sb0>
yes
<ysionneau>
ok
<sb0>
the PHY will give you two data words at the same time
<sb0>
just concatenate them and send to bus
<ysionneau>
so I get the illusion of a 2*a bus width
<sb0>
yes
<ysionneau>
sorry 2*dq
<ysionneau>
ok
<sb0>
well, you have DQ signals for each phase, and if you respect the rdphase restriction they will have valid data at the same time
<sb0>
same for writes, you are only required to present valid data on all phase DQ during a single cycle
<sb0>
just concat/slice to make bus words
<ysionneau>
ok :)
<ysionneau>
for 2 phases something like Cat(self.dfi.p0.rddata, self.dfi.p1.rddata) or the other way around
<ysionneau>
oh and, I guess I can do just one FSM and not one per bank like in the bankmultiplexer.py
<sb0>
yes, but use python features to dynamically slice/cat an arbitrary number of phases
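In migen this would be something like `Cat(*[phase.rddata for phase in dfi.phases])` (the `phases` attribute name is an assumption here). The bit-level effect of that Cat can be modeled in plain Python:

```python
def cat_phases(words, width):
    """Model of migen's Cat() over per-phase data words: phase 0 goes
    into the least significant bits, later phases into higher bits."""
    out = 0
    for i, w in enumerate(words):
        out |= (w & ((1 << width) - 1)) << (i * width)
    return out

# two 8-bit phase words concatenated into one 16-bit bus word
print(hex(cat_phases([0xAB, 0xCD], 8)))  # 0xcdab
```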
<sb0>
and yes only one fsm
<sb0>
the parallel fsms are there to fill the multiple phases
<ysionneau>
hum ok
<sb0>
and they require concurrent accesses to different banks, which is what embedded CPUs without DMA do not do
<sb0>
so they are basically wasting fpga resources on ppro
<ysionneau>
to build up the _Operator() object, I used this code http://pastebin.com/5NMuF9xn but I think there is something less ugly; is this optree()?
<ysionneau>
(here I'm talking about hit_logic)
<sb0>
openrow[b] = Signal(geom_settings.row_a) < did you mean openrow.append()?
<ysionneau>
ah yes
<sb0>
I don't get this code. you are ignoring the bank selection signals for bank 0?
<ysionneau>
nope, it's a mistake
<sb0>
the logic you want is basically page_hit = open_row_number[bank_address] == row_address
<ysionneau>
yes
<sb0>
and in the fsm, if has_a_row_open[bank_address] -> if page hit go directly to read/write, otherwise precharge/activate
<sb0>
if not has_a_row_open[bank_address] -> activate -> read/write
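The decision tree described above, as a plain-Python sketch (the real controller would be a migen FSM; function and command names here are illustrative):

```python
def page_hit(open_row, row_is_open, bank, row):
    """page_hit = a row is open in the addressed bank and it is the
    addressed row (open_row_number[bank_address] == row_address)."""
    return row_is_open[bank] and open_row[bank] == row

def access_commands(open_row, row_is_open, bank, row):
    """Command sequence for one access: a page hit goes straight to the
    data command, a page miss precharges and activates first, and an
    idle bank just activates."""
    if not row_is_open[bank]:
        return ["ACTIVATE", "READ_OR_WRITE"]
    if open_row[bank] == row:
        return ["READ_OR_WRITE"]
    return ["PRECHARGE", "ACTIVATE", "READ_OR_WRITE"]
```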
<ysionneau>
yes
<ysionneau>
so you mean I can just write comb += hit.eq(open_row_number[bank_address] == row_address) ?
<ysionneau>
with open_row_number and bank_address being two signals?
<ysionneau>
and it will build the same logic circuit as something like hit.eq(optree("|", [(bank == b) & (open_row_number[b] == row_address) for b in range(banksnb)]))
<ysionneau>
or you were just stating the abstract logic I wanted
<sb0>
you can use a migen Array to wrap a Python list of signals. and then yes, that syntax works
<sb0>
you can also do open_row_number[bank_address].eq(sth)
<ysionneau>
ah cool !
<sb0>
numpy/json incompatibility is annoying. and in addition to that, passing a python list into numpy code will work in a lot of cases, but will break e.g. on list*value, which replicates the data if it's a list and scales it if it's a numpy array
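The gotcha mentioned here in two lines (the numpy half is shown as a comment so the snippet stays dependency-free):

```python
data = [1, 2, 3]
print(data * 2)  # Python list: replication -> [1, 2, 3, 1, 2, 3]
# numpy.array([1, 2, 3]) * 2 would instead scale: array([2, 4, 6]),
# which is why passing a plain list into numpy-style code can break.
```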