<rqou>
so that the web can finally have composability for once?
<rqou>
seriously, why are web people particularly bad at this?
<whitequark>
libusb1: if self.__doomed: raise DoomedTransferError('Cannot reuse a doomed transfer')
<awygle>
i am deeply uninformed about the web. i have zero idea how anything works at the level of browsers.
<whitequark>
very gothic
<whitequark>
awygle: it mostly doesn't
<rqou>
I've learned that everything i expected to work a certain way doesn't
<rqou>
e.g. every time i write an algorithm that touches DOM, multiply another `n` into the big-O runtime
<rqou>
(housemate interned at Mozilla and told me how everything i expected was wrong)
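[ed: a minimal sketch of where that extra `n` tends to come from, with invented names: interleaving DOM writes with layout reads forces a synchronous reflow on every iteration, so a loop that looks O(n) does O(n) layout work per step.]

    // Hypothetical example: resize a list of elements.
    function badResize(items: HTMLElement[]): void {
      for (const el of items) {
        el.style.width = "100px";     // write: invalidates layout
        console.log(el.offsetHeight); // read: forces a full reflow right now
      }
    }

    // Separating the write phase from the read phase lets the browser
    // batch everything into a single reflow.
    function betterResize(items: HTMLElement[]): void {
      for (const el of items) el.style.width = "100px";   // all writes first
      const heights = items.map((el) => el.offsetHeight); // then all reads
      console.log(heights);
    }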
<awygle>
the DOM in general seems... odd
<awygle>
as a design
<rqou>
apparently a huge problem with my intuition is that the JS JIT has absolutely no visibility into DOM access
<awygle>
again, deeply ignorant, so not saying it's bad, it's just not intuitive to me
<whitequark>
rqou: lol
<rqou>
so many things i expect to be fast aren't
<whitequark>
that's
<whitequark>
that's sort of true
<whitequark>
it's because you interned at mozilla.
<rqou>
not me
<rqou>
my housemate
<whitequark>
v8 doesn't really have this problem, and this is also why you don't need asm.js on v8
<whitequark>
i mean a special asm.js mode
<rqou>
hmm really?
<whitequark>
regular v8 mode produces native code that's just as fast as mozilla's special-purpose compiler, AND you can just randomly access DOM throughout it
<whitequark>
especially with turbofan
<whitequark>
(but it was pretty good on crankshaft too)
<whitequark>
@mraleph wrote a few articles about this
<rqou>
i thought the whole point of the HolyJIT experiment is to give the JS side more visibility into what's going on on the DOM side
<whitequark>
I can't believe it worked the first time
<whitequark>
both the UART and the async I/O...
<rqou>
anyways, i was told that (at least on Firefox) you can get huge performance improvements by just buffering all the changes that your code wanted to make and then doing all the DOM poking all at once
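[ed: a sketch of that batching, assuming a plain list render; the names here are invented. Building the nodes in a detached DocumentFragment means the live DOM gets poked exactly once.]

    function renderList(container: HTMLElement, labels: string[]): void {
      const frag = document.createDocumentFragment();
      for (const label of labels) {
        const li = document.createElement("li");
        li.textContent = label;
        frag.appendChild(li); // off-DOM: no layout or paint happens here
      }
      container.appendChild(frag); // one live-DOM mutation, one reflow
    }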
<awygle>
the web is video games from 1995
<rqou>
meaning?
<awygle>
idk something about batching up draw calls
<whitequark>
uh
<whitequark>
you have to do that today
<awygle>
yeah i didn't really think that through
<whitequark>
you did not have to do that in 1995 and that killed perf on modern drivers
<whitequark>
also, opengl is horribly designed
<awygle>
yeah it's a bummer
<awygle>
and vulkan appears to be "okay you don't like our API design? do it yourself then"
<awygle>
which is not a _bad_ choice but promises to be a lot of work in the short term
<awygle>
whitequark: should i leave the Glasgow symbol library in the repo and sym-lib-table even though we don't reference anything in it currently?
<awygle>
we probably will again during rev c
<rqou>
yeah, I don't understand how anybody actually learns opengl
<rqou>
you either follow the NeHe tutorials which I'm told are really outdated, or you have a giant pile of tutorials that are 1) draw a triangle 2) draw the rest of the fucking owl
<whitequark>
wagle: sure
<whitequark>
erm
<whitequark>
awygle: sure
<awygle>
lmao
<whitequark>
not sure how irssi mangled that
<whitequark>
rqou: well i hired a guy
<awygle>
that's more or less how people (mis)pronounce my last name
<whitequark>
vulkan is more like opengl construction kit than an api
<awygle>
people were like "I GOT SOME SHADERS AND SOME MATRICES" and i didn't... get it
<whitequark>
you don't program applications against vulkan
<awygle>
i still don't, to be frank, but i'm closer than i was
<rqou>
i think I complained about this before, but my experience is that all graphics tutorials end up just being some combination of linear algebra tutorial, operating systems tutorial, and/or computer architecture tutorial
<rqou>
supposedly the set of people who are already reasonably familiar with all of these topics but know absolutely nothing about modern graphics APIs is very very small
<rqou>
awygle: that's _also_ a linear algebra tutorial
<awygle>
yes, it is
<awygle>
but it explains the connection to graphics
<awygle>
or did to me anyway
<rqou>
but that's not the part i don't know :P
<awygle>
i knew how to multiply matrices, just not _why_ to multiply them in this case
<awygle>
well good for you :p
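[ed: the "why" in a small sketch, helper names invented: transforms are matrices, so they compose by multiplication. You fold a whole chain of them into one matrix up front and apply that single product to every vertex, which is what a vertex shader does with its transform uniform.]

    // 2x2 matrices stored row-major as [a, b, c, d] = [[a, b], [c, d]].
    type Mat2 = [number, number, number, number];
    type Vec2 = [number, number];

    function mul(a: Mat2, b: Mat2): Mat2 {
      return [
        a[0] * b[0] + a[1] * b[2], a[0] * b[1] + a[1] * b[3],
        a[2] * b[0] + a[3] * b[2], a[2] * b[1] + a[3] * b[3],
      ];
    }

    function apply(m: Mat2, v: Vec2): Vec2 {
      return [m[0] * v[0] + m[1] * v[1], m[2] * v[0] + m[3] * v[1]];
    }

    const rotate = (t: number): Mat2 =>
      [Math.cos(t), -Math.sin(t), Math.sin(t), Math.cos(t)];
    const scale = (k: number): Mat2 => [k, 0, 0, k];

    // Compose once, then apply one matrix to every vertex in the model.
    const transform = mul(rotate(Math.PI / 4), scale(2));
    const vertices: Vec2[] = [[1, 0], [0, 1], [-1, 0]];
    const transformed = vertices.map((v) => apply(transform, v));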
<rqou>
all i know right now is "glBegin and magic happens"
<awygle>
you make a list of the vertices, say "this list of vertices i'm going to hand you represents triangles", and pass the list to the gpu
<awygle>
then your shaders execute
<awygle>
you don't use glBegin
<awygle>
(that's the old "immediate-mode" pipeline which doesn't exist after like OpenGL 3 iirc)
<rqou>
nobody seems to have a good tutorial explaining how e.g. glVertex3f or similar eventually turn into commands in a ringbuffer for the GPU
<whitequark>
they don't because the way graphics drivers manage to get any performance out of glBegin() is insanely complicated
<whitequark>
which is why it isn't used anymore
<rqou>
and when i pass the list of triangles, does it DMA into GPU memory? immediately? or does it fetch data from main memory over the PCIe bus? how do i control what i get?
<whitequark>
you pass a parameter for that
<whitequark>
also, not all GPUs even have onboard RAM
<whitequark>
so you get less control, but you do get some
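[ed: one concrete form of the parameter whitequark mentions is the GL usage hint on bufferData; a sketch with invented names. It is a hint rather than a command, which is exactly the "less control, but some" trade-off.]

    function uploadVertices(
      gl: WebGLRenderingContext,
      verts: Float32Array,
      rewrittenOften: boolean,
    ): WebGLBuffer {
      const buf = gl.createBuffer()!;
      gl.bindBuffer(gl.ARRAY_BUFFER, buf);
      // STATIC_DRAW: write once, draw many times. DYNAMIC_DRAW: rewritten
      // often. STREAM_DRAW: write once, draw a few times. The driver decides
      // where the data actually lives based on this.
      gl.bufferData(gl.ARRAY_BUFFER, verts, rewrittenOften ? gl.DYNAMIC_DRAW : gl.STATIC_DRAW);
      return buf;
    }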
<rqou>
right, my complaint is that nobody seems to write tutorials that focus on these details
<whitequark>
that's because no one except driver devs actually knows them
<whitequark>
and it's certainly not tutorial material
<rqou>
everybody seems to really like explaining how linear algebra works or explaining how to link libpng
<awygle>
the answer is "it depends", always
<whitequark>
in general, if you want your game to work well, you hire a guy from nvidia who takes a debug build of their gpu driver and goes through your shitty code
<whitequark>
and then a guy from amd
<whitequark>
and then one from intel, maybe
<awygle>
i have a friend who literally does this for intel
<whitequark>
this is e.g. what valve did when they ported tf to linux
<awygle>
"hi $GAME_STUDIO write your code this way and it will be good"
<rqou>
or at best explaining how to write memory pool allocators (which is most of the Vulkan tutorials I've found)
Bike has quit [Quit: Lost terminal]
<rqou>
hmm, my approach to any topic has always been to try and understand how it works one or more levels of abstraction lower
<rqou>
apparently this is completely the opposite of how "graphics people" work?
m_w has joined ##openfpga
<whitequark>
you probably can't understand how gpus really work
<whitequark>
not because they're intrinsically complicated but more because they're giant piles of hacks and shit
<awygle>
or both, maybe
<whitequark>
like i'm not sure exactly how much code in modern graphics drivers is basically "if(running_game_a) replace_shaders_with_non_shitty_version()" but it's definitely more than 50%
<whitequark>
yes gpu drivers literally ship with fixed shaders for like gta and so on
<whitequark>
also all opengl drivers still check for one specific doom version and return a limited number of opengl extensions to avoid overflowing a stack buffer it has
<awygle>
iiuc this is one of the reasons for vulkan, because it's lower level it's easier to write drivers for (or alternately it pushes more responsibility to engine devs)?
<whitequark>
both
<awygle>
yeah two sides of the coin
<whitequark>
vulkan actually exposes the way the gpu works, or at least does it to a much larger extent than opengl
<whitequark>
but, well, you get to write low-level code against the gpu, and that fucking sucks
<rqou>
so far most of the tutorials on Vulkan that I've found seem to focus on building memory allocators and assume that you generally understand "how graphics works" for all the other bits
<rqou>
which is also useless. if i wanted a tutorial on memory allocators i can just research that topic specifically
<rqou>
nobody seems to bother explaining the actual "how 2 graphics" bits
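[ed: for what it's worth, the allocator half of those tutorials boils down to suballocation: Vulkan limits the number of device allocations you can make, so you grab one big block and hand out aligned offsets yourself. A toy sketch, all names invented.]

    class BumpPool {
      private offset = 0;
      constructor(private readonly size: number) {}

      // Returns a byte offset into the one big device allocation,
      // or null if the pool is exhausted.
      alloc(bytes: number, align: number): number | null {
        const start = Math.ceil(this.offset / align) * align;
        if (start + bytes > this.size) return null;
        this.offset = start + bytes;
        return start;
      }

      // Free everything at once, e.g. at the end of a frame.
      reset(): void { this.offset = 0; }
    }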
<awygle>
i think you don't know what you want to know
<awygle>
or else you're not doing a great job of conveying it
<rqou>
honestly I'd really like a tutorial that starts with "we're going to start by using libpci or whatever to steal access to an Intel/AMD GPU and we're going to poke its ringbuffers directly"
<whitequark>
that's not a tutorial
<whitequark>
that's called documentation on writing graphics drivers
<whitequark>
and uh, go read either mesa source code or i915 support in directfb (or what was it)
<rqou>
you can then build up to programming against an api like Vulkan
<whitequark>
why the fuck would anyone want to actually do that
<whitequark>
you don't have the NDA'd docs for the GPU and you don't have access to all the errata and suboptimal performance cases that will make your stuff work way too slowly
<whitequark>
or break
<rqou>
you wouldn't want to do that in a real program
<whitequark>
then don't call it a "tutorial"
<whitequark>
tutorials explain how to write real programs
<rqou>
it's specifically to go under the hood to explain how the pieces fit together
<whitequark>
they don't
<whitequark>
the GPU driver is all the hot glue that keeps the mess from falling apart
<whitequark>
you really don't want to explain that, in part because when it completely changes in two years, people will still cite your now incorrect info for decades
<rqou>
i mean, that happens anyways (e.g. the NeHe tutorials)
<openfpga-github>
Glasgow/master f44bf51 Andrew Wygle: Use upstream FPGA symbol
<rqou>
anyways, in my "i wish this existed" not-a-tutorial, it would conclude with how to program against Vulkan
<whitequark>
where do you get these weird-ass ideas lol
<awygle>
whitequark: did you select resistors for this by "go to mouser sort by cheapest" or some other method?
<rqou>
i like to see how the pieces work internally
<rqou>
why is that weird?
<whitequark>
awygle: former
<whitequark>
i think i slightly adjusted it to try and stay within one series
<whitequark>
and one vendor
<awygle>
rqou: i more-or-less agree with you (this is why i can't web) but i've resigned myself to pretending vulkan and shaders and etc are as low as it gets
<whitequark>
rqou: it is weird in case of gpus
<whitequark>
well, i guess it isn't if you don't know anything about gpus
<rqou>
i mean, EE/CS courses go into how to build 5-stage RISC pipelines and all the stuff above that like compilers and operating systems (although the explanations may be oversimplified)
scrts has joined ##openfpga
<rqou>
why can't this exist for GPUs?
<awygle>
rqou: think of it like this - you're sitting at IP, going "I want to know what the lower layer is like", but there are N lower layers, all different.
<whitequark>
um, this exists
<whitequark>
courses that go into building toy GPUs, just like there are courses that build toy CPUs
<whitequark>
you know, in-order, scalar...
<whitequark>
you don't go on the web and ask "i want to learn programming simd, now how do i build an out-of-order superscalar cpu this will run on"
<whitequark>
abstractions exist for a reason
<rqou>
sure, and i definitely use abstractions too, but i find that knowing what's under the abstraction is usually helpful
<rqou>
also, i haven't seen a toy GPU course that also talks about the magic in the driver layer or opengl layer
<rqou>
awygle: but at $FANCY_SCHOOL, you really can drill down from TCP/IP to device physics
<awygle>
rqou: yes but there's a big gap between "this is NRZ encoding" and "here's how PPP, 802.3, 802.11, ATM, etc etc etc all work"
<awygle>
my understanding is that GPUs aren't centralized enough so you can learn 802.3 and say "eh good enough"
<awygle>
also, not documented enough so that you can learn anything lol
<whitequark>
most of the magic in driver layer and device layer is NDAd, mostly because vendors are ashamed of producing this abomination
<rqou>
but Intel and AMD have open-source drivers
<whitequark>
lol
<rqou>
they at least work somewhat
<whitequark>
I think in some of Intel's drivers they remove all comments before mainlining
<whitequark>
and possibly rename registers to uninformative names
<rqou>
i thought that was Nvidia
<awygle>
can someone please explain to me why kicad has a table view of fields that's read-only and then a text box for editing instead of just making the table view editable
<awygle>
like why
<whitequark>
pretty sure I've seen that in intel too
<whitequark>
awygle: open-source UI \o/
<awygle>
this has to be a wxWidgets thing
<rqou>
I'm pretty sure not Intel, somebody made a big deal a while back that intel released a ton of GPU documentation
<rqou>
supposedly enough to actually write a new driver from scratch
<rqou>
maybe it was AMD, but then you just need to poke marcan about those data files he managed to find
<awygle>
okay, i've accomplished some stuff for once
<awygle>
whitequark: i believe this is a Rev B now. please take a look and clean stuff up if necessary, as you have time. i'll do another pass tomorrow as well.
<rqou>
whitequark: you actually intend to use usb C?
<rqou>
i assume in a legacy mode only?
scrts has joined ##openfpga
bitd has joined ##openfpga
scrts has quit [Ping timeout: 248 seconds]
rvense has quit [Ping timeout: 256 seconds]
rvense has joined ##openfpga
<whitequark>
why not
<whitequark>
and yeah
bitd has quit [Ping timeout: 265 seconds]
bitd has joined ##openfpga
<whitequark>
huh, so I poked this router on its serial console
<whitequark>
turns out zyxel wrote the entire linux userspace as one big c++ thing with its own shell
<whitequark>
and, shockingly, it's actually good
indy has quit [Ping timeout: 240 seconds]
user10032 has joined ##openfpga
indy has joined ##openfpga
<marcan>
rqou: GPUs aren't *that* complicated once you strip away all the fluff anyway
<marcan>
it's just the usual "this is a big chip with a lot of knobs you have to tweak to get stuff going" stuff, like most SoCs
<marcan>
seeing stuff like the amd microcode disasm was fun though
nurelin has joined ##openfpga
<marcan>
tbh if you're trying to learn how GPUs work and don't mind starting with something a bit retro I would actually recommend looking at the GameCube/Wii GPU
nurelin has quit [Client Quit]
<marcan>
it lacks shaders (instead it has a nominally fixed-function fragment processing pipeline that you could call an up to 16-instruction fragment shader)
<marcan>
and no vertex processing beyond a basic transform or so
<marcan>
but it doesn't have microcode or any of that crap, requires little setup, and is vaguely documented (and you can try to read libogc source code to work it out)
nurelin has joined ##openfpga
<marcan>
and Dolphin emulates it so you can play with it with just a computer
<marcan>
the programming model is similar to how modern GPUs work (ring buffer queue with indirect buffers etc)
<marcan>
(though the *CPU* has a hack they added to optimise pushing words to the buffer, it basically does write-combining in the CPU itself and then bursts stuff out)
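[ed: a toy of that programming model, entirely illustrative: the CPU appends command words to a ring buffer and bumps a write pointer, while the GPU drains from a read pointer it advances on its own.]

    class CommandRing {
      private readonly words: Uint32Array;
      private write = 0; // CPU-owned write pointer
      private read = 0;  // device-owned read pointer (simulated here)

      constructor(sizeWords: number) {
        this.words = new Uint32Array(sizeWords);
      }

      push(cmd: number): boolean {
        const next = (this.write + 1) % this.words.length;
        if (next === this.read) return false; // full: the GPU hasn't caught up
        this.words[this.write] = cmd;
        this.write = next; // on real hardware, a doorbell register write
        return true;
      }
    }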
rohitksingh has joined ##openfpga
<marcan>
you could also try doing some raw libdrm stuff on linux. the kernel basically takes care of memory management, context switching, and low level setup
<marcan>
but then userspace gets full control of the actual command buffers used for drawing stuff on screen
<marcan>
so there's value in learning that chunk of the stack first without having to start with raw pci pokes
<mithro>
daveshah: Have you seen Hamster's DisplayPort core?
<daveshah>
mithro: yeah, that's transmit, right?
<mithro>
Yeap
<daveshah>
I am tempted to do an Rx at some point
<mithro>
Sadly it's VHDL
<mithro>
But I've been slowly trying to convince him to move to Verilog :-P
<awygle>
Oh hey netbsd still exists
<cr1901_modern>
awygle: Still? I used it as my primary OS for a few weeks when 7 came out (2015 around this time) :P. It was usable. I could even run ISE Webpack on it.
<cr1901_modern>
Mainly excited about kernel audio mixing b/c it means I can tell PA to go f*** itself
<awygle>
I forgot about it tbh. I would have listed free, open, and dragonfly.
<cr1901_modern>
the other two I know of are mir and edge. But I couldn't tell you the differences