<rqou>
you can't put them on e.g. the dishwasher :P
<brizzo>
but your dishwasher has a power profile
<rqou>
we don't know what it is :P
<rqou>
also, even if we did know what the max power was, it still doesn't help when all of us decide to play the vidya games and/or run cuda kernels at the same time :P
<brizzo>
dishwashers won't burn hundreds of dollars of electricity
<brizzo>
assuming it's in the range of overages :P
<brizzo>
i'm a fan of the Kill A Watt as you can actually quantify "it costs $x.xx per hour to run this device"
<brizzo>
"woah my gaming pc uses $4 of power an hour?!"
<rqou>
oh i'm not complaining about the $$$ part
<brizzo>
"budget" hehe
<rqou>
i'm complaining about how sum(power of widgets) > 30A*220V
<rqou>
or whatever it is
<brizzo>
as in, you're overloading the circuit :P
<rqou>
yeah, i take it you weren't here for those previous discussions :P
<brizzo>
hahaha no
<rqou>
highlights: power strip plugged into power strip plugged into extension cord plugged into cheater plug
<rqou>
(btw USA here in case it wasn't clear)
<brizzo>
are the lights on the same breaker?
<rqou>
other highlights: mini fridge, mini freezer, toaster, microwave plugged into a power strip
<rqou>
we're tripping the "main" breaker for our section of the apartment apparently
<brizzo>
lolol
<rqou>
the main breaker is a double-gang 30A breaker, so i'm assuming we have 2x 30Ax110V
<rqou>
but i have no idea what things are on which phase
<rqou>
for all i know everything is on one phase which is why it trips all the time :P
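The overload math being described can be sketched with guessed nameplate wattages — a double-pole 30 A breaker feeding two 110 V legs, with every number below invented:

```python
# Hypothetical numbers only: a double-pole 30 A breaker across two 110 V legs,
# with guessed nameplate wattages for the appliances mentioned above.
BREAKER_AMPS = 30
LEG_VOLTS = 110

one_leg = BREAKER_AMPS * LEG_VOLTS      # 3300 W if everything lands on one phase
both_legs = 2 * one_leg                 # 6600 W if the loads are balanced

loads = {
    "mini fridge": 300,
    "mini freezer": 300,
    "toaster": 1200,
    "microwave": 1100,
    "gaming PC": 800,
}
total = sum(loads.values())             # 3700 W

print(total > one_leg)    # True: trips if it's all on one phase
print(total > both_legs)  # False: fine when spread across both legs
```

Which is exactly the "everything on one phase" failure mode guessed at above: the same loads fit across two legs but overload one.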
<azonenberg>
idk if you were around when i mentioned
<azonenberg>
They sent a tech out to investigate
<azonenberg>
Turns out, it was an electrical fault that had probably been there a while
pie_ has quit [Ping timeout: 268 seconds]
<azonenberg>
There's a T-splitter between the cable coming in from the street, my data modem, and my voice modem
<azonenberg>
installed by the tech who set up the voice service (I got that a year or so after the data service)
<azonenberg>
Apparently there was massive attenuation in the BNC connector coming off to the data modem
<rqou>
why do you need voice service?
<azonenberg>
the voice modem was getting a strong signal
<azonenberg>
Because when you live on an island full of hills, and stuck-up rich people who don't want cell towers getting in the way of their views
<rqou>
ah ok
<azonenberg>
there are many neighborhoods, like mine, where the choices for cell towers are "mainland" and "on the far side of a lot of rock"
<rqou>
my aunt's house has that problem too
<azonenberg>
Both tend to have significant attenuation involved :p
<rqou>
(Arcadia, CA)
<azonenberg>
I have three bars of LTE here and that's good RF weather
<azonenberg>
when there's more rain fade or maybe more load on the tower, it can drop entirely
<azonenberg>
i have been on calls with family, clients, etc and had calls drop entirely
<azonenberg>
or worse yet, the link went down one way (how is that even possible?)
<azonenberg>
i could hear them, and kept talking
<azonenberg>
but they couldn't hear me
<rqou>
you can't just use e.g. Google Hangouts/Skype?
<azonenberg>
Not practical (plus, if nothing else, a reliable 911 service is nice...)
<azonenberg>
and won't work for calling vendors etc that only take inbound stuff via POTS
<rqou>
I thought Google Hangouts/Skype can call POTS numbers for free?
<rqou>
no 911 though
<azonenberg>
the phones have a VoIP-over-wifi feature through tmobile that runs the call over 802.11 then VPNs to the tmobile office and bridges to POTS
<azonenberg>
but i've found it to be unreliable
<azonenberg>
sometimes fails to connect for reasons that are unclear
<azonenberg>
in any case comcast's voice service has been very stable
<azonenberg>
more reliable than the data service, in fact, although apparently that was a hardware fault that's now been corrected
<rqou>
our apartment has no voice service at all
<azonenberg>
We were like that in NY
<azonenberg>
Cell was stable enough that it was practical
<azonenberg>
Here, not an option
<azonenberg>
It's nice to have options other than TCP/IP and ham radio for talking to the outside world
<rqou>
at one point I was looking at doing a custom VoIP thing via a wholesale provider (Anveo Direct)...
<rqou>
and then I discovered how awful and incomprehensible Asterisk is
<azonenberg>
lol
<rqou>
why is PBX software so terrible?
<azonenberg>
Is it really that hard to make something that just runs Speex over TLS between two endpoints, via some routing fabric that resolves phone numbers to endpoint IDs?
<rqou>
yes it is hard, see SIP :P
<azonenberg>
aaaanyway, dependency scan bug fixed
<azonenberg>
And it even comes with a performance boost, lol
<azonenberg>
In addition to not randomly failing to scan files under certain hard-to-predict conditions
<azonenberg>
the fixed code takes 2.95 sec instead of 3.5ish to scan my test codebase for two configs (debug/release), one architecture (x86_64-linux-gnu), single threaded
<azonenberg>
Next up is to improve the caching so i don't re-run the "do we have libfoo? if so define HAVE_FOO" check for every single source file
<azonenberg>
but only once per ISA
<azonenberg>
That should cut another second or so off
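The caching idea just described — probe once per ISA instead of once per source file — can be sketched with a memoized check (names and the result table below are invented, not Splash's actual code):

```python
# Sketch (not Splash's real code): memoize the "do we have libfoo ->
# define HAVE_FOO" probe per ISA instead of running it per source file.
from functools import lru_cache

PROBE_COUNT = 0  # counts how many real probes actually run

@lru_cache(maxsize=None)
def have_lib(isa: str, lib: str) -> bool:
    """Pretend to ask the toolchain for `isa` whether `lib` is installed."""
    global PROBE_COUNT
    PROBE_COUNT += 1
    # stand-in result table; a real probe would invoke the compiler
    return (isa, lib) in {("x86_64-linux-gnu", "foo")}

def scan_defines(isa: str, sources: list) -> dict:
    # every source file asks the same question, but the probe runs once per ISA
    return {src: have_lib(isa, "foo") for src in sources}

defines = scan_defines("x86_64-linux-gnu", [f"src/file{i}.cpp" for i in range(100)])
print(PROBE_COUNT)  # 1, not 100
```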
<azonenberg>
then add parallelism so i can run scan jobs on multiple cores (but not multiple nodes by design... I want all scanning done on a single machine for now to ensure that the config is well defined)
<azonenberg>
As of now I have parallelism working fine (so it seems from casual testing) for multiple build server instances doing *compiles*
<azonenberg>
however, the dependency scanner is special since it wants to run jobs on specific nodes
<azonenberg>
and i haven't yet written the code to tell it that all threads of my test server are indeed running on the same host :p
<rqou>
waiting for the "can't get work done because a computer you didn't know existed is down" bugs :P
<azonenberg>
rqou: So the basic flow for a build environment in Splash is as follows
<azonenberg>
You run "splash init" to set up some metadata in your working copy, passing the host/optionally port of the splashctl server for your cluster
massi has joined ##openfpga
<rqou>
yeah, I was quoting a joke about distributed systems
<azonenberg>
eventually i may support replication of the control server, but not for the short term (it doesn't do enough heavy lifting for it to be a scaling limit, and i'm not aiming for five nines uptime)
<azonenberg>
and i know :p
<azonenberg>
anyway, then you run "splashdev" in the background on your workstation
<azonenberg>
this pushes the sha256 of every file in your working copy of the repo to the server, and then runs an inotify watcher looking for changes to source files
<azonenberg>
So the server always knows what version of every file every client has
<azonenberg>
Any time a file changes clientside, the server checks its cache
<azonenberg>
if it doesn't have content for that hash, it asks the client to provide it
<azonenberg>
(I may eventually add an option to lazily defer this for large data files etc, and only do this up-front for sources)
<azonenberg>
Once the server gets the source file or build script or whatever was changed
<azonenberg>
it regenerates the dirty graph nodes and pushes jobs out to the cluster for dependency scanning
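The content-addressed push described above can be sketched roughly like this (class and function names are mine, not Splash's): the client reports the sha256 of each file, and the server requests bodies only for hashes it hasn't cached.

```python
# Sketch of the hash-report / lazy-upload flow (all names invented).
import hashlib

class Server:
    def __init__(self):
        self.cache = {}           # content hash -> file body
        self.client_state = {}    # path -> hash, per this client

    def report(self, path, digest):
        """Record what the client has; return True if we need the body."""
        self.client_state[path] = digest
        return digest not in self.cache

    def upload(self, digest, body):
        self.cache[digest] = body

def push_file(server, path, body):
    digest = hashlib.sha256(body).hexdigest()
    if server.report(path, digest):      # server asks only for unknown content
        server.upload(digest, body)
    return digest

srv = Server()
d1 = push_file(srv, "src/a.cpp", b"int main(){}")
d2 = push_file(srv, "src/b.cpp", b"int main(){}")  # identical content: no re-upload
print(d1 == d2, len(srv.cache))  # True 1
```

Deduplication falls out for free: two paths with identical content share one cache entry, while the server still knows what version every client holds.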
<azonenberg>
In order to keep builds reproducible, you define one node as the "golden node" for each compiler
<azonenberg>
Right now there is not a config for this, it's hard-wired as the first node to join the cluster with a given compiler installed (the golden node may vary per compiler)
<azonenberg>
but this obviously will be fixed shortly
<azonenberg>
The idea is, whatever headers/libs are installed on the golden node are considered authoritative
<azonenberg>
as long as you have the same compiler on the other nodes you don't have to keep all of the library packages in sync, install -dev packages on them, etc
<azonenberg>
Because when a build happens, the build daemon pulls the source files, libs, headers, etc from splashctl
<azonenberg>
and does -nostdinc etc to prevent accidentally using a local file that might not be the same
<azonenberg>
While it certainly will work fine if you have exactly replicated configs on all compute nodes, the arch is designed to not require it
<azonenberg>
So the performance bottleneck is, right now each node has its own UUID and multiple splashbuild daemons on the same host have different UUIDs
<azonenberg>
So splashctl can only schedule dependency-scan jobs on one virtual core of the golden node, vs all of them
<azonenberg>
Eventually i plan to support having the golden node be a set of nodes
<azonenberg>
So the admin can make a pool of say 4 machines with replicated configs, those are considered the standard image
<azonenberg>
Then you add developer workstations etc to the pool and can run build jobs on them
<azonenberg>
the dev can install and remove any lib packages he wants, as long as the compiler version stays the same
<azonenberg>
And if the dev makes any change that changes the hash of the compiler installation, the build still works and is still reproducible but won't schedule jobs on that node anymore
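The eligibility rule in that last point could be sketched as follows (node names and hash values invented): a node keeps receiving jobs only while its compiler-install hash matches the golden image.

```python
# Sketch only: nodes stay eligible for build jobs while their
# compiler-installation hash matches the golden image's hash.
GOLDEN_HASH = "abc123"

nodes = {
    "build01": "abc123",   # replicated standard image
    "build02": "abc123",
    "dev-box": "abc123",   # developer workstation, currently in sync
}

def eligible(pool, golden):
    return [name for name, h in pool.items() if h == golden]

print(eligible(nodes, GOLDEN_HASH))   # all three nodes

# the dev changes something that alters the toolchain installation hash
nodes["dev-box"] = "def456"
print(eligible(nodes, GOLDEN_HASH))   # dev-box no longer gets jobs
```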
<azonenberg>
rqou: sorry for wall of text, but does that sound reasonable?
<rqou>
more or less yes
<rqou>
does the hash of the compiler installation include libc/libm?
<rqou>
I'm also waiting for a bug of the form "bug in libm causes compiler to miscompile cross-compiler which then miscompiles application"
<azonenberg>
this is the output of "splash list-toolchains" on my current test platform
<azonenberg>
The "name" column is a list of all names that this toolchain will respond to
<azonenberg>
The actual choice of which one to use if multiple matches are present is handled by the scheduler
<azonenberg>
Right now the rule is, clang is unsupported (I just need to write a wrapper class for it that does version ID and dep scanning etc)
<azonenberg>
so gnu c/c++ are the only supported compilers
<azonenberg>
pick the highest version number that supports the requested architecture
<azonenberg>
as long as at least one node has it installed
<azonenberg>
This has the potential for performance issues if only one node on a huge cluster has a new compiler and everything else is running the stable version
<azonenberg>
So down the road i will probably change the scheduler to require that the selected toolchain be supported by some reasonable fraction of the installed hosts which have a compiler for that arch
<azonenberg>
like, if you ask for c++/generic for x86_64-linux-gnu
<azonenberg>
pick the toolchain that has the most installs and then prefer the highest version if this returns >1
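That selection rule — most installs first, highest version as the tiebreak — is a one-liner to sketch (the candidate data below is invented):

```python
# Sketch of the proposed heuristic for a generic request like c++/generic
# on x86_64-linux-gnu: prefer the toolchain installed on the most nodes,
# breaking ties with the highest version number. Data is made up.
toolchains = [
    # (version tuple, number of nodes that have it installed)
    ((4, 9, 2), 10),
    ((5, 1, 0), 1),    # one bleeding-edge node shouldn't win
    ((4, 8, 4), 10),   # ties with 4.9.2 on installs, loses on version
]

def pick(candidates):
    # sort key: install count first, then version tuple
    return max(candidates, key=lambda c: (c[1], c[0]))

print(pick(toolchains))  # ((4, 9, 2), 10)
```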
<rqou>
backwards-incompatible changes sound like they would cause problems
<azonenberg>
or weight them according to some heuristic
<azonenberg>
Not really, because the hash of all generated binaries is dependent on the selected toolchain
<azonenberg>
not just the toolchain name you asked for
<azonenberg>
So, if i have c++/generic mapping to g++ 4.9.2
<azonenberg>
and then bring up a server with 4.9.3 installed
<azonenberg>
all of my c++ binaries are now dirty and will be regenerated next build cycle
<azonenberg>
(but if that server goes down, the old binaries are still in cache from 4.9.2)
<azonenberg>
if you wanted a truly deterministic build you could ask for c++/gnu/4.9.2 instead of c++/generic
<azonenberg>
This is a tradeoff between portability and determinism that i intentionally leave up to the end user
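The dirtying behavior described above follows directly from mixing the *resolved* toolchain (not the generic name you asked for) into the output hash — a minimal sketch, with an invented hash scheme:

```python
# Sketch: if an output's identity hashes in the resolved toolchain rather
# than the generic specifier, a compiler upgrade dirties every binary.
import hashlib

def output_id(source_hash: str, resolved_toolchain: str) -> str:
    return hashlib.sha256(f"{source_hash}|{resolved_toolchain}".encode()).hexdigest()

before = output_id("deadbeef", "c++/gnu/4.9.2")
after = output_id("deadbeef", "c++/gnu/4.9.3")  # a 4.9.3 server joins the cluster
print(before != after)  # True: same source, new resolved toolchain -> rebuild
```

And since the 4.9.2 outputs keep their own hash, they stay valid cache entries if that newer server later disappears.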
<rqou>
I was thinking a compiler that is not backwards-compatible with code it accepts (emscripten comes to mind)
<azonenberg>
explain?
<azonenberg>
I will never link objects generated by different compiler versions
<azonenberg>
And my plan is to eventually add support in the top-level config file to quickly override generic specifiers to say "for the current working copy, treat c++/generic as c++/gnu/4.9.2 if targeting x86_64-linux-gnu and c++/gnu/4.8.4 if targeting mips-elf"
pie_ has joined ##openfpga
<azonenberg>
This lets you keep all the child directories generic and easily portable to different platforms, while locking down the config for your specific build to get fully deterministic results
<rqou>
iirc emscripten at some point lost the ability to treat an llvm bitcode file as a .so
<rqou>
this interacted with a really shitty build system I had
<azonenberg>
Lol wow
<rqou>
the build system tried really hard to pretend emscripten was a normal compiler that just produced .js files rather than ELF
<rqou>
idk exactly what happened because my friend wrote this part
<rqou>
but somehow the build process produced a .so
<rqou>
that wasn't an ELF
<azonenberg>
lol wow
<rqou>
it was either JavaScript or bitcode
<azonenberg>
So, if you had something like that and you wanted to use an old version
<azonenberg>
nothing would stop you from doing something like script/emscripten/4.1
<rqou>
we got stuck on an old version for a while
<azonenberg>
Basically the goal here was to allow "automagic" selection of toolchains
<azonenberg>
for a sane default config that would still recompile things if you changed what compiler was installed
<rqou>
imho emscripten is really terrible at the "binutils" stuff
<azonenberg>
But still allow overriding if you actually needed an exact version
<azonenberg>
fwiw i dont think THAT will ever be a problem here
<azonenberg>
as splash is intended for use on deeply embedded applications
<azonenberg>
i don't see us using javascript any time soon :p
<azonenberg>
similar bugs in fpga tools? totally plausible
<rqou>
I know that the xilinx tools just always complain about deprecated/experimental flags
<rqou>
it's just that they tend to ignore them without erroring
<azonenberg>
Lol
<rqou>
Minecraft encounters this too :P
<azonenberg>
Yeah it's a nontrivial problem and especially in the embedded space probably impossible to solve fully
<azonenberg>
But i'm trying to at least do a better job than the current state of the art
<azonenberg>
Which is not exactly a high bar :p
<rqou>
yup
<rqou>
anyways, the Minecraft problem was that people tended to append a crapload of GC tuning flags when launching
<azonenberg>
I typically had to add "increase max memory" to get it to even run
<azonenberg>
lol
<azonenberg>
or it'd OOM
<rqou>
gradually Java started sucking less at this type of workload and many of the flags became no-ops
<azonenberg>
trying to use the jvm default of like 128 MB or whatever
<azonenberg>
i never tuned the GC
<rqou>
max memory still needs to be set
<rqou>
but before (Java 6 timeframe) people would apply tons of stuff to change GC strategies
<rqou>
my absolute favorite was the need to increase PermGen space if you loved mods
<rqou>
apparently modded Minecraft has an absolutely ridiculous number of classes that would exhaust the PermGen memory where class metadata is stored
<rqou>
fortunately Java 8 no longer has PermGen
<rqou>
modded Minecraft is quite an "interesting" ecosystem
<azonenberg>
Note that the exe depends on the so even though it's dynamically linked, because i have no way of knowing if an abi-breaking change may have been made when the so changes
<azonenberg>
only safe zero-knowledge option is to rebuild the exe
<azonenberg>
you can see all of the binaries show "missing" as they haven't yet been compiled
<azonenberg>
But src/jtagd/jtagd_opcodes_enum.h (middle of 4th column, level with release/jtagd on left) is ready
<rqou>
nice, except after looking for ~5 sec my phone OOMed :P
<azonenberg>
Lol
<azonenberg>
it's only 7k x 11k monochrome
<azonenberg>
shouldn't be that big...
<azonenberg>
anyway, that file exists because code gen for headers has to be done at dependency scan time vs build time
<azonenberg>
since you don't know if the generated code may include something else
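That ordering constraint is easy to illustrate: until the header is generated, a scanner can't know what it includes. A toy sketch (the header contents and filenames are invented):

```python
# Toy illustration: the scanner can only discover a generated header's
# includes after codegen has produced the file. Contents are invented.
import re

generated_header = '''// produced by codegen at dependency-scan time
#include "jtag_defs.h"
#define JTAG_OP_SHIFT_DR 0x02
'''

# the scan step: pull out quoted #include targets from the generated text
includes = re.findall(r'#include\s+"([^"]+)"', generated_header)
print(includes)  # ['jtag_defs.h'] -> a dependency invisible before codegen ran
```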
<rqou>
my phone is supposed to have 3GB ram but sure doesn't feel like it
<azonenberg>
i blame android
<rqou>
I swear Android apps leak memory more heavily than desktop
<azonenberg>
They do
<azonenberg>
they're java :p
<azonenberg>
it may not leak-leak
<azonenberg>
but if it allocates and keeps a pointer around for longer than it needs to...
<azonenberg>
or just doesn't GC until the whole system is in swap
<azonenberg>
my general web browsing vm has like 2 gigs ram
<azonenberg>
and i can open 200+ firefox tabs snappily
<azonenberg>
my phone has 1 or 2, i forget
<azonenberg>
and 10 tabs in chrome make it struggle
<rqou>
my phone regularly goes into a mode that I think is "out of GPU buffers" but I'm not sure
<rqou>
where the browser can try to scroll but can't render anything in the newly-onscreen areas
<rqou>
video/image-heavy sites trigger this after a while
<rqou>
most reliably Tumblr
<azonenberg>
Every day I'm tumblin' <silly dance/>
pie_ has quit [Ping timeout: 248 seconds]
pie_ has joined ##openfpga
pie_ has quit [Ping timeout: 250 seconds]
pie_ has joined ##openfpga
pie_ has quit [Ping timeout: 244 seconds]
pie_ has joined ##openfpga
pie_ has quit [Changing host]
pie_ has joined ##openfpga
m_w has joined ##openfpga
pie_ has quit [Read error: Connection reset by peer]