<nikki93>
zls through vim works for me on both windows and macos :o this might be the 'newest' a language has been while having all of these things work ever
<nikki93>
it's really refreshing and heartwarming that these things are such good quality while just being worked on in wholesome ways
<nikki93>
i just looooove that zig executable just knows where its libs are, and when i zls-goto it just goes to them etc.
<andrewrk>
:)
nullheroes has quit [Quit: WeeChat 2.9]
dermetfan has quit [Ping timeout: 260 seconds]
nephele_ has joined #zig
nephele has quit [Ping timeout: 256 seconds]
nephele_ is now known as nephele
jjsullivan1 has joined #zig
<andrewrk>
alexnask[m], you know what would be really super useful for semantic highlighting? if it did liveness analysis of a variable and highlighted the final usage of each variable
<andrewrk>
so you could see with a color that the variable is dead after a certain point
<andrewrk>
maybe it could put a tombstone emoji after it
earnestly has quit [Ping timeout: 272 seconds]
<mkchan>
MFW
<mkchan>
std.json.stringify... LLVM ERROR: out of memory during Code Generation
waleee-cl has quit [Quit: Connection closed for inactivity]
<mkchan>
I thought I could short circuit encoding my network nicely into a file with JSON and boom
a_chou has joined #zig
<mkchan>
Any chance I can specify how much memory to allocate it during compilation?
<mkchan>
Actually NVM about json it can't even generate the NN structure when I finally initialize it
CommunistWolf has quit [Ping timeout: 240 seconds]
CommunistWolf has joined #zig
joey152 has joined #zig
<mkchan>
Tried building it on Linux, the zig-cache folder went >2GB in 10 seconds so I cancelled it lol
<mkchan>
If I reduce the input layer size by 10x then it works with 200MB zig-cache and a 2.5KB json file as output. I obviously can't reduce the input layer size so yeah
<mkchan>
Ok another question: why can't I define a comptime var (that I'm going to use in an inline while) as []anytype
<mkchan>
Or just anytype
codemessiah has quit [Quit: Leaving]
<tdeo>
anytype is not a type
<tdeo>
well, you can have a struct field as anytype though, so not sure
<andrewrk>
yeah the workaround there is `comptime var x: struct {data: anytype} = .{.data = 1234};`
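[The workaround above can be sketched like this — a minimal sketch only: `anytype` struct fields were accepted by the Zig of this era, and the values and the loop are made up for illustration.]

```zig
const std = @import("std");

test "comptime var via anytype struct field" {
    // A comptime var needs a type; wrapping the value in a struct whose
    // field is `anytype` (allowed at the time of this log) sidesteps
    // declaring the var itself as `anytype`, which is not a type.
    comptime var x: struct { data: anytype } = .{ .data = 1234 };

    comptime var i = 0;
    inline while (i < 3) : (i += 1) {
        x.data += 1; // mutated at comptime on each unrolled iteration
    }
    comptime std.debug.assert(x.data == 1237);
}
```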
joey152 has quit [Remote host closed the connection]
a_chou has quit [Quit: a_chou]
jjsullivan1 has quit [Ping timeout: 246 seconds]
a_chou has joined #zig
a_chou has quit [Remote host closed the connection]
marnix has joined #zig
<nikki93>
andrewrk: have you considered maybe making it so sth like @Self could mean @This() in any type or is it always better to be a function for some reason
<nikki93>
like it seems like `const self = @This();` happens a lot, but i also haven't explored enough in the lang to know if it should just stay that way
<nikki93>
like it seems like `const self = @This();` happens a lot, but i also haven't explored enough in the lang to know if it should just keep being that way
<andrewrk>
the only difference that would make would be introducing a new syntactical construct
<nikki93>
sorry for copied msg
<nikki93>
i see
<andrewrk>
I think it used to be a keyword but then it was nice to not take up the identifier space
<nikki93>
Self you mean?
<andrewrk>
I believe it was `this`
<nikki93>
what if there are keyword identifiers like @Self
<andrewrk>
I may be misremembering
<nikki93>
oh interesting, a lowercase keyword that is a type. i guess `type` itself and the builtin types are kinda like that
<nikki93>
in your experience do you tend to just end up doing `const Self = @This()` a lot or do you just keep doing`fn foo(self: @This(), ...)`
<andrewrk>
I prefer the former
<andrewrk>
I also have a new opinion that it's better to use a different short variable name than `self` because it makes it easier to refactor code without renaming stuff
<andrewrk>
e.g. you could lift entire sections of code out of a struct and put it in a different file (which I've done several times in self-hosted)
<nikki93>
oh yah i agree, for the parameter name yeah
<nikki93>
wait did you mean the parameter name or the parameter type
<andrewrk>
for the name
<andrewrk>
type too, actually
<nikki93>
cool
<andrewrk>
well for generics Self can be useful but for not generic types I prefer the actual type name
<nikki93>
yeah there's no need to weirdly lexically-but-dynamically scope `this` as the first param i guess haha
<nikki93>
yeah it seems nice in anonymous structs
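[The idiom under discussion, plus andrewrk's stated preference for the actual type name in non-generic code, might look like this — illustrative names only.]

```zig
// Generic type: `Self` is genuinely useful because the instantiated
// type has no fixed name to refer to.
fn List(comptime T: type) type {
    return struct {
        const Self = @This();

        items: []T,

        fn len(self: Self) usize {
            return self.items.len;
        }
    };
}

// Non-generic type: prefer the actual type name over `Self`, which
// makes it easier to lift code out of the struct into another file
// later without renaming anything.
const Point = struct {
    x: i32,
    y: i32,

    fn add(p: Point, other: Point) Point {
        return .{ .x = p.x + other.x, .y = p.y + other.y };
    }
};
```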
kandinski has joined #zig
<kandinski>
hi all
<andrewrk>
hello kandinski
<kandinski>
I'm reading about ziglang and about to start a short project to try it out. One question I have is about the printf example, which requires the format strings to be compile-time known. How does that comport with a program loading formatting strings at runtime, e.g. for internationalisation?
<andrewrk>
if you want text substitution at runtime you'll have to use a different API
<kandinski>
also hi andrewrk good job on the language. Thanks particularly for lowering the bar for getting into systems programming.
<andrewrk>
:)
<andrewrk>
I think it makes sense to have one API for comptime text substitution and another for runtime text substitution
jjsullivan1 has joined #zig
<pixelherodev>
kandinski: does it make sense to embed various language files and generate each one at comptime, then select which to use at runtime?
<pixelherodev>
The downside is code bloat though
<pixelherodev>
Probably pretty bad, too, so that's probably a bad idea
<andrewrk>
oh you could have your format strings in a translations.zig file and pull them from there
<andrewrk>
as long as you generate your translations at compile time
<andrewrk>
I think I just repeated pixelherodev
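[The comptime/runtime split being proposed might be sketched like this — the `NAME` marker and the translation table are hypothetical; the comptime path uses `std.fmt`'s compile-time-checked format strings, the runtime path plain `std.mem` substitution.]

```zig
const std = @import("std");

// Comptime substitution: the format string must be comptime-known so
// std.fmt can type-check it against the arguments.
fn comptimeGreeting(writer: anytype, name: []const u8) !void {
    try writer.print("hello, {s}!\n", .{name});
}

// Runtime substitution (sketch): translated templates are embedded at
// build time (e.g. from a translations.zig), but which one is used,
// and the text replacement itself, happen at runtime.
const templates = [_][]const u8{
    "hello, NAME!", // en
    "hola, NAME!", // es
};

fn runtimeGreeting(buf: []u8, lang: usize, name: []const u8) []u8 {
    const tpl = templates[lang];
    const out_len = std.mem.replacementSize(u8, tpl, "NAME", name);
    _ = std.mem.replace(u8, tpl, "NAME", name, buf);
    return buf[0..out_len];
}
```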
jjsullivan__ has joined #zig
jjsullivan__ has quit [Remote host closed the connection]
jjsullivan__ has joined #zig
jjsullivan1 has quit [Ping timeout: 246 seconds]
frett27 has joined #zig
earnestly has joined #zig
cole-h has quit [Quit: Goodbye]
<marler8997>
I got the "reloader" to work...yay!
<pixelherodev>
andrewrk: the difference is that I said "no wait don't do that" afterwards ;)
<pixelherodev>
marler8997: :O
<pixelherodev>
that's amazing!
<marler8997>
yeah, I can make an ELF exe with no interpreter, and with .so dependencies, invoke it, and it will reload itself with any linker it wants to find
<pixelherodev>
I'm currently generalizing the build.py DarkUranium and I wrote so that it can be left alone for any project, and have deps / sources / name / version / etc specified in a separate file
<pixelherodev>
ugh. andrewrk, marler8997: why is the reloader supposed to solve the issue of static game builds?
<pixelherodev>
Asking for a friend.
<pixelherodev>
*stupid freaking segfaults when running a glibc-linked UI-enabled binary on musl*
<pixelherodev>
marler8997: ... here's the thing. libg... uh what's it called. one sec.
<pixelherodev>
libgcompat.
<pixelherodev>
`ls /lib/ld-* | wc -l` -> 2
<pixelherodev>
I have both the musl loader *and* a glibc loader shim installed.
<pixelherodev>
IIRC this is a normal pattern in e.g. Alpine as well
<pixelherodev>
reloader definitely fixes the issue of "unknown interpreter path"
<pixelherodev>
But there's other issues with shared libraries as well.
<marler8997>
the issue was that we couldn't implement our own mechanism in our program to locate the loader; reloader aims to solve that, so you can use any technique you like to find the loader
<marler8997>
are you talking about solving how to find so's as well?
<pixelherodev>
not quite
<pixelherodev>
I've had issues in the past (and present) in which so's are found but *don't work*.
<pixelherodev>
ABI incompatibilities.
<pixelherodev>
Most frequently with GLX, I believe.
<marler8997>
That's why I use Nix, to solve those issues
<marler8997>
Every distribution implements their own solution to solve so compatibility issues
<marler8997>
I should say, usually it's the "package manager" that tries to handle that
<pixelherodev>
Yes, that wasn't my point
<marler8997>
lol...ok try to explain again :)
<pixelherodev>
My point is that fixing this is only one step of the way towards making dynamic builds on par with static ones.
<pixelherodev>
And, frankly, static builds will probably still be better.
<marler8997>
there are pros and cons to static/dynamic builds
<marler8997>
neither is "better"
riba has joined #zig
<marler8997>
there are techniques for solving issues with dynamic builds... For nix, they are all pretty much solved except this last one
<marler8997>
for other distributions, they will still have issues, and yes, more work would need to be done to solve issues there as well
<nephele>
dynamic builds are just JIT linked static builds ;) /s
<pixelherodev>
marler8997: I disagree that nix solves them at all.
<bfredl>
an ELF file is just a JIT forcefully frozen in time
<pixelherodev>
nix simply pretends they aren't there.
<pixelherodev>
It *ignores* them.
<pixelherodev>
It fixes the *symptoms*.
<marler8997>
what issues hasn't it solved?
<nephele>
"maximum symlink depth reached"
<pixelherodev>
... what actual underlying issues *has* it solved?
<marler8997>
you ready for the list :)
<pixelherodev>
Like, serious question, name one! I looked into it a while back, and I was honestly put off when I realized that its entire design goal was to hide problems with other distros instead of solving them properly.
<pixelherodev>
Go for it!
<marler8997>
Here are all the problems with other package managers that nix solves:
<marler8997>
* Dependency specifications are not validated, leading to incomplete deployment.
<marler8997>
* Dependency specifications are inexact (e.g., nominal).
<marler8997>
* It is not possible to deploy multiple versions or variants of a component side-by-side.
<marler8997>
* Components can interfere with each other.
<marler8997>
* It is not possible to roll back to previous configurations.
<marler8997>
* Upgrade actions are not atomic.
<marler8997>
* Applications must be monolithic, i.e., they must statically contain all their dependencies.
<marler8997>
* Deployment actions can only be performed by administrators, not by unprivileged users.
<pixelherodev>
... uhh
<marler8997>
* There is no link between binaries and the sources and build processes that built them.
<marler8997>
* The system supports either source deployment or binary deployment, but not both; or it supports both but in a non-unified way.
<marler8997>
* It is difficult to adapt components.
<marler8997>
* Component composition is manual.
<marler8997>
* The component framework is narrowly restricted to components written in a specific programming language or framework.
<marler8997>
* The system depends on non-portable techniques.
<pixelherodev>
A lot of these are technically solved on Gentoo lol
<marler8997>
that's the list I wrote down after going through the creator's PHD thesis
<pixelherodev>
Nix is a shiny wrapper that covers them up IMO
<tdeo>
i feel like *gentoo* is the one that papers over the issues rather than solving it, like that only very few specific slotted packages can be installed side-by-side with others
waleee-cl has joined #zig
<marler8997>
so what issues doesn't nix solve?
<earnestly>
symlink farming
<pixelherodev>
marler8997: fundamentally, it's that nix *isn't a solution*
<pixelherodev>
Nix is an *entire distro*
<pixelherodev>
"Distros are incompatible" Solution: *let's make a new distro*.
<marler8997>
nix is a package manager that can run on most distributions, even mac and bsd
<pixelherodev>
(Yes, I'm aware it can run on others)
<marler8997>
nixos is a distribution
<pixelherodev>
But running it on others misses the point
<earnestly>
Yes, I used nix on a different distro for awhile
<pixelherodev>
I think I can make a better analogy using reloader, actually
<earnestly>
As it can do per-user installs
<marler8997>
what issues doesn't nix solve though...I'm still wanting to understand that point
<pixelherodev>
I'd say that a better solution that reloader would be to remove the interp section entirely. That is, have the *kernel* invoke the dynamic loader, or maybe a different distro component.
<earnestly>
Symlink farms, elfhacks, need for patching hardcoded paths
<pixelherodev>
Instead of pushing the job off into the executable
<pixelherodev>
marler8997: fundamentally, the SOs on a Nix system are *still incompatible*.
<earnestly>
E.g. Use of dlopen needs to be patched under nix
<pixelherodev>
You just don't notice because Nix plays with paths.
<marler8997>
what does that mean? "still incompatible"?
<pixelherodev>
You still require disparate versions to be installed.
<pixelherodev>
ABIs are still a problem.
<marler8997>
I'm not understanding...
<marler8997>
You still require disparate versions to be installed? What does that mean?
<marler8997>
Nix doesn't require multiple versions of things to be installed
<pixelherodev>
e.g. package a *requires* b == 1.0. package c *requires* b == 1.1. Nix's "solution," as I understand it, is to install *both*.
<marler8997>
Yes
<marler8997>
What's the problem?
<pixelherodev>
Fundamentally, it misses the real issue.
<pixelherodev>
The issue isn't, "hey, I need to install both." That's the symptom.
<marler8997>
Which is?
<marler8997>
When libraries change, not all apps update, and some apps require updates
<pixelherodev>
That.
<marler8997>
How do you handle that issue?
<marler8997>
That's what nix solves
<pixelherodev>
"Apps require updates when libraries change."
<pixelherodev>
That's still an issue with Nix. It just hides it by continuing to use an out-of-date library.
<marler8997>
It's a solution to that problem
<pixelherodev>
IMO a better fix would be to e.g. make it so that ABIs don't exist.
<pixelherodev>
Not literally, of course.
<marler8997>
Sure, but having a solution doesn't mean it's hiding a problem
<marler8997>
that's not a solvable problem
<pixelherodev>
Sure it is.
<marler8997>
it's an issue that will always occur
<marler8997>
Libraries get updated
<pixelherodev>
The only unsolvable problem is that people think problems are unsolvable ;)
<marler8997>
some apps will update, some won't
<pixelherodev>
But my point is that they shouldn't *need* to.
<marler8997>
if you want your old software to still work
<marler8997>
you need to solve the problem
<pixelherodev>
Except that's just my point.
<pixelherodev>
*That problem shouldn't exist*.
<marler8997>
You're saying that we shouldn't use old software
<pixelherodev>
no.
<marler8997>
if it isn't kept up-to-date every time something changes, then we shouldn't support it
<pixelherodev>
I'm saying that old software should be able to "just work" even with library updates.
<pixelherodev>
I'm saying that a better solution is to figure out, I dunno
<marler8997>
you're saying that all libraries should be 100% backwards compatible?
<pixelherodev>
Maybe, a linker format which hides ABI differences.
<nephele>
If you ignore the ABI differences you might then gain just normal incompatibilities :g
<pixelherodev>
Maybe, a program which automatically goes over two different SOs and generates a wrapper over the new one that exposes the older ABI or something
<pixelherodev>
Maybe, change the way engineers think so that they don't create these issues.
<pixelherodev>
I'm not pretending to have the answer
<pixelherodev>
I'm saying that we shouldn't give up just because it's an insanely complex task.
<marler8997>
That would result in an exponential explosion of cases to test
<marler8997>
you now have to test every version with every other version
<pixelherodev>
Maybe with the library wrapper.
<marler8997>
and a solution like that is orthogonal to what nix is doing
<pixelherodev>
But that's one *idea* of a solution
dermetfan has joined #zig
<marler8997>
you can still do that with nix
<pixelherodev>
Maybe replace ELF with something which allows code multi-versioning in a way that exposes old ABIs
<pixelherodev>
I'm not saying it doesn't.
<marler8997>
it's just that nix also lets you have cryptographically hashed dependencies that you know won't change
<nephele>
that's what symbol versioning is trying to do
<pixelherodev>
And I consider that an anti-feature.
<marler8997>
essentially reducing your binary to the functional equivalent of a static executable
<pixelherodev>
nephele: I was about to bring that up :)
<pixelherodev>
marler8997: IMO, exact dependencies are a *problem*, not a *solution*.
<marler8997>
then don't use them
<pixelherodev>
I don't :P
<marler8997>
nix doesn't force you to use them, if you have other solutions, you can still use all the other features of nix
<marler8997>
it's just a feature that you have the option to use
<pixelherodev>
Hell, my latest solution to C dependencies is to have the upstream repo's master branch embedded in lib/$LIBNAME
<tdeo>
now that's a "solution"
<marler8997>
you should read the phd thesis
<pixelherodev>
I've actually started doing dev work on the libraries from within the main package
<pixelherodev>
:P
<marler8997>
it's a good read actually, easy and understandable
<pixelherodev>
tdeo: I never said it was a generally applicable solution lol
<marler8997>
When I deploy an application, I want all my dependencies to be as fully specified as possible
<pixelherodev>
tdeo: It works in the specific projects I'm working on now because the libraries are either a) maintained by me anyways, because the original maintainer stepped down, b) update infrequently, or c) don't break on updates.
<marler8997>
Nix allows me to do that
<pixelherodev>
Whereas I want to fix the entire engineering culture so that that stops being necessary :P
<marler8997>
It gives me the best of both worlds, a dynamic executable that behaves like a static executable
<pixelherodev>
I think the key difference here is that you're practical ;)
<marler8997>
but it's not even the culture
<pixelherodev>
Also, I don't consider that the best of both worlds
<marler8997>
there are old programs
<pixelherodev>
marler8997: but the library updates shouldn't cause those to break.
<pixelherodev>
That's what I've been saying the *whole time*.
<marler8997>
why?
<nephele>
pixelherodev: heh, fixing engineering culture might be a bit out of scope
<marler8997>
how do you prevent a library update from breaking an application?
<pixelherodev>
Because there's no legitimate unsolvable technical reason for it
<pixelherodev>
nephele: Yeah, I know :P
<pixelherodev>
marler8997: that's the question I'm asking.
<marler8997>
any change within the library could potentially break any user of it
<pixelherodev>
Why?
<nephele>
dependency hell bleeding over is always nice and annoying :3
* earnestly
.oO(static linking)
<marler8997>
why what?
<marler8997>
why can a change break an application?
<pixelherodev>
Why can a library change break a user?
<pixelherodev>
Not literally "Why"
<pixelherodev>
I know why that's the case now.
<marler8997>
not sure what you're asking
<pixelherodev>
Why can't we devise a system that prevents that?
<pixelherodev>
Why should we accept that?
<marler8997>
it's literally impossible
<nephele>
it's always the case, no? bugs can be introduced, or unexpected differences between what users thought the function does and what it actually does
<earnestly>
Well, you could design a system that no longer depends on ABI stability
<pixelherodev>
^
<marler8997>
in fact, even with hashed dependencies, you still don't solve the problem
<earnestly>
(One such did exist)
<pixelherodev>
That's what I've been thinking of this whole time
<pixelherodev>
Stop depending on ABIs.
<pixelherodev>
The easiest (but bad) way is to add a non-native layer between libraries and executables
<pixelherodev>
Maybe make dlsym itself versioned
<marler8997>
that still doesn't solve the issues
<marler8997>
compatibility isn't just to do with ABI
<pixelherodev>
Have libraries contain every version of every symbol (within reason)
<pixelherodev>
Have the linker, at startup, effectively hot-patch the executable
<marler8997>
that's what Nix does
<pixelherodev>
Except not quite.
<pixelherodev>
I'm talking deduplication
<marler8997>
even if you version every symbol
<pixelherodev>
Heck, I'm bordering JITs more than packages
<marler8997>
still not a solution
<pixelherodev>
For sure
<marler8997>
any solution you come up with... will still break
<pixelherodev>
But I don't have to have a solution to say that one is *possible*.
<pixelherodev>
Like I said.
<marler8997>
it's an unsolvable problem
<pixelherodev>
The only unsolvable problem is that people think problems are unsolvable ;)
<pixelherodev>
I said that already.
<marler8997>
There are lots of problems that are solvable
<marler8997>
this isn't one of them
<pixelherodev>
Why not?
<marler8997>
you can mitigate it to a high degree though
<pixelherodev>
You want to claim it's unsolvable, prove it!
<marler8997>
hardware
<marler8997>
kernel
<pixelherodev>
Or, just, static binaries.
<marler8997>
atoms, time
<marler8997>
any of these things can break your application
<pixelherodev>
What advantage does a dynamic build supposedly provide?
<marler8997>
even static binaries don't solve the problem
<marler8997>
it allows you to share bits
<pixelherodev>
(I say this as someone who hasn't actually bothered to switch my system to fully static just yet)
<marler8997>
imagine you want to use firefox as a library or webkit
<pixelherodev>
Then you're insane.
<marler8997>
libraries that are gigabytes in size
<pixelherodev>
Unless you use large portions, it won't matter.
<pixelherodev>
The linker can remove dead code.
<marler8997>
If you want to be able to use big dependencies, then it becomes unreasonable to compile statically
<marler8997>
for small things, it's fine
<earnestly>
This is like a discussion from the 70s lol
<pixelherodev>
> On average, dynamically linked executables use only 4.6% of the symbols on offer from their dependencies. A good linker will remove unused symbols
xackus has joined #zig
<pixelherodev>
earnestly: ha, true.
<tdeo>
that seems like a pretty bad metric to measure how much those symbols actually pull in
<tdeo>
you can have a huge implementation for a simple api
<earnestly>
SynthesisOS was from the 80s even
<tdeo>
well, a huge api for a simple implementation too
<pixelherodev>
tdeo: sure, but on *average*.
<earnestly>
tdeo: Someone will have to do another study as was done in the 90s, to prove once again (or not) that dynamic linking isn't worth the cost
<pixelherodev>
If you want to talk about firefox, then definitionally, the average symbol is (1 / NSYMS) of the size
<earnestly>
That last one was done on X11, which was large at the time
<marler8997>
I'm not sure why you would want to store the same bits over and over again on your computer, when you can just store it once and share it
<nephele>
what? who wants to talk about firefox? D:
<pixelherodev>
marler8997: because they're not the same bits.
<earnestly>
marler8997: Content Addressable Storage solves that issue
<marler8997>
they're not the same bits?
<tdeo>
what i'm trying to say is that how many symbols a binary links from a library doesn't necessarily correlate to how much code is pulled in from the library
<pixelherodev>
Inlining, partial usage, etc
<pixelherodev>
tdeo: I agree
<pixelherodev>
I was saying that it goes both ways.
<pixelherodev>
I can use 5% of the symbols and 1% of the binary, or vice versa.
<pixelherodev>
But on average, if I use 5% of the symbols, it's *probably* less than, say, 20% of the binary.
<pixelherodev>
Practically speaking.
<pixelherodev>
Sure, it *could* be more, but it's highly unlikely
<marler8997>
sure, with those numbers it can make sense to statically compile
<tdeo>
i wouldn't say that without measuring, dunno
<marler8997>
but you seem to be ignoring the other cases
<pixelherodev>
tdeo: definitely true
<pixelherodev>
marler8997: That's because those are, for all intents and purposes, *edge cases*.
<pixelherodev>
Now, that doesn't mean they're not worth handling
<pixelherodev>
But not at the expense of *everything else which is installed*
<pixelherodev>
Sources to run the same tests locally are included, too
<pixelherodev>
> Over half of your libraries are used by fewer than 0.1% of your executables.
<pixelherodev>
Most packages don't use most libraries.
<marler8997>
what is the expense?
<earnestly>
Yeah I'm aware, but see, people in software tend to cover the same points endlessly
<earnestly>
Shiney new toys that do the same thing people did 40 years ago
<pixelherodev>
marler8997: for using a fully static system?
wootehfoot has joined #zig
<pixelherodev>
Or for using a dynamic one?
<marler8997>
why is it an expense to have both?
<pixelherodev>
It's an expense to use *primarily* dynamic systems.
<marler8997>
what's the expense?
<pixelherodev>
And, taken to its logical conclusion, dynamic systems at all.
<pixelherodev>
Well, for starters, it's impossible to inline. That's already a *gigantic* expense.
<pixelherodev>
Inlining opens up *massive* optimizations.
<marler8997>
why is it impossible to inline on a *primarily* dynamic system?
<pixelherodev>
I'm willing to bet that `gcc -O1 -march=native` with a fully static system would produce binaries faster than `gcc -O3 -march=native` on a fully dynamic one.
<pixelherodev>
Because you can't cross the dynamic library boundary.
<pixelherodev>
*because the library is dynamic*
<marler8997>
then compile statically
<pixelherodev>
You can't inline the library itself.
<pixelherodev>
This is an expense for *every application*.
<pixelherodev>
And every library too, since libraries can depend on each other
<marler8997>
I'm very confused by what you're saying
<earnestly>
Bare in mind that almost all VM subsystems are CoW
<marler8997>
I can't follow your logic at all
<earnestly>
Bear*
<pixelherodev>
marler8997: when building a Zig executable with, say, Hello World
<pixelherodev>
it can inline *everything*.
<marler8997>
yes I understand
<pixelherodev>
(and fmt is comptime, so even better)
<marler8997>
I don't know what that has to do with your original statement
<pixelherodev>
With libc, it has to call out to an external function call
<pixelherodev>
It can't inline for a dynamic build
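[pixelherodev's inlining point, as a sketch — the shared library that `addShared` is imagined to live in is hypothetical.]

```zig
// Statically linked / same compilation unit: the optimizer sees the
// body of `addLocal`, so it can inline it and fold the call away.
inline fn addLocal(a: i32, b: i32) i32 {
    return a + b;
}

// Dynamically linked: `addShared` lives behind a PLT entry in some
// libfoo.so (hypothetical), so the compiler only sees a call through
// an opaque boundary and cannot inline or constant-fold it.
extern fn addShared(a: i32, b: i32) i32;

pub fn useBoth() i32 {
    const x = addLocal(2, 3); // can fold to 5 at compile time
    const y = addShared(2, 3); // must remain a runtime call
    return x + y;
}
```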
<tdeo>
would be neat to have cross-language lto working once we have llvm support in stage2
<marler8997>
having a dynamic system...and compiling statically are not mutually exclusive
<pixelherodev>
Yes, but I'm talking aboout why a primarily dynamic system is a cost
<pixelherodev>
Given that *most* applications and *most* libraries do not benefit from being dynamic, it means they are paying a giant penalty just so that the ones that *do* can gain a relatively meager reward
<marler8997>
what does a system's primary mechanism have to do with an application's mechanism?
<marler8997>
an application can use any mechanism regardless of what the system is doing
<pixelherodev>
... i haven't been talking about applications. I'm talking about the *system*.
<pixelherodev>
e.g. if I install something with a package manager.
<pixelherodev>
Say, Debian, or Alpine.
<pixelherodev>
Nearly every single package in their repos would be better off statically linked.
<marler8997>
Ok, so you're just talking about compiling dynamically vs statically in general, which to choose
<pixelherodev>
That's the starting point of my argument, yes
<marler8997>
ok gotcha
<pixelherodev>
The fixed cost of building applications dynamically is relatively high
<marler8997>
there was an article I saw recently about the importance of having a binary ABI/API, I didn't read it I"ll have to see if I can find it
<pixelherodev>
So if most applications are better off static, there's no point in *any* of them being dynamic
<marler8997>
how does that follow?
<pixelherodev>
(you then need to have dynamic copies *as well* - and thus a /lib folder)
<marler8997>
if some are better off static, then they should be static
<marler8997>
why does that mean everything should be static?
<pixelherodev>
Nearly everything should be
<marler8997>
how does that follow
<pixelherodev>
Why have logic to handle dynamic binaries if almost no applications benefit?
<marler8997>
You're saying that some should be static even if they are better off dynamic?
<pixelherodev>
Most of the gains from being dynamic are gone if other applications are static.
<pixelherodev>
Even an application which is normally better off dynamic loses its gains.
<marler8997>
If an application is better off dynamic, why wouldn't you make it dynamic?
<pixelherodev>
If only one application links against /lib/libgiganticfreakinglibrary.so then it doesn't benefit.
<pixelherodev>
What would make it better off?
<pixelherodev>
Generally, it's about sharing code, and reducing application size, no?
<marler8997>
yes
<pixelherodev>
Well, if nothing else is using it, the code isn't shared anyways.
<marler8997>
Then it wouldnt' fall into the category of "better off dynamic"
dermetfan has quit [Ping timeout: 260 seconds]
<pixelherodev>
and since only that application then depends on the library, it effectively makes the full size of the library included in the application size.
<pixelherodev>
And between inlining and DCE, the application no longer benefits from dynamic linking.
<marler8997>
Basica
<pixelherodev>
marler8997: that's my point!
<pixelherodev>
If a system is mostly static, then *nothing* is better off dynamic anymore
<marler8997>
If somthing is better off static, then make it static, if something is better of dynamic, make it dynamic
<pixelherodev>
And if most things are better off static, then that means that *everything is*
<marler8997>
why?
riba has quit [Ping timeout: 265 seconds]
<marler8997>
that makes no sense :)
<pixelherodev>
Even if only 90% of applications are better off, the fact that they are then statically linked means that the other 10% no longer benefit from being dynamic!
<earnestly>
I swear, if dynamic linking was another idea from universities
<pixelherodev>
The benefits of dynamic linking - code size and reuse - *only apply* when enough applications take advantage of them.
<marler8997>
that's like saying, if people are better off being in heterosexual relationships, then that means everyone should be
<marler8997>
please don't start a new religion :)
<pixelherodev>
Not at all!
<pixelherodev>
Other people being in a hetero relationship doesn't affect whether being in one is good or bad for me.
<pixelherodev>
Other applications being statically linked *actively makes static linking better for me [as an application]*
<pixelherodev>
If only two applications are linked against a single library, and one of them is static, the other has no gains from linking it dynamically.
<pixelherodev>
Worse, it *loses* from doing so.
<pixelherodev>
Here's an idea
<pixelherodev>
Instead of static vs dynamic linking per-*application*, we need it per-*library*.
<marler8997>
This is not a valid inference: if most things are better off X, then that means everything is better off X
<pixelherodev>
marler8997: I think that actually creates a perfect hybrid. If *most libraries* are used by few applications, link those specific libraries statically - since they won't be shared anyways, and that allows for inlining. For something like libc, which is used by nearly *everything*, link it dynamically - since that allows for sharing.
<pixelherodev>
marler8997: but that was never the inference.
<pixelherodev>
The inference wasn't "most are better off X so everything is"
<marler8997>
You literally said this: And if most things are better off static, then that means that *everything is*
<pixelherodev>
That wasn't the logic at all.
<marler8997>
that's a copy/paste of what you said
<pixelherodev>
Assumption: Most things are better off static. Assumption: The benefits of dynamic linking only apply when most libraries are linked dynamically.
<pixelherodev>
Conclusion:
<pixelherodev>
s/Assumption/Precondition/s
<marler8997>
damn how did you type that so fast?
<pixelherodev>
Because I'm a fast typist? :P
<earnestly>
They have a happy hackers keyboard
<pixelherodev>
I've only been doing this since I was like five years old lol
<earnestly>
Maximum torpe speed
<pixelherodev>
Nope! Crappy laptop keyboard here ;)
<marler8997>
ok so your statement was incomplete....I see
<pixelherodev>
marler8997: or you missed some of my statements ;)
<marler8997>
however, I'm not sure about your second assumption
<pixelherodev>
Well, you said yourself that the benefits are code reuse and code size, right?
<marler8997>
It only takes 3 libraries to break that assumption
<pixelherodev>
Did you see my comment about a hybrid linking model?
<marler8997>
Or I should say, 1 library and 2 users of it
<marler8997>
about static linking with optimization?
<pixelherodev>
No
<pixelherodev>
"If *most libraries* are used by few applications, link those specific libraries statically - since they won't be shared anyways, and that allows for inlining. For something like libc, which is used by nearly *everything*, link it dynamically - since that allows for sharing."
<pixelherodev>
Honestly, I'm not convinced dynamic linking is an improvement in code size for all but the most common libraries.
<pixelherodev>
libc is used by everything, so having a single 838K copy is probably less than statically linking it in everything
<marler8997>
I think you're making generalizations and inferences here that just don't follow
<pixelherodev>
Such as?
<marler8997>
"If *most libraries* are used by few applications, link those specific libraries statically - since they won't be shared anyways
<marler8997>
So take the worst possible case against your statement
<marler8997>
2 users of a single library, they use 100% of the library and the library is huge
<pixelherodev>
That's not a possible case.
<pixelherodev>
I don't think *any* binary uses 100% of *any* library
<pixelherodev>
But sure, let's take that hypothetical.
<marler8997>
well percentage doesn't really matter
<marler8997>
it's the amount of code used, I suppose
<marler8997>
so let's say each application uses 1GB of code from the library
<pixelherodev>
Inlining also reduces total size, since a lot of it will be optimized out.
<pixelherodev>
That's already talking about 0% of the libraries I have installed.
<marler8997>
That means that you save 1 GB every time you link to it dynamically vs statically
<pixelherodev>
Not even LLVM is remotely that big.
<pixelherodev>
And no, you don't save 1GiB.
<marler8997>
well, so long as the applications are using 1GB worth of the library
<marler8997>
Maybe it's a 10 GB library, but most applications are using around 1GB of it
<pixelherodev>
If it's 1GiB as a dynamic executable, chances are it's still less than 950MiB statically linked even if every line of code is used in it.
<marler8997>
sure I'll give you that
<pixelherodev>
Inlining is *extremely* aggressive in modern compilers.
<pixelherodev>
So that's 900MiB saved, sure.
<marler8997>
per use
<pixelherodev>
I'd also point out that even in this case we're *explicitly* sacrificing performance for code size.
<marler8997>
not necessarily
<pixelherodev>
Have you been more strapped for disk space than computing resources at any point in the last decade?
<pixelherodev>
That implies both binaries are using the same part of this enormous library at once.
<marler8997>
yup
<pixelherodev>
Otherwise, switching between them will actually be *more* harmful in dynamically linked code.
<pixelherodev>
Since with statically linked, it requires less cache space.
<marler8997>
sure
<marler8997>
but the statement "Statically linking improves caching" is not true in every case
<pixelherodev>
True.
<marler8997>
A very commonly used library with very commonly used functions might practically have better performance as a dynamic library
<marler8997>
like libc
<pixelherodev>
Even in the absolute *best possible case* in dynamically linked code, we're still giving up performance (because of the loss of inlining), which *might* be made back up, depending on the exact microarchitecture, cache size, and binaries - and I'm willing to bet more money than I've ever had that this "best case" has *never* occurred anywhere on earth.
<pixelherodev>
and I explicitly said libc was an exception.
<marler8997>
not necessarily though
<pixelherodev>
*Most libraries* - literally over 99% of them - are better off static.
<marler8997>
oh?
<pixelherodev>
Most libraries on my system are used by single-digit binaries, and are less than 10MiB tops.
<ikskuh>
marler8997: i think most functions from a libc would eventually be inlined in static linked libs
<pixelherodev>
If I reduced the percentage a bit, that number gets even better.
<pixelherodev>
^that's a good point too
<ikskuh>
drew devault did a measurement of how much of your shared libs are actually shared
<marler8997>
so is this based on tests/data?
<pixelherodev>
Statically linking libc would also probably improve performance, because inlining will most likely make up for loss of cache
<pixelherodev>
marler8997: yes.
<ikskuh>
and the percentage is … shockingly low
<marler8997>
that 99% of libraries are better off static?
<marler8997>
so your initial claim that 99% of them are "better off static"...are you saying that means they start up faster?
<pixelherodev>
No!
<pixelherodev>
I'm saying that's what *you* were looking at xd
<marler8997>
oh, you're talking about the linked page?
<pixelherodev>
You were looking at startup time :P
<pixelherodev>
I was talking about `On average, dynamically linked executables use only 4.6% of the symbols on offer from their dependencies` and `Over half of your libraries are used by fewer than 0.1% of your executables.`
<marler8997>
On what system though?
<pixelherodev>
Inlining + DCE of those libraries into those executables *would* mean both faster and lighter executables
<marler8997>
And where is the data that 99% of programs are better static?
<pixelherodev>
He lists that at the bottom, and you can run the scripts yourself
<marler8997>
I don't see that on this page?
<pixelherodev>
That specific claim isn't on the page, no
<pixelherodev>
But the data to support it is.
<marler8997>
What data?
<pixelherodev>
The real kicker is `Over half of your libraries are used by fewer than 0.1% of your executables.`
<pixelherodev>
That means that linking those libraries statically into those executables *would* result in a system which is both faster and lighter.
<pixelherodev>
That's just a basic assumption here.
<marler8997>
The practical usage numbers of dynamic libraries don't say anything about whether 99% of programs are better off static
<marler8997>
but that doesn't follow
<pixelherodev>
Second point is `On average, dynamically linked executables use only 4.6% of the symbols on offer from their dependencies.`
<pixelherodev>
That first step might make this number higher or lower, but definitely still less than 10%.
<marler8997>
It's an interesting data point
<pixelherodev>
It means that, even with large libraries, the points about cache don't apply.
<marler8997>
but you can't make inferences like 99% of programs are better off static from them
<pixelherodev>
Not *directly*, but combined with the earlier points.
<pixelherodev>
If you can prove that the first 30% are better off static, then that directly causes more to be - since more and more libraries start having single-digit users.
<pixelherodev>
That cascades easily.
<marler8997>
I don't think anyone on the planet could make a claim like that, unless they actually tested it
<pixelherodev>
You can test some generalizations.
<marler8997>
Unless they actually took a large set of programs, compiled them statically/dynamically and tested it
<marler8997>
Computers and processors and kernels and hardware and electronics are too complex to be able to make large inferences like that
<pixelherodev>
If any program which uses, say, less than $(CACHE_SIZE_OF_MY_CPU) in dependencies is linked statically, it's almost certainly better off
<marler8997>
what?
<marler8997>
oh I see
<pixelherodev>
Precondition: the advantage of dynamic linking is code size and reuse.
wootehfoot has quit [Read error: Connection reset by peer]
<marler8997>
that still brings us back to that programs don't run in a vacuum
<marler8997>
not using dynamic libraries will have an effect on the system overall
<pixelherodev>
Precondition: static linking of executables which use small fractions of their symbols results in faster code (due to various arguments already presented)
<pixelherodev>
Yes. A positive one.
<pixelherodev>
That's my point.
<marler8997>
to say that a system as a whole will run faster if you compile everything statically is something you would have to test
<pixelherodev>
If individual applications are better off like this, the system is too.
<pixelherodev>
SUre.
<pixelherodev>
Sure*
<marler8997>
you can't look at one application and extrapolate that to the entire system
<pixelherodev>
And since I plan on doing that to my PC within weeks, I can give you numbers then ;)
<marler8997>
yes that will be good
<pixelherodev>
Here's the thing
<pixelherodev>
This doesn't really *need* to be thoroughly tested - at least, not for performance.
<marler8997>
But remember, that when you compile statically, running multiple programs together means less instruction memory can be shared
<pixelherodev>
The claim that it results in lighter systems, yes.
<pixelherodev>
That needs to be tested.
<marler8997>
so there's reasons why running multiple programs statically could be slower
<pixelherodev>
marler8997: but it *also* means significantly lower imem too
<marler8997>
so you can't make the inference that if one program runs faster, then an entire system would
<pixelherodev>
Like I said, inlining + DCE.
<marler8997>
I understand
<marler8997>
you still can't make that inference
<pixelherodev>
The inference isn't "this program is faster so all are"
<marler8997>
> If individual applications are better off like this, the system is too.
<pixelherodev>
The inference is "this program is faster *because of X*, and X applies to other programs, therefore other programs are faster"
<marler8997>
you can't make that inference
<marler8997>
I'll give you a counter example
<marler8997>
If I give my application exclusive CPU access it will run faster
<pixelherodev>
lol
<marler8997>
but the system will stop
<marler8997>
that's the same rule in your inference
<pixelherodev>
the difference is, the factor here is the exact opposite.
<pixelherodev>
Programs are faster because they don't need to do as much work, and they can make better utilization of caches.
<marler8997>
still can't make the inference though, that's not the whole picture...again, complexity
<marler8997>
out of curiosity, how old are you and how long have you been programming seriously?
<marler8997>
wait
<marler8997>
let me guess
<pixelherodev>
Sure, shoot :)
<marler8997>
5 years programming seriously?
<pixelherodev>
I'm curious what you'll think :P
<marler8997>
and 24 yrs old?
<pixelherodev>
Closer to seven or eight, I think
<pixelherodev>
Years of programming seriously*
<pixelherodev>
Not age xd
<pixelherodev>
19, actually ;P
<marler8997>
oh wow
<marler8997>
you remind me of myself 4 years ago :)
<pixelherodev>
Constantly wasting time in debates on the internet that won't go anywher?
<pixelherodev>
anywher*
<pixelherodev>
anywhere*
<pixelherodev>
:P
<marler8997>
I'm enjoying it though
<pixelherodev>
same :P
<pixelherodev>
It's always good to hear other perspectives
<marler8997>
and to be clear
<marler8997>
I make no claims that dynamic or static is better
<marler8997>
I hope to see more data on it
<pixelherodev>
What I want to see is a serious, practical distro
<pixelherodev>
ifreund: no, but I'll look soon
<marler8997>
I just understand that there are a lot of pieces to it, and one thing I've learned is that no matter how much you theorize, you'll never know until you actually test
<pixelherodev>
Ugh. I think I'm too used to embedded systems lol
<pixelherodev>
Much simpler systems == theoretical ideas actually work practically
<marler8997>
it's kinda funny
<pixelherodev>
Huh
<marler8997>
as processors get better, the more predictable you make your code the faster it generally goes
<pixelherodev>
I was about to bring that up
<marler8997>
it's almost as if, as processors get smarter, programs should get dumber
<pixelherodev>
I think static linking would improve branch prediction too, due to inlining
<pixelherodev>
But that's also highly hypothetical
<ifreund>
oasis is the only distro I know of that is serious about static linking
<marler8997>
well sure, and each program will have its own branch predictions
<pixelherodev>
That's "idea I want to test", not "look at what I figured out!!!!"
<ifreund>
it is a little different in other ways too of course
<pixelherodev>
(Un)fortunately, my only web browser is ikskuh's Kristall :P
<pixelherodev>
So it's a PITA to deal with GH
<pixelherodev>
SourceHut is fine, in what really isn't a shock at all.
<ifreund>
shoulda known he'd have it on sourcehut too
<pixelherodev>
Holy crap oasis looks amazing!
<pixelherodev>
Ahhhh it's mcf?
<pixelherodev>
That explains it ;)
<pixelherodev>
ikskuh: minor bug report for kristall: it doesn't seem to understand 404s from https :P
<ikskuh>
whoopsies
<ikskuh>
write me an issue :D
<pixelherodev>
`/etc should be simple enough to be understood in its entirety.` It's... it's beautiful!
<pixelherodev>
ikskuh: with what, kristall???
<pixelherodev>
:P
<ikskuh>
yeah
<ifreund>
yeah mcf does some pretty cool stuff
<pixelherodev>
ikskuh: ...
<pixelherodev>
ikskuh: you realize that kristall doesn't support inputs, right?
<pixelherodev>
;P
<ikskuh>
... :D
<ikskuh>
it does
<ikskuh>
but nobody has written a gemini-github-bridge
<ifreund>
pixelherodev: maybe you can try the github cli they keep putting banners up about
<pixelherodev>
ifreund: ... uh... yeah, no.
<ifreund>
lol
<pixelherodev>
The only reason I still even *have* a GH account is the freaking network effect
<pixelherodev>
I can't get rid of it until a few other people leave it :P
<pixelherodev>
e.g. Zig
<pixelherodev>
lol
ask6155 has joined #zig
<ask6155>
Hello!
<ifreund>
I've decided that if/when wlroots moves to sr.ht I will follow
<ask6155>
what is the equivalent of atoi in zig?
<ifreund>
std.fmt.parseInt() ?
<ifreund>
or wait, other way: std.fmt.bufPrint()
<ikskuh>
atoi → array to integer
<ikskuh>
parseInt is the right thing
<pixelherodev>
ifreund: heh, that's probably a matter of time lol
<ifreund>
lol, tripped myself up
<ask6155>
thanks
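For reference, a minimal sketch of the two calls just mentioned (std.fmt.parseInt and std.fmt.bufPrint); exact format specifiers vary a bit across Zig versions:

```zig
const std = @import("std");

pub fn main() !void {
    // std.fmt.parseInt(T, buf, radix) is Zig's rough equivalent of atoi,
    // except that it returns an error union instead of silently returning 0.
    const n = try std.fmt.parseInt(i32, "42", 10);

    // And the other direction ifreund mentioned: formatting into a buffer.
    var buf: [16]u8 = undefined;
    const s = try std.fmt.bufPrint(&buf, "{d}", .{n});
    std.debug.print("{s}\n", .{s});
}
```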
<marler8997>
I thought the "a" stood for "ascii"?
<marler8997>
ah...I'm right :)
<ask6155>
I thought it stood for alphabet?
<ask6155>
wait that makes no sense
<ask6155>
lol
<marler8997>
lol
<pixelherodev>
:P
<pixelherodev>
marler8997: thanks for the brainfood :)
<marler8997>
yeah you too
<marler8997>
I think I gotta go to bed, 4:30 AM my time
<ifreund>
night
marnix has quit [Ping timeout: 260 seconds]
<pixelherodev>
heh
<pixelherodev>
4:30? Only?
<pixelherodev>
(i'm two hours ahead :P)
<pixelherodev>
You know, when I see code like this: `constructor(int a) { assert (a >= 0); }` it doesn't convince me C++ devs know what they're doing :P
<pixelherodev>
Using a signed type for unsigned data? really?
<pixelherodev>
There were numerous casts to "fix" that, too.
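The complaint, restated in Zig terms (illustrative only, not from the actual codebase):

```zig
// The criticized C++ shape: constructor(int a) { assert(a >= 0); }
// i.e. a signed parameter guarded by a runtime non-negativity check,
// plus casts at every call site to "fix" the mismatch. Encoding the
// invariant in the type removes both the check and the casts:
fn init(a: u32) void {
    _ = a; // the type system already guarantees a >= 0
}
```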
<pixelherodev>
Gah, so many latent bugs in this codebase
* pixelherodev
shivers
<pixelherodev>
The bugs-per-line in this library I'm using is ludicrous, but I don't want to rewrite it from scratch, so I'm just cleaning it up with `-pedantic -Wall -Wextra -Werror` for now and then redoing it later when I have time :P
<ask6155>
If I remember correctly assert crashes the program on false right?
<pixelherodev>
yes
<pixelherodev>
Traps it, rather
<pixelherodev>
So if you're in e.g. GDB you can backtrace
<pixelherodev>
I think.
<ask6155>
Does it make sense to just put asserts in your library?
<pixelherodev>
Depends.
<pixelherodev>
If the goal is to test that the library is behaving, yes.
<pixelherodev>
e.g. preconditions, postconditions.
<pixelherodev>
If the assert is testing input provided to the library, no.
<pixelherodev>
Well
<pixelherodev>
Hmm
<pixelherodev>
I don't think "yes" or "no" really cuts it here. It's a bit contextual.
<pixelherodev>
A better question is whether it is recoverable
<ask6155>
I guess good software is somewhere between crashing all the time and java
<pixelherodev>
lol
<pixelherodev>
I'd say that if it's an *error*, it should be passed onwards, but if it's a *bug*, it should be an assertion
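That error-vs-bug split maps directly onto Zig constructs; a minimal sketch (function names are made up):

```zig
const std = @import("std");

// An *error* (bad input from outside) is passed onward via an error union.
fn parsePort(text: []const u8) !u16 {
    return std.fmt.parseInt(u16, text, 10);
}

// A *bug* (violated internal invariant) is an assertion: unreachable in
// correct code, and trapped in safe build modes so a debugger can backtrace.
fn halve(len: usize) usize {
    std.debug.assert(len % 2 == 0); // precondition: caller must pass an even length
    return len / 2;
}
```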
<pixelherodev>
andrewrk: given that we're going to support parallel compilation, what are the odds we can extend that over networks?
<ask6155>
I'm getting an invalid character error in parseInt(u8, buffer, 10) even though I'm only giving it numbers
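A common cause of error.InvalidCharacter in that situation is a trailing '\n' (or '\r' on Windows) still sitting in the buffer after a line read; trimming before parsing usually fixes it. A sketch, assuming the buffer came from a line reader:

```zig
const std = @import("std");

pub fn main() !void {
    // Simulate a line read from stdin: the newline is still in the buffer,
    // so parsing the raw slice would fail with error.InvalidCharacter.
    const buffer = "123\n";
    const trimmed = std.mem.trim(u8, buffer, " \t\r\n");
    const value = try std.fmt.parseInt(u8, trimmed, 10);
    std.debug.print("{d}\n", .{value});
}
```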
marnix has joined #zig
ur5us has joined #zig
dermetfan has joined #zig
dermetfan has quit [Remote host closed the connection]
<adamkowalski>
Can we use an ArrayList(str) as a key to a hash map?
<adamkowalski>
And would you all recommend using a hash map if the number of entries is pretty small? I was considering using an ArrayList instead and just doing a linear scan
<Snektron>
Thats probably faster yea
<Snektron>
You could use an arraylist as hash map key but you'll need to make consistent hash/equality functions yourself
<adamkowalski>
Yeah thats what I was thinking, hashing that would be quite tough
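For the small-N case, the linear scan suggested above is simple enough to sketch (names are hypothetical):

```zig
const std = @import("std");

const Entry = struct { key: []const u8, value: u32 };

// For a handful of entries, scanning a flat array often beats a hash map:
// no hashing, no bucket indirection, good cache locality.
fn lookup(entries: []const Entry, key: []const u8) ?u32 {
    for (entries) |e| {
        if (std.mem.eql(u8, e.key, key)) return e.value;
    }
    return null;
}
```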
<adamkowalski>
If the number of unique strings is fairly small (there will be lots of repeated strings)
<adamkowalski>
would it make more sense to intern them first?
<Snektron>
depends on what you want to do with them
<adamkowalski>
have a mapping of strings to ints, then rather than storing an ArrayList(ArrayList(str))
<ifreund>
this sounds like a rather unique use-case
<adamkowalski>
well i'm working on a toy language and I am trying to implement overloading of functions
<Snektron>
btw there is already a StringArrayList i believe, you could take a look at how hashing is implemented for those
<adamkowalski>
so when you are doing name lookup, and you know something is a function, you don't know which one it refers to
<adamkowalski>
you need to look at the types of the arguments and see which one you actually meant
<adamkowalski>
so every function name actually resolves to an overload set
<adamkowalski>
then inside the overload set, you have a "key" which is the argument types and the value is the index of the actual function you meant to call
<Snektron>
right
<adamkowalski>
now that i'm thinking about it, that actually doesn't work either
<adamkowalski>
because when you have generics, you will have multiple matches
<adamkowalski>
and if you have a generic function and a non generic function that both accept that type
<adamkowalski>
you should prefer the specialized version
<Snektron>
I would probably have a hash map that maps to a list of functions, which you manually search through for the given overload
<Snektron>
You can't hash the parameters, considering you might need to have an implicit cast when calling a function
<adamkowalski>
I won't have implicit casting
<adamkowalski>
only caveat to that is integer literals to floats
<Snektron>
Even stuff like integer widening are implicit casts
<adamkowalski>
and integer literals to i64, i32, u64, etc
<Snektron>
or int literal -> signed/unsigned
<adamkowalski>
but those are special cases
<adamkowalski>
user defined implicit conversion will be disallowed
<adamkowalski>
the trickiest part will be I want to implement something like concepts in c++ with subsumption
<adamkowalski>
that means that you can have multiple generic functions, for example you can do a matrix multiply
<adamkowalski>
if you get passed in a random access container you want to do one thing
<adamkowalski>
but if you get passed in a random access SPARSE matrix you want to do another
<adamkowalski>
so multiple generic functions can both match on the given argument type, then you need to figure out which one takes precedence
<adamkowalski>
which pretty much means if function f takes any parameter x given that it satisfies constraints A
<adamkowalski>
and you have a function g that takes x given that it satisfies A and B
<adamkowalski>
then you prefer function g
<adamkowalski>
since it's more specialized
<Snektron>
If you want to do any kind of specialization, hashing on anything but name seems infeasible to me
<adamkowalski>
yeah exactly, talking it through with you helped me realize that
<adamkowalski>
it seems like you have to do a linear scan over every function that matches the name
<Snektron>
Praise rubber duck developing
<adamkowalski>
then figure out the "best match"
<adamkowalski>
but what do you actually store about the function? it's not really the type name that matters
<adamkowalski>
it seems like it's more like a set of constraints that must be satisfied
<adamkowalski>
the most primitive one being, the type is X
<adamkowalski>
so maybe it's a list of list of constraints?
<adamkowalski>
the outer list represents the number of parameters the function accepts
<adamkowalski>
the inner list represents the constraints for that parameter
<adamkowalski>
if the number of parameters match, and constraints all evaluate to true, then you are a candidate for overload resolution?
<adamkowalski>
I think we are getting somewhere
<ifreund>
ugh, stage1 is crashing on me :/
<adamkowalski>
Snektron: do you see any glaring flaws with that plan?
<adamkowalski>
It seems like it might be a bit slow
<adamkowalski>
if every time you call a function you have to look through all functions with that name
<Snektron>
let me see
<adamkowalski>
but I guess it's pay for what you use. It's linear in the number of functions with that name
<adamkowalski>
so if you do a lot of overloading, then you will pay more
<Snektron>
The way i would implement an analyzer in general is that i would have a hashmap per scope with all variables, including functions
<adamkowalski>
yeah that's roughly what I do
<adamkowalski>
I scan the entire module
<adamkowalski>
in parallel
<Snektron>
The case of overloads is quite ugly though since you suddenly have to change that into a multimap
<adamkowalski>
so I first start lexing, and figure out where each top level expression starts and ends
<adamkowalski>
but I don't parse it yet
<adamkowalski>
once I know where each segment is
<adamkowalski>
I then start parsing each segment simultaneously
<Snektron>
The easiest is probably to make the lookup function return an iterator or something over the possible overloads in the case of a function
<adamkowalski>
then that AST is instantly transformed into SSA form
<adamkowalski>
again in parallel
<adamkowalski>
this means the functions do not know about each other
<adamkowalski>
so each function maintains a list of scopes
<Snektron>
Then you generate a list of candidates from that by filtering the ones that are applicable, and pick out the most specialized. You might need to look up the c++ rules for that to get inspiration on how that is handled
<Snektron>
if there are ones that are equally specialized you generate a compile error
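The scheme being converged on here (name maps to a list of candidates; filter the applicable ones; pick the most specialized; error on a tie) could be sketched roughly like this, with every name hypothetical:

```zig
const Candidate = struct {
    /// Index into some function table (hypothetical).
    func_index: usize,
    /// Stand-in for "specificity": e.g. number of constraints satisfied.
    specificity: u32,
};

/// Pick the most specialized applicable candidate, or null on ambiguity
/// (which the compiler would report as an overload-resolution error).
fn resolve(candidates: []const Candidate) ?Candidate {
    var best: ?Candidate = null;
    var ambiguous = false;
    for (candidates) |c| {
        if (best == null or c.specificity > best.?.specificity) {
            best = c;
            ambiguous = false;
        } else if (c.specificity == best.?.specificity) {
            ambiguous = true;
        }
    }
    return if (ambiguous) null else best;
}
```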
<adamkowalski>
yeah that's roughly what I wanted to do
<adamkowalski>
I'm gonna throw one more wrench at you
<adamkowalski>
I don't want classes
<adamkowalski>
so if you define a struct called array in your module
<adamkowalski>
and you have a free function named append in that same module
<adamkowalski>
then I need to know about it.
<adamkowalski>
why? because if you have a generic function, and you want to accept any data structure, given that you can append elements to the end
<adamkowalski>
you can't say container.append
<adamkowalski>
you say append(container, element)
<adamkowalski>
at that point it will look at the current module for a function named append
<adamkowalski>
if one is not found
<adamkowalski>
if needs to look at the module where container is defined, and see if one is found there
<adamkowalski>
BUT, suppose that you didn't define the container struct, so you can't open up its module and add an append method
<adamkowalski>
yet you still want to use that container with that generic algorithm
<adamkowalski>
so you need to create a new append free function in your module
<adamkowalski>
then call the generic function
<adamkowalski>
that means generic functions will also need to look into the callers module if the first two checks fail
<adamkowalski>
does that all make sense?
<adamkowalski>
but the one thing that makes it easier, is I don't want any virtual functions in my language
<adamkowalski>
everything is resolved at compile time, I won't help people write object oriented code with runtime polymorphism since I disagree with it
<Snektron>
> if needs to look at the module where container is defined, and see if one is found there
<Snektron>
You want to implement ADL?
<adamkowalski>
yeah essentially
<adamkowalski>
structs are just data
<adamkowalski>
they don't need methods
<adamkowalski>
functions are just functions, they don't "belong" to anybody
<adamkowalski>
so all "behavior" is defined in terms of free functions
<adamkowalski>
so you're forced into adl if you want to support generic programming
<adamkowalski>
Are you not a fan?
<Snektron>
it seems unnecessarily complex to add ADL. You could instead do like Zig and require the programmer to explicitly import the freestanding function
<adamkowalski>
Yeah I think zig is a step in the right direction
<adamkowalski>
but I don't think that would work
<adamkowalski>
how would you do the append example I discussed
<adamkowalski>
In zig the method lives on the class
<adamkowalski>
It's statically resolved, so that part is nice. But the only way to create a generic function which takes anything "appendable" is to make the function live on the struct
<adamkowalski>
if it's a free function, how would your algorithm know how to call it?
<Snektron>
It depends on some other ergonomics of the language, but you could for example have a programmer write `import my_container_module.append`
<Snektron>
That would add the overload of `append` to the current scope
<adamkowalski>
but then the algorithm needs to know about the container right
<adamkowalski>
suppose you wrote algorithm A
<adamkowalski>
I wrote struct S
<adamkowalski>
and some user U wants to call A with S
<Snektron>
not really. You look up the set of overloads of `append` and you find the one applicable for your container type
<adamkowalski>
how does A know about S
<Snektron>
The only issue is if you have `import my_other_module.append` which takes the same `container`
<adamkowalski>
yeah exactly! The overload set is my solution right?
wootehfoot has joined #zig
<adamkowalski>
I'm saying in algorithm A you just call append
<Snektron>
But you could simply store the origin module in some kind of "function alias" and select on that
<adamkowalski>
and the overload resolution mechanism will know to check the module where S is defined
<adamkowalski>
or check the callers module, because maybe whoever created S didn't define append for it
<adamkowalski>
are we not saying the same thing?
<Snektron>
My initial understanding was that you would have `import my_module.my_container` and then `append(my_container, item)` would be magically extracted from the module
<Snektron>
Thats also possible, but requires a more sophisticated system to find that function
<adamkowalski>
well it depends
<adamkowalski>
it may be extracted from the module where my_container is defined
<adamkowalski>
but only if the usage site is a templated function
<adamkowalski>
if I have a generic function which says I accept anything as long as there is an append function on it, matching a particular signature
<adamkowalski>
that generic function can be defined in a module which does not know about an append function, nor does it know about your struct
<adamkowalski>
append in that context is a placeholder for a function
<adamkowalski>
so essentially ADL
<adamkowalski>
but if you look at more modern languages like Julia they do something called multiple dispatch which is a similar concept and it led to some amazingly composable libraries
<adamkowalski>
I want to do that, but just resolve everything at compile time, rather than runtime
<Snektron>
Multiple dispatch is runtime right?
<adamkowalski>
yeah it's at runtime, but they have a jit compiler, which if it can monomorphise as all the types are known statically, it will
kristoff_it has quit [Ping timeout: 265 seconds]
<adamkowalski>
but i'm getting off topic here
<adamkowalski>
the main question I have for you is how to solve this problem
<adamkowalski>
you defined some algorithm which wants to sum the elements of a container and return the total
<adamkowalski>
you want to write two generic versions of it
<adamkowalski>
one which says, hey, if you are a forward iterator (I can only iterate sequentially through the elements) then do the sum thing and return the total
<adamkowalski>
if you are a random access iterator then split the container into as many pieces as you have cores, sum each chunk, then sum the partial sums
<adamkowalski>
random access iterators are by definition forward iterators
<adamkowalski>
so both would be correct matches
<adamkowalski>
clearly if I support random access I want to do the parallel version though right
<adamkowalski>
now you found some vector library you want to use
<adamkowalski>
but they only implemented the sequential iterator functions
<adamkowalski>
I want to add new functions to support random access (since it's a vector)
<adamkowalski>
then call the algorithm
<adamkowalski>
it should realize that even though the vector is defined in one place, the random access function is defined in another
<adamkowalski>
I should combine the two, and still dispatch to the parallel implementation
<adamkowalski>
does that make sense?
<Snektron>
i see
<Snektron>
Well it's entirely possible using just the overload-searching method I described, all you need to do is add some way to discover the list of possible overloads, and a way to select the most preferable one
<Snektron>
For the former you could take inspiration from Rust, which allows you to implement a trait for any type, but you have to explicitly import it to enable it
<Snektron>
That saves you from having to scan every file for functions, and also makes it possible for the programmer to select the overload they want
wootehfoot has quit [Read error: Connection reset by peer]
vgmind has joined #zig
<adamkowalski>
Snektron: yeah Rust is pretty nice in that regard
<adamkowalski>
The main thing I love from there is the lifetime tracking / borrow checker
<adamkowalski>
I think that combined with an allocator model like zigs would be killer
<adamkowalski>
but I actually think that concepts from C++ are nicer to work with than Rust traits
<adamkowalski>
I say that because traits, like methods belong to a single type
<adamkowalski>
free functions, like concepts, belong to nobody
<adamkowalski>
concepts can be between a set of types
<adamkowalski>
I also want to avoid the long compile times of c++/rust so I want to make sure whatever strategy I use needs to be fast
<adamkowalski>
people tend to avoid templates/compile time meta programming because they know it's so slow
<adamkowalski>
but in a language like D where the compiler is crazy fast, it enables a new style of programming, where you worry more about being expressive rather than the impact on compile time
cole-h has quit [Quit: Goodbye]
reductum has joined #zig
adamkowalski has quit [Quit: Lost terminal]
<oats>
I haven't checked in on zig stuffs in a while, is there a cohesive story for traits/interfaces yet?