<adamkowalski>
Snektron I really like that, however, there's something quite nice about the built in slices and arrays. That means it's easy to mix dimensions that are statically known and things that are known at runtime
<adamkowalski>
And it would play nicer with the language, then immediately coming in and building in my own matrix or nd array type
<adamkowalski>
It would be nice to have [][5][]int
<adamkowalski>
I just think that the language should be consistent and if you can cast arrays to slices, you should be able to convert nd arrays to nd slices
<adamkowalski>
And mix and match at will to encode by the type system which dimensions are known at runtime vs compile time
<adamkowalski>
Then you can have something like a jagged array, where you have 5 rows, but each row has a variable amount of columns
adamkowalski has quit [Client Quit]
adamkowalski has joined #zig
<fengb>
nd arrays don’t really exist. It’s basically the same as how C works with them
wootehfoot has quit [Quit: Leaving]
<Snektron>
adamkowalski: you can already have the type [][5][]i32
<Snektron>
I think the first thing you're going to have to ask yourself is: Do you want an ND array BLAS-like library
<Snektron>
Or do you want 3D matrices
<fengb>
An array of slices is almost definitely not what he wants though
<Snektron>
Because its starting to sound like the latter and i doubt that construct even makes mathematical sense
<Snektron>
fengb: no, i think what they want is slices and pointers to arrays of static size
<Snektron>
The problem with that approach, adamkowalski, is that you then have to store an unknown number of pointers in your dynamic matrix structure
<Snektron>
Which binds your structure to the heap
<Snektron>
I think it's hard to make a structure which generalizes over both
<Snektron>
So instead you could make a matrix storage type and generalize operations over that
<fengb>
Arrays are fixed memory blocks known by the compiler. Slices are pointers into those memory blocks. They translate well only at 1 dimension level
<Snektron>
So what you'd have with the last approach is `fn Matrix(comptime T: type, comptime S: type) type { return struct { storage: S }; }`
<Snektron>
Where S has a few getters and setters for elements as well as getters for the dimensions
<Snektron>
In fact this approach is taken by libraries such as Eigen
<Snektron>
Personally i dont really use dynamically sized matrices very often so i just dont bother
<Snektron>
Anyway, the storage could be a number of things here:
<Snektron>
- statically sized with array backing
<Snektron>
- dynamically sized with heap backing
<Snektron>
- different types of views of other matrices, either static or dynamically sized, with different types of pointers to other matrices
<Snektron>
Those different pointer types could be something like pointers to dense matrices, a list of pointers to individual memory locations, or a pointer/stride kind of system
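A minimal sketch of the storage-generic design described above, in present-day Zig syntax rather than the 0.5-era syntax of this log (all names are illustrative, not from any real library):

```zig
const std = @import("std");

// Hypothetical dense, statically sized storage backing.
fn DenseStorage(comptime T: type, comptime rows_: usize, comptime cols_: usize) type {
    return struct {
        pub const Element = T;
        data: [rows_ * cols_]T = [_]T{0} ** (rows_ * cols_),

        pub fn rows(_: @This()) usize { return rows_; }
        pub fn cols(_: @This()) usize { return cols_; }
        pub fn get(self: @This(), r: usize, c: usize) T {
            return self.data[r * cols_ + c];
        }
        pub fn set(self: *@This(), r: usize, c: usize, v: T) void {
            self.data[r * cols_ + c] = v;
        }
    };
}

// Matrix is generic over any storage providing get/set and the dimensions.
fn Matrix(comptime S: type) type {
    return struct {
        storage: S = .{},

        // Any algorithm written against the storage interface works for
        // every backing: dense, heap-allocated, or a view into another matrix.
        pub fn trace(self: @This()) S.Element {
            var sum: S.Element = 0;
            var i: usize = 0;
            while (i < self.storage.rows()) : (i += 1) sum += self.storage.get(i, i);
            return sum;
        }
    };
}

test "storage-generic matrix" {
    var m = Matrix(DenseStorage(f64, 2, 2)){};
    m.storage.set(0, 0, 1);
    m.storage.set(1, 1, 2);
    try std.testing.expectEqual(@as(f64, 3), m.trace());
}
```

Swapping `DenseStorage` for a heap-backed or view-backed type with the same `get`/`set`/`rows`/`cols` interface leaves `Matrix` unchanged.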
<fengb>
Probably my C experience speaking but it seemed to work a lot better than 2D arrays
<fengb>
I also didn’t do math so not sure how well it applies to your usecase
<Snektron>
For example, you could use `Matrix(MatrixView(0, 0, 2, 2, DenseStorage(4,4))).init(MatrixView(0,0,2,2, DenseStorage(4,4)).init(&other).transpose())` to transpose only the upper 2 by 2 submatrix of a 4 by 4 matrix
<Snektron>
But as you can see thats a lot of code and nasty types, something which might be better suited for a language like rust
<Snektron>
fengb: should the width/height of Matrix not be made comptime
<Snektron>
Also note how you can abstract out storage types, and then you'd basically have what i was talking about
adamkowalski has quit [Ping timeout: 265 seconds]
<fengb>
Returning arrays requires something to be comptime known. I suppose the concrete type could be an oversized array
<fengb>
Returning arrays on the stack*
<Snektron>
Btw, there was an article about designing matrix libraries somewhere, might be relevant
<scientes>
I didn't fully rebase after the last merge
<scientes>
but I will get around to it eventually
<Snektron>
Does that mean @Vector will be hardware backed or does it mean turning on llvm simd optimization?
<scientes>
it already is
<scientes>
you just can only do limited things with it
<scientes>
Snektron, vectorization is very difficult, it is much easier to devectorize
dbandstra has joined #zig
<scientes>
that is the whole point of using these intrinsics
<scientes>
and LLVM's SIMD features
bjorob has quit [Ping timeout: 252 seconds]
<Snektron>
So this is just @Vector stuff?
<Snektron>
Also, note that using simd for 3D math is an anti pattern
<Snektron>
Dont confuse the two
<scientes>
Snektron, you want simd for ray tracing
<Snektron>
Usually i rely on LLVM autovectorize for stuff like that
<Snektron>
Its priddy gud
<Snektron>
Though ray tracing i usually offload to a gpu
<scientes>
AMD does their compilation with llvm
<scientes>
it uses these features
<scientes>
I met a top AMD LLVM engineer at the LLVM conference
<scientes>
same with mali at ARM
<Snektron>
But that's something completely different from LLVM autovectorization
<Snektron>
If you write code with intrinsics llvm will literally try to vectorize that
<Snektron>
A friend was working on llvm for his bachelors thesis and was complaining how aggressively it kept trying to autovectorize stuff, even his additions (which were stuff like bounds checking for arrays)
<Snektron>
(And some pointer stuff)
adamkowalski has joined #zig
stratact has quit [Quit: Konversation terminated!]
marijnfs has quit [Quit: WeeChat 2.6]
adamkowalski has quit [Ping timeout: 276 seconds]
lunamn has quit [Quit: leaving]
<daurnimator>
Aransentin: what do you mean by "comptime-omit"?
<daurnimator>
mq32: its the *only* solution I've got though >.<
<daurnimator>
everything else I've come up with fails for some reason (usually inability to reference the type; or something about generic being unintrospectable)
<gruebite>
anyone know why i would get wchar.h not found when building a shared library?
<gruebite>
cImport failed
<daurnimator>
gruebite: did you link against libc?
<gruebite>
it's an include error
muffindrake has quit [Ping timeout: 276 seconds]
<gruebite>
i have tried linking libc
<gruebite>
i ended up adding c.addSystemIncludeDir("/usr/include")
<gruebite>
paths might be messed up? env vars?
<daurnimator>
gruebite: zig doesn't include your system include dirs by default
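A build.zig sketch of the workaround being discussed, using the 0.5-era build API matching this log (the project names and paths are hypothetical):

```zig
// build.zig: linking libc alone may not pull in the system include dirs,
// so add them explicitly so @cImport can find headers like wchar.h.
const Builder = @import("std").build.Builder;

pub fn build(b: *Builder) void {
    const lib = b.addSharedLibrary("mylib", "src/main.zig", b.version(0, 1, 0));
    lib.linkSystemLibrary("c");
    lib.addSystemIncludeDir("/usr/include");
    lib.install();
}
```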
muffindrake has joined #zig
<gruebite>
gotcha
<daurnimator>
though I vaguely recall something like linking the system libc adds those include dirs
<daurnimator>
but now I can't find that code path...
<daurnimator>
hrm... reading the source of std/build, the only thing that sets `need_system_paths` is linkSystemLibraryName....
doublex_ has quit [Ping timeout: 240 seconds]
<daurnimator>
but `linkSystemLibraryName` doesn't get hit from `linkSystemLibrary("c")`
doublex has joined #zig
<gruebite>
hmm i coulda looked there
<gruebite>
:D
<gruebite>
thanks
<gruebite>
now i'm getting an interesting issue. some symbols are not being generated. skipped a lot of the functions
<gruebite>
everything else i got, just no functions. except for a couple
return0e has joined #zig
return0e_ has quit [Read error: Connection reset by peer]
<daurnimator>
fengb: oh wait. > Note that you can subtract integers from (unknown length) pointers, and you can add integers to pointers. What does not work is adding pointers to pointers or subtracting pointers from pointers.
<fengb>
nvm, i'm translating the C too literally >_>
<gruebite>
daurnimator: yep, i did that and put the cimport.zig into the gist. the function symbols are completely gone, but everything else is there
<gruebite>
silent error?
<fengb>
Although it'd be nice to subtract 2 pointers and get the size diff
<daurnimator>
gruebite: add --verbose-cimport
<gruebite>
not sure that changed anything, same output
<gruebite>
ahh, yeah i just searched the issue and found that command.
<gruebite>
is there a method on Builder to add that option?
<gruebite>
just greping build.zig heh
<daurnimator>
gruebite: yeah for now greping std/build.zig is the way to find out. seems like you want: .setDisableGenH
adamkowalski has joined #zig
<adamkowalski>
Thanks for all the information everyone. I really like the article about building matrix/vector libraries.
<adamkowalski>
For my use case, I need ND array support; scalars, vectors, matrices, etc. are all just special cases of the more general notion. Unfortunately, for a lot of problems you won't know the dataset, and therefore the sizes, until runtime. This means there will have to be a heap allocation. The dream is to have something which can handle both statically known and dynamically known
<adamkowalski>
dimensions and can provide type safety where appropriate. Meaning if you know the dimensions at compile time and you try to multiply something that doesn't make sense, then it should be a compile error. Otherwise you should be forced to `try`, since the dimensions may not line up at runtime
<adamkowalski>
I think the approach I will go with is to have a 1 dimensional array which is the backing memory, and then translate the nd cartesian index into the linear index. The type will be parameterized on the element type, and at runtime accept the dimension sizes as well as the strides.
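The index arithmetic in that approach can be sketched as follows (present-day Zig syntax; the names are illustrative):

```zig
const std = @import("std");

// Cartesian index -> linear index into the 1-D backing array, given
// per-dimension strides known only at runtime.
fn linearIndex(strides: []const usize, index: []const usize) usize {
    std.debug.assert(strides.len == index.len);
    var offset: usize = 0;
    for (strides, index) |s, i| offset += s * i;
    return offset;
}

// Row-major strides computed from a runtime shape, written into `out`.
// For shape {2, 3, 4} this yields {12, 4, 1}.
fn rowMajorStrides(shape: []const usize, out: []usize) void {
    var acc: usize = 1;
    var d = shape.len;
    while (d > 0) {
        d -= 1;
        out[d] = acc;
        acc *= shape[d];
    }
}

test "strided indexing" {
    const shape = [_]usize{ 2, 3, 4 };
    var strides: [3]usize = undefined;
    rowMajorStrides(&shape, &strides);
    try std.testing.expectEqual(@as(usize, 12), strides[0]);
    // element (1, 2, 3): 1*12 + 2*4 + 3*1 = 23
    try std.testing.expectEqual(@as(usize, 23), linearIndex(&strides, &[_]usize{ 1, 2, 3 }));
}
```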
<gruebite>
daurnimator: woo, got it working and calling from godot
<gruebite>
thanks
<daurnimator>
adamkowalski: you could easily follow the same pattern I did in the commit in a vector/matrix library
<adamkowalski>
If you all are also working on linear algebra libraries we should consolidate our efforts
<daurnimator>
adamkowalski: I am not; that is a buffer structure mainly to be used for network/file operations
<daurnimator>
adamkowalski: sometimes you know the size of what you're going to receive; other times its unbounded => so similar sort of thing to what you're talking about
<daurnimator>
However I'm not looking at linear algebra stuff at all (nor is it on my personal roadmap)
<adamkowalski>
I'm biased towards machine learning, but I know they are also commonly used for game development. I'm not sure how much things will overlap, but it would be nice to have one set of abstractions everyone can agree upon
<daurnimator>
gruebite: hopefully the teething troubles weren't too much for you; if you ever hit something just file an issue; we're usually pretty quick about things that block people's progress.
<gruebite>
:D
<adamkowalski>
Is there a data science community within zig? I'm also starting to work on a plotting package, and would also want something like a data frame
<daurnimator>
adamkowalski: not really. we're all in this together at this point :P
<adamkowalski>
We currently still use Python at work, but I really want to evaluate using Zig since some of our simulations are starting to take extremely long
<adamkowalski>
The only thing Python has going for it is all the libraries, but they all tend to break down at scale
<adamkowalski>
I also love that Zig takes reliability and error handling so seriously
<daurnimator>
adamkowalski: I would think zig is going to be too immature for anything not a once-off right now
<adamkowalski>
Perhaps, but it's got a lot of things going for it
<adamkowalski>
The alternatives we've looked at are C++ , D, Rust, and Julia
<daurnimator>
indeed; and it'll only get better if people keep using it and keep contributing things
<adamkowalski>
Julia has a really great ecosystem, and is something I want to steal many ideas from. However, it uses a jit and so the startup times are unacceptable
<adamkowalski>
The one thing they do that i'm not sure how to mimic here is automatic differentiation
<adamkowalski>
They actually provide hooks into the compiler so that you can record every operation that is happening
<adamkowalski>
Then they can play them back in reverse and use the chain rule to give you the derivatives
<adamkowalski>
Which is the main thing you need in order to solve an optimization problem
<adamkowalski>
When you want to approximate integrals they also use monte carlo
<adamkowalski>
But if we had those two things solved, I think zig could be a strong contender
<daurnimator>
adamkowalski: interesting. I'm curious what andrewrk would say about getting automatic differentiation
<adamkowalski>
hopefully it can happen as a library, since that would show off how flexible the language is
<daurnimator>
adamkowalski: for clang they have things like clad... I don't know if that sort of approach is worthwhile for zig
<adamkowalski>
But if we want it to be a part of the language, we can look at swift for tensorflow
<daurnimator>
adamkowalski: I'm wondering if it violates one of the tenets of zig: "no hidden control flow" => automatic differentiation sounds like hidden control flow to me
<adamkowalski>
They actually build a static computation graph and then you differentiate it, and can automatically distribute it across your cluster
<adamkowalski>
daurnimator there is no hidden control flow
<adamkowalski>
You still need to call loss.backward() in libraries like PyTorch
<adamkowalski>
Or you call gradient(f, with_respect_to=parameters) in Julia
<adamkowalski>
It's similar to defer in my opinion
<adamkowalski>
Within the nd array type itself you simply have a "tape" (really it's just a stack)
<adamkowalski>
then if you call add(x, y)
<adamkowalski>
you record the "add" onto the stack
<adamkowalski>
but you need to specify that you wish to track that particular array so you don't pay for what you don't use
<daurnimator>
adamkowalski: where is the stack? where does its memory come from?...
<adamkowalski>
in zig it would make sense to pass in an allocator
<adamkowalski>
Are you all familiar with how a neural network works?
<adamkowalski>
if I gave a really simple example it might make the workflow more clear
<adamkowalski>
Lets say you want to predict the price of a house given a "feature vector" (the square footage, number of bedrooms, number of bathrooms, etc)
<adamkowalski>
You put each of these numbers into a vector x
<adamkowalski>
now you have a matrix m and a vector b which is the same length as x
<adamkowalski>
y = mx + b
<adamkowalski>
y is your prediction
<adamkowalski>
now you take the difference between your prediction and the true house price
<adamkowalski>
so mean(absolute(y_true - y_pred))
<adamkowalski>
mean because y_true and y_pred are vectors
<adamkowalski>
abs because if the true house price is 400k, it's just as bad to predict 350k as 450k
<adamkowalski>
so loss = mean(abs(y_true - y_pred))
<adamkowalski>
you want to take the derivative of loss with respect to m and b
<adamkowalski>
since those are the parameters of your "model"
<adamkowalski>
this gives you a vector of "partial derivatives" or your "gradient vector"
<adamkowalski>
this tells you how to adjust each parameter such as to minimize your loss
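The loss described in this walkthrough, sketched for a single feature in present-day Zig (names are illustrative):

```zig
const std = @import("std");

// loss = mean(abs(y_true - y_pred)) with y_pred = m*x + b.
// Single-feature version for brevity; the real model uses a matrix m
// and feature vectors x.
fn maeLoss(m: f64, b: f64, xs: []const f64, ys: []const f64) f64 {
    var total: f64 = 0;
    for (xs, ys) |x, y| {
        const diff = y - (m * x + b);
        total += if (diff < 0) -diff else diff;
    }
    return total / @as(f64, @floatFromInt(xs.len));
}

test "zero loss when the model is exact" {
    const xs = [_]f64{ 1, 2, 3 };
    const ys = [_]f64{ 3, 5, 7 }; // y = 2x + 1 exactly
    try std.testing.expectEqual(@as(f64, 0), maeLoss(2, 1, &xs, &ys));
}
```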
<daurnimator>
I guess you'd want to specify the numeric type of your derivatives?
<adamkowalski>
If your loss is 0, your prediction matches the true value
<adamkowalski>
yeah usually those are f16
<adamkowalski>
or even quantized i8
<daurnimator>
==> if you have a function that mixes doubles and u16s.... you end up on f16 how?
<adamkowalski>
you take your floats and chop off as much precision as possible
<adamkowalski>
well f16 or i8 is best because the GPU can churn through that
<adamkowalski>
and it turns out the lower precision actually helps your model generalize
<daurnimator>
I guess with zig you could do: df = gradient(f, f16); df(some, args);
<adamkowalski>
that would be the dream
<adamkowalski>
a one function api
<adamkowalski>
which is a higher order function
<adamkowalski>
it would take a function
<adamkowalski>
and return a function which gives the deriviate
<daurnimator>
how do you deal with functions that have side effects?
<daurnimator>
e.g. what's the derivative of the write() syscall?
<adamkowalski>
I would recommend a functional style
<adamkowalski>
However, there is nothing wrong with side effects
<adamkowalski>
for example in reinforcement learning you have an agent which interacts with the environment
<adamkowalski>
you can still take the derivative
<daurnimator>
when calculating the derivative, you still perform all the operations?
<adamkowalski>
well you only do the forward pass once
<adamkowalski>
you just record which operations were performed
<daurnimator>
How would you only do the forward pass once if not all branches are taken?
<adamkowalski>
you're asking all the right questions
<adamkowalski>
that was a problem back in the day with tensorflow 1.0 and static graphs
<adamkowalski>
you had to define your entire model as a static computation graph
<adamkowalski>
it had to be pure with no side effects and no control flow
<adamkowalski>
they then introduced control flow nodes
<adamkowalski>
called if_ and while_
<adamkowalski>
which took lambdas for the condition and for the body
<adamkowalski>
so you only call the lambda for the branch that was taken
<adamkowalski>
then the derivative would be the derivative of the chosen branch
<adamkowalski>
now people do whats called dynamic graphs in tensorflow 2.0, pytorch or flux
<adamkowalski>
so you just write down regular looking code like y = m * x + b
<adamkowalski>
and it will automatically generate the graph behind the scenes and figure out the derivatives
<adamkowalski>
However, you then pay a performance price since you don't know what ops are going to be taken and which control flow branches you will go down
<adamkowalski>
But as long as you are dealing with big enough arrays it tends to not be the dominating factor
<daurnimator>
`y = m*x + b; if (y > 5) { y += somesyscall(); } else { y << 10 }` ==> how do you get the derivative of this?
<adamkowalski>
okay so assume that you are tracking m and b
<adamkowalski>
you do m * x
<adamkowalski>
record * on the tape
<adamkowalski>
then you add b so you record +
<adamkowalski>
now assuming we are talking about the dynamic graph frameworks
<adamkowalski>
you literally run the control flow
<adamkowalski>
and y is now a tracked array
<adamkowalski>
since a tracked array + another array is still tracked
<adamkowalski>
it just records the operations
<adamkowalski>
so it all just works
<adamkowalski>
you just record only for the branch you went down
<adamkowalski>
the operations themselves are the things responsible for writing down to the tape
<adamkowalski>
The only thing you can't do that you listed here is y +=
<adamkowalski>
you can't modify in place
<adamkowalski>
unless the semantics of your language is that y += blah -> y = y + blah
<adamkowalski>
now you are creating a new variable and all is good
<daurnimator>
I'm not sure I understand. Perhaps you could write out the above sample with the contents of the tracking vector at each point?
<adamkowalski>
theres a really good visualization, let me find it
<adamkowalski>
Scroll down to "the graph is created on the fly"
<adamkowalski>
Also keep in mind you can't just take derivatives of anything, it has to be of things that operate on nd arrays
<adamkowalski>
Pretty much only things that are part of the pytorch library
<adamkowalski>
but if you generate an ndarray from some other source, then turn it into a pytorch ndarray things will still work
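The tape mechanism described above, sketched as a toy scalar version in present-day Zig (a real ND-array version records the same way with array-valued adjoints; the whole API here is hypothetical):

```zig
const std = @import("std");

// One recorded operation: tape indices of inputs and output.
const Op = struct {
    kind: enum { add, mul },
    lhs: usize,
    rhs: usize,
    out: usize,
};

const Tape = struct {
    vals: std.ArrayList(f64),
    ops: std.ArrayList(Op),

    fn init(a: std.mem.Allocator) Tape {
        return .{ .vals = std.ArrayList(f64).init(a), .ops = std.ArrayList(Op).init(a) };
    }
    fn variable(self: *Tape, v: f64) !usize {
        try self.vals.append(v);
        return self.vals.items.len - 1;
    }
    // Forward ops compute the value AND record themselves on the tape.
    fn add(self: *Tape, a: usize, b: usize) !usize {
        const out = try self.variable(self.vals.items[a] + self.vals.items[b]);
        try self.ops.append(.{ .kind = .add, .lhs = a, .rhs = b, .out = out });
        return out;
    }
    fn mul(self: *Tape, a: usize, b: usize) !usize {
        const out = try self.variable(self.vals.items[a] * self.vals.items[b]);
        try self.ops.append(.{ .kind = .mul, .lhs = a, .rhs = b, .out = out });
        return out;
    }
    // Walk the tape backwards, applying the chain rule at each recorded op.
    fn gradients(self: *Tape, a: std.mem.Allocator, loss: usize) ![]f64 {
        var grad = try a.alloc(f64, self.vals.items.len);
        for (grad) |*g| g.* = 0;
        grad[loss] = 1;
        var i = self.ops.items.len;
        while (i > 0) {
            i -= 1;
            const op = self.ops.items[i];
            switch (op.kind) {
                .add => {
                    grad[op.lhs] += grad[op.out];
                    grad[op.rhs] += grad[op.out];
                },
                .mul => {
                    grad[op.lhs] += grad[op.out] * self.vals.items[op.rhs];
                    grad[op.rhs] += grad[op.out] * self.vals.items[op.lhs];
                },
            }
        }
        return grad;
    }
};

test "d(m*x + b)/dm == x" {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    const a = gpa.allocator();
    var tape = Tape.init(a);
    const m = try tape.variable(3);
    const x = try tape.variable(5);
    const b = try tape.variable(1);
    const y = try tape.add(try tape.mul(m, x), b); // y = m*x + b
    const grad = try tape.gradients(a, y);
    try std.testing.expectEqual(@as(f64, 5), grad[m]); // dy/dm = x
    try std.testing.expectEqual(@as(f64, 1), grad[b]); // dy/db = 1
}
```

Only the branch actually taken at runtime leaves ops on the tape, which is why the dynamic-graph frameworks handle control flow for free.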
adamkowalski has quit [Ping timeout: 240 seconds]
adamkowalski has joined #zig
noonien has quit [Quit: Connection closed for inactivity]
adamkowalski has quit [Read error: Connection reset by peer]
adamkowalski has joined #zig
adamkowalski has quit [Ping timeout: 240 seconds]
return0e has quit [Read error: Connection reset by peer]
return0e has joined #zig
adamkowalski has joined #zig
adamkowalski has quit [Ping timeout: 240 seconds]
adamkowalski has joined #zig
<tdeo>
is there something in the standard library to turn null-terminated strings into slices? there was an issue that mentioned the cstr module but i don't see anything in there
adamkowalski has quit [Ping timeout: 250 seconds]
<tdeo>
wrote my own simple one, would the three lines be welcomed in the standard library? :)
<dbandstra>
i think std.mem.toSlice and std.mem.toSliceConst do that
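For reference, what `toSlice`/`toSliceConst` did (scan for the 0 byte and return a slice) survives in present-day Zig as `std.mem.span` on a sentinel-terminated pointer:

```zig
const std = @import("std");

test "null-terminated pointer to slice" {
    const ptr: [*:0]const u8 = "hello";
    const s: []const u8 = std.mem.span(ptr); // scans for the 0 sentinel
    try std.testing.expectEqual(@as(usize, 5), s.len);
}
```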
<tdeo>
weird that it doesn't mention anything about null termination
adamkowalski has quit [Ping timeout: 240 seconds]
adamkowalski has joined #zig
adamkowalski has quit [Ping timeout: 240 seconds]
adamkowalski has joined #zig
adamkowalski has quit [Ping timeout: 246 seconds]
adamkowalski has joined #zig
adamkowalski has quit [Ping timeout: 246 seconds]
adamkowalski has joined #zig
hooo has joined #zig
<hooo>
this is messed up naming: std.heap.direct_allocator vs. net.IpAddress or std.AutoHashMap
<hooo>
either camel case or no right
<tdeo>
direct_allocator isn't a type
<tdeo>
it's a global
<hooo>
I think 'const' should be removed from the language
adamkowalski has quit [Ping timeout: 276 seconds]
<dbandstra>
why should it be removed?
<hooo>
it's compiler metadata, as a reader I dont care and as a writer I barely care that things are const.
<tdeo>
i think const is very useful for communicating intent
<hooo>
I think it's very confusing actually. If you make something const, what does it mean? I have no idea, now I have to assume that you had very good reason to make it const. But in reality that isnt the case, people make stuff const "just to be safe" with no thought
adamkowalski has joined #zig
<dbandstra>
i think you should make a variable const unless you have a reason not to
adamkowalski has quit [Ping timeout: 252 seconds]
adamkowalski has joined #zig
<hooo>
so in other words, a zero thought decision that basically means nothing
<hooo>
yet I have to read "const" all over the place and it confuses me and I have to assume that I cant just change it to var
<tdeo>
is `field: []const u8 = [_]u8{}` the best way to have an empty default for strings? (unrelated to above)
<dbandstra>
i don't see how const could possibly be confusing... out of all the things in zig it's one of the more straightforward things
<dbandstra>
i do find it helpful when reading code, helps me narrow down more quickly what's going on in a function when i can see at a glance that a certain variable won't be mutated
adamkowalski has quit [Ping timeout: 240 seconds]
<dbandstra>
tdeo: `""` is the same thing
<tdeo>
aha, thanks
<tdeo>
don't know why i didn't realize that
adamkowalski has joined #zig
jjido has joined #zig
adamkowalski has quit [Ping timeout: 240 seconds]
adamkowalski has joined #zig
adamkowalski has quit [Ping timeout: 240 seconds]
dbandstra has quit [Quit: leaving]
<mq32>
heyhoh
<mq32>
there isn't an online archive of older zig versions dailies, right?
<mq32>
error: expected type 'block-iterator.enum:3:28', found 'block-iterator.enum:3:28'
<mq32>
okay, this is the most ... weird bug i've seen yet :D
<mq32>
okay wtf
<mq32>
this is obscure
<daurnimator>
hooo: const should be the default choice; you make something variable only if you intend to modify it
<mq32>
does someone here already use github actions?
adamkowalski has joined #zig
adamkowalski has quit [Ping timeout: 276 seconds]
<daurnimator>
mq32: sorry; nope
<daurnimator>
mq32: did you have any further thoughts about my callback thing?
adamkowalski has joined #zig
<mq32>
no, sorry
<mq32>
your solution is kinda ... interesting, but i cannot tell how good that would work out in a real environment
<mq32>
i'm playing around with Github Actions right now
<mq32>
building my retros project with that
<mq32>
i should inform myself how those "modules" work
<daurnimator>
oh my solution is super unergonomic. but I can't find anything else that even works
<mq32>
because then we could have a zig provider for github CI
<daurnimator>
nor can my solution be made prettier with helpers
<mq32>
=> everybody could CI the zig projects :)
adamkowalski has quit [Ping timeout: 246 seconds]
adamkowalski has joined #zig
jjido has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
adamkowalski has quit [Ping timeout: 240 seconds]
adamkowalski has joined #zig
adamkowalski has quit [Ping timeout: 252 seconds]
adamkowalski has joined #zig
jjido has joined #zig
jjido has quit [Client Quit]
adamkowalski has quit [Ping timeout: 240 seconds]
Ichorio has joined #zig
adamkowalski has joined #zig
adamkowalski has quit [Ping timeout: 240 seconds]
adamkowalski has joined #zig
wootehfoot has joined #zig
adamkowalski has quit [Ping timeout: 265 seconds]
adamkowalski has joined #zig
wootehfoot has quit [Quit: Leaving]
adamkowalski has quit [Ping timeout: 245 seconds]
lunamn has joined #zig
wootehfoot has joined #zig
jjido has joined #zig
adamkowalski has joined #zig
adamkowalski has quit [Ping timeout: 246 seconds]
adamkowalski has joined #zig
adamkowalski has quit [Ping timeout: 252 seconds]
adamkowalski has joined #zig
adamkowalski has quit [Ping timeout: 250 seconds]
casaca has quit [Ping timeout: 240 seconds]
samtebbs has joined #zig
_whitelogger has joined #zig
jjido has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<mq32>
daurnimator, yeah that's the biggest drawback of this closure style
<daurnimator>
mq32: not really; it's just how they implemented them
<mq32>
yeah
<daurnimator>
for zig we can actually change the function signature of the inner function to inject a hidden "pointers to upvalues" parameter
<mq32>
a separate thread_local closure stack would probably sort this out
<daurnimator>
otherwise you can do it by e.g. mapping separate executable area and loading it in; instead of sharing it with your stack
doublex has quit [Read error: Connection reset by peer]
doublex_ has joined #zig
<daurnimator>
(essentially acting as your own runtime dynamic linker)
<mq32>
daurnimator "pointer to upvalues" would be a "fat" function pointer?
<daurnimator>
mq32: e.g. `const adder = fn (x: usize) var { return fn (y: usize) usize { return x+y; } }; const increment = adder(1);` => `increment` gets compiled to a function that you could write like: `fn increment(upvalues: struct { x: *usize }, y: usize) usize { return upvalues.x.* + y }`
<mq32>
ah
<mq32>
yeah, that side is clear to me
<mq32>
the question is: how does the other side work?
<daurnimator>
"other side"?
<mq32>
passing that function pointer to "somewhere"
<mq32>
because hiding the parameter is one thing
<daurnimator>
mq32: if we're in pure zig land we don't really have function pointers.
<mq32>
passing the upvalues as an argument is another thing
<daurnimator>
The real tricky bits are if upvalues are captured by value or by reference.....
<mq32>
i really like the C++-Approach here
<mq32>
it's quite transparent
jjido has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
jjido has joined #zig
jjido has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<mq32>
error of the day:
<mq32>
error.ExpectedValueWasNotTwo
samtebbs has quit [Quit: leaving]
Ichorio has quit [Ping timeout: 250 seconds]
Ichorio has joined #zig
<companion_cube>
I like the rust approach: all captures by ref, or all by move
Aransentin has quit [Ping timeout: 260 seconds]
<Snektron>
Isnt that annoying
<companion_cube>
it's more about whether the closure is its own thing, or just a wrapper around existing references
lunamn_ has joined #zig
lunamn has quit [Ping timeout: 265 seconds]
reductum has joined #zig
adamkowalski has joined #zig
mahmudov has joined #zig
jjido has joined #zig
reductum has quit [Remote host closed the connection]
<adamkowalski>
If I have a static array that's being passed as an argument to a function, is there a way to say that its length must be the same as the other argument's?
<adamkowalski>
Currently I'm passing a comptime unsigned int and the types of both of the arguments, followed by the arguments themselves
<adamkowalski>
I'm used to templates where those things would get inferred rather than having to explicitly type them
<mq32>
fn(comptime L: comptime_int, a: [L]u8, b: [L]u8) void;
<adamkowalski>
template <typename N, size_t L>
<adamkowalski>
mq32 right but then I have to pass in the length as the first parameter right?
<adamkowalski>
shouldn't it just know what L is and constrain a and b to be the same
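Zig doesn't infer a comptime length parameter from a later argument, but the same constraint can be expressed by typing the second parameter off the first (present-day `anytype`; the log's era spelled it `var`):

```zig
const std = @import("std");

// b is constrained to the exact same array type as a,
// so the lengths must match at compile time.
fn dot(a: anytype, b: @TypeOf(a)) f64 {
    var sum: f64 = 0;
    for (a, b) |x, y| sum += x * y;
    return sum;
}

test "lengths are forced to match at compile time" {
    const a = [_]f64{ 1, 2, 3 };
    const b = [_]f64{ 4, 5, 6 };
    try std.testing.expectEqual(@as(f64, 32), dot(a, b));
    // dot(a, [_]f64{ 1, 2 }) would be a compile error: type mismatch.
}
```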
<adamkowalski>
I see that the allocators are passed by pointers
<adamkowalski>
but why are we using runtime polymorphism instead of static dispatch
<Snektron>
Less annoying to program
<adamkowalski>
is passing by var kind of equivalent to passing by template?
<adamkowalski>
so you can pass by const ref of a templated type
<Snektron>
Allocating is a large overhead anyway
<Snektron>
Its not like that virtual call is gonna cost you
<adamkowalski>
and then as long as the compile time interface is satisfied you can pass in the param
<adamkowalski>
Well I guess, but I'm not sure what the advantage of the virtual call is?
<Snektron>
Not having to make everything that uses an allocator generic
<adamkowalski>
And if zig is aiming for the zero overhead principle, I feel like we should reserve runtime polymorphism for when you only know the type at runtime
<adamkowalski>
Hmm, is there something wrong with being generic?
<adamkowalski>
I try to make everything generic most of the time
<adamkowalski>
It makes it easy to unit test, because you can pass in anything that meets the interface
<adamkowalski>
So if I want to not actually connect to a database, but just have a mock object
<adamkowalski>
then that all just works
<fengb>
andrewrk’s goal is to have an “interface” that can seamlessly switch between comptime and runtime dispatch
<adamkowalski>
or if you want to not actually do file IO, or you want to simulate memory allocation failure
<mq32>
adamkowalski: it makes your binary larger and your code harder to debug
<adamkowalski>
fengb that would be cool, but how would that work?
<adamkowalski>
rust does that with traits and they have dyn traits for runtime polymorphism
<adamkowalski>
mq32: it's a tradeoff between potential instruction cache misses and the cost of a virtual dispatch
<adamkowalski>
in general the consensus seems to be that anything dynamic/runtime is slower
<mq32>
adamkowalski: if "code size" doesn't matter to you, yes
<mq32>
if you are in a heavily restrained environment, it gets much more important to be small than "nice"
<adamkowalski>
code size only blows up if you specialize for a bunch of different types
<adamkowalski>
for me performance matters more (as in throughput) then code size
<adamkowalski>
for safety critical machine learning systems you want to allocate all the memory you're going to use up front
<mq32>
:D
<adamkowalski>
then you build a model which you know will run in a fixed memory/time complexity
<mq32>
i don't have space for "allocating" in half of my projects anyways :D
<adamkowalski>
and you can reuse the same memory over and over again
<fengb>
We haven’t found a solution yet, but comptime is still a newish concept so there’s a few potential ideas
<adamkowalski>
well it seems like that is more niche than the case for optimizing for performance
<adamkowalski>
I feel like we have release safe and release fast
<adamkowalski>
but also having a way to explicitly decide if you want static or runtime polymorphism means you can choose
<adamkowalski>
if everyone is forced into virtual dispatch because some people prefer code size
<adamkowalski>
that doesn't seem like a good choice either
<adamkowalski>
we should have a mechanism of choice like in c++/d/rust/julia
<mq32>
yeah i'm with you there ;)
<mq32>
i just have the problem right now that i cannot even use release-safe :D
<mq32>
executable is too big
<adamkowalski>
which is something that is great to support as well. however, there is a cost to doing dynamic dispatch
<adamkowalski>
when doing numeric code I tend to have nd arrays of floats haha
<adamkowalski>
so specialization doesn't harm me
<Snektron>
I thought julia was pretty interesting but its sad they adopted 1-indexing
<adamkowalski>
Eh thats a non issue, you stop thinking about that after day 1
<adamkowalski>
they followed the Fortran style of column major + 1 based indexing
<adamkowalski>
but they also have arbitrary indexing
<adamkowalski>
you can choose an start index that makes sense for your domain
<adamkowalski>
under the hood it gets inlined and turned back into the one based indexing
<adamkowalski>
it's a great language for machine learning, maybe better than Python since it produces really fast code
<adamkowalski>
but your startup time is in the minutes if you include any libraries since it doesn't cache the compilation artifacts
<adamkowalski>
so you can't ship your product to end users
<adamkowalski>
that's actually the language I'm migrating from haha
<adamkowalski>
Snektron did you see the visualization I posted last night?
<adamkowalski>
Did that help with the side effect question?
<fengb>
Embedded is a big target of zig so I don't think small size would be "just a niche"
<adamkowalski>
My point was that we should make these choices explicit right? Just like we have no hidden control flow or allocation
<adamkowalski>
why would static vs runtime polymorphism be implicit?
<fengb>
Is godbolt buggy? I can't seem to get it to output anything useful
<Snektron>
fengb: yea
<Snektron>
adamkowalski: i didnt see
adamkowalski has quit [Ping timeout: 252 seconds]
jjido has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Akuli has joined #zig
dbandstra has joined #zig
<muffindrake>
The @"this is a function name" syntax can be used without restriction on all zig functions, no?
kristoff_it has joined #zig
<mq32>
muffindrake: in theory, yes
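[A minimal sketch of what mq32 is confirming: the @"..." syntax lets an arbitrary string serve as a function name, including keywords or names containing spaces. The example is illustrative, not from the chat:]

```zig
const std = @import("std");

// `error` is a Zig keyword, but @"..." makes it usable as a name.
fn @"error"() i32 {
    return 42;
}

// Names with spaces work the same way.
fn @"add one"(x: i32) i32 {
    return x + 1;
}

test "quoted identifiers" {
    std.debug.assert(@"error"() == 42);
    std.debug.assert(@"add one"(1) == 2);
}
```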
jessemeyer has joined #zig
<jessemeyer>
o/
<jessemeyer>
As per the docs: '// The stdcallcc specifier changes the calling convention of the function.' What does the calling convention change to?
<jessemeyer>
I'm making a WinMain application, and I want to ensure the __stdcall convention is satisfied.
<mq32>
it changes it to stdcall
<jessemeyer>
But the only example I see is to just export the function. How does the linker know the convention to use?
<jessemeyer>
I was hoping so. Cheers!
<jessemeyer>
Exporting takes care of that too?
<jessemeyer>
I do not seem able to specify the calling convention of an exported function.
<mq32>
jessemeyer: the linker doesn't care for calling conventions
<jessemeyer>
mq32 Sure, but the compiler does, right?
<mq32>
also: zig exports WinMain on windows already. just have a pub fn main() void { } in your root file :)
<jessemeyer>
That produces the code.
<jessemeyer>
Thanks! So why doesn't Zig complain if I don't provide the calling convention?
<jessemeyer>
I'm curious if it's a spurious error I should report on the forums.
<jessemeyer>
I mean Github.
<mq32>
because nothing checks for bad calling conventions out in the wild
<mq32>
you could also export WinMain as an i32 and the linker and compiler would be happy to eat that
<mq32>
and then windows wants to call your i32 and your program will explode
<jessemeyer>
I understand that conventions have to be upheld by shared parties. I don't understand how that's connected to Zig's main() missing error. Seems totally unrelated.
<mq32>
zig expects you to either have a pub fn main() or a pub nakedcc _start()
<mq32>
there are no other entry points allowed afaik
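[Putting mq32's two points together in one hedged sketch, using the stdcallcc spelling from this era of Zig (later versions moved to `callconv` annotations); `WndProcStub` and its parameter are illustrative placeholders, not a real Windows signature:]

```zig
// Zig synthesizes WinMain (with the right calling convention) from this:
pub fn main() void {}

// A callback handed to Windows still needs stdcall spelled out yourself:
export stdcallcc fn WndProcStub(msg: u32) i32 {
    _ = msg; // hypothetical: real code would dispatch on the message here
    return 0;
}
```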
<jessemeyer>
Makes sense. So it should warn about the lack of an entry point for this code (but it does not):
<jessemeyer>
What would cause lpCmdLine and hInstance to share the same reported address?
<jessemeyer>
That looks like a compiler bug, no?
dbandstra has quit [Ping timeout: 240 seconds]
Akuli has quit [Quit: Leaving]
dimenus has quit [Ping timeout: 276 seconds]
kllr_sbstn has joined #zig
ltriant has joined #zig
bjorob has joined #zig
mahmudov has quit [Remote host closed the connection]
kllr_sbstn has quit [Quit: leaving]
jjido has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Ichorio has quit [Ping timeout: 250 seconds]
wootehfoot has quit [Read error: Connection reset by peer]
ltriant_ has joined #zig
ltriant has quit [Ping timeout: 276 seconds]
cota has joined #zig
<cota>
hi, I'm having trouble writing my first ever Zig program -- I'm generating a dynamic library to interact with C code. The library should export a function "foo_func" to get callbacks from C code, e.g. "void foo_func(struct opaque *cookie);", where struct opaque is only declared (not defined) in the imported C header file. I'm getting "error: C pointers cannot point opaque types" when building the library; is
<cota>
there a way around this, e.g. defining the struct in zig to some dummy content?
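[One hedged workaround for cota's error, assuming the Zig of this log's era: declare a Zig-side opaque type with @OpaqueType() (later Zig spells it `opaque {}`) and take a single-item pointer to it, which is allowed where a C pointer to an opaque type is not:]

```zig
// Zig-side stand-in for C's forward-declared `struct opaque`;
// it is only ever passed around, never dereferenced.
const Cookie = @OpaqueType();

// `export` gives this the C ABI, so C code can call it with its
// `struct opaque *` argument.
export fn foo_func(cookie: *Cookie) void {
    // hand `cookie` back to C functions; Zig never needs its layout
    _ = cookie;
}
```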
lunamn_ has quit [Quit: leaving]
adamkowalski has joined #zig
<adamkowalski>
I made my first pass at a matmul for arrays. Would you all say this is idiomatic zig? https://pastebin.com/y4QtSKsi
<adamkowalski>
What can I change or improve to make things more in line with the ways of the language
<adamkowalski>
Also how do you compare values for equality? == is not overloadable, and std.mem.eql seems like it only works for 1 d arrays
<adamkowalski>
Sorry for all the questions haha but I have one more. How do you debug test cases? Can I spit out a binary for those and attach lldb to em?
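[On the equality question, a hedged sketch rather than an answer from the chat: since `==` is not overloadable and std.mem.eql compares one dimension, a 2-D comparison can loop over rows and reuse the 1-D version. The static shape parameters are illustrative:]

```zig
const std = @import("std");

// Element-wise equality for two 2-D arrays of the same static shape.
fn eql2d(comptime T: type, comptime rows: usize, comptime cols: usize,
         a: [rows][cols]T, b: [rows][cols]T) bool {
    var i: usize = 0;
    while (i < rows) : (i += 1) {
        // &a[i] coerces from *[cols]T to the slice std.mem.eql expects
        if (!std.mem.eql(T, &a[i], &b[i])) return false;
    }
    return true;
}
```

As for the last question: `zig test` does compile the test cases into a runner binary, so pointing lldb at that binary should be workable.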