stripedpajamas has quit [Ping timeout: 244 seconds]
klltkr has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
dddddd_ has joined #zig
a_chou has quit [Ping timeout: 258 seconds]
dddddd has quit [Ping timeout: 260 seconds]
joey152 has quit [Ping timeout: 240 seconds]
a_chou has joined #zig
ur5us has joined #zig
ur5us has quit [Client Quit]
ur5us has joined #zig
ur5us has quit [Remote host closed the connection]
ur5us has joined #zig
a_chou has quit [Quit: a_chou]
marnix has joined #zig
marnix has quit [Read error: Connection reset by peer]
marnix has joined #zig
<andrewrk>
oh, ifreund, you may be able to work around your glibc issue with the --libc feature and providing explicit paths
<andrewrk>
with zig 0.6.0
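A minimal sketch of that workflow, assuming the `zig libc` subcommand is available alongside the `--libc` flag in 0.6.0 (the file name here is made up):
```
zig libc > my_libc_paths.txt          # dump the paths zig detected, then edit them as needed
zig build-exe main.zig -lc --libc my_libc_paths.txt
```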
gert_ has quit [Quit: WeeChat 2.9]
ur5us has quit [Ping timeout: 260 seconds]
drewr has quit [Ping timeout: 244 seconds]
xackus has joined #zig
xackus has quit [Ping timeout: 260 seconds]
waleee-cl has quit [Quit: Connection closed for inactivity]
_whitelogger has joined #zig
<JimRM>
@andrewrk looks good
oxymoron93 has joined #zig
Michcioperz has quit [Quit: Michcioperz]
Michcioperz has joined #zig
cole-h has quit [Quit: Goodbye]
omglasers2 has joined #zig
<omglasers2>
I assume it's not ok to be able to get an infinite loop while compiling? it goes on forever at the Semantic Analysis stage (master on win64/lin64)
<jorangreef>
`union1`, `union2` naming that we have?
<daurnimator>
jorangreef: I think the answer is helpers to create the struct
_whitelogger has joined #zig
<jorangreef>
We could future-proof these `unionN` fields by renaming them after the name of their first field, e.g. `union1` becomes `off` and then you set the off with `.off = .{ .off = 0 }` in a helper method, but that's still not as good as C, where you could just do `.off = 0` and adding unions later doesn't break anything as it does in Zig.
<jorangreef>
I was also thinking we could just go back to plain u32s and u64s without unions plus padding as the earlier `io_uring_sqe` struct had.
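A minimal sketch of the helper approach daurnimator suggests; the type and field names are illustrative, not the real `io_uring_sqe` layout:
```zig
const Sqe = extern struct {
    // anonymous-union member named after its first field, as jorangreef describes
    off: extern union {
        off: u64,
        addr2: u64,
    },

    fn setOff(self: *Sqe, off: u64) void {
        // callers go through the helper, so adding union members later
        // doesn't break call sites
        self.off = .{ .off = off };
    }
};
```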
zippoh has quit [Ping timeout: 256 seconds]
xd1le has quit [Remote host closed the connection]
mhi^ has joined #zig
xd1le has joined #zig
<JimRM>
I have zig installed using Snap on Ubuntu. Is there a way to make the Zig library source code browsable via ZLS? I currently see this error for example:
<JimRM>
Unable to open 'std.zig': Unable to read file '/usr/bin/lib/zig/std/std.zig' (Error: Unable to resolve non-existing file '/usr/bin/lib/zig/std/std.zig').
<JimRM>
Should I just clone the source into that folder? or is there a better way to configure this? (I am using ZLS + VSCode)
<alexnask[m]>
You can configure the zig_lib_path in a zls.json file
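A sketch of what that zls.json might look like; the path below is a guess for a snap install of zig, so adjust it to wherever the snap actually puts the lib directory:
```json
{
    "zig_lib_path": "/snap/zig/current/lib/zig"
}
```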
<JimRM>
I am building for a baremetal target - and I have added a custom llvm flag (I think) which disables NEON instructions. However, when I step through the code and enter memcpy - I see there are a bunch of Qx registers used. Is this because memcpy etc is precompiled and just linked? or should that code also be built with the same flags as my project?
<ikskuh>
nothing in zig is precompiled
<omglasers2>
Great, thanks! seems to be what I need.
<ikskuh>
JimRM: does your project contain C code?
<JimRM>
My stuff is only Zig + Assembly
<ikskuh>
you should be able to just specify the right target with -target as you can exclude/include CPU features
<JimRM>
and it crashes at 0xffff0000000820bc (line 9) due to the use of Q0, Q1 (I haven't enabled the NEON instruction set at this point, nor do I intend to)
<ikskuh>
do you link libc?
<JimRM>
I don't believe so (due to the freestanding target parameter)
Akuli has joined #zig
<ikskuh>
hm
<ikskuh>
err why do you use memcpy then?
* ikskuh
is a bit confused
<JimRM>
I am using std.fmt - which uses memcpy under the hood. I assumed memcpy would be a version provided by Zig, not linked in from the standard library
<JimRM>
(I am not very familiar with Zig)
xackus has joined #zig
* ikskuh
is even more confused
<ikskuh>
maybe you're running into optimizations of llvm?
<JimRM>
Looking at std/special/c.zig:105 I can see a zig version of memcpy
<ikskuh>
yeah that file is a partially implemented libc
<ikskuh>
can you share the whole project of yours?
<JimRM>
OK, so I copied the zig memcpy into my main file + recompiled (after renaming it etc) and I see it still generates code using Q* registers. So I think my compiler flag settings are not correct
zippoh has joined #zig
<ikskuh>
probably
drewr has joined #zig
<JimRM>
Hmm - pulling my hair out a little - this is the verbose output from zig build
<JimRM>
Added to the cpu_features_sub list to disable code generated using Q* registers
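A sketch of how that might look in build.zig; the architecture, OS tag, and executable name are assumptions about this particular bare-metal setup:
```zig
const std = @import("std");

pub fn build(b: *std.build.Builder) void {
    var target = std.zig.CrossTarget{
        .cpu_arch = .aarch64,
        .os_tag = .freestanding,
        .abi = .none,
    };
    // subtract NEON so LLVM stops emitting Q-register (SIMD) code
    target.cpu_features_sub.addFeature(@enumToInt(std.Target.aarch64.Feature.neon));

    const kernel = b.addExecutable("kernel", "src/main.zig");
    kernel.setTarget(target);
    kernel.setBuildMode(b.standardReleaseOptions());
    kernel.install();
}
```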
<JimRM>
That was some crazy journey through the zig code - don't think I am any the wiser!
<ifreund>
glad you got it worked out!
xackus has quit [Ping timeout: 264 seconds]
msingle has joined #zig
klltkr has joined #zig
waleee-cl has joined #zig
<smolck>
is there any way to declare an array with a length known at runtime without using the heap?
jorangreef has quit [Remote host closed the connection]
<smolck>
i.e. is there an equivalent to C99's `int array[size];`
<smolck>
?
<smolck>
(where size is a size known at runtime)
<ifreund>
no, zig doesn't have VLAs
<ifreund>
there's no way to handle allocation failure
<smolck>
yeah that's what I thought. hmm
<ikskuh>
smolck: you need a heap allocation for that
<ikskuh>
or you can use a big-enough array when you know your upper bound and slice that
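A minimal sketch of that upper-bound approach; the 256-byte bound and the zero fill are only illustrative:
```zig
const std = @import("std");

fn useScratch(len: usize) void {
    var buf: [256]u8 = undefined;  // comptime-known upper bound
    std.debug.assert(len <= buf.len);
    const scratch = buf[0..len];   // runtime-length slice, no heap involved
    std.mem.set(u8, scratch, 0);
}
```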
<smolck>
yeah, I think I'll have to do heap allocation for this
<smolck>
quick question, if I have nested ArrayLists, do I have to free each one of the nested ones in addition to the parent?
<smolck>
I'm guessing yes but just want to verify
<ifreund>
yes
<ikskuh>
yeah, you do
<JimRM>
Could you use an arena allocator for the list of lists?
<JimRM>
Then free everything all in one go?
<JimRM>
(Assuming that is what you are trying to do)
<smolck>
hmm I'm not sure, this is more of a library so the allocator is passed in
<smolck>
so I don't choose or make it
<JimRM>
You could create an arena allocator from the allocator passed in right?
<JimRM>
Or even if it is not an arena allocator - an allocator which frees all of its memory in a single call
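A sketch of wrapping the caller-provided allocator in an arena so every nested list is freed by a single deinit; the function and element types are made up for illustration:
```zig
const std = @import("std");

fn decodeLists(parent: *std.mem.Allocator) !void {
    var arena = std.heap.ArenaAllocator.init(parent);
    defer arena.deinit(); // frees the outer list and all inner lists in one go

    const allocator = &arena.allocator;
    var outer = std.ArrayList(std.ArrayList(u8)).init(allocator);
    var inner = std.ArrayList(u8).init(allocator);
    try inner.append(42);
    try outer.append(inner);
}
```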
<smolck>
yeah I guess so
<smolck>
I'm writing a msgpack library btw, and trying to figure out how to work with arrays, to add some context
<JimRM>
Ah right
<smolck>
and so serializing is pretty straightforward, but deserializing is the part I'm now struggling with
<JimRM>
I assume you are talking about the encoding phase?
<smolck>
no decoding
<smolck>
because I have to convert a slice/array of u8's to a []Value, but I can't really do that without VLA
<smolck>
(at least not that I know of)
<smolck>
and dealing with the heap makes this so much clunkier
<ikskuh>
huh
<ikskuh>
just reinterpretation?
<smolck>
I need to go from []u8 -> []Value
<companion_cube>
with an allocator it shouldn't be that clunky?
<companion_cube>
especially since you know the length of the array
<smolck>
it's clunky because I'd like the user to be able to pass in an array allocated on the stack, because it shouldn't be required to allocate all arrays on the heap in my mind
<companion_cube>
with an arena I think it's simpler this way; the user cannot know how big the deserialized array would be anyway?
<smolck>
no but they know the input array, and having to allocate that on the heap seems wasteful
cole-h has joined #zig
<companion_cube>
ah I see what you mean.
<companion_cube>
but the input array might not even be aligned…
<smolck>
ikskuh: yes, I think it's just reinterpretation . . .
<ikskuh>
std.mem.sliceAsBytes
<ifreund>
smolck: you want std.mem.bytesAsSlice
<ikskuh>
and the other one i cannot remember
<ikskuh>
…
<ikskuh>
ifreund: this one!
<smolck>
what does bytesAsSlice do?
<companion_cube>
smolck: what if you have an array of arrays? it's not bitwise compatible
<ifreund>
it does []u8 -> []T
<ifreund>
for whatever T you chose
<smolck>
ifreund: how does that work though, because I'd need to specify a conversion function from u8 -> a union
<smolck>
companion_cube: can you elaborate a bit more? not sure I see what you're asking
<ifreund>
it just does a pointer cast, reinterpreting the memory
<smolck>
oh so they would still technically be u8's until I reassigned them then?
<companion_cube>
smolck: if you have an array of arrays of i32, in msgpack, you can't just cast it to a zig array
<companion_cube>
there'd be "junk" in it because of the metadata (length of subarrays, etc.)
<ifreund>
sure?
<companion_cube>
I really see no way of deserializing msgpack without allocating stuff on the side
<ifreund>
it takes an arbitrary chunk of memory and interprets it as a slice of some given type, with a few safety checks
<ifreund>
the function's only like 10 lines long if you want to see exactly what it does
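A small example of that reinterpretation, assuming an input whose length divides evenly by the target type's size:
```zig
const std = @import("std");

test "bytesAsSlice reinterprets bytes in place" {
    const bytes = [_]u8{ 0x11, 0x22, 0x33, 0x44 };
    const slice: []const u8 = &bytes;
    const words = std.mem.bytesAsSlice(u16, slice);
    // no copy is made: `words` is a (possibly under-aligned) const u16 view of the same memory
    std.debug.assert(words.len == 2);
}
```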
<smolck>
companion_cube: yes, I do allocate on the heap, but not for the input list
<smolck>
ifreund: okay, I'll try that!
<smolck>
ifreund: wait, sliceAsBytes is for non-const slices, right?
<ifreund>
it preserves constness, if you pass it a const slice it will return a const slice and vice-versa
<smolck>
oh it returns a slice, ah I see
<companion_cube>
ah, the input buffer you mean? the raw bytes?
<smolck>
companion_cube: hmm . . . yes I think? The input, whether for serialization or deserialization, is stack-allocated
xackus has joined #zig
<smolck>
ifreund: is there a way to make bytesAsSlice() return a non-const slice given a const slice?
<ifreund>
no, that would be UB
<smolck>
hmm
<companion_cube>
smolck: you probably want to take an input slice anyway, I guess
<JimRM>
So.. would it be possible using zig's build system to build a binary and embed it into another target? Concrete case, I have a kernel binary that I build, I would also like to build a couple of other binaries (for example an idle process) and embed that into the kernel binary. Ideally all within the same build.zig file
<companion_cube>
whether it's on the heap or stack
<ifreund>
JimRM: I don't think so currently, but that sounds like a valid use-case that the build system would ideally be flexible enough to support
<smolck>
hmm, welp, thank you all for your help, I'm gonna have to do some thinking about how to do this later, but for now I have to go. later!
smolck has left #zig [#zig]
<ifreund>
anyone use signalfd with the std event loop yet?
jorangreef has joined #zig
<jorangreef>
Hey everyone, I get "no field named 'ptr' in '*[100]u8'" for this:
<jorangreef>
Then I get "no field named 'ptr' in '[100]u8'"
<jorangreef>
Yes, that works, but I wanted to understand why? I thought slices and arrays would have a .ptr field?
<Nypsie[m]>
Only arrays
<jorangreef>
Yes but when I call .ptr on buf which is an array I get "no field named 'ptr' in '[100]u8'"
<Nypsie[m]>
My bad, only slices do..
<Nypsie[m]>
My bad, I keep mixing them up
<jorangreef>
Slices also don't seem to have ptr as per first example :)
<Nypsie[m]>
I think only in the case of [0..buf.len].ptr perhaps
<fengb>
There’s a small hole in the Zig semantics atm. Slicing with comptime known values produces a pointer to array, which doesn’t have .ptr
<fengb>
I think there’s a proposal to add .ptr for this reason
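A small example of the distinction fengb describes:
```zig
const std = @import("std");

test "comptime-known bounds give a pointer to array, not a slice" {
    var buf: [100]u8 = undefined;

    const array_ptr = buf[0..];   // type is *[100]u8, which has no .ptr field
    _ = array_ptr;

    const slice: []u8 = buf[0..]; // coercing to a slice type gives .ptr and .len
    std.debug.assert(@TypeOf(slice.ptr) == [*]u8);
}
```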
<jorangreef>
Thanks for the help Nypsie[m] and fengb!
klltkr has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
mokafolio has quit [Quit: Bye Bye!]
<Nypsie[m]>
Sorry I couldn't explain why :)
mokafolio has joined #zig
zippoh has quit [Ping timeout: 256 seconds]
<leeward>
I'm missing something in std.net. I made a std.net.StreamServer, called accept on it, read from the connection, then called (through defers) connection.file.close(), StreamServer.close(), and StreamServer.deinit(). When I go to run the server again, it can't bind because the address is already in use.
<leeward>
Does StreamServer.close() not unbind from the socket?
jorangreef has quit [Remote host closed the connection]
yeti has joined #zig
<Nypsie[m]>
You need to provide the option .{ .reuse_address = true } on StreamServer.init()
<Nypsie[m]>
I kind of assume this is intended, considering this option exists
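A sketch of the server setup with that option, mirroring the read/close/deinit sequence leeward describes; the address and port are made up:
```zig
const std = @import("std");

pub fn main() !void {
    var server = std.net.StreamServer.init(.{ .reuse_address = true });
    defer server.deinit();

    try server.listen(try std.net.Address.parseIp("127.0.0.1", 8080));
    defer server.close();

    const connection = try server.accept();
    defer connection.file.close();
    // ... read from connection.file ...
}
```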
<leeward>
It's weird that it doesn't leave the system in the state it was in when I got to it.
<Nypsie[m]>
It was unexpected for me too, but never cared enough to ask if this is intended behavior or not.
<leeward>
Well, I guess std.net is going to be replaced eventually anyway.
<Nypsie[m]>
True it will
<ikskuh>
<leeward> Does StreamServer.close() not unbind from the socket?
<ikskuh>
it does, but most OS keep the socket "busy" for some time and prevent re-binds
<leeward>
Yeah, it seems like there's a 30 second or so timeout.
<ikskuh>
when you have reuse addr set, it will prevent other programs from rebinding your socket and only *your* executable is allowed to rebind
<Nypsie[m]>
Aaaaah that makes sense actually
<Nypsie[m]>
TIL :)
<leeward>
The man page does a terrible job of explaining that. Weird that I've never run into this before.
<companion_cube>
you can disable that
<companion_cube>
the "LINGER" option somewhere i think
zippoh has joined #zig
jayschwa has joined #zig
marnix has quit [Ping timeout: 246 seconds]
marnix has joined #zig
<ifreund>
andrewrk: thanks for all the merges, now my project builds on master zig :)
klltkr has joined #zig
<andrewrk>
nice!
<leeward>
Is there a reason to use std.atomic.Queue over std.event.Channel for code that just wants a way to send messages between threads?
reductum has joined #zig
<leeward>
It seems like Channel is just higher level, but I can't be sure I'm not missing something.
marnix has quit [Ping timeout: 265 seconds]
Akuli has quit [Quit: Leaving]
ur5us has joined #zig
wootehfoot has joined #zig
mokafolio has quit [Quit: Bye Bye!]
traviss__ has quit [Ping timeout: 260 seconds]
<andrewrk>
leeward, Channel is for evented I/O currently and std.atomic.Queue uses kernel thread mutex locking to protect its state
<andrewrk>
leeward, std.atomic.Queue should be renamed to std.threadsafe.Queue
<andrewrk>
the plan is to have 2 API layers: a higher level API layer that works in both evented I/O mode and blocking mode, and a second API layer that deals exclusively with kernel threading
mokafolio has joined #zig
<andrewrk>
it's a bit messy currently, but it will become more clear and organized as the std lib event loop and surrounding APIs mature
<andrewrk>
for your purposes, if you are using normal blocking mode I suggest the thread safe queue
<andrewrk>
I hope you can forgive the mess, as some of these APIs were constructed before 2 complete redesigns of how concurrency would work in zig
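A minimal sketch of the thread-safe queue in blocking mode; the message type is made up:
```zig
const std = @import("std");

const Msg = struct { value: i32 };

pub fn main() void {
    var queue = std.atomic.Queue(Msg).init();

    var node = std.atomic.Queue(Msg).Node{
        .data = .{ .value = 42 },
        .prev = undefined,
        .next = undefined,
    };
    queue.put(&node);           // safe to call from any thread; locks a mutex internally
    const popped = queue.get(); // returns ?*Node, null when the queue is empty
    std.debug.assert(popped.?.data.value == 42);
}
```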
xackus has quit [Ping timeout: 240 seconds]
traviss has joined #zig
wootehfoot has quit [Read error: Connection reset by peer]
nephele has quit [Quit: I dropped something]
nephele has joined #zig
omglasers2 has quit [Read error: Connection reset by peer]