ChanServ changed the topic of #zig to: zig programming language | ziglang.org | be excellent to each other | channel logs: https://irclog.whitequark.org/zig/
cenomla has quit [Quit: cenomla]
hoppetosse has joined #zig
<itsMontoya> "If you never initialize a heap allocator, then you can be sure your program is never going to cause heap allocations." :thumbsup:
<hoppetosse> I just arrived, but even with no context I agree
<itsMontoya> "llvm/ADT/Hashing.h: No such file or directory
<itsMontoya> I'm on clang 5.0.1
hoppetosse has quit [Ping timeout: 252 seconds]
<itsMontoya> Has anyone encountered that issue building master?
<itsMontoya> Oh amazing, someone wrote a zig extension for VSCode..
<itsMontoya> andrewrk: Any ideas about my build issue?
<itsMontoya> Oh! cmake has an issue
<itsMontoya> Can't find llvm
Tobba_ has quit [Read error: Connection reset by peer]
cenomla has joined #zig
oaeui has joined #zig
oaeui has quit [Quit: Page closed]
itsMontoya has quit [Quit: Lost terminal]
cenomla has quit [Quit: cenomla]
<GitHub124> [zig] andrewrk pushed 1 new commit to master: https://git.io/vN6Sy
<GitHub124> zig/master e5bc587 Andrew Kelley: rename "debug safety" to "runtime safety"...
Tobba has joined #zig
davr0s has joined #zig
<davr0s> anyone around?
davr0s has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
<andrewrk> hi davr0s
<lqd> andrewrk: are you planning on doing coroutines using llvm's coro support ?
<GitHub91> [zig] andrewrk opened pull request #720: syntax: functions require return type. remove `->` (master...require-return-type) https://git.io/vN6Nv
davr0s has joined #zig
hoppetosse has joined #zig
davr0s has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
aiwakura has quit [Quit: Ping timeout (120 seconds)]
aiwakura has joined #zig
arBmind has joined #zig
hoppetosse has quit [Ping timeout: 240 seconds]
davr0s has joined #zig
davr0s has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
hoppetosse has joined #zig
davr0s has joined #zig
hoppetosse has quit [Read error: Connection reset by peer]
davr0s has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
davr0s has joined #zig
davr0s has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
Hejsil has joined #zig
<GitHub73> [zig] andrewrk closed pull request #720: syntax: functions require return type. remove `->` (master...require-return-type) https://git.io/vN6Nv
<GitHub127> zig/master f767088 Andrew Kelley: Merge pull request #720 from zig-lang/require-return-type...
<GitHub127> [zig] andrewrk pushed 1 new commit to master: https://git.io/vNizH
davr0s has joined #zig
arBmind has quit [Quit: Leaving.]
<davr0s> anyone around?
Hejsil has quit [Quit: Page closed]
MajorLag_ has left #zig [#zig]
MajorLag_ has joined #zig
legge has joined #zig
legge has quit [Client Quit]
<andrewrk> hi davr0s
<andrewrk> lqd, yes I am planning on experimenting with that soon
<lqd> oh cool! I’ve heard complaints that they allocate too much, so it’ll be an interesting exploration for sure
<andrewrk> lqd, I still don't fully understand how it's supposed to work
<andrewrk> there's a builtin for determining how much the coro needs to allocate, and then you pass the memory to another coro builtin
<andrewrk> but it's not clear to me how it can know how much stack space it would need
davr0s has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
dimenus has joined #zig
Topa has joined #zig
<lqd> yeah, + it’s not even stable yet IIRC (maybe for llvm 7)
<andrewrk> my plan is to expose the llvm coro primitives somewhat directly, experiment with these building blocks, and then do another design iteration
<dimenus> hi all
<andrewrk> welcome back dimenus :)
<dimenus> you've been busy I see
<andrewrk> indeed
<dimenus> andrewrk, did you experiment with the LLVM C API at all or did you always gravitate to the C++ one?
<andrewrk> we use the C API for codegen as much as possible, only supplementing it where the C API is deficient
<andrewrk> for debug info there is only a C++ API
<andrewrk> and for clang I tried the C API but it was woefully incomplete
<dimenus> that makes sense
<andrewrk> the plan for self hosting is to expose our own C API for only the C++ API we need
<andrewrk> then both the c++ compiler and the self hosted compiler will use the C API
<dimenus> iteration should improve when we're not constantly recompiling templates :)
<dimenus> well, constantly is an exaggeration - but still
<dimenus> catch feels more ergonomic than %%, but in the back of my head I just think of exceptions
<dimenus> which I want no part of
<dimenus> :D
<andrewrk> don't worry. it's not exceptions
<andrewrk> dimenus, you're on windows right?
<dimenus> yessir
<andrewrk> I added a killer new feature having to do with getting stack traces for errors, but we need windows debug info support for it to work on windows
davr0s has joined #zig
<andrewrk> which I think makes sense to do in the llvm6 branch
<davr0s> hi,
<andrewrk> hi davr0s
<davr0s> i posted an issue on the zig github, my nick there is dobkeratops
<davr0s> zig reminded me of my own language experiment 2-3 years ago, i thought i'd compare notes
<davr0s> i gave up on this project figuring it's too much work with no community to build IDE/debugger integration
<davr0s> my 'bypass' of needing a community was to make it more C++ interoperable, but i never got round to writing the transpiler - it remained a theoretical possibility
<davr0s> I find the tooling (IDE with dot-autocomplete) counts for a hell of a lot... C++ is messy but has momentum behind it
<davr0s> it's taking years to get rust tooling up to scratch - its IDE support is still flaky
<davr0s> anyway it would be nice to know how many of my original goals you may share or have even delivered on already
jinank has joined #zig
<davr0s> it seems you have managed to get more people interested than i did
<andrewrk> davr0s, I suggest browsing these docs: http://ziglang.org/documentation/master/
<andrewrk> it should give you an idea of the language
<dimenus> so you're saying you want someone to work on that? *hint hint*
<andrewrk> it would be pretty great :D
<davr0s> what's the %void mean (the % ..)
<davr0s> what does^
<andrewrk> dimenus, I realized there was no issue for it so I made https://github.com/zig-lang/zig/issues/721
<dimenus> davr0s: the '%' on a type indicates that the function can return either a valid value of that type or an error
<dimenus> see the docs under the 'Errors' section
<davr0s> is %T shorthand for Result<T,E> or similar
<andrewrk> davr0s, yes
<davr0s> just as ?T is shorthand for Option<T> perhaps
<andrewrk> davr0s, that syntax is about to change. see https://github.com/zig-lang/zig/issues/632#issuecomment-360021237
<davr0s> sorta makes sense, although a bit cryptic. but ?T is certainly intuitive and has precedent
<andrewrk> ?T and %T are basically the same, except %T can be any error code instead of only null
<davr0s> is that like Error<T, E=()>
<davr0s> i guess you might use 'void' instead of rust syntax ()
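(For context, a sketch of the two types under discussion, written in later Zig syntax; the %T spelling from this log was subsequently replaced by !T per issue #632, and findFirstSpace/requireSpace are illustrative names:)

    const std = @import("std");

    // ?T: either a value of T or null (roughly Rust's Option<T>)
    fn findFirstSpace(s: []const u8) ?usize {
        return std.mem.indexOfScalar(u8, s, ' ');
    }

    // !T (written %T at the time of this log): either a value of T or an
    // error code (roughly Rust's Result<T, E>)
    fn requireSpace(s: []const u8) !usize {
        return findFirstSpace(s) orelse error.NoSpace;
    }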
Topa has quit [Ping timeout: 265 seconds]
<davr0s> "allocator ... For example, Rust seems to encourage a single global allocator strategy, "
<davr0s> i was sort of ok with the way that in C++ you can override the allocator per type, at least,
<davr0s> and you could then build abstractions for custom 'entity buffers' which allocate internally
<davr0s> with everything then backing onto a default global allocator
<andrewrk> that's a stdlib decision rather than a language decision
<andrewrk> you can build an ArrayList that takes a comptime allocator argument
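(A sketch of that point in later Zig, roughly the 0.11-era std API; the std interface has shifted across versions and collectSquares is an illustrative name:)

    const std = @import("std");

    // the container never reaches for a global allocator; the caller decides
    fn collectSquares(allocator: std.mem.Allocator, n: u64) ![]u64 {
        var list = std.ArrayList(u64).init(allocator);
        errdefer list.deinit();
        var i: u64 = 0;
        while (i < n) : (i += 1) {
            try list.append(i * i);
        }
        return list.toOwnedSlice();
    }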
<davr0s> it might influence language choices - it has influenced how Rust is going with 'box'
<davr0s> I like a lot of what jonathan blow says, but
<andrewrk> I think it's settled that in zig, the language will not call into the standard library
<davr0s> he's got the wrong end of the stick on some restrictions/capabilities in C++ - his first video shows some mistakes in his perceptions
<andrewrk> the closest thing to that is builtin.zig, which you can see an example of here: http://ziglang.org/documentation/master/#Compile-Variables
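(A sketch of reading those compile variables from user code, in later Zig syntax; the exact fields have changed over time:)

    const builtin = @import("builtin");
    const std = @import("std");

    pub fn main() void {
        // both values are known at compile time and baked in by the compiler
        std.debug.print("mode: {}, os: {}\n", .{ builtin.mode, builtin.os.tag });
    }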
<davr0s> C++ is certainly a mess , but some parts aren't as bad as he thinks
<dimenus> davr0s: do you have retorts to any specific criticisms?
<davr0s> dimenus jonathan blow's criticisms of C++, yes
<dimenus> he's softened on a few things, for instance types CAN have a constructor/destructor but it's an opt-in thing
<davr0s> ok.
<andrewrk> interesting
<davr0s> 'pointers and references' are unpopular with many people, i think he was amongst them
<davr0s> but I've learned from Rust that references make operator overloading much nicer
<davr0s> I dont like rust operator overloading
<davr0s> i've always liked C++ operator overloading .
<dimenus> he uses const reference for anything over 8 bytes by default
<dimenus> if you want a mutable variable, you take a pointer to it instead
<dimenus> i think the beef is that in c++ you can do non-const references AND pointers which just brings confusion
<davr0s> so if you want to say foo += bar .. you needed to say &foo += bar like in rust maybe . actually he might not even allow that kind of overload
<davr0s> i know he doesn't like "dot-calls" e.g. "a.foo(b)"
<davr0s> he mentally associates that with OOP, but that's not necessarily true
<andrewrk> dot calls are great
<andrewrk> everybody loves dot calls
<davr0s> rust and go both show 'dot-calls' without C++ class model
<dimenus> not to rust's degree, andrew :P
<dimenus> imo
<davr0s> i am bitterly disappointed C++ didn't get UFCS
<andrewrk> oh I see. yeah I don't like it when it's hard to figure out which function is being invoked
<davr0s> bouncing between methods and free functions makes me want to stab the committee to death
<dimenus> IIRC you can have a procedure that has a using for the first argument, which will allow you to call it like it's a dot call
<andrewrk> in zig currently the only way to not know which function is being invoked is by using a function pointer explicitly
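(A sketch of what that means in practice, in later Zig syntax: a 'dot call' is just namespace lookup on the type, so the callee is statically known; Vec2 is an illustrative type:)

    const Vec2 = struct {
        x: f32,
        y: f32,

        fn length(self: Vec2) f32 {
            return @sqrt(self.x * self.x + self.y * self.y);
        }
    };

    test "dot call is static dispatch" {
        const v = Vec2{ .x = 3, .y = 4 };
        // v.length() is exactly Vec2.length(v); no dynamic lookup is involved
        try @import("std").testing.expectEqual(@as(f32, 5), v.length());
    }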
<davr0s> i know he has the idea of making functions infix, that's also interesting but only applies to args=2 ?
<dimenus> eg, struct foo - proc_on_foo(using foo) - afoo.proc_on_foo()
<davr0s> anyway i hope i'm amongst likeminded people in observing: dotcalls give synergy between 'dot-autocomplete' and reducing nesting
<andrewrk> autocomplete makes sense. what do you mean by reducing nesting?
<davr0s> a.foo().bar(y).baz(x,y,z) .. gave you handy suggestions as you type, and follows the order of calls
<dimenus> i personally don't like chaining like that
<dimenus> but that's just my opinion
<davr0s> baz(bar(foo(a) ,y),x,y,z)
<andrewrk> I think function calls should mostly be on their own lines
<dimenus> it also stems from the C# world (which is what I do professional work on) where people chain dot calls like that
<davr0s> depends on the context
<dimenus> and are unaware they're making constant copies of strings over and over
<dimenus> my_string.TrimLeft('0').Pad().Blah().ToString()
<davr0s> well to my mind, that should be down to instrumentation and making 'the efficient way' easier, and library design
<dimenus> makes 4 or 5 copies of immutable data
<davr0s> some of those steps could be lazy eval stuff
<davr0s> there could be a specialization for some cases, then what that gives you is an intuitive interface to find the functions
<dimenus> i'm not against dot calls, but I don't like chaining, especially when it's long enough that it requires multiple lines
<davr0s> composing a few functions rather than figuring out a name (which someone then has to discover)
<davr0s> multiple lines is where I may draw the line. i don't like the 'builder pattern' heh
<davr0s> but maths type cases.. i find it's fine. target.sub(point).normalize() //get me a normal vector pointing at something
<davr0s> anyway thats why i like the idea of UFCS, i'm hopefully preaching to the converted but in C++ having to bounce between members and free-functions is hell
<andrewrk> as far as how zig decides about these things: the only important things are: * does the syntax/design lead to correct, maintainable software? * does it lead to the most optimal code?
<davr0s> i'm sure we agree on the goals
<andrewrk> other things besides these, such as does it look aesthetically pleasing, and is it fast to type, are not concerns
<davr0s> aesthetically pleasing is in the eye of the beholder: it all depends on one's path through life
<andrewrk> agreed
<davr0s> there's things where I agree analytically with a rust choice, but it still trips me up a little due to decades of C/C++ intuition
<davr0s> i usually write Type varname first then correct it, even though i think varname:Type is 'objectively' superior
<davr0s> thats why i'm big on wanting an option to omit types altogether
<davr0s> i want fn foo(a,b,c){...} to do the equivalent of template<typename A,typename B,typename C> auto foo(A a,B b, C c){ ...}
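(Zig's take on that request is type-inferred parameters; a sketch in later syntax: at the time of this log the same idea was spelled with var parameters rather than anytype, and mul3 is an illustrative name.)

    // roughly the equivalent of the C++ template above: each call site
    // instantiates the function for the argument types it is given
    fn mul3(a: anytype, b: anytype, c: anytype) @TypeOf(a * b * c) {
        return a * b * c;
    }

    test "inferred generic" {
        try @import("std").testing.expectEqual(@as(i32, 24), mul3(@as(i32, 2), 3, 4));
    }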
<dimenus> CRLF / tabs are two things that have caused me the most friction in Zig
<dimenus> otherwise, I've had a pretty good experience I think
<andrewrk> oh yeah. I need to get zig fmt going for you
<davr0s> one feature request I have for Rust is the ability to omit types in "impls", like haskell does
<dimenus> CRLF I kind of understand, but the tabs I don't think I've asked about.
<davr0s> at that point I would accept compulsory traits a bit more
<davr0s> 'writing the trait saves you a bit of typing in repeated impl's, by specifying the pattern of types for a group of functions'
<dimenus> rust's compile time is way too long for me to entertain using it at this point
<dimenus> this obsession with the javascript style micro frameworks while having to build dependencies from scratch each time
<dimenus> is just too much friction
<davr0s> i would hope these are details which can be solved with caching
<davr0s> ..rather than a design limitation
<andrewrk> dimenus, oh yeah btw I decided to work on compiler speed in the self hosted one only
<andrewrk> which is underway
<dimenus> that's logical, you don't really want to do all of the work twice
<dimenus> and focusing on it in the self-hosted version allows you to refine the language as you do it
<davr0s> i actually wonder if rust suffered for being self-hosted
<davr0s> they made decisions based on 'what does the compiler sourcebase look like'
<davr0s> and had to refactor as they made changes
<dimenus> Rust pays an enormous complexity cost in their pursuit of correctness
<dimenus> which may be super useful in certain problem domains
<davr0s> yeah i am sold on the idea that it would help collaboration across the internet - you'd have more faith in commits from disparate sources
<andrewrk> davr0s, you may be interested to read about zig's self hosting strategy
<davr0s> it does have a slightly haskell-esque feel in that 'once you've fought the compiler, it's surprising how often your code will actually work'
<andrewrk> I think we have already accomplished something very close to this in zig
<andrewrk> there are obviously some issues that the compiler cannot catch since we are not doing a borrow checker
<andrewrk> here's zig's self hosting strategy: https://github.com/zig-lang/zig/issues/89#issuecomment-328214707
<davr0s> yeah i'm happy with the middle ground, e.g. i point out how C++ references are 'not as verbose as rust borrows, but not as unsafe as raw pointers'
<andrewrk> in summary: compiling zig is always exactly a 3 step process: 1. compile C++ compiler 2. use 1 to compile zig compiler 3. use 2 to compile zig compiler again
<andrewrk> no matter how much we change the language, it's always a 3 step process. we never delete the C++ code from the repo
<davr0s> ah so you will keep the c++ version going
<andrewrk> yes. this makes it easier for package maintainers such as debian
<davr0s> i dont know if you read my idea , i never got round to it - it was to ensure my C++ compiler used a C++ subset that I could transpile to my language :)
<andrewrk> I did read that
<andrewrk> sounds ambitious
<davr0s> the design of the language had to include 'enough features to match what I used in C++' and of course i could constrain my C++ usage accordingly
<davr0s> ambitious if you considered a huge feature set
<dimenus> i'm actually writing a JAI-lite thing on the side
<dimenus> with a bytecode backend and AST injection
<dimenus> i know nothing about language design though
<davr0s> everyone in the world could have their own personal LLVM front-end lol
<dimenus> so its been fun
<davr0s> maybe thats what we need, just make the tools for making languages better :)
<dimenus> i'm just exploring ideas we're not doing in zig to better understand them
<dimenus> sometimes Jon gets too ranty and is tough to listen to.
<davr0s> i haven't looked at jai for a while
<andrewrk> dimenus, that's smart
<davr0s> recently i spent a while trying to get back into rust
<andrewrk> have you come up with anything that we should reconsider?
<dimenus> andrewrk, that's actually where I've been the past month or two
<davr0s> i figured the safety stuff would introduce me to new problems i haven't considered
<davr0s> i've not dealt with web servers etc much at all
<davr0s> my background is console gamedev
<andrewrk> neat
<andrewrk> I've done a little PC game dev, but never made money doing it
<dimenus> andrewrk: I'm not far enough along yet, i have never even taken a compiler course in college so I'm just experimenting and remaking stuff as I go
<davr0s> i'm kind of stuck these days,
<dimenus> i have valid ast representation of simple expressions and a C output backend (which seemed like the easiest first step rather than LLVM)
<davr0s> i'm most productive in C++, but would be loath to ever work with other people on C++ projects again
<dimenus> but the bytecode backend is barely started
<dimenus> jon can call arbitrary functions in bytecode from the demo, but maybe he's only looking up addresses in a table of things he defined
<davr0s> i've dabbled with haskell a bit and actually it made more sense after rust, but i don't think i want to move to a GC'd pure functional language
<dimenus> i like the compiler message pump and his pipelining of small pieces of code though
<davr0s> over time i've actually begun to appreciate its syntax
<dimenus> but that seems difficult to juggle once you get threading involved
<dimenus> i've spent 0 time w/ haskell
<davr0s> i now think we've all got it wrong with the C family languages, ... but it's too hard to change because it's burned into so many skulls, mine included
<davr0s> (i mean r.e. syntax)
<davr0s> the currying /function call idea is really awesome IMO
<davr0s> anyway as a middle ground, i'm content with a compact lambda syntax. Rust's is ok, the world's broader use of x=>x*x is also ok
<davr0s> anyway i'm not about to ask for haskell-esque syntax
<davr0s> i do like expression based syntax a lot, i gather you have this
<davr0s> you did the same 'for ... else' (after i did this, i discovered it already exists in python..)
<davr0s> do you have a stance on default args etc
<davr0s> this is one thing i find irritating about rust: it's great having a macro system, but i'd prefer the inbuilt function call syntax to be as powerful as possible
Hejsil has joined #zig
<davr0s> rust sometimes wants you to wrap macros to cover omissions in the core features (like default args, n-ary functions)
<davr0s> which means mixing 2 syntaxes to achieve one thing
<andrewrk> davr0s, stance on default args is "no"
<davr0s> i'm guessing that means you don't want named parameters either
<davr0s> my take is that it can save quite a bit of repetition, wrapping trivial helpers for common calls
<davr0s> you could have 3 args and you commonly omit any one of them
jinank has quit [Ping timeout: 260 seconds]
<davr0s> you've saved writing 3 functions
<davr0s> i find rust's stance on this clunky ... "make_window_named()" "make_window_unnamed()" etc
<davr0s> C++ is kind of ok with trailing defaults but i've always wanted true named args... the ability to omit any .. and it's not just 'functions with a lot of args..'
<Hejsil> Ye
<Hejsil> Yo*
<Hejsil> Is that an argument on default args I see?
<davr0s> hehe yes
<davr0s> r.e. Default Args vs Currying: i think you have 2 contrasting use cases. Maths functions suit currying... GUI functions, AI framework setup etc suit defaults
<davr0s> so my take would be 'the first default ends curry-ability'
<davr0s> don't give defaults for functions like "matrix_mul"
<davr0s> give defaults for functions like "create_window"
<davr0s> i think currying might be weird outside of haskell's syntax anyway, but i'm not opposed to it
<davr0s> i think both could co-exist
<Hejsil> Unless Zig gets closures, I don't think it'll have currying either
<Hejsil> They are kinda the same, in that you construct something to capture values
<davr0s> what's rather important IMO is at least 'inlineable closures', for writing code in abstractions
<davr0s> (or by the term closure do you really mean objects that capture state at runtime)
Topa has joined #zig
<Hejsil> Yes
<davr0s> i think 'inlineable lambdas' are really handy for writing loops in a way that you can parallelize
<davr0s> foreach(foos, |x|...) ---> swap in par_foreach(foos, |x| ...)
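(A sketch of how that pattern looks without closures in later Zig: pass a comptime-known function to a generic traversal so the call can be inlined; forEach and double are hypothetical helpers, not std functions:)

    // hypothetical helper: applies f to every element; because f is
    // comptime-known, the compiler is free to inline it into the loop
    fn forEach(comptime T: type, items: []T, comptime f: fn (*T) void) void {
        for (items) |*item| f(item);
    }

    fn double(x: *i32) void {
        x.* *= 2;
    }

    test "forEach as a stand-in for an inlineable lambda" {
        var data = [_]i32{ 1, 2, 3 };
        forEach(i32, data[0..], double);
        try @import("std").testing.expectEqual(@as(i32, 6), data[2]);
    }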
<MajorLag_> I'm missing something about array declaration, or there's a bug: const size = 25; var arr = [size]u8; ==> error: variable of type 'type' must be const or comptime
<davr0s> i really liked Rust's original 'do notation' where they basically had a trailing lambda syntax that looked more like 'inbuilt constructs'
<MajorLag_> wait, I see what i'm missing, nevermind
<davr0s> do foo.foreach |x| { ... }
<MajorLag_> I'm useless pre-lunch
<Hejsil> Lol, was about to answer that
<Hejsil> Hmm, idk. Have never written lambda heavy code
<Hejsil> But I do find that being able to have procedures without names is super useful
<davr0s> from my experience in console gamedev i've yearned for it to be better
<Hejsil> I rarely need the closure
Topa has quit [Ping timeout: 252 seconds]
<davr0s> the mentality to write parallelizable stuff fits the functional paradigm 'apply this algorithm to this collection' etc etc
<davr0s> and we needed that even for singlethreaded stuff
<davr0s> the issue shows up in pipelining on some processors, and modern processors are moving toward generalized SIMD
<davr0s> e.g. the intel 'vgather' stuff
<Hejsil> vgather? I'll look into that!
<davr0s> basically it's vectorized indexed addressing
<Hejsil> Aah
<davr0s> src1[i=0..n-1] = src2[ src3[i=0..n-1] ]
<davr0s> where 'src2' is an address register, 'src1, src3' are vector registers
<davr0s> the existence of this kind of instruction facilitates vectorizing more general purpose code... so long as your 'loop iterations' can be proved independent
<davr0s> now what if you wrote everything as maps in the first place
<Hejsil> Well, then you depend on your optimizer to inline and all that stuff
<davr0s> it's the same scenario we had back on the xbox 360... your code had to be unrolled for pipelining, assuming the iterations were independent
<davr0s> 'optimizing' became refactoring traversals until loop bodies were independant hence unrollable
<davr0s> yeah but if you actually write it as a map.. it's actually, ironically, more explicit
<davr0s> you're saying "apply this function across this collection"
<davr0s> rather than "here's some serial steps, can you figure out if they're parallelizable?"
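(Zig lets you state that 'apply across the lanes' intent directly with vector types; a sketch in later syntax, since @Vector and @splat postdate this log:)

    // the operation is written once and applied lane-wise; the data-parallel
    // intent is explicit rather than recovered by the optimizer
    fn scaleAdd(v: @Vector(4, f32), k: f32, b: @Vector(4, f32)) @Vector(4, f32) {
        const ks: @Vector(4, f32) = @splat(k);
        return v * ks + b;
    }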
<davr0s> the point is the universe has this underlying capability - parallelism - and our tools (and even CPU designs) are evolving toward it
<Hejsil> Would be interesting if CPUs were designed with a more functional mindset
<Hejsil> I don't even know if it could work buut
<davr0s> you probably know they have all sorts of madness going on at runtime to figure out if operations are dependent or not
<davr0s> but SIMD basically makes it explicit
<Hejsil> Yeye
<davr0s> it's like they had to do the interim step of OOOE etc because our tools/mindset hadn't caught up
<davr0s> but now we have GPUs and compute shaders, and FP
<davr0s> r.e. 'functional mindset', the SIMD stuff is already explicitly 'parallel mindset'
<davr0s> and vgather broadens it
<davr0s> i would really like to do away with the CPU/GPU divide eventually, just have a large number of cores maybe paired up sharing wide vector units, so they dont get crippled on scalar threads (more like xeon phi)
<davr0s> not sure if we'll get there
<davr0s> trying to make serial stuff go faster is flogging a dead horse
<davr0s> has been for a long time
<dimenus> just use liquid nitrogen
<dimenus> problem solved
<davr0s> anyway, that's why i like inlineable lambdas and higher-order functions, and the ability to use 'a trailing lambda without nesting'
Topa has joined #zig
<MajorLag_> Ok, I wasn't testing what I wanted to test anyway with the last bit. var size = usize(25); var temp: [size]u8 = undefined; => error: unable to evaluate constant expression
<Hejsil> Size is var, so the compiler doesn't know its value
<MajorLag_> I'm confused as to why that matters.
<MajorLag_> You can't push an array onto the stack at runtime?
<Hejsil> Let's say you have a global size
<Hejsil> Aka it can change whenever
<Hejsil> Then you have no idea how big temp will be
<Hejsil> aka you should heap allocate it
<Hejsil> Btw, arrays have a comptime known size in Zig
<MajorLag_> Alright, fine, not allowed because its dangerous. Whatever.
<Hejsil> Indeed
<Hejsil> Also [1]u8 != [2]u8. They are different types
<Hejsil> And types only exists at compile time
<Hejsil> Soo the compiler has to know size, before it can figure out the type of temp
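(A sketch of the two shapes being contrasted, in later Zig syntax; stackBuffer and heapBuffer are illustrative names:)

    const std = @import("std");

    // length is part of the type, so this can live in the stack frame
    fn stackBuffer() [25]u8 {
        var buf: [25]u8 = undefined;
        buf[0] = 0;
        return buf;
    }

    // length only known at runtime: it is not part of the type, so you
    // allocate and get a slice back
    fn heapBuffer(allocator: std.mem.Allocator, size: usize) ![]u8 {
        return allocator.alloc(u8, size);
    }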
<dimenus> MajorLag_: VLAs are kind of a hack in C
<dimenus> also, what's to stop you from declaring an array larger than the size of the stack?
<Hejsil> Indeed
<Hejsil> But you could do that at compile time too
<Hejsil> I don't think Zig is gonna stop you from [@maxValue(u64)]u64
<Hejsil> :)
<MajorLag_> I'm a little annoyed by the "mommy knows best" approach being taken here if I'm honest.
<Hejsil> Use alloca then :)
<Hejsil> From libc
<Hejsil> Isn't that in libc
<Hejsil> Idk
<dimenus> Yes, alloca is more appropriate
<dimenus> MajorLag_: I understand the sentiment, but consider my question.
<dimenus> if you rely on a library function to call it, the stack limit can be specified explicitly
<dimenus> eg, if size_requested > stack_size -> panic
<dimenus> whereas if the compiler just assumes the size is valid, there's no way to explicitly check it in a release build without paying an additional runtime cost
<dimenus> w/ guard pages etc
<MajorLag_> Great, so I'm supposed to link libc to use Zig? That's silly. Zig had an @alloca, it was removed, along with = zeroes.
<MajorLag_> Maybe I'm just too used to C, but these things are starting to add up for me.
<dimenus> MSVC doesn't support VLAs either
<dimenus> as for the alloca removal, you'll have to ask andrewrk
<dimenus> i'm not on the rust end of 'easy to abuse means we should remove it' type of thinking
<Hejsil> I mean, couldn't alloca be implemented in userspace with inline assembly
<Hejsil> Maybe we want that in the std
<Hejsil> Idk
davr0s has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
<dimenus> I do think we should have an alloca or eq personally
<Hejsil> Having alloca return a slice, instead of pointer is reason alone to have it in Zig instead of using libc's
<Hejsil> good enough reason alone*
<Hejsil> Anyways, I haven't needed alloca yet in Zig, soo I'll leave this discussion to people who know its use case
<dimenus> MajorLag_: this illustrates the example with alloca/VLAs: https://pastebin.com/9iaiPrzk
<dimenus> this code crashes in linux on gcc, but is fine on windows
<dimenus> because alloca in windows actually allocates on the heap behind the scenes if you ask for too much memory
<dimenus> soooo, the behavior of alloca isn't even agreed on between C compilers
<dimenus> VLA memory is defined at function scope rather than block scope, unlike everything else
arBmind has joined #zig
<andrewrk> MajorLag_, why do you think you have to link libc to use zig?
<MajorLag_> andrewrk, I don't. It was a suggestion from Hejsil, probably in jest.
<andrewrk> oh I see
<andrewrk> let me ask you this question, what's the use case for alloca?
<andrewrk> do you know the upper bound of the size of the data you want to add to the stack at compile time?
<MajorLag_> In this case, I needed a temporary buffer of variable size and didn't want to heap allocate it. In my case, I can certainly heap allocate but it wasn't the way I wrote the code at first, but I imagine there are use cases where that isn't desired. I honestly don't know. alloca exists, so someone had a use for it.
<andrewrk> so the size of the amount you need to alloca is only known at runtime?
<andrewrk> how do you know that size isn't larger than your stack size?
<MajorLag_> I see where you're going, if I know the upper bound I could allocate an array of that size and take a slice to it, which is another option.
<andrewrk> one of the features that I plan to add to zig is figuring out the stack size upper bound at compile time. then we can use an actually correct value for the initial stack size. this is especially valuable for threads
<MajorLag_> In this case I knew it would fit because I knew what I was loading. I'm probably not a good candidate for trying to understand alloca use cases, it was just where I went on first attempt out of habit.
<andrewrk> if you know it will fit, then use a compile time known size for an array. this tells zig that you might need up to that many bytes in this stack frame, and it can take that into account in measuring the appropriate stack size
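(A sketch of that 'comptime upper bound plus runtime slice' pattern, in later Zig syntax; readInto and the 4096 bound are illustrative:)

    // stack usage is the comptime-known maximum; the returned slice is the
    // runtime-sized view into it
    fn readInto(buf: []u8, wanted: usize) []u8 {
        const n = @min(wanted, buf.len);
        return buf[0..n];
    }

    test "bounded stack buffer" {
        var storage: [4096]u8 = undefined;
        const used = readInto(storage[0..], 100);
        try @import("std").testing.expectEqual(@as(usize, 100), used.len);
    }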
<andrewrk> one of zig's core values is robustness, and knowing the correct stack size is a legitimate concern
davr0s has joined #zig
<andrewrk> MajorLag_, just throwing this out there - I really appreciate the fact that you're using zig but you're also skeptical. I hope you don't feel like an outsider here, I really value your opinion especially if it seems like everybody disagrees with you
<andrewrk> we need diversity of thought
<DuClare> I am so disappointed that people on lobsters ended up in a completely tangential argument about the title :(
<Hejsil> Lol
<MajorLag_> andrewrk, thanks. I really do appreciate Zig's existence even if I sometimes disagree with or fail to see the rationale behind its choices.
<andrewrk> Hejsil, your zig-crc made me realize that git submodules is a pretty reasonable way to accomplish zig packages
<Hejsil> Until multiple packages rely on the same package :)
<Hejsil> But ye, it's decent
<Hejsil> Just have a top level index.zig in your lib, and all is good
<andrewrk> if you want you can add this in your build.zig: randomizer.addPackagePath("crc", "src/zig-crc/index.zig");
<andrewrk> then you can @import("crc") anywhere in your app
<Hejsil> Omg really?
<Hejsil> I always thought "std" was some hardcoded thing
<Hejsil> Nice
<andrewrk> lmk if it works, I haven't tested it too much yet
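(A minimal build.zig sketch around the call quoted above, using the pre-module-system build API, roughly the Zig 0.9-era shape; the executable name and path come from the log, the rest is illustrative, and this API has since been replaced by the module system:)

    const std = @import("std");

    pub fn build(b: *std.build.Builder) void {
        const randomizer = b.addExecutable("randomizer", "src/main.zig");
        // makes @import("crc") resolve to the submodule's top-level file
        randomizer.addPackagePath("crc", "src/zig-crc/index.zig");
        randomizer.install();
    }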
<Hejsil> That build system needs documentation lol
<andrewrk> yeah. #367
<andrewrk> I'm a lot more motivated to add docs now that the example code is tested
<Hejsil> Understandable
<andrewrk> there are some pretty major speed improvements we can do to the build system too, but that got pushed back to 0.3.0
<andrewrk> you'll also be able to print a visualization of the dependency graph of a build
<Hejsil> Exciting stuff
dimenus has quit [Quit: Leaving]
davr0s has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
davr0s has joined #zig
wilsonk has quit [Remote host closed the connection]
aaa_ has joined #zig
davr0s has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
Hejsil has quit [Read error: Connection reset by peer]
davr0s has joined #zig
DuClare has quit [Changing host]
DuClare has joined #zig
arBmind has quit [Quit: Leaving.]
davr0s has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
Topa has quit [Ping timeout: 265 seconds]
aaa_ has quit [Quit: Page closed]