<FromGitter>
<Timbus> the --production switch forces the lockfile to be used, and doesn't install dev dependencies. it's not a 'build' option. you want --release
<FromGitter>
<Timbus> any extra options given to `build` are just passed on to the compiler btw
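For illustration, the two flags Timbus is contrasting (assuming a shards-based project; the target name comes from whatever shard.yml declares):

```console
# Dependency management: install strictly from shard.lock, skipping dev dependencies.
shards install --production

# Compiler optimization: extra flags given to `shards build` are passed on to the compiler.
shards build --release
```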
Heaven31415 has quit [Quit: Leaving]
baweaver is now known as baweaver_away
<FromGitter>
<proyb6> This is interesting for a Gold badge.
<FromGitter>
<proyb6> The site itself is also Gold, but how about getting Crystal and web frameworks there?
akaiiro has joined #crystal-lang
<raz>
timbus: thanks! that's good to know. docs are kinda shallow on this part, but it makes sense
<FromGitter>
<girng> @proyb6 i should prob be using gist a lot more but i usually use paste.ee or just crystal playground :/
_whitelogger has joined #crystal-lang
baweaver_away is now known as baweaver
return0e has quit [Remote host closed the connection]
return0e has joined #crystal-lang
akaiiro has quit [Remote host closed the connection]
rohitpaulk has joined #crystal-lang
_whitelogger has joined #crystal-lang
return0e has quit [Remote host closed the connection]
return0e has joined #crystal-lang
DTZUZO has quit [Ping timeout: 246 seconds]
<FromGitter>
<roblally_twitter> Hey everyone, looking for a little help. I've got some data in a gzipped file and I can read it in python in under 100 milliseconds but when I read it with crystal it takes 5+ seconds. I suspect I'm doing something wrong.
<FromGitter>
<roblally_twitter> It is line oriented data and I'm basically trying to perform a map over the unpacked lines in the file to turn it into a sequence of objects.
<FromGitter>
<girng> other devs will give u a better answer
<FromGitter>
<proyb6> I wonder if you know the answer for https://github.com/the-benchmarker/web-frameworks ⏎ Having a quick read of all the Crystal web frameworks, are they running a single process or multiple processes? From the script, Spider-Gazelle seems to launch more than one process if I'm correct, and how do I see whether the other Crystal frameworks really run more than one process? ⏎ ⏎ I assume both Laravel and Vapor are
return0xe has quit [Remote host closed the connection]
return0xe has joined #crystal-lang
<FromGitter>
<j8r> @bararchy your company is awesome 🎉
<FromGitter>
<bararchy> Thanks @j8r :)
rohitpaulk has quit [Ping timeout: 252 seconds]
<FromGitter>
<j8r> @roblally_twitter with `map` and `to_a` you create 2 arrays. Initialize the array before the `file.each_line`, and then do `array << line.transform` in the block
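A minimal sketch of the single-pass version j8r is describing, assuming a gzipped, line-oriented file and a hypothetical `parse_record` standing in for whatever `line.transform` does:

```crystal
require "gzip"

# Hypothetical stand-in for the real per-line transformation.
def parse_record(line : String) : Array(String)
  line.split(',')
end

records = [] of Array(String)

File.open("data/records.csv.gz") do |file|
  Gzip::Reader.open(file) do |gzip|
    # Single pass: append each parsed line instead of building
    # an intermediate array with `map`/`to_a`.
    gzip.each_line do |line|
      records << parse_record(line)
    end
  end
end

puts records.size
```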
<FromGitter>
<proyb6> Oh, I get what the code means, thanks!
DTZUZO has quit [Ping timeout: 244 seconds]
DTZUZO has joined #crystal-lang
DTZUZO has quit [Ping timeout: 252 seconds]
<FromGitter>
<roblally_twitter> Thanks @girng, but even when I make the transform a no-op the performance is still surprisingly poor.
<FromGitter>
<roblally_twitter> @j8r The first map doesn't seem to create an array; it's a lazy iterator, which is why I added the to_a. It works the way I did it, and it works the way you suggest, too. The performance is the same, and it is still much slower than the python version.
<FromGitter>
<roblally_twitter> @j8r If I compile it with --release it is still at least an order of magnitude slower than the python version.
<FromGitter>
<roblally_twitter> Is there a way to configure the buffer size that a Gzip::Reader will use? I suspect that the problem is that the 1K or 4K buffers that seem to be used in much of the IO code are too small.
<FromGitter>
<j8r> @roblally_twitter what's the size of the file?
<FromGitter>
<bew> I guess it's the string creation for each line that slows everything down...
<FromGitter>
<roblally_twitter> @j8r The data is about 5 MB gzipped, so a 1K buffer is going to loop 5 thousand times creating strings and gluing them together.
<FromGitter>
<roblally_twitter> @bew That's probably part of it, fewer round-trips would also create fewer strings.
rohitpaulk has joined #crystal-lang
rohitpaulk has quit [Ping timeout: 240 seconds]
<FromGitter>
<roblally_twitter> Looking at Python's implementation, under the covers it has this:
<FromGitter>
<roblally_twitter> "DEFAULT_BUFFER_SIZE An int containing the default buffer size used by the module's buffered ⏎ I/O classes. open() uses the file's blksize (as obtained by os.stat) if ⏎ possible."
<FromGitter>
<roblally_twitter> I believe that'll be 4K on most SSD systems.
<FromGitter>
<roblally_twitter> The 1K buffer that crystal seems to prefer using will save a tiny amount of memory, but trade it for a lot of small garbage collection.
<FromGitter>
<roblally_twitter> Of course, I could be wrong, I am not an expert in Crystal.
<FromGitter>
<asterite> Could you share the gzip file and the python code? And also the Crystal code. I'd like to see if I can optimize it, or see where the problem is.
rohitpaulk has joined #crystal-lang
<FromGitter>
<roblally_twitter> I'm playing with converting examples from the "Think Stats" book from Python to Crystal.
<FromGitter>
<roblally_twitter> If you download the data files you can put them in a directory called "data" in the root of the project and it should work.
<FromGitter>
<roblally_twitter> You can put the python file in the same place and it should run.
return0xe has quit [Ping timeout: 245 seconds]
return0e has joined #crystal-lang
<FromGitter>
<asterite> I'll take a look
akaiiro has joined #crystal-lang
return0e has quit [Remote host closed the connection]
return0e has joined #crystal-lang
rohitpaulk has quit [Ping timeout: 260 seconds]
return0e has quit [Ping timeout: 246 seconds]
return0e has joined #crystal-lang
<FromGitter>
<asterite> So yeah, Crystal is slower because Gzip::Reader and Flate::Reader don't include IO::Buffered. When I do that, reading lines of gzip is twice as fast as python, and the overall program runs 4 times faster. I'll send a PR to fix that. There will be two buffers for this (one in Flate::Reader, the other in Gzip::Reader), but each buffer is just 8KB and one usually doesn't work with thousands of IOs at a time, so I guess that's fine
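Until that PR lands, one workaround sketch (my assumption, not something suggested in the chat) is to decompress everything in a single read and split lines in memory, avoiding many tiny reads through the unbuffered IO:

```crystal
require "gzip"

lines = [] of String

File.open("data/records.csv.gz") do |file|
  Gzip::Reader.open(file) do |gzip|
    # One large read instead of a per-line loop over the unbuffered reader.
    content = gzip.gets_to_end
    content.each_line { |line| lines << line }
  end
end

puts lines.size
```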
<FromGitter>
<roblally_twitter> That's great. Thank you for looking at this.
DTZUZO has joined #crystal-lang
DTZUZO has quit [Ping timeout: 252 seconds]
<lvmbdv>
i'm fixing openssl bindings for libressl again
<FromGitter>
<j8r> lvmbdv: how would you do it this time?
<lvmbdv>
_carefully_ :P
<lvmbdv>
make spec froze my machine, I'll elaborate when it's okay again
<FromGitter>
<asterite> @roblally_twitter here's the PR: https://github.com/crystal-lang/crystal/pull/6916 . You can try it out by checking out that branch, running `path_to_that_branch/bin/crystal your_program --release`, and checking if it's faster
<lvmbdv>
i think libressl has a different error code for that
<lvmbdv>
other than that, it seems solid
return0xe has joined #crystal-lang
return0e has quit [Ping timeout: 272 seconds]
<FromGitter>
<roblally_twitter> @asterite I will try it out.
<FromGitter>
<herrcykel> heyo, how can I cast a Byte slice to a C struct?
<FromGitter>
<herrcykel> or even read directly from an IO::FileDescriptor into a C struct
akaiiro has quit [Ping timeout: 252 seconds]
<oprypin>
herrcykel, read bytes into a `Slice` of `sizeof(Type)`, then `slice.to_unsafe.as(Type*).value`
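A small sketch of what oprypin describes, using a hypothetical C struct binding and file name for illustration:

```crystal
# Hypothetical binding; stands in for whatever struct herrcykel is reading.
lib LibExample
  struct Header
    magic : UInt32
    version : UInt16
    flags : UInt16
  end
end

File.open("data.bin") do |io|
  # Read exactly sizeof(Header) bytes into a Slice(UInt8)...
  bytes = Bytes.new(sizeof(LibExample::Header))
  io.read_fully(bytes)

  # ...then reinterpret the raw bytes as the struct.
  header = bytes.to_unsafe.as(LibExample::Header*).value
  puts header.magic
end
```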
Heaven31415 has joined #crystal-lang
<Heaven31415>
Hi
<FromGitter>
<herrcykel> @oprypin That did the trick. Thank you, sir!
akaiiro has joined #crystal-lang
akaiiro has quit [Remote host closed the connection]
<FromGitter>
<HarrisonB> Is there an open task to make generated docs more mobile-friendly?
<Heaven31415>
I have seen some mentions of refactoring the docs generator in general, but I don't recall anything specific about mobile platforms.
<FromGitter>
<straight-shoota> @HarrisonB Not exactly, no
<FromGitter>
<straight-shoota> It is evident that the docs generator needs a lot of improvements and probably a complete rewrite, separating the doc extraction from rendering the output format.
<FromGitter>
<straight-shoota> IMHO it makes the most sense to tackle the internal rewrite first and then the UI redesign. But that takes a lot of effort and nobody is currently pursuing it.
<FromGitter>
<straight-shoota> Simple improvements to the current implementation are still welcome, though 👍
<FromGitter>
<HarrisonB> Is there an open issue for discussing the desired changes? Might be interested in lending a hand 🙂
<FromGitter>
<roblally_twitter> Hey @asterite - Your patch dramatically improves performance when I run specs, but that branch from your repo fails a couple of specs when I run make spec and I can't build - I get an LLVM error when I try. (This is on OSX)
<FromGitter>
<roblally_twitter> crystal spec spec/std/socket/tcp_socket_spec.cr:102 # TCPSocket fails when host doesn't exist ⏎ crystal spec spec/std/socket/tcp_socket_spec.cr:108 # TCPSocket fails (rather than segfault on darwin) when host doesn't exist and port is 0
<FromGitter>
<roblally_twitter> And when I try to build a binary I get another error:
<FromGitter>
<roblally_twitter> Assertion failed: (cast<DISubprogram>(Scope)->describes(MF->getFunction())), function getOrCreateRegularScope, file /private/tmp/llvm@3.8-20170130-27399-1g3iurp/llvm-3.8.1.src/lib/CodeGen/LexicalScopes.cpp, line 160.
<FromGitter>
<roblally_twitter> (I built this from your repo at github.com/asterite/crystal-1)
DTZUZO has joined #crystal-lang
sagax has joined #crystal-lang
<FromGitter>
<proyb6> Do you think it’s a good idea to run a benchmark suite with timings each time the Crystal team pushes a change to the master branch? To show whether there is any performance regression and fix it as early as possible?