<panderssen> the error i get is "Repeated allocation of very large block"
<panderssen> i am opening dozens of large files, each around 30MB, but after some light search/replace they are written to disk and on to the next
<panderssen> is there a way to 'deallocate' or 'garbage-collect' them? (i would have thought that the loop ending would do so automatically...)
<panderssen> and eventually the script will eat up all available RAM and crash
bjz has quit [Read error: Connection reset by peer]
bjz has joined #crystal-lang
panderssen has quit [Quit: Page closed]
Ven has joined #crystal-lang
soveran has joined #crystal-lang
soveran has joined #crystal-lang
soveran has quit [Changing host]
mark_66 has joined #crystal-lang
<FromGitter> <drosehn> It's 3am for me right now, but I might look at that tomorrow. One question I have, though: why do you have to read in the entire file all at once, instead of doing it a line-at-a-time? Is the gsub matching a multi-line pattern?
<FromGitter> <drosehn> It *might* help to add a `xml = nil` or `xml = ""` after the `puts`, before it loops back up. But you shouldn't need to do that.
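A minimal sketch of the line-at-a-time approach drosehn suggests, assuming the gsub replacement never spans lines; the glob pattern, search string, and output path are placeholders:

```crystal
# Stream each file line by line instead of loading ~30MB into one String,
# so no single huge allocation ever happens.
Dir.glob("data/*.xml") do |path|
  File.open(path) do |input|
    File.open("#{path}.out", "w") do |output|
      input.each_line do |line|
        output.puts line.gsub("old-tag", "new-tag")
      end
    end
  end
end
```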
bjz has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
bjz has joined #crystal-lang
Ven has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Ven has joined #crystal-lang
badeball_ is now known as badeball
gloscombe has joined #crystal-lang
Ven has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
soveran has quit [Remote host closed the connection]
bjz has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
Rinkana has quit [Ping timeout: 260 seconds]
soveran has joined #crystal-lang
soveran has joined #crystal-lang
soveran has quit [Changing host]
soveran has quit [Ping timeout: 252 seconds]
soveran has joined #crystal-lang
soveran has joined #crystal-lang
soveran has quit [Changing host]
<Papierkorb> As it's quiet in here right now, I'll take this opportunity to shamelessly plug my new data serialization and RPC shard `cannon`: https://github.com/Papierkorb/cannon
<RX14> Papierkorb, 5 cycles per element to encode that array?
<RX14> that's insane
<Papierkorb> A simple Array(Int32) is nothing else than a Slice, so what you see there is memcpy() overhead
<Papierkorb> But thanks for the compliment ^^
<Papierkorb> I can still improve the slice decoding, atm it's doing useless memory zeroing before I copy stuff into it
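A stdlib-only illustration of the point above: the elements of an Array(Int32) sit in one contiguous buffer, so writing them out is a single bulk copy rather than per-element work (this sketch is not cannon's API, just the underlying idea):

```crystal
arr = [1, 2, 3, 4]
# View the array's backing storage as raw bytes (4 bytes per Int32).
bytes = Slice.new(arr.to_unsafe.as(UInt8*), arr.size * sizeof(Int32))
io = IO::Memory.new
io.write(bytes) # one bulk write, essentially a memcpy
puts io.size    # => 16
```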
soveran has quit [Remote host closed the connection]
<RX14> the only applicable difference is CPU generation
<Papierkorb> Is that DDR4 at work?
<Papierkorb> More recent RAM is actually making a difference at once? that's something new.
<RX14> i've gotten desperate
<RX14> using --mcpu=ivybridge
<RX14> great
<RX14> it's now worse
<RX14> lol
<RX14> huh
<RX14> no
<RX14> it's crystal master that's the problem
<RX14> weird
<RX14> as llvm does 90% of the work
<RX14> well
<RX14> 99% of the optimization
<RX14> weird
<RX14> wait a sec
soveran has joined #crystal-lang
<RX14> the cache of object files doesn't include --mcpu options then, I guess
<RX14> oh I guess it does?
<RX14> nvm
<Papierkorb> RX14: Any idea how to trace a fiber deadlock which may or may not happen in the scheduler?
<RX14> attempt to reduce it to as few fibers as possible
<RX14> and then puts debug
<RX14> the alternative is to cry, I think
<Papierkorb> Have to test it, but all spawn{}-heavy servers I write have had issues with not receiving any data, or not even accepting a TCP socket anymore, at random intervals, while those without fibers work just fine
<RX14> i've had problems with camo.cr just failing
<RX14> it just refuses to make outward TCP connections
<RX14> and it just hangs
<RX14> until the socket data timeout triggers
<RX14> so all the response times are multiples of 10 seconds
<Papierkorb> did you mess with #sync= or #tcp_nodelay ?
<RX14> creating a pool of 10 fibers to do the requests would be very helpful in boosting that
<RX14> same on the server
<RX14> it's limited by not being concurrent
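A sketch of the fiber-pool idea, assuming a Channel feeds work to a fixed number of worker fibers; the worker count, URLs, and the puts standing in for the actual outgoing request are placeholders:

```crystal
jobs = Channel(String).new
done = Channel(Nil).new

10.times do
  spawn do
    while url = jobs.receive? # nil once the channel is closed and drained
      puts "fetching #{url}"  # the outgoing request would go here
    end
    done.send nil
  end
end

%w(https://example.com/a.png https://example.com/b.png).each { |u| jobs.send u }
jobs.close
10.times { done.receive } # wait for all workers to finish
```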
<Papierkorb> the pool on the server doing what?
<RX14> uhh
<RX14> idk
<RX14> calling the accept loop
<RX14> I guess
<RX14> i guess multiple fibers can accept
<Papierkorb> I added a spawn around the loop{}, which somewhat resembles what I usually do
<Papierkorb> That is, a fiber accepts and spawns a fiber per incoming connection; reconnects per second isn't that important to me
sz0 has joined #crystal-lang
<Papierkorb> Updated the gist; server.cr now somewhat resembles what I do in the RPC stuff: one accept fiber, a fiber per connection, and for each fully read incoming request, spawn a fiber that does the calculations (or nothing in this case) and writes the response out
<Papierkorb> That spawn galore costs me 10k requests/sec, so I now get 105k requests/sec. Which is an acceptable price on its own.
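A minimal sketch of the one-accept-fiber / fiber-per-connection / fiber-per-request layout described above; the port and the line-based echo "protocol" are placeholders, not the contents of the gist:

```crystal
require "socket"

def respond(client : TCPSocket, request : String)
  client.puts request # the "calculation" is just an echo here
end

def handle_connection(client : TCPSocket)
  while request = client.gets      # one request per line
    spawn respond(client, request) # worker fiber per request
  end
ensure
  client.close
end

server = TCPServer.new("127.0.0.1", 9000)

spawn do # the single accept fiber
  loop do
    client = server.accept
    spawn handle_connection(client) # one fiber per connection
  end
end

sleep # keep the main fiber alive
```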