<Holo_>
Hello, I wrote a simple one-liner that should list the currently running process names. It works perfectly fine in Ruby, but I'm trying to figure out why it doesn't work in Crystal as well. https://gist.github.com/Holofag/2bb29baacec93840ce28
<Holo_>
Oh, and this will only work on Linux, FYI
<BlaXpirit>
yeah, Holo_, I have no clue
<BlaXpirit>
well actually I do
<BlaXpirit>
the file reports its size as 0
<Holo_>
Ohhh I see.
<BlaXpirit>
Holo_, I commented on the gist
<Holo_>
Thank you
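A minimal Crystal sketch of the streaming workaround, assuming the gist walks /proc for process names (only the size-0 symptom is visible here, so the /proc/<pid>/comm path is an assumption): procfs files stat as size 0, so stream them instead of trusting the reported size.

    # List process names by streaming /proc/<pid>/comm line-by-line;
    # relying on the reported file size yields empty reads on procfs.
    Dir.children("/proc").each do |entry|
      next unless entry.to_i? # numeric directory names are PIDs
      path = "/proc/#{entry}/comm"
      next unless File.exists?(path)
      File.open(path) do |f|
        if name = f.gets
          puts name.chomp
        end
      end
    end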
<ytti>
doesn't File.stat(file).size work?
<BlaXpirit>
ytti, what do you mean?
<BlaXpirit>
sure, it works and returns 0, as the OS reports
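A quick Crystal sketch demonstrating that, with /proc/self/status as an arbitrary procfs file: the stat size is 0, yet streaming the file still yields content.

    path = "/proc/self/status"
    puts File.size(path) # => 0: the kernel reports no real size for procfs
    content = File.open(path, &.gets_to_end) # streaming still yields the data
    puts content.bytesize # positive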
<Holo_>
If I have an array of four UInt8s, is there a clean way to convert this into a UInt32?
<Holo_>
Do I just convert them all to an Int32 and then add them together, or no?
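Shifting is cleaner than adding; a minimal sketch, assuming big-endian order (most significant byte first) and illustrative values:

    # Fold four UInt8s into a UInt32 by shifting each byte in.
    bytes = [0x12_u8, 0x34_u8, 0x56_u8, 0x78_u8]
    value = bytes.reduce(0_u32) { |acc, b| (acc << 8) | b }
    puts value.to_s(16) # => "12345678"

    # Alternatively, decode a Slice with IO::ByteFormat.
    slice = Bytes[0x12, 0x34, 0x56, 0x78]
    puts IO::ByteFormat::BigEndian.decode(UInt32, slice) == value # => true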
<crystal-gh>
[crystal] chastell opened pull request #1948: Fix as-in-block-return example (gh-pages...as_grammar_fix) http://git.io/v0n72
<FLOOR_9_>
you mean quad-core; a hexa is a six-core, not counting HT
<FLOOR_9_>
4 physical cores with HT, so the OS sees 8 cores
<FLOOR_9_>
that's really nice though, 75K reqs, but I would have thought it scaled linearly; I do the same benches with my Node app and wrk
<sdogruyol>
well, it's probably OS-limited
<FLOOR_9_>
with ulimit -n you can see your limit
<FLOOR_9_>
in a bash shell
<sdogruyol>
lol it's 256
<sdogruyol>
on OS X 10.11
<FLOOR_9_>
you can bump it up to 1 million, no problem
<FLOOR_9_>
mine is at 9999 atm, and benching with both ab and wrk
<sdogruyol>
does it make a difference?
<FLOOR_9_>
of course
<sdogruyol>
let me check
<FLOOR_9_>
because the OS opens up a file descriptor for every socket the server has to open
<FLOOR_9_>
between server and wrk it has 75K TCP sockets to open, so 75K file descriptors
<FLOOR_9_>
I'm on a Mac myself, 10.8 right now, so I know the limitations
<FLOOR_9_>
I also checked this with a Node.js cluster app, and after raising that limit I gained almost 2x more req/s
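For reference, a Crystal sketch that reads the same limit programmatically; the getrlimit binding is hand-rolled here, and the RLIMIT_NOFILE value (7 on Linux, 8 on OS X) must be adjusted per platform:

    # Read the open-files limit (the same number `ulimit -n` reports).
    lib LibRlimit
      struct Rlimit
        rlim_cur : UInt64
        rlim_max : UInt64
      end
      fun getrlimit(resource : Int32, rlim : Rlimit*) : Int32
    end

    RLIMIT_NOFILE = 7 # Linux value; OS X uses 8
    rl = LibRlimit::Rlimit.new
    if LibRlimit.getrlimit(RLIMIT_NOFILE, pointerof(rl)) == 0
      puts "soft limit: #{rl.rlim_cur}, hard limit: #{rl.rlim_max}"
    end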
<sdogruyol>
is it really? I actually think that the request finishes and closes the socket
<jhass>
it might just delay the "ewk, out of FDs, need to cleanup now!" phase, if there's such a thing
<FLOOR_9_>
it goes like so: the app makes a call to open a TCP socket, and the OS kernel creates a file handle (a Unix stream / file descriptor)
<FLOOR_9_>
that way data can pass through it; wrk establishes a connection and that gets counted in the final result
<jhass>
does accept() open an FD on call, or is that delayed until an actual connection comes in?
<FLOOR_9_>
accept does open a TCP socket / file descriptor
<jhass>
I know, that wasn't my question
<jhass>
my question was *when* it does that
<FLOOR_9_>
right away when you launch an app
<sdogruyol>
it doesn't delay?
<FLOOR_9_>
you can check with netstat when it opens
<FLOOR_9_>
after that, for every connection, the kernel creates a socket, in nanoseconds
<sdogruyol>
is it OS X specific only?
<FLOOR_9_>
all Unix-like systems
<jhass>
FLOOR_9_: I'm talking about accept(), not listen()
<FLOOR_9_>
BSD, Mac, Linux
<jhass>
or bind()
<FLOOR_9_>
yeah of course, because you can see it listening with netstat
<FLOOR_9_>
right?
<FLOOR_9_>
oh, wait
<FLOOR_9_>
uhm
<sdogruyol>
guess I just messed up my system while trying to mess around with file descriptors :P
<sdogruyol>
hold on
<FLOOR_9_>
The accept() function shall extract the first connection on the queue of pending connections, create a new socket with the same socket type protocol and address family as the specified socket, and allocate a new file descriptor for that socket.
<FLOOR_9_>
see
<jhass>
yes
<jhass>
now if there's nothing in the queue
<jhass>
accept() blocks
sdogruyo_ has joined #crystal-lang
sdogruyol has quit [Read error: Connection reset by peer]
<jhass>
my question is, does it allocate the FD prior to entering the blocking period, or after, once a connection has arrived in the queue?
<sdogruyo_>
that's a really nice question
<FLOOR_9_>
good question; the only way to know is to check your open files before you launch
<FLOOR_9_>
and right after you started wrk
<FLOOR_9_>
check your file descriptor count
<sdogruyo_>
i think only when in the blocking phase
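For what it's worth, the new descriptor is accept()'s return value, so it can only be allocated once a connection is dequeued, not while accept() blocks. A Crystal sketch that makes the per-connection FD visible (the address and port are arbitrary):

    require "socket"

    # Each successful accept returns a new socket, i.e. a freshly allocated
    # file descriptor; the listening socket keeps its own single FD.
    server = TCPServer.new("127.0.0.1", 9999)
    puts "listening fd: #{server.fd}"

    while client = server.accept?
      puts "accepted fd: #{client.fd}" # allocated only when a connection arrives
      client.close # returns the descriptor to the OS
    end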
sdogruyo_ is now known as sdogruyol
<FLOOR_9_>
I only know from experience on Linux; it immediately shows results after either raising or lowering that limit
<FLOOR_9_>
same on Mac
<FLOOR_9_>
but I never heard of preallocating FDs
<FLOOR_9_>
but you may be more knowledgeable in that than me, jhass
<FLOOR_9_>
sounds to me like you've dealt with socket programming in C
<FLOOR_9_>
maybe it has something to do with TCP_NODELAY jhass?
<jhass>
idk
<FLOOR_9_>
a good thing to do is check open file descriptors before wrk
<FLOOR_9_>
and after
<FLOOR_9_>
sdogruyol: lsof -p <pid> | wc on Mac
<sdogruyol>
i see
<FLOOR_9_>
one moment, that command is for Linux
<FLOOR_9_>
lsof | wc -l for all
<FLOOR_9_>
lsof -p 84091 | wc
<FLOOR_9_>
for a pid
<FLOOR_9_>
honestly, sdogruyol, if you get 65K on 1 CPU and 71K on 4 cores with HT, you are hitting a bottleneck, I think
<sdogruyol>
yeah, that's why I said OS-specific
<sdogruyol>
why don't you try it on your system?
<FLOOR_9_>
yeah, I was just thinking of trying an Ubuntu live CD
<FLOOR_9_>
to remove the HD bottleneck, and installing Crystal and Kemal
<FLOOR_9_>
the better I can locate the bottleneck; I don't think it's the CPU
<FLOOR_9_>
have you updated your code with the new worker code?
<jhass>
keep in mind Crystal is still single core
<jhass>
coroutines are not distributed among the available CPUs yet
<jhass>
single-threaded, I should say
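A minimal sketch of what that means in practice: spawn creates fibers that are cooperatively scheduled on one thread, interleaving at IO and channel operations rather than running across cores.

    channel = Channel(Int32).new

    4.times do |i|
      spawn { channel.send(i) } # all four fibers share one OS thread
    end

    4.times { puts channel.receive } # order follows the cooperative schedule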
<FLOOR_9_>
so the gain from his worker modification is the result of HT kicking in?
<FLOOR_9_>
better utilising that single core
<FLOOR_9_>
sdogruyol: how long will you still be awake tonight?
<sdogruyol>
that might be the case also
<sdogruyol>
it's 23:11 here in Istanbul
<FLOOR_9_>
I will post the results on your Reddit post, OK?