<FromGitter>
<bew> according to his implementation of the non-bang versions, he can change the type of a key or value; here you're not changing the type of the value
<ryanf>
that seems kind of inevitable though? how could you mutate the value type of a hash without messing up existing references to it?
<ryanf>
it seems like a Crystal version of transform_values! would always have to preserve the type of the values
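A type-preserving `transform_values!` along these lines could be sketched like this (a hypothetical illustration, not the stdlib implementation):

```crystal
class Hash(K, V)
  # In-place version: the block must return the same value type V,
  # so existing references to the hash stay type-correct. Replacing
  # values for existing keys during iteration is safe here because
  # no keys are added or removed.
  def transform_values!(&block : V -> V)
    each do |key, value|
      self[key] = yield value
    end
    self
  end
end

h = {"a" => 1, "b" => 2}
h.transform_values! { |v| v * 10 }
# h is now {"a" => 10, "b" => 20}; the value type is still Int32
```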
<FromGitter>
<elorest> It seems that I can set a variable to a constant then call methods on it or initialize it. I can even pass it into a function and initialize and call it. For some reason I can’t seem to save it as an instance variable though. Any ideas? https://play.crystal-lang.org/#/r/1zni
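A common cause of this (assuming the playground snippet stores a class constant in an instance variable; the `Worker`/`Runner` names below are hypothetical) is that the compiler needs an explicit metaclass annotation for the ivar:

```crystal
class Worker
  def perform
    "working"
  end
end

class Runner
  # Storing a class (a constant) in an instance variable usually needs
  # an explicit `.class` (metaclass) type annotation:
  @klass : Worker.class

  def initialize(@klass : Worker.class = Worker)
  end

  def run
    @klass.new.perform
  end
end

puts Runner.new.run # => working
```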
<unshadow>
Maybe Ubuntu and Arch just handle this a little differently
<RX14>
well if it works
<RX14>
you should celebrate :)
<RX14>
maybe it's a debian-specific patch
<RX14>
although I do remember having to do the same when I was testing on my debian 8 server
<RX14>
I started a scaleway server with 64GiB of ram and got to something like 5 million fibers
<RX14>
was using something crazy like 100TiB of virtual memory
<RX14>
half the memory usage was in the kernel's page tables
<unshadow>
On my Arch I get to 32731 using default configs
<RX14>
having fibers with only 1 page of stack is really inefficient
<RX14>
unshadow, yeah that's the same for me
<unshadow>
Testing on Ubuntu now
<unshadow>
RX14: Yap, made sure, I got 65530 at max_map_count on the server, and running the loop with array size I get to 500K before losing patience lol
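The kind of loop being discussed could be sketched like this (a bounded version; the counts above depend on `vm.max_map_count` and available memory, since each fiber's stack maps virtual memory):

```crystal
# Spawn many fibers and park each one on a channel receive.
# Each fiber gets its own stack mapping, so vm.max_map_count
# (and RAM for the kernel's page tables) caps the total.
done = Channel(Nil).new
target = 10_000

target.times do
  spawn { done.receive } # park the fiber forever
end

Fiber.yield # let the scheduler run each new fiber up to its receive
puts "#{target} fibers spawned and parked"
```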
<RX14>
that's so strange
<RX14>
a good surprise
<RX14>
but a surprise nonetheless
<unshadow>
It's interesting to see how libevent actually handles switching between so many fibers
<unshadow>
when 500k actually do some work
<unshadow>
I'll try to push my reverse proxy higher
<unshadow>
Not sure why on my Arch I'm getting the fiber error though
<unshadow>
how high did you set it ?
<RX14>
just insanely high
<RX14>
high enough you'll never reach it
<unshadow>
RX14: cool, did 99999999 now I'm getting to 554012 fibers before OOM XD
<FromGitter>
<bew> awesome stuff!
<unshadow>
Now spawning sockets with the fibers
<unshadow>
8 machines trying to create 65500 sockets against a single server
<FromGitter>
<bew> I can get instance var names & types, but it doesn't seem possible to get the default value from a `MetaVar`... https://carc.in/#/r/1zro
<RX14>
does crystal really have incompatible defaults between SSL::Socket::Server and SSL::Socket::Client
<RX14>
then oprypin gave me the solution in 2 minutes over irc
<FromGitter>
<bararchy> haha
<FromGitter>
<sdogruyol> i mean what's broken :P
<FromGitter>
<sdogruyol> or how :D
<RX14>
its a 1 line fix
<RX14>
read the PR
<RX14>
it's hilariously simple
<FromGitter>
<sdogruyol> simple is always complex
<FromGitter>
<bew> well, here simple is really simple x)
<FromGitter>
<bararchy> Is there a way to set `IO.sync = true` or something to always flush ?
<RX14>
yes but you don't want to
<FromGitter>
<bararchy> why ?
<RX14>
not if you want performance
<FromGitter>
<bararchy> Oh
<FromGitter>
<bararchy> because it will block ?
<RX14>
if you always wanted sync = true why would it be an option :)
<RX14>
we buffer writes to sockets to reduce syscalls
<RX14>
every syscall has a large overhead
<RX14>
they require an interrupt, the kernel has to do a lot, the CPU cache gets busted a bit
<RX14>
in general you want to minimise them
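The buffering RX14 describes is controllable per-IO via `IO::Buffered#sync=`; sockets include `IO::Buffered` and default to `sync = false`. A self-contained sketch using a `File` (also buffered) as a stand-in for a socket:

```crystal
path = "buffered_example.txt" # hypothetical scratch file

File.open(path, "w") do |f|
  f.sync = false  # buffered (the default): small writes coalesce in an 8KiB buffer
  f << "hello "
  f << "world"
  f.flush         # one write syscall for everything buffered above
end
# f.sync = true would instead issue a syscall per write

puts File.read(path) # => hello world
File.delete(path)
```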
<FromGitter>
<bararchy> Better yet then, Can I decide on a timeout for flush then ? as in , if no more data was read by X time, then flush ?
<RX14>
not currently
<RX14>
not in the stdlib
<RX14>
you could spawn a fiber which did it every 10ms or so
<RX14>
but that'd just mask issues
<RX14>
instead of doing the correct thing
<RX14>
which is to flush properly
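For completeness, the periodic-flush workaround mentioned above could be sketched like this (discouraged, as noted, since it masks missing explicit flushes; `io` is assumed to be any buffered IO shared with the writer fibers):

```crystal
# Spawn a fiber that flushes the given IO on a fixed interval.
def auto_flush(io : IO, interval : Time::Span = 10.milliseconds)
  spawn do
    loop do
      sleep interval
      io.flush
    end
  end
end

io = IO::Memory.new
auto_flush(io)
io << "some buffered data"
```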
<FromGitter>
<bararchy> btw, when a Fiber flushes, as it is an IO action, it will switch to working on another Fiber, right ?
<Papierkorb>
maybe
<Papierkorb>
it does when it would block
<FromGitter>
<bararchy> So as long as I multi-fiber, it's not that big of a performance hit
<RX14>
flush can't be a performance hit because all it does is do the write calls that would be happening anyway
<RX14>
if the buffers weren't there
<RX14>
it could be a performance hit if you flush after every write, but it still won't be too bad
<RX14>
just an extra memcopy
<FromGitter>
<bararchy> I'll need to experiment with that, I have lots of small writes, and I see that sometimes it will give a little latency to the client
<RX14>
well you need to figure out the points where you actually want to be sending data to the client
<RX14>
it's usually just before you wait for a response from the other end
<RX14>
or at the end of sending (but you should use .close then anyway, which would call flush)
<RX14>
flushing is all about making sure the client gets the data you've just written *sometime* in the future
<Papierkorb>
bararchy, if you have something like many small writes in a loop or so, you can try writing first into an IO::Memory and then send off its buffer as a whole. Sometimes it helps.
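Papierkorb's suggestion could be sketched like this (the `batched_write` helper is hypothetical): accumulate many tiny writes in an `IO::Memory`, then hand the whole buffer to the real IO in one call.

```crystal
# Collect many small writes in memory, then write once to the real IO.
def batched_write(io : IO)
  buf = IO::Memory.new
  1000.times do |i|
    buf << i << ',' # many tiny writes, all in-memory
  end
  io.write(buf.to_slice) # one big write to the underlying IO
end

out = IO::Memory.new
batched_write(out)
puts out.to_s[0, 8] # buf contents start "0,1,2,3,"
```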
<RX14>
Papierkorb, honestly IO::Buffered should be pretty much the same as IO::Memory there
<Papierkorb>
In practice this was not the case for me.
<RX14>
how much?
<FromGitter>
<bararchy> Thanks RX14, I'll go over the project and profile behavior with regards to flush placement etc ..
<Papierkorb>
no numbers, months ago. it also helped untangle some other code paths, but it was a noticeable improvement
<RX14>
hmm
<RX14>
interesting
<RX14>
Papierkorb, how many of these small writes were there?
<FromGitter>
<bararchy> Papierkorb Interesting idea
<RX14>
was it more or less than the IO::Buffered buffer size?
<Papierkorb>
many many writes of a few bytes
<FromGitter>
<bararchy> What's the IO::Buffered buffer size ?
<RX14>
8KiB
<FromGitter>
<bararchy> Can I change that ?
<RX14>
not currently
<FromGitter>
<bararchy> Hm.... I would love 32k
<RX14>
why?
<RX14>
that's half the L1 cache!
<FromGitter>
<bararchy> Because in other cases I got file uploads and it's a waste to send small packets instead of big one
<Papierkorb>
copying 1x32K is most likely faster than copying 4x8K
<FromGitter>
<bararchy> or maybe I'll just use Nagle
<FromGitter>
<bararchy> and let kernel handle it
<RX14>
well the ideal would be to have it configurable
<FromGitter>
<bararchy> True
<RX14>
as the size would be highly dependent on the application
<RX14>
but 32K is probably quite a big default
<Papierkorb>
too large, 8KiB sounds fine as default without more hard data
<FromGitter>
<bararchy> Yeah, it's not a one-size-fits-all scenario
<Papierkorb>
Maybe writing into an IO::Memory first could help you in this case then
<FromGitter>
<bararchy> Maybe as `IO.buffer_size` or something
<RX14>
i'd say it'd be microseconds at best
<FromGitter>
<bararchy> Would it ? the socket will still send it 8K at a time
<Papierkorb>
for low latency applications that's plenty
<Papierkorb>
Nope, when you're writing a larger slice you sidestep the buffer
<RX14>
sure but if you know you need to save microseconds on your latency you likely wouldn't be asking here
<RX14>
or using a GCed language for that matter
<Papierkorb>
GC isn't the issue, more often than not you have free time in between requests where you can free stuff easily
<FromGitter>
<bararchy> Not true :) ⏎ Sleep as Ruby , Fast as C no ? ;)
<FromGitter>
<bararchy> sleek*
<Papierkorb>
Can be faster in result than not having a GC approach to it for this case
<RX14>
for throughput sure
<RX14>
but latency? people with latency-sensitive apps would be much better using something like rust
<oprypin>
deepj, why `Class`? Can't it be more specific? `Module` might just work
<FromGitter>
<bew> `Module` doesn't exist
<oprypin>
mkay
<oprypin>
deepj, let's think of it this way: if you're making a plugin system, you probably expect these modules to implement some methods
<oprypin>
so why not make a concrete module with abstract methods and accept those
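oprypin's suggestion could be sketched like this (the `Plugin`/`Upcase` names are hypothetical): instead of accepting any `Class`, define a module with abstract methods and accept only types that include it.

```crystal
# A plugin interface: includers must implement these methods,
# which the compiler enforces at compile time.
module Plugin
  abstract def name : String
  abstract def call(input : String) : String
end

class Upcase
  include Plugin

  def name : String
    "upcase"
  end

  def call(input : String) : String
    input.upcase
  end
end

# Accepts any Plugin includer, with the methods guaranteed to exist.
def register(plugin : Plugin)
  puts "#{plugin.name}: #{plugin.call("hello")}"
end

register(Upcase.new) # prints: upcase: HELLO
```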
<FromGitter>
<deepj> @oprypin Yes, `Module` doesn’t exist. It was my first surprise. I’d expect there to be a Module object in Crystal, similar to Ruby
<FromGitter>
<bew> @deepj after reading your linked post on Roda, it seems that it relies heavily on dynamic code evaluation (method overrides, etc..) which is not supported by crystal.. Plus as crystal allows you to re-open any class/module and redefine everything, I don't see where/how something like Roda could be useful in crystal
<Papierkorb>
bew, because Roda allows you to specifically override stuff in certain instances only
<Papierkorb>
if your "plugin system" were to simply override stuff, that'd make it application global
<FromGitter>
<bew> "Roda allows you to specifically override stuff in certain instances only" hmmm ok, then I have no idea how it could possibly be done in crystal (if it's even possible!)
<Papierkorb>
inheritance and interface models
<Papierkorb>
to some extent. however, Roda tries hard to use something like that so you can still use `super` and the like