parndt has joined #rubinius
enebo has joined #rubinius
elia has joined #rubinius
enebo has quit [Quit: enebo]
|jemc| has quit [Ping timeout: 255 seconds]
elia has quit [Quit: Computer has gone to sleep.]
dimday has joined #rubinius
parndt has quit [Remote host closed the connection]
parndt has joined #rubinius
parndt has quit [Ping timeout: 250 seconds]
havenwood has quit [Remote host closed the connection]
|jemc| has joined #rubinius
tenderlove has quit [Quit: Leaving...]
dimday has quit [Ping timeout: 250 seconds]
dimday has joined #rubinius
dimday has quit [Ping timeout: 255 seconds]
dimday has joined #rubinius
amclain has joined #rubinius
meh` has quit [Ping timeout: 256 seconds]
<jc00ke> Hmm, calagator specs segv rbx
_santana has joined #rubinius
_santana is now known as santana
<brixen> jc00ke: segv how?
JohnBat26 has joined #rubinius
parndt has joined #rubinius
parndt has quit [Ping timeout: 264 seconds]
santana_ has joined #rubinius
santana has quit [Disconnected by services]
JohnBat26 has quit [Quit: KVIrc 4.3.1 Aria http://www.kvirc.net/]
santana_ has quit [Remote host closed the connection]
santana has joined #rubinius
santana has quit [Client Quit]
<jc00ke> brixen: sorry, just saw your msg - https://github.com/rubinius/rubinius/issues/3201
amclain has quit [Quit: Leaving]
|jemc| has quit [Read error: Connection reset by peer]
|jemc| has joined #rubinius
noop has joined #rubinius
yipstar has quit [Ping timeout: 272 seconds]
JohnBat26 has joined #rubinius
dimday has quit [Remote host closed the connection]
josh-k has joined #rubinius
josh-k_ has joined #rubinius
josh-k has quit [Ping timeout: 258 seconds]
flavio has joined #rubinius
flavio has joined #rubinius
jnh has quit [Read error: Connection reset by peer]
jnh has joined #rubinius
|jemc| has quit [Ping timeout: 256 seconds]
GitHub176 has joined #rubinius
<GitHub176> [rubinius] jimmycuadra opened pull request #3203: RubySpec for creating a Proc using the block from an enclosing method (master...spec-proc-new-without-block) http://git.io/n9nEtQ
GitHub176 has left #rubinius [#rubinius]
goyox86_ has joined #rubinius
<goyox86_> morning!
goyox86_ has quit [Ping timeout: 240 seconds]
elia has joined #rubinius
<yorickpeterse> morning
goyox86_ has joined #rubinius
josh-k_ has quit [Read error: Connection reset by peer]
josh-k has joined #rubinius
goyox86_ has quit [Client Quit]
josh-k_ has joined #rubinius
josh-k has quit [Ping timeout: 265 seconds]
benlovell has joined #rubinius
goyox86 has joined #rubinius
lbianc_ has joined #rubinius
lbianc has quit [Ping timeout: 245 seconds]
lbianc_ is now known as lbianc
benlovell has quit [Ping timeout: 245 seconds]
benlovell has joined #rubinius
<yorickpeterse> brixen: so my dtrace alloc tracking is actually useful :D
<yorickpeterse> brixen: seeing a whole bunch of allocations every few seconds, I suspect that happens every time this code polls SQS for new data
<yorickpeterse> brixen: also, I'm seeing allocations happen without a GC ever running in between
<yorickpeterse> this is very interesting, it would suggest either objects still have references pointing to them, or the GC needs a new pair of glasses
<yorickpeterse> Yup, that's the polling of SQS, same call/allocation pattern every time
<yorickpeterse> yet not a single GC run in between
<yorickpeterse> At least, I can't see any with -Xgc.show -Xgc.immix.debug nor in systemtap
<yorickpeterse> I wonder what happens if I force GC runs
josh-k_ has quit [Remote host closed the connection]
josh-k has joined #rubinius
josh-k has quit [Ping timeout: 256 seconds]
<yorickpeterse> It appears that with forcing a GC run every 10 seconds the leak rate is much, much lower
lbianc_ has joined #rubinius
lbianc has quit [Ping timeout: 245 seconds]
lbianc_ is now known as lbianc
<yorickpeterse> That however would, I think, mean the GC needs a new pair of glasses instead of objects being retained forever
<yorickpeterse> (by Ruby code that is)
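A minimal sketch of the kind of periodic forced collection being described, assuming nothing fancier than a plain background Thread and GC.start (this is not the app's actual code):

    # Force a full collection every 10 seconds to see whether the growth
    # is real retention or a collector that simply never gets scheduled.
    Thread.new do
      loop do
        sleep 10
        GC.start
      end
    end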
benlovell has quit [Ping timeout: 244 seconds]
benlovell has joined #rubinius
jfredett-w1 has joined #rubinius
jfredett-w has quit [Ping timeout: 258 seconds]
meh` has joined #rubinius
benlovell has quit [Ping timeout: 255 seconds]
benlovell has joined #rubinius
DireFog has quit [Remote host closed the connection]
DireFog has joined #rubinius
<dbussink> yorickpeterse: maybe it creates pressure somewhere that isn't tracked properly? so no gc is scheduled?
<dbussink> or the code doesn't end up with a checkpoint instruction somehow?
<yorickpeterse> I deployed a change to run immix non-concurrently, waiting for more data to come in
<yorickpeterse> Because that looked to be fairly stable locally
<yorickpeterse> (ignoring the random JIT crashes)
parndt has joined #rubinius
<headius> dbussink: planning to be at RubyConf?
<dbussink> yorickpeterse: if that's a fix, maybe it starves it so it never ends up finishing runs
<dbussink> headius: no plans this year
<headius> ahh, too bad...hopefully you can come down for the FOSDEM ruby room in Feb though
<dbussink> that's probably going to be a lot easier :)
<headius> should be a good time
<dbussink> i guess maybe yorickpeterse wants to come too
parndt has quit [Client Quit]
<headius> I certainly hope so!
<yorickpeterse> dbussink: not sure if I can afford it
<yorickpeterse> no idea what it costs
enebo has joined #rubinius
<dbussink> yorickpeterse: free access
<yorickpeterse> wait wut
<dbussink> yorickpeterse: it's in brussels so it's not too far as well
<dbussink> yorickpeterse https://fosdem.org/2015/
<yorickpeterse> what the hell, FOSDEM is free?
<yorickpeterse> :O
noop has quit [Ping timeout: 258 seconds]
<yorickpeterse> oh hey, even a Ruby room
<yorickpeterse> brixen: dbussink: ok seems immix with concurrent disabled _really_ cuts down memory usage
<yorickpeterse> at the cost of much higher timings though
<dbussink> yorickpeterse: always has been free
<dbussink> yorickpeterse: makes sense if there's a high immix churn rate
<dbussink> yorickpeterse: a lot of objects that get immediately allocated in immix?
<yorickpeterse> dbussink: this particular app, even when idle, allocates a bucket of objects every 10 seconds or so (= basically every time it polls Amazon SQS)
<yorickpeterse> But I expected those to be short-lived objects
<yorickpeterse> (unless I'm mixing things up again, Immix is the mature GC right?)
<dbussink> maybe they are huge? would be interesting to see what the pattern is
<dbussink> yeah
<yorickpeterse> well, there's a bunch of XML being thrown around
<dbussink> but if objects are beyond certain sizes etc. they get allocated in mature anyway
<yorickpeterse> Could be that that's too big for the young generation
<dbussink> maybe, you could tweak with a bigger young gen
<dbussink> but young gen size also gets auto tuned
<yorickpeterse> Interestingly enough it does basically keep growing until it gets OOM killed (with concurrent immix)
<yorickpeterse> which is around 2-2.5 GB of memory for this process
<yorickpeterse> so if it's tuning, it's doing it wrong :P
<brixen> increasing the young gen is the wrong answer
<brixen> it leads to very bad GC perf
<brixen> we've had to fix this in production
<brixen> there are several big issues with the concurrent GC at the moment
<yorickpeterse> huh wtf, memory.jit.bytes is being reported as sitting at a peak of 361 MB
<brixen> but GC algo aside, unless you are actually retaining objects, nothing should make the heap grow to many GBs
<brixen> but there could definitely be leaking memory in other data structures
<brixen> anyway, gotta run
<yorickpeterse> Hm, metrics wise (besides general memory usage) I'm not seeing a drop in large objects
<yorickpeterse> oh wait, that's just the graph timeframe
<yorickpeterse> it's actually cut in about half
<headius> dbussink: how does the autotuning work? based on young gen evacuations or something?
<dbussink> i think it's currently a metric of number of young gen runs during one full gen run
<yorickpeterse> seems to do something at least
<yorickpeterse> I wonder why the capi handles went up though
<yorickpeterse> and that massive JIT peak is also weird
<yorickpeterse> meanwhile total memory usage is pretty solid
<brixen> yorickpeterse: what is the event between the areas on these graphs?
<yorickpeterse> brixen: deploy
<yorickpeterse> this graph plots the past 3 hours grouped per minute
<brixen> but what changed?
<yorickpeterse> deploy takes a few minutes
<yorickpeterse> oh
<yorickpeterse> I added -Xgc.immix.concurrent=false to the app
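For context, a hedged example of how such a flag typically reaches a deployed process; the script name is made up, and RBXOPT is simply the usual environment variable Rubinius reads extra options from:

    RBXOPT="-Xgc.immix.concurrent=false" rbx worker.rb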
elia has quit [Read error: Connection reset by peer]
<brixen> ok
<yorickpeterse> I wanted to see if the problem was related to immix, which more or less seems to be the case
<yorickpeterse> basically there was this pattern of: idle for several seconds, poll SQS, allocate a whole bunch of objects, repeat
<brixen> so the size you saw in the young gen before resulted from the concurrent algo trying to tune the young gen size
<yorickpeterse> but with no GC runs in between at all
<brixen> very badly at that
<yorickpeterse> Ah, so that explains the small increments?
<brixen> the young gen should not be 100 MB
<yorickpeterse> (the two steps to the right)
<brixen> the young gen tunes itself based on occupancy after a collection
<brixen> the question you need to answer is: Are you retaining objects unnecessarily
<yorickpeterse> Actually, I looked into that and the answer seems to be "no"
<yorickpeterse> so initially I started tracing stuff with systemtap, which led me to the above pattern
<brixen> also, you can turn the concurrent GC back on and restrict it from growing the young gen more than X bytes
<brixen> which you should experiment with
elia has joined #rubinius
<yorickpeterse> in particular I saw a bunch of "AWS::Core::XML::Frame" objects flying around, so I used good ol' ObjectSpace.find_object + ObjectSpace.find_references
<yorickpeterse> but at any given time there are only 8 of those instances around
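Roughly what that kind of check looks like; the exact signature of Rubinius' ObjectSpace.find_references is assumed here, so treat it as a sketch rather than the session that was actually run:

    # Count live AWS::Core::XML::Frame instances and see what still points at them.
    frames = ObjectSpace.each_object(AWS::Core::XML::Frame).to_a
    puts frames.size                                    # only ~8 at any given time here
    frames.each { |frame| p ObjectSpace.find_references(frame) }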
<yorickpeterse> hmm
<yorickpeterse> also this code doesn't leak on MRI :P
<brixen> set -Xgc.young_max_bytes
* yorickpeterse runs
<brixen> that's a good comparison
<yorickpeterse> So if I just set that bluntly to, say, 50MB, what would happen?
<brixen> set it to like 8MB
<brixen> or 10
<yorickpeterse> wasn't the default 10?
<brixen> the default is 15 now
<brixen> which you can see by running -Xhelp :p
<yorickpeterse> ah, 15
<yorickpeterse> yeah I just saw that
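For reference, a hedged example of how those options are passed on the command line; the value is assumed to be in bytes (as the option name suggests) and the script name is made up:

    rbx -Xhelp                                   # list every -X option with its current default
    rbx -Xgc.young_max_bytes=8388608 worker.rb   # cap the young generation at roughly 8 MB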
<brixen> it's possible that the allocation rate combined with SLAB allocators is swamping young gen and forcing mature allocations
<brixen> in which case, the solution is NOT a bigger young gen
<brixen> yorickpeterse: you should give me access to investigate this :p
max96at|off is now known as max96at
<brixen> yorickpeterse: also, we see much more stable memory going from 2.2.10 to 2.3.0
<yorickpeterse> allrighty, trying 8MB
<brixen> so it would be interesting to see your workload
<yorickpeterse> haha oh hell no
<yorickpeterse> I'd have to fire myself
<brixen> ok, train time
<yorickpeterse> I can however, once I figure out what's actually going on, set up a repro
<brixen> and we don't even have wifi :(
<yorickpeterse> since it even occurs when just polling SQS it shouldn't be too hard to repro
<brixen> yorickpeterse: you need to focus on the allocation
<brixen> there should be a counter for objects
<yorickpeterse> why yes, I have that graphed :P
<yorickpeterse> well, I can graph it, that is
<brixen> if you can correlate the allocation with code, you can narrow it down
<yorickpeterse> right now I only graph current large objects
<brixen> you should graph things like slab allocations and allocation failures
<brixen> I need to tune the metrics a bit but they are there for a reason :p
<yorickpeterse> well, I have no idea what they mean :<
<brixen> you're squinting through one eye right now
<yorickpeterse> so I'd see it and go "yeahh....nice bump there"
<brixen> you don't need to have an idea, you need to see them
<yorickpeterse> and have no idea what it does
<brixen> yep
<brixen> yorickpeterse: anyway, awesome to see you accessing this stuff
<brixen> obviously, there are some missing tools right now
<brixen> but soooooon :)
<yorickpeterse> well there's this wonderful thing of having commit bit and a somewhat capable brain, so I can fix it myself if needed :P
<yorickpeterse> and quite easily deploy it
<yorickpeterse> but yeah, I'm going to push this thing up the hill whether it likes it or not
<brixen> cool
<brixen> bbl...
<dbussink> brixen: yorickpeterse: i've seen rails apps that go to like 64mb young gen and are totally fine
<dbussink> but it depends on the usage pattern really
<yorickpeterse> I'll try to setup a simple repro this evening, should make things much easier to debug
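The rough shape such a repro would take; the queue name and thread count are made up, the v1 aws-sdk API is assumed, and memory would be watched from outside the process:

    require 'aws-sdk'

    queue = AWS::SQS.new.queues.named('leak-test')

    threads = 4.times.map do
      Thread.new do
        loop do
          msg = queue.receive_message   # same poll/allocate pattern every cycle
          msg.delete if msg
        end
      end
    end
    threads.each(&:join)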
elia has quit [Read error: Connection reset by peer]
elia has joined #rubinius
flavio has quit [Quit: WeeChat 1.0]
flavio has joined #rubinius
havenwood has joined #rubinius
benlovell has quit [Ping timeout: 258 seconds]
tenderlove has joined #rubinius
<yorickpeterse> brixen: with gc.young_max_bytes set to 8MB the memory still grows like crazy
<yorickpeterse> hm, for whatever reason the young generation size is ~84MB, which is even higher than before
<yorickpeterse> also my graphs are wrong
<yorickpeterse> fkn NR
mustmodify has joined #rubinius
<mustmodify> I'm getting "no such file iconv" -- I'm reading that iconv should not be required but it's not clear when that change happened, and no context is presented. Can someone tell me when it became no-longer-in-vogue?
<mustmodify> version-wise
<yorickpeterse> iconv is not supported on Rubinius
<yorickpeterse> and has been dropped from MRI since 2.1
<yorickpeterse> or 2.0, one of the two
<mustmodify> ok
<mustmodify> I get that error when running this app in rubinius. One of my gems must be requiring it... hm...
josh-k_ has joined #rubinius
<headius> mustmodify: it was largely superseded by encoding support in 1.9 and removed later as yorickpeterse mentioned
<headius> I think the gem works though
<mustmodify> it's weird... I switched from mri to rubinius on one of my older projects and I got an error about iconv... including the iconv gem seemed to get rid of that problem, but now I'm seeing a weird rails-related error that seems like one of those "I'm telling you the error is X but really it has nothing to do with X" kinds of problems.
<mustmodify> it's strange because my other transitions from MRI to rbx have been nearly painless.
<yorickpeterse> There should not be a reason to use iconv post 1.8
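The usual migration away from Iconv is String#encode, along these lines (a hedged sketch, not the code from the app being discussed):

    # Iconv.conv('UTF-8//IGNORE', 'ISO-8859-1', str) becomes roughly:
    str  = "caf\xE9".force_encoding('ISO-8859-1')
    utf8 = str.encode('UTF-8', invalid: :replace, undef: :replace)
    utf8   # => "café"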
<mustmodify> "Coercion error: "id".to_int => Fixnum failed"
diegoviola has joined #rubinius
elia has quit [Quit: (IRC Client: textualapp.com)]
JohnBat26 has quit [Quit: KVIrc 4.3.1 Aria http://www.kvirc.net/]
<mustmodify> Just as a reminder, I suggest you name Rubinius 3 after Jim Weirich. He was awesome and deserves a major-version-namesake-situation.
<brixen> yorickpeterse: something is really wrong if you're on 2.3.0 and the young gen is that big
<abbe> brixen: howdy!
<brixen> abbe: hello!
<abbe> sorry for delay in getting back to you, w.r.t. rbx waiting for a few seconds before exiting on FreeBSD. I've tested on a 10.1-RC4 host as well, and it's reproducible there too.
<abbe> I can create a bugreport, let me know what details I can provide to help with this.
<mustmodify> maybe I'm dumb, but I can't tell from this stack track which gem-or-whatever is requiring iconv. Can someone give me a push? https://gist.github.com/mustmodify/6f47e70f8b49ca82141c
<mustmodify> updated by removing newrelic
<enebo> mustmodify: Could it be visible by looking at your Gemfile.lock?
<headius> mustmodify: looks like activesupport triggering dependency load, but that could be from anything
<mustmodify> Good idea but no.
<headius> json would be my next guess
<mustmodify> enebo: good idea but no.
<headius> json-pure
<yorickpeterse> brixen: well, yeah
<brixen> abbe: sure, please open an issue
<mustmodify> headius: Whoop! Thanks. `bundle update json-pure` fixed it. THANK YOU! That's been blocking me for an hour or more.
|jemc| has joined #rubinius
<abbe> Okay!
<yorickpeterse> brixen: back at non concurrent immix, young gen sits nicely at 20MB
<yorickpeterse> I do suspect counts for "memory.immix.bytes.current" are off, it's reporting 800MB when total memory size is only 220MB
<brixen> yorickpeterse: I have to run in a bit, but check metrics for young gen occupancy and promotion
<brixen> there should also be a metric for mature gen allocation, or we need to add that
<brixen> abbe: thanks
elia has joined #rubinius
havenwood has quit [Remote host closed the connection]
havenwood has joined #rubinius
|jemc-bot| has quit [Quit: WeeChat 0.4.2]
|jemc-bot| has joined #rubinius
postmodern has joined #rubinius
josh-k_ has quit [Remote host closed the connection]
josh-k has joined #rubinius
<yorickpeterse> brixen: hm, "memory.young.objects.current" is never incremented
<yorickpeterse> but "memory.young.objects.total" is
josh-k has quit [Ping timeout: 255 seconds]
mustmodify has left #rubinius [#rubinius]
benlovell has joined #rubinius
<yorickpeterse> brixen: I think I have a standalone repro
<yorickpeterse> letting it run for a while to see if it's actually leaking, or just settling in
<yorickpeterse> ok so this is definitely leaking
<yorickpeterse> started idling at 90MB, now it's at 140MB
<yorickpeterse> time for a bug report \0/
<yorickpeterse> makes me wonder if I'm the only one with these problems
<yorickpeterse> bbl, train
max96at is now known as max96at|off
amsi has joined #rubinius
flavio has quit [Quit: WeeChat 1.0]
max96at|off is now known as max96at
elia has quit [Quit: Computer has gone to sleep.]
havenwood is now known as havy
havy is now known as havenwood
elia has joined #rubinius
benlovell has quit [Ping timeout: 258 seconds]
elia has quit [Ping timeout: 240 seconds]
elia has joined #rubinius
elia has quit [Quit: Computer has gone to sleep.]
<yorickpeterse> wow, I have somewhat working wifi :O
<yorickpeterse> oh, now it's turning sour again
<cremes> brixen: where’s part3? I NEED MY RUBINIUS FIX NOW!!! :P
goyox86 has quit [Ping timeout: 255 seconds]
parndt has joined #rubinius
benlovell has joined #rubinius
<yorickpeterse> Rubinius Part 3: check back tomorrow
<yorickpeterse> why hello there part 3 preview
benlovell has quit [Ping timeout: 272 seconds]
<cremes> yorickpeterse: tease
<yorickpeterse> hmmm it's really good
<yorickpeterse> hmmm
<|jemc|> too good for the rest of us swine?
<yorickpeterse> yes
<yorickpeterse> peasants
<yorickpeterse> :>
carlosgaldino has quit [Quit: Textual IRC Client: www.textualapp.com]
<jc00ke> yorickpeterse: haha
<headius> today is "instruction set" eh?
<yorickpeterse> maybe
<yorickpeterse> it's for me to know muahaha
<headius> well, that's what it said in part 1...I'm just going off that :-)
<cremes> headius: if it is on instruction set today then pass it on to subbu; it will be interesting to compare & contrast with jruby
<headius> I assume that means there's a new instruction set?
<cremes> ^^ “if”
<headius> subbu already knows the existing instruction set, but if there's something new I'm sure we'll have a look
<cremes> good bet though… there’s been lots of chatter in channel about how LLVM IR is too low-level and we need an intermediate “intermediate representation” like Swift
<yorickpeterse> hihi
<|jemc|> brixen mentioned some new instructions on the horizon about a month or so ago
<headius> indeed...that's why we built our own IR
* |jemc| turns on an auto-refresher in a new browser window
<chrisseaton> a new IR in Ruby, abstracted from LLVM would be a great project
<headius> for sure
heroux has quit [Ping timeout: 256 seconds]
<yorickpeterse> cue somebody saying "oh that's a problem with Gem X, here's how to fix it"
heroux has joined #rubinius
JohnBat26 has joined #rubinius
<headius> yorickpeterse: but it works on MRI?
<yorickpeterse> Yes
<headius> other than just being on rbx, maybe multithreading is a factor?
<yorickpeterse> It definitely is
<yorickpeterse> (threading that is)
<headius> trying to think of things that could cause it in a gem on rbx but not mri
<yorickpeterse> I can crank up the threads on rbx and it will leak a lot faster
<headius> some resource cleanup that's racy could do it
<yorickpeterse> I wouldn't be surprised if it's aws-sdk, but I've never had my MRI version go above ~180MB
<yorickpeterse> (the actual app affected that is)
<headius> you're using a separate aws-sdk instance in each thread, eh?
<brixen> yorickpeterse: using more threads that are allocating objects *should* result in more objects
<brixen> that's a positive correlation to leaking if any of those operations leak
<brixen> at any level (ie code or infrastructure)
<headius> I think I mentioned this before, but I believe it was that aws-sdk that claimed thread-safety but wasn't at all
<yorickpeterse> brixen: yes, that's what I figured
<headius> using isolated instances worked though
<yorickpeterse> headius: correct
<yorickpeterse> it's not thread-safe
<yorickpeterse> you have to eager load a bunch of shit, then add global locks (https://github.com/aws/aws-sdk-ruby/issues/455) for it to even work
<headius> awesome
<yorickpeterse> not yet sure if the latter is aws or OpenSSL/Digest
<yorickpeterse> still have to re-test that
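A hedged sketch of the workaround being described, assuming the v1 aws-sdk API; AWS.eager_autoload! is the gem's documented answer to its racy autoloads, and the lock placement here is purely illustrative:

    require 'aws-sdk'

    AWS.eager_autoload!      # load everything up front instead of lazily under threads

    SQS_LOCK = Mutex.new     # coarse global lock around the calls that turned out to race

    def receive(queue)
      SQS_LOCK.synchronize { queue.receive_message }
    end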
<headius> in theory it should leak the same way on JRuby then if it's a threading issue
<yorickpeterse> Hm, lemme try that actually
<yorickpeterse> but yeah, this was kinda fun to debug because my dtrace mess actually helped :D
<yorickpeterse> except the parts where it would emit file names for methods
<headius> yeah that's cool
<headius> I need to get our student dtrace work merged in
<yorickpeterse> headius: JRuby seems fairly stable so far
<headius> ok...that's too bad, I was hoping it would leak so we could use some other analysis tools
postmodern has quit [Ping timeout: 244 seconds]
<yorickpeterse> actually JRuby loses memory :P
parndt_ has joined #rubinius
mustmodify has joined #rubinius
<mustmodify> can I set up .ruby-version to use -X19?
<mustmodify> s/use/specify
<yorickpeterse> No
<yorickpeterse> Also, -X19 was killed off quite some time ago
<mustmodify> damn. ok.
parndt has quit [Ping timeout: 264 seconds]
<headius> yorickpeterse: sweet...let it run for a while and it will use none
<headius> I think OpenJDK was modified sometime in the past few years to FINALLY give back heap space it's not utilizing
<headius> it didn't for years
<mustmodify> so am I correct in understanding that I can't run 1.9.x ruby code with rbx? I ask because I have an app that apparently breaks in ruby 2 and I would really like to continue using a multi-threaded dev server.
<brixen> mustmodify: we don't support 1.9
<yorickpeterse> mustmodify: Rbx >= 2.2 implements Ruby 2
<brixen> mustmodify: I'd love to hear what is causing a problem
<yorickpeterse> errr I think 2.1 did even
<|jemc|> mustmodify: out of curiosity - what syntax is holding you back?
<brixen> since 2.x is supposed to be as backward compatible as possible with 1.9.3
<brixen> according to matz
<headius> yeah, I'm curious too
<mustmodify> I'm working in a medical app. I didn't want to lose precision when doing math, but I needed to specify a scale for output.
parndt_ has quit [Remote host closed the connection]
sferik has joined #rubinius
parndt has joined #rubinius
<yorickpeterse> mustmodify: in what way does that not work on 2.0?
<mustmodify> though actually that must not be exactly right... I'm not sure why it was so important.
<mustmodify> "can't modify frozen instance of Float"
<yorickpeterse> I'm getting NoMethodError: undefined method `round' for nil:NilClass
<mustmodify> yorickpeterse: yeah, sorry. I modified it for readability.
<yorickpeterse> mustmodify: well, there's the good old "You should not be modifying core objects like this"
<mustmodify> yes, dad, I know.
<|jemc|> mustmodify: sounds like unfortunately you mustmodify your app :P
<mustmodify> (I have teen-agers so I know exactly what that voice sounds like.)
<yorickpeterse> hehe
<mustmodify> You are SOOOO funny!
<|jemc|> sorry - couldn't resist - I know it's a serious problem
<yorickpeterse> mustmodify: yup, getting the same frozen error now
parndt has quit [Ping timeout: 265 seconds]
<mustmodify> updated the gist with code that should work and also... an example of why that's an issue. the first file blows up in irb. https://gist.github.com/mustmodify/15812f4756d987a99eeb
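The gist itself isn't reproduced in the log, but the failing pattern is roughly this: hanging per-value state off Float, which fails on 2.x because float instances are frozen (the exact error message varies by implementation):

    class Float
      attr_accessor :scale
    end

    x = 1.5
    x.scale = 2   # RuntimeError: can't modify frozen Float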
<yorickpeterse> mustmodify: I guess what you can do is create a MyNumberClass or w/e and basically make it behave like a Fixnum/Float
<yorickpeterse> then use that instead
<brixen> mustmodify: would be nice to have a rubyspec covering this case if we don't
<mustmodify> brixen: what would it test... "If you don't listen to your dad then it'll blow up"? :P
<brixen> mustmodify: hah
<yorickpeterse> brixen: we already do
<brixen> mustmodify: now that you mention it, we probably need a special matcher for that case :)
|jemc| has quit [Quit: WeeChat 1.0.1]
<headius> oh, because you try to extend...yup
<brixen> frolulululz
<brixen> also learned recently that trust/untrust are deprecated and just mean taint/untaint
<brixen> so we should clean that up
<yorickpeterse> they meant something different?
<headius> who the hell knows
<brixen> who knows
<headius> they never made sense
<yorickpeterse> haha
|jemc| has joined #rubinius
<headius> good to know, anyway
<yorickpeterse> apparently trust is one level lower than taint
<mustmodify> brixen: ok so explain to me what test you need and I'll get on it.
<mustmodify> yorickpeterse: I agree. I already have a Datum class that will end up absorbing this functionality but it was super-nice to be able to have it on the number for math and stuff.
<|jemc|> mustmodify: the only approach I can think of at the moment that wouldn't require you to change usage in your app would be to change the patch to store your data associated with the frozen objects in some external store
parndt has joined #rubinius
<yorickpeterse> mustmodify: so if you make this Datum class, or w/e, behave exactly like fixnums then you can do just that
<|jemc|> perhaps some kind of weakmap - although that runs the risk of being confusing for someone else to come in and understand
<yorickpeterse> You'd only have to make sure they are MyFixnums instead of Fixnum
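One hedged way to build that kind of wrapper; the class name, constructor, and formatting method are made up for illustration:

    require 'delegate'

    class Datum < SimpleDelegator
      attr_reader :scale

      def initialize(value, scale)
        super(value)
        @scale = scale
      end

      def to_s
        format("%.#{scale}f", __getobj__)
      end
    end

    d = Datum.new(1.0 / 3, 2)
    d + 1.0    # => 1.3333... (arithmetic still hits the underlying Float)
    d.to_s     # => "0.33"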
<brixen> mustmodify: sorry, I can't explain it without understanding more, and I don't have time to understand right now, sorry :(
<headius> mustmodify: including a utility class into Float would not break, if that helps
<headius> utility module, I mean
<mustmodify> np. Oh wait, I ... I got confused about the purpose of your tests for a second. I was thinking, "Why would you want to test something that was broken?" But that's exactly the point. So now I know what test needs writing.
<mustmodify> headius: ok I'll check it out.
<mustmodify> So the ESA landed on a comet, which is pretty amazing.
<headius> extend requires creating a singleton class, which you can't do to most numbers anyway
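A minimal sketch of what headius is suggesting; the module and method names are made up, and the include works because it changes the class itself rather than a (nonexistent) per-instance singleton:

    module WithScale
      def with_scale(digits)
        format("%.#{digits}f", self)
      end
    end

    Float.send(:include, WithScale)   # Module#include is private before Ruby 2.1, hence send

    (1.0 / 3).with_scale(2)   # => "0.33"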
<headius> oh nice, they made it
<yorickpeterse> bah, Enumerable#select & friends always return Array
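A small illustration of what that grumble refers to:

    (1..10).select(&:even?)          # => [2, 4, 6, 8, 10]  -- an Array, not a Range
    require 'set'
    Set[1, 2, 3].map { |n| n * 2 }   # => [2, 4, 6]          -- an Array, not a Set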
<mustmodify> And at home, Russia is invading Ukraine.
<headius> I forgot that was today
<headius> mustmodify: they just need a little lebensraum
<mustmodify> So it's a big-positive big-negative news day.
<headius> I'm sure it's ok
GitHub20 has joined #rubinius
<GitHub20> [rubinius] jc00ke closed pull request #3203: RubySpec for creating a Proc using the block from an enclosing method (master...spec-proc-new-without-block) http://git.io/n9nEtQ
GitHub20 has left #rubinius [#rubinius]
GitHub73 has joined #rubinius
<GitHub73> rubinius/master 2f86636 Jesse Cooke: Merge pull request #3203 from jimmycuadra/spec-proc-new-without-block...
<GitHub73> [rubinius] jc00ke pushed 3 new commits to master: http://git.io/PACJpg
<GitHub73> rubinius/master 18d9a84 Jimmy Cuadra: Add fails tag for Kernel#proc RubySpec.
<GitHub73> rubinius/master c99a61f Jimmy Cuadra: Add examples for creating a Proc using the block from an enclosing method.
GitHub73 has left #rubinius [#rubinius]
<headius> I've had three opportunities to visit Ukraine that all fell apart because of Russia :-(
parndt_ has joined #rubinius
<yorickpeterse> no no no
<yorickpeterse> it fell apart because of the EU
parndt has quit [Read error: Connection reset by peer]
<headius> certainly couldn't be Putin's fault...he just wants to take care of his children
<|jemc|> but only the heterosexual ones
<mustmodify> so if you remove those methods ... .FFFFFF.FFFFFFFFFFFFFFFFFFF.FFF..FFFFF..FF.F..FFF.................
<mustmodify> :S
<yorickpeterse> that's the computer raging
<headius> hahah
<yorickpeterse> "100 examples, 85 failures" almost there
* yorickpeterse is re-writing ASTs
slaught has joined #rubinius
<headius> this seems relevant to the ESA landing: http://www.gotfuturama.com/Multimedia/EpisodeSounds/4ACV08/16.mp3
travis-ci has joined #rubinius
<travis-ci> rubinius/rubinius/master (2f86636 - Jesse Cooke): The build was fixed.
travis-ci has left #rubinius [#rubinius]
mustmodify has left #rubinius [#rubinius]
parndt_ has quit [Remote host closed the connection]
max96at is now known as max96at|off
amsi has quit [Ping timeout: 264 seconds]
amsi has joined #rubinius
josh-k has joined #rubinius
goyox86 has joined #rubinius
diegoviola is now known as resented
parndt has joined #rubinius
JohnBat26 has quit [Quit: KVIrc 4.3.1 Aria http://www.kvirc.net/]
enebo has quit [Quit: enebo]
parndt has quit [Remote host closed the connection]
|Blaze| has quit [Ping timeout: 258 seconds]
resented is now known as dviola
|Blaze| has joined #rubinius
goyox86 has quit [Ping timeout: 240 seconds]
|jemc| has quit [Quit: WeeChat 1.0.1]
RageLtMan has joined #rubinius
|jemc| has joined #rubinius
<dreinull> brixen I don't mind beginners tags in issues. Makes it easier to find something to work on. I see myself as part of that group – lack of time and a day job that won't let me be as good a hacker as I'd like to see myself.
<yorickpeterse> dreinull: there's a problem with this though
<yorickpeterse> dreinull: a lot of people absolutely suck at estimating the complexity of an issue
<dreinull> yorickpeterse: :)
<yorickpeterse> they might mark something as "beginner" when it really is complex
<yorickpeterse> It becomes easier the more you break it up into sub issues/tasks, but even then it's questionable
<dreinull> I was thinking the other way. I think, hey, that looks doable and I spend a lot of time going through the issue and then I give up.
<dreinull> I really don't mind sweeping the floor stuff. I'd find it helpful to have it tagged as such.
<yorickpeterse> The sweeping floor stuff isn't really nice either
<yorickpeterse> it comes down to the "let's abuse the intern" attitude
<dreinull> I just fixed two spelling mistakes. It's called fine tuning in my little universe.
<yorickpeterse> one of the things we _can_ do is documentation and providing more info
<yorickpeterse> So instead of suddenly this magical commit being there fixing an issue, put notes about the process in the issue
<yorickpeterse> basically rubberducking on Github
<yorickpeterse> That way _hopefully_ the thought process becomes more clear, while also providing more valuable information to the next person dealing with it
<dreinull> See, I'm not good at _that_. :)
* |jemc| notes to self: don't add the '~' folder to a sublime text workspace by accident - it will consume 100% of all of your cores crawling to keep its tree updated until your PC thermal-overloads
<yorickpeterse> No, that's something we (as in, me, brixen, other main contributors) should do
josh-k has quit [Remote host closed the connection]
<yorickpeterse> But yeah, the first few steps are always insanely steep
josh-k has joined #rubinius
<yorickpeterse> something something we need to fix that
<dreinull> yes, that sounds good to me. I actually do read a lot but can't really help because I'm lacking insight.
<dreinull> also my irc client doesn't line break.
<dreinull> silly thing
<yorickpeterse> brixen: also, funny that you mentioned LPEG
<dreinull> anyway, rubinius has always been the most open open source project so far. Every contribution was welcome and there has been a friendly attitude to any commitment.
<yorickpeterse> the other day I was actually thinking of compiling XPath to some kind of bytecode
<yorickpeterse> but I figured doing that would probably result in more overhead
<yorickpeterse> dreinull: one of the things that does really help is issue triaging
josh-k has quit [Ping timeout: 258 seconds]
<dreinull> can't really help in most cases
<yorickpeterse> https://github.com/rubinius/rubinius/issues/3197 for example, this reports a Gem not installing. There the process would more or less be to install it, gather whatever stack traces you can, provide system info, maybe provide gdb backtraces if you're familiar with that, etc
<dreinull> ok, I'll check that out but not today. It's time to say good night.
<yorickpeterse> np :)
sferik has quit [Quit: Textual IRC Client: www.textualapp.com]
<|jemc|> ah, fun, the blog post it up - did I miss the announce?
<yorickpeterse> hm, I should probably write about some of these things
<yorickpeterse> |jemc|: No, brixen didn't announce it in here yet
<yorickpeterse> lemme fix that
yorickpeterse is now known as totally-brixen
totally-brixen is now known as yorickpeterse
<yorickpeterse> :P
goyox86 has joined #rubinius
<yorickpeterse> but yeah, I've talked with numerous people about getting started with code/FOSS
<yorickpeterse> and every time they ask me "So how did you start?" / "How would you suggest starting?"
<yorickpeterse> and every time the answer is really difficult
josh-k has joined #rubinius
<yorickpeterse> also, people generally suggest documentation work
<yorickpeterse> but that actually requires a really deep understanding of what the code does in order to write good docs
<yorickpeterse> While also just being a good copywriter
goyox86 has quit [Ping timeout: 244 seconds]
<yorickpeterse> e.g. the docs of LLVM, arguably written by clever people, are a joke
<yorickpeterse> basically I see it this way: the smarter the people of a software project are, the worse the documentation will become
<yorickpeterse> (of course I'm generalizing here)
<|jemc|> brixen: hooray for parsing instructions! :)
elia has joined #rubinius
<|jemc|> I'm excited to get back to pegleromyces (and myco in general) - by the time I can get back to it, it may be around the same time we start to see parsing instructions :)
<yorickpeterse> well, I dont think Rbx 3 will be here for another year or so
<yorickpeterse> at least not if we also want to get mcjit _and_ a new JIT in
elia has quit [Quit: Computer has gone to sleep.]
josh-k has quit [Remote host closed the connection]
goyox86 has joined #rubinius
<|jemc|> yorickpeterse: sad to say it might take that long to get back to myco - it's a very personal project with only very limited application to the bigger fish I have to fry
elia has joined #rubinius
<yorickpeterse> well you "just" need to dedicate more time :P
<yorickpeterse> but yeah, time is annoying
<|jemc|> especially when spent in C land
parndt has joined #rubinius
<|jemc|> although I have to be honest, hintjens' design patterns and czmq are actually making me start to enjoy C again :/
<|jemc|> at least for now
elia has quit [Client Quit]
<yorickpeterse> be careful with that knife there
elia has joined #rubinius
<|jemc|> hence the slanty, skeptical smiley face rather than an actual smile
<jnh> reading the latest blog post, brixen; I came across the primitiveness fallacy in Huia also.
goyox86 has quit [Ping timeout: 264 seconds]
<jnh> I wound up building composite "instructions" into the generator to make my life easier: https://github.com/jamesotron/Huia/blob/master/lib/huia/generator.rb
parndt has quit [Ping timeout: 240 seconds]