<theCrab>
In DM, does the method get() accept more than 1 param?
<dkubb>
theCrab: it does if you have a compound primary key
<theCrab>
dkubb: what's the best way to get back 2 records using ids that are Serial?
<theCrab>
Person.get(id1, id2)
<solnic>
Person.all(:id => [id1, id2])
<theCrab>
solnic: thanks
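A minimal sketch of the two DataMapper calls under discussion; the Person model below is assumed purely for illustration, since only the query lines appear in the log:

  # assumed model for illustration
  class Person
    include DataMapper::Resource
    property :id,   Serial
    property :name, String
  end

  Person.get(1)                    # with a single Serial key, get takes one argument
  Person.all(:id => [id1, id2])    # fetch several records by id in one query
  # Invoice.get(customer_id, 42)   # get only takes multiple args for a compound key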
cored has joined #rom-rb
<snusnu>
dkubb: hey, after our discussion about test-first yesterday, i had a good case of letting the unit tests drive my design last night .. integration specs brought me to reasonable working code quickly, then unit tests showed me that the design was suboptimal because it was hard to set up the object so that all mutations could be killed
<dkubb>
snusnu: oh cool
<snusnu>
dkubb: i thought about what would work better, wrote that code, and now both code and tests are simpler
<dkubb>
snusnu: yeah, I'm not a "always test-first" kind of guy, but I do notice a difference
<solnic>
you can't write test-first when you have no idea what you're doing :)
<dkubb>
snusnu: I'm moving more in that direction, although when I really don't know then I just spike something out and not worry about tests at all, besides maybe a single test.rb file (that looks more like example code) that I just run
<solnic>
which is sometimes the case
<dkubb>
yeah
<solnic>
exactly
<dkubb>
I don't even write integration tests when I'm spiking
<dkubb>
although maybe I should
<solnic>
I do
<dkubb>
because I could still reuse those later when I rewrite it
<dkubb>
or I could use it in an example/ directory
<snusnu>
i do it like that too .. write example client code, implement that, turn the example code into integration specs, refactor, start writing unit tests, refactor, repeat*n, be happy
<dkubb>
the main thing about unit tests is if you write them first they drive your design, if you write them after they can only act as verification
<dkubb>
I've heard people say they wished TDD was actually named DDT: Design Driven by Tests.. to put the focus on the design aspect rather than the testing and verification
<snusnu>
that's true, and the golden path lies somewhere in between imo, integration specs let you approach a good design, while unit tests would still be overhead, as you're still in spiking mode .. once you're somewhat happy with the code, you end the spiking phase, and use unit tests as a guide for refactoring the api
<snusnu>
yeah, that'd be a neat abbreviation
knowtheory has quit [Quit: Computer has gone to sleep]
<snusnu>
dkubb: mutant does a nice job here too .. it makes you think about all edge cases, which typically means "more" test setup too
<snusnu>
dkubb: i'm pretty happy with the current DSL generation code
<snusnu>
dkubb: previously it was a mess (that looked nice at the time i wrote it ;)
<theCrab>
solnic: Person.all(id: [1,2,3]) is funny. If you pass it an id of a non-existent record, it returns an empty array. Is that how it's meant to work?
knowtheory has joined #rom-rb
<theCrab>
even if the rest of the ids are fine :(
<solnic>
theCrab: uhm, actually, can we move this conversation to #datamapper channel?
<theCrab>
yea
mbj has quit [Read error: Connection reset by peer]
knowtheory has quit [Quit: Computer has gone to sleep]
<travis-ci>
[travis-ci] rom-rb/rom-mapper#56 (relation-mapping-support - 2064334 : Piotr Solnica): The build was broken.
<snusnu>
solnic: i left a tiny comment, will think more about the PR once i have more time
solnic has quit [Quit: Leaving...]
<dkubb>
snusnu: yeah, the way I think about it is when you write integration tests first, you're basically doing TDD for the outermost layer. it may not be pure unit tests, but you are writing the tests first. that's why the interface for the outer layers ends up so polished and nice to work with. doing TDD all the way down provides the same interface benefits to all the layers
<dkubb>
snusnu: even if *we're* the only consumers of the inner layers, there's still a benefit to us
<snusnu>
dkubb: yeah, that's a neat way to summarize it
<dkubb>
I like to think of good design as being fractal. if you look at any granularity, from the individual methods, to the classes, to overall lib and the systems you use the libs to build.. if it's well designed you should see it at every layer
<snusnu>
yeah, onion style
<dkubb>
where I think normal ruby dev breaks down is even when the methods and classes are nice, and sometimes the libs are nice too.. but once you get to the app level things go wonky
<snusnu>
yeah
<dkubb>
hopefully substation and others can help with that
<snusnu>
dkubb: absolutely, i'd love to build such a thing using rom and substation
<snusnu>
dkubb: we could have it so that devtools pushes some json on every run (from a local machine, or from travis for that matter)
<snusnu>
dkubb: it'd be useful to us, and surely also for other oss projects, a pimped codeclimate if you will, decentralized
<snusnu>
dkubb: heh, maybe we could even sell commercial licenses at some point :p
<snusnu>
dkubb: also, this is in remembrance of dm-dev … decentralizing ci in that way opens up testing adapters we can't otherwise integrate into ci .. remember the dm1 oracle and sql-server adapters for example
<snusnu>
dkubb: lol that reminds me, i even had that distributed ci thing running, quite some time before travis ;)
<dkubb>
snusnu: I need to look at some of the structures being used in ROM::Mapper and ROM::Relation to make sure there's no duplication of concepts in axiom. I see stuff like a header, and keys, and I wonder if some of those could be pushed down into axiom
<dkubb>
oh yeah?
<dkubb>
that's really neat
<dkubb>
I still think distributed testing is wide open
<snusnu>
dkubb: don't you remember? you even logged into it with your GH account :) currently looking if it's still live, forgot the url
<dkubb>
travis is alright, but I've seen issues stay broken for *way* too long. I mean ruby-head has been broken for months now
<dkubb>
snusnu: it's arguably better than what we have with travis
<snusnu>
dkubb: that's quite some "sleeping" code we have there … good stuff imo
<snusnu>
dkubb: i tend to agree
<snusnu>
dkubb: dm-dev would be awesome to have for rom .. it could be merged with devtools
<dkubb>
snusnu: what I would love to see is a distributed test environment where you could download a vm or maybe a container that you can run locally, and it reports the results back to a central server
<dkubb>
snusnu: or it could write the results to a directory that's shared with btsync, if you want to get totally distributed
<dkubb>
snusnu: then anyone can read that information and aggregate it
<snusnu>
dkubb: yeah that'd be sweet, btsync as in dht, torrent style?
<dkubb>
snusnu: yeah, exactly
<snusnu>
right, now *that* would be sweet
<dkubb>
snusnu: if you gave each node a unique id, then they could all write to their own directory and sign the reports
<snusnu>
that'd be an awesome tool
<snusnu>
to have
<dkubb>
yeah, it would be pretty awesome actually
<dkubb>
completely distributed
<dkubb>
people could donate time on spare machines and run the vm
<dkubb>
the vm could "listen" on the synced directory and run tests when updated via torrent
<snusnu>
it could even be used by companies without security worries .. say they install the clients on their boxes only, they still get a speed boost from multiple machines, while not having to worry about private code
<dkubb>
yeah
<snusnu>
yeah, that was the whole idea behind it, back then
<dkubb>
the test server would just be something listening to the same directory and aggregating the results as it gets them
<snusnu>
i was trying to "build a business model" around this distributed testing back then already, time killed it
<dkubb>
and then people could configure their vm to listen to specific projects.. whichever ones they want to donate cpu time to
<snusnu>
ah, need to look at that at some point
<dkubb>
or by default it would just run all oss projects
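A rough sketch of what the node side of that idea could look like as a plain Ruby loop; the synced directory layout, node id, and digest-instead-of-signature are all assumptions, not an existing tool:

  require 'json'
  require 'fileutils'
  require 'digest'

  SHARED  = '/srv/btsync/rom-ci'   # assumed shared (btsync-style) directory
  NODE_ID = 'node-1234'            # assumed unique id for this machine

  loop do
    Dir.glob("#{SHARED}/jobs/*.json").each do |job_file|
      job    = JSON.parse(File.read(job_file))
      result = "#{SHARED}/results/#{NODE_ID}/#{File.basename(job_file)}"
      next if File.exist?(result)  # this node already reported on the job

      workdir = "/tmp/#{job['id']}"
      passed  = system("git clone --depth 1 #{job['repo']} #{workdir} && " \
                       "cd #{workdir} && bundle install && bundle exec rake ci")

      FileUtils.mkdir_p(File.dirname(result))
      File.write(result, JSON.dump(
        node: NODE_ID, job: job['id'], passed: !!passed,
        job_digest: Digest::SHA256.file(job_file).hexdigest
      ))                           # the aggregator only ever reads these files
    end
    sleep 30                       # poll; the synced directory delivers new jobs
  end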
<snusnu>
yeah
<dkubb>
something like that could probably get funding actually
<snusnu>
maybe we should slowly grow our tools to handle what we need, and at some point, really consider packaging it up with a commercial license too …
<snusnu>
yeah!
<dkubb>
if we design it well we could do that
solnic has joined #rom-rb
<snusnu>
it'd also be a neat testbed for rom
<dkubb>
I wouldn't mind our "test ROM system" be a distributed testing app
<dkubb>
distributed testing with our stats
<snusnu>
hah, me neither
<dkubb>
that would be fucking awesome actually, even if we never released it commercially
<dkubb>
although that would be even cooler
<snusnu>
inorite?
<solnic>
wait what?
<snusnu>
heh
<snusnu>
old ideas
<solnic>
sorry I was away
<snusnu>
;)
<dkubb>
heh
<dkubb>
solnic: snusnu and I just started a company
<dkubb>
:D
<snusnu>
lol
<dkubb>
j/k
<snusnu>
solnic: you're in?
<snusnu>
heh
<solnic>
I'm always in
<solnic>
:D
<snusnu>
cool
<dkubb>
solnic: nah, we were just talking about distributed testing
<snusnu>
in all seriousness tho .. i would *love* to work (with you guys) on such a thing
<solnic>
ah, that
<snusnu>
commercial or not
<dkubb>
solnic: and stats aggregation
<solnic>
well, we have pretty good background to do such a thing
<dkubb>
yeah, I still think the models travis uses are broken, and to some extent circleci's
<dkubb>
I mean, why can't people run a local vm and help out with the load
<solnic>
it'd be a huge effort though
<dkubb>
that's what CPANTS does and it's awesome
<dkubb>
yeah I know
<dkubb>
for the first version we can just build it for ourselves
<snusnu>
and that's what we should do, because we already did it
<dkubb>
don't even try to make it generic, just learn what we need
<snusnu>
snusnu/testor-server + dm-dev *works*
<dkubb>
I have spare machines I could easily donate to running tests
<dkubb>
not super fast, but it doesn't matter if we get 20 machines working together
<dkubb>
almost everyone I know has spare machines sitting in their house, not being used for anything
<snusnu>
i always felt it was a waste that, prior to pushing, i run rake ci locally on one ruby, and then travis does it all over again
<snusnu>
yeah
<dkubb>
I just run rake ci:metrics before pushing now
<dkubb>
I don't usually run mutant, unless it's super quick
<dkubb>
or if I'm mutation testing one specific piece of code, and I know the rest is mutation covered
<snusnu>
i normally do a targeted mutant run on the stuff i was working on
<snusnu>
yeah
<solnic>
dkubb: thanks for the comments, I addressed everything I could
<dkubb>
np
<solnic>
rom-relation hit 300 stars today :)
<snusnu>
awesome
<snusnu>
and 48 watchers, which is even more cool imo
<snusnu>
i tend to view watchers as people potentially more interested in contributing
<snusnu>
mbj: we can discuss the API, but this is basically what will allow you to drop quite a few of the consts
<mbj>
solnic: It should be easy to support this via "after load time", not "ast time" introspection
<dkubb>
solnic: why not document the public api, and just adjust your yardstick coverage down from 100?
<snusnu>
that'd be a workaround, yeah .. somewhat brittle tho
<mbj>
dkubb: because you'd not know if there is undocumented public stuff
<snusnu>
that's what i meant by brittle i guess
<dkubb>
for me, the difference is that I don't write yard docs for the users. that's only a side effect. I write them for me when I'm thinking about the api. I use the summary to figure out the right nouns and verbs to describe what the method does to ensure I'm using the right method and class names
<snusnu>
well dkubb you could still do that?
<dkubb>
mbj: there's metrics:yardstick:measure .. maybe it would be nicer if it outputted the report to stdout?
<dkubb>
snusnu: well yeah, or you could just adjust the threshold :P I wrote yardstick to support my documentation preferences
<dkubb>
snusnu: yardstick does support a configuration file format where you can turn stuff off
<snusnu>
dkubb: heh, fair enough .. but you do see how adjusting the threshold is brittle?
<snusnu>
dkubb: ah!
<snusnu>
dkubb: i forgot about that! that's the PR from indrek, right?
<dkubb>
I don't know if the devtools task loads it by default, but it'd be an easy change to make to devtools
<snusnu>
yup
<dkubb>
and we'd want to copy this config file into the devtools defaults
cored has quit [Ping timeout: 268 seconds]
<dkubb>
actually, wait, devtools does have a yardstick config file
<dkubb>
it just doesn't have the full config in it
cored has joined #rom-rb
<dkubb>
yardstick defaults to the strictest settings when no configuration is available for specific settings
<dkubb>
I haven't adjusted it because I happen to prefer the settings, but you could always configure it if you want
<snusnu>
absolutely, i guess both solnic and i forgot about that config
<snusnu>
mbj: any idea what happened with the devtools sync in concord?
<dkubb>
I do not think documenting private methods is a waste of time though. first of all, I try to minimize private methods anyway because I think it's a code smell when a class exists but the bulk of its functionality is private.. it means the public methods each do a ton of work or have lots of responsibilities. when I do have them the documentation helps me understand what the method actually does and what the expected return values are
<snusnu>
good point
<dkubb>
when I think about private methods, I try to imagine them inlined within the public methods that rely on them
<dkubb>
if those imaginary public methods are gigantic I think that's an issue
<snusnu>
agreed
<dkubb>
it's nice to be able to name a concept and put it in a private method, but if you've got 10 private methods and 3 public methods it usually means those public methods have a pretty wide interface.. i.e. they accept a ton of different kinds of inputs and all those private methods are responsible for massaging the data.. or it's doing a ton of work
<snusnu>
so i guess everyone can do whatever in his own projects, but we should settle on a config for all rom repos
<snusnu>
that said, i guess i'd be fine with keeping docs for private methods .. thinking harder about reducing them .. (something we probably all do anyway)
solnic has quit [Quit: Leaving...]
<dkubb>
I would prefer that in the rom repos we use the highest standards the 4 of us, or the community, use.. imho
<dkubb>
I will probably continue to try pushing and experimenting with higher constraints on my end
<dkubb>
one thing I want to do with YARD is have something that uses parser/unparser to rewrite the ruby code to have assertions on the inputs and outputs of every method
<dkubb>
so I can basically test my docs
<dkubb>
I know I've mentioned this dozens of times :P
<snusnu>
maybe we could ignore failing yardstick in topic branches, but enforce it on master?
<dkubb>
I would be ok with that
<dkubb>
I wouldn't even have a problem if we only ran "rake spec" on non-master branches
<snusnu>
same goes for all the other metric tools probably
<snusnu>
yeah
<dkubb>
I don't think travis allows you to configure different rake tasks per branch
<dkubb>
we would probably want to have the rake ci task know which branch it's in, and conditionally include the other tasks it needs
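A hedged Rakefile sketch of that idea; ci:metrics is the full metrics task already mentioned in the log, but the exact wiring into devtools is an assumption:

  # Rakefile sketch: feature branches only run the specs, master runs everything
  branch = `git rev-parse --abbrev-ref HEAD`.strip

  desc 'Branch-aware CI entry point'
  task :ci do
    Rake::Task[branch == 'master' ? 'ci:metrics' : 'spec'].invoke
  end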
<snusnu>
yeah
<snusnu>
pushing features into devtools is the better option anyway .. travis is just a service ...
<dkubb>
you can configure travis to only build specific branches too
<dkubb>
not sure I would want to forgo travis altogether for feature branches
<snusnu>
no, that would go too far
<dkubb>
actually, this brings up one thing.. if we *do* decide to disable full metrics on branches, then any contributor will be basically sending us code that requires us to clean it up before moving into master
<snusnu>
oh, heh
<dkubb>
one of the benefits of the ci stuff is that people get feedback before the code even reaches us
<dkubb>
I suppose we could say "does this run through rake ci"
<snusnu>
just a quick idea, maybe a special word in commit msg could do the trick? like: "[ci full] Some change .."
<mbj>
snusnu: fixed
<dkubb>
but it's another thing altogether to see how it performs in the matrix
<mbj>
snusnu: git checkout --theirs on merge conflict without review
<snusnu>
mbj: heh
<dkubb>
I don't see the problem with travis failing on a feature branch though
<dkubb>
I don't even bother to check it myself
<dkubb>
not until I get ready to merge it into master
<dkubb>
I usually rebase against master once prior to merging, and then ci runs and I get my green star on the last commit, and I know it's safe to merge into master :)
<snusnu>
hah, now that you say it, i do exactly the same, i.e. not caring about travis in feature branches
<dkubb>
snusnu: you could also use [ci skip]
<snusnu>
maybe we're looking for a non-existent problem here
<dkubb>
snusnu: if you really know it's not going to pass, that'll tell travis to not do anything
<snusnu>
dkubb: yeah, i usually do that for doc fixes or typos
<snusnu>
dkubb: to save the world some energy
<dkubb>
actually, if you *know* it's not going to pass, imho the right thing to do is tell travis to skip it.. after all it does use their resources, so it's only polite to have it skip
<snusnu>
:D
<dkubb>
yeah
<dkubb>
wdyt about squashing commits in feature branches prior to merging?
<snusnu>
i dunno, i like history
<dkubb>
this is how they do it in the linux kernel
<dkubb>
I like history, but only some kinds.. some history is junk
<dkubb>
especially when you're playing around with something
<snusnu>
so what?
<snusnu>
;)
<dkubb>
not all history is equal
<dkubb>
some is noise and some is helpful
<snusnu>
that's true too
<xybre>
dkubb: it's pretty abysmal. My company does it, it can result in Bad Things, other than just molesting the repo history.
<dkubb>
assuming the features are small, which we should be doing anyway, then it won't be too bad
<dkubb>
xybre: I think it depends on how you do it. I do it too, and never had a problem
<dkubb>
xybre: the biggest problem is when you rewrite shared history. that's bad
<dkubb>
I never rewrite master for example
<snusnu>
i dunno, somehow i don't really see the harm in having the complete history .. some of it might be junk, but yeah, it only shows that your understanding evolved while implementing it
<xybre>
Squashing commits from multiple authors is nightmare fuel.
<snusnu>
people might link to a commit, and then it's gone ...
<xybre>
People who write a bunch of BS commits should use amend, or they can squash them before they push. Any time after that it gets really questionable.
<dkubb>
if it's a feature branch, and you're about to merge it onto master, assuming that history is not being used by someone else it shouldn't be an issue
<xybre>
snusnu: well, the commits don't technically go away. Especially on GitHub, if you've linked to a commit it's permanent.
<dkubb>
that's what I mean by shared history. if someone branches off your feature branch, then it is shared and you can't rewrite
<snusnu>
ok then i'll say it … if i did 1000
<snusnu>
lol
<snusnu>
if i did 1000+ commits on a repo, i want to see that
<snusnu>
not a fake 50
<dkubb>
I usually do clean up local commits before pushing
<xybre>
I try not to push useless commits, or even make useless commits. But as I said, git amend and the like before pushing. Once it's up, I don't mess with it.
<dkubb>
because I want people to be able to follow the history.. not see all my dead ends and direction changes which won't make any sense unless you were right in the middle of the change
<snusnu>
it'd also "level down" visible impact of contributors … they worked on something a week, all that's publicly visible is one commit ...
<dkubb>
xybre: do you guys all push into master or do you use feature branches and pull requests?
<xybre>
dkubb: feature branches and pull requests
<dkubb>
xybre: do you often branch off branches, or does your history resemble small feature branches coming off master, then being merged into master a day or two later?
<snusnu>
imo, even all those dead ends and direction changes have their own value
<dkubb>
snusnu: I don't think so. people think they do, but you can't understand what someone else was doing in flow without also going through the same process
<snusnu>
yeah, if only as a reminder for myself
<xybre>
dkubb: mostly they're small-ish feature branches, it's considered bad form to mangle 20 files in a single PR. But when working on larger projects it often happens that people branch off other branches.
<xybre>
Rebasing after someone squashed their commits is "FUN".
<dkubb>
xybre: yeah, once you branch off a branch then it's shared history
<snusnu>
also, history would always tell you those were inside a branch, there will be one point where that got merged
<dkubb>
xybre: if someone else has those commits checked out on their machine and is doing work based on it, then I consider that shared history
<xybre>
I read through commit logs like it's a twitter feed, yeah some things are dead ends or whatever, but the advantage of that is sometimes you go "why did they do it like that" and you can look at the file history and see what else they tried.
<dkubb>
xybre: if no one else has them checked out *and* is doing work based on it, it's safe to rebase.. but I guess it depends on the size of the team. I've had no problems with a 4 person team doing it like this, but maybe with a 10 or 20 person team it would be
<dkubb>
I actually never read through commit by commit, except on oss stuff.. on work stuff I only look at the PR
<xybre>
It seems that people who don't use the history themselves really want to manage it for others, which really confuses me.
<dkubb>
for work, we create a PR immediately for a feature, even with an empty commit added by --allow-empty .. then each commit is applied to that
<dkubb>
that's really interesting, although if history isn't shared then it's your history.. you're not managing someone else's
<mbj>
I'm okay with fine-grained commits, git can present diffs across many commits.
<xybre>
But it's completely consistent. If you use the history, you want it unmolested. If you don't use it yourself much, if at all, then you want it cleaned up.
<mbj>
So I can select the granularity I need.
<xybre>
Yeah, but in the second case, it doesn't hurt the person whose history it is, it hurts whoever has to use their code.
<snusnu>
also, sometimes my brain tricks me, telling me, hey, i can do that better, then i go through history and see, hey, i already tried that, won't do it again :p
<snusnu>
so yeah, i'm a fan of having complete history
<snusnu>
polished history can be a bad thing
<snusnu>
history has shown that
<snusnu>
:D
<xybre>
But seriously, if you don't use the history, then why would you change it? Other people do use the history.
<snusnu>
xybre: +1
<dkubb>
actually, I should amend my previous comment :P .. I use my own history but usually other developers have such nasty history that it's unusable
<xybre>
Its just more work for yourself.
<xybre>
Noobs have great FUN squashing too. They inevitably end up destroying all their work and then accidentally pushing it all up to master. Yay git reflog..
<dkubb>
if you've ever seen my commits on a PR, then you'd know I try to write really good git summaries, and try to keep each commit to have one subject
<snusnu>
you can never change what happened on a specific day, instances in time are immutable, hiding/changing/tweaking what actually happened can be dangerous .. there are plenty of real world examples for that (not at all programming related)
<snusnu>
but that may be too philosophical for this particular discussion ;)
<xybre>
Git is super powerful, but hilariously designed as far as user interaction goes. The choices sometimes blow my mind.
solnic has joined #rom-rb
<Gibheer>
if you don't care about "clean" history, hg is pretty nice to use
<dkubb>
with hg you can never change history, right?
<Gibheer>
before git got popular, you couldn't, now you can
<Gibheer>
I think it is a mistake though
<xybre>
I love how the justification for no way (until very recently) to remove submodules without digging into configuration files and directories in the .git directory was "data loss". A submodule by definition is in a different repo, there's no way to lose said data from the current one. @_@
<dkubb>
I wouldn't mind that. it would just require more discipline on the part of developers. most developers don't seem to have the discipline to keep their commits small and related
<snusnu>
history is history, you cannot change it, you can only try to do it better next time
<snusnu>
lol guys, sorry
<Gibheer>
snusnu: so right!
<xybre>
dkubb: so basically you need to work with better devs? ;)
<snusnu>
heh
<Gibheer>
too bad, that there is no git-hg adapter without problems
<dkubb>
xybre: hehe, yeah, I need to work with devs who value history better.. it doesn't seem to matter though, any project I work on I am the most pedantic developer
<dkubb>
which I realize reflects on me more than others probably
<dkubb>
for the record, I'm not talking about my oss projects
<snusnu>
there's always the temptation to clean up history, make it shine in a different light .. i wonder when, in history, there have ever been occasions where that was the right thing to do .. just stand for what you did, at any point in time … it's the honest thing to do
<snusnu>
(still off programming topic, but yeah, this is important for me)
<dkubb>
I seek out oss projects where people push me in different directions
<xybre>
I've worked with a few other people who valued code quality and source control as highly as I do, mostly at Thoughtworks, but some other places too.
<dkubb>
I worked with avdi and solnic who pushed me pretty well, and I think (hope) I pushed them too
<solnic>
dkubb: you have no idea ;)
<snusnu>
hehe
<dkubb>
avdi invited me to speak on ruby rogues once about coding with constraints, so I guess he did :)
<dkubb>
:)
<dkubb>
my only request when working on a team is to push back
<snusnu>
also, we're all about immutable code …. why mutate the history of that code? … just sayin' … :p
<xybre>
lol
<dkubb>
locally, I do sometimes check in every time a spec passes, but then I'll rewrite before pushing it up
<snusnu>
yeah, i do that too .. one could say that if there are no outside observers, it never happened .. so that's fine
<snusnu>
:p
<snusnu>
did anything happen if nobody saw it and felt any impacts? no
<Gibheer>
snusnu: you want to clean up history after operators use the system without knowing why
<Gibheer>
they just push stuff without writing any comment on why or what they did
<Gibheer>
instead of deleting old code, they comment it out, because it might be useful again, and they don't realize that they can get it back any time ...
<snusnu>
tbh, to me, this discussion really isn't about code .. it's about the nature of time … if i did something, for the "benefits" of others too .. why lie about it
<xybre>
I do wish git had a way to package up commits without squashing them.
<snusnu>
i'm probably annoying you guys with that pov tho :)
<dkubb>
xybre: yeah, I've wished for that too
<Gibheer>
snusnu: I see it the same way.
<xybre>
"All these commits were for reasons" and then if you want you can open up the package and see all the commits in it without digging through the raw commit files.
<dkubb>
not just tagging, but something where you can say "this is for X", and then you could view them as a single logical change and only dive in if you need to
<xybre>
Mercurial basically does that with their persistent, but closeable, branches.
<dkubb>
what I do at work is prefix commits with some commit id, eg: [PJ-12345] with the initial being the project initial, and the id being the id in the tracker, whatever we happen to be using
<dkubb>
so then at least I can unravel what commits belong to what feature
<snusnu>
yeah, squashed, rewritten history, and the fact that it might be confusing while browsing it .. is a tool problem .. if GH had an option to collapse stuff (they actually have, it's called compare) .. where's the issue
<Gibheer>
hmm, maybe I should switch my repo at work to hg again
<dkubb>
if there are interleaved commits it can be useful
<xybre>
Mercurial doesn't do quite all the stuff that git does, but it's a lot nicer to use. git's rebase is pretty nifty, but it does recreate history, which is weird.
<snusnu>
damnit i love that discussion .. but i have to go .. will read the logs later on ;)
<snusnu>
bye guys
<xybre>
later snusnu
snusnu has quit [Quit: Leaving.]
<Gibheer>
xybre: I would say users come first, computers can figure out the rest themselves
solnic has quit [Quit: Leaving...]
<xybre>
Gibheer: I agree. And if people had adopted Mercurial, it would have the features git has now, but with a better interface.
<dkubb>
I wonder why git won out, aside from having Linus behind it
<dkubb>
and github obviously
<dkubb>
unless those are the reasons
<mbj>
dkubb: bad luck?
<dkubb>
I quite like git myself. I haven't used hg before though, so maybe I should
<dkubb>
I originally used cvs way back, which sucks, then moved to svn when it came out.. and it sucked less
<mbj>
dkubb: Exactly same history here, but started with svn
<mbj>
and ended up in git
<mbj>
And because of github, it is very unlikely I'll have/can do another step
therabidbanana has joined #rom-rb
<xybre>
Github came later
<xybre>
But mostly Linus. git had the star power.
therabidbanana has quit [Client Quit]
<xybre>
Git was originally some bash scripts and a little C library. It was horrendous. If Linus hadn't driven it, it would have been laughed out of existence in a week.
<dkubb>
mbj: I could do another step I think. I think github supports hg
<Gibheer>
dkubb: github came after the big adoption of git
<dkubb>
mbj: I don't see any "new" competing version control systems coming out.. but maybe because I'm not involved in dev of that stuff
<mbj>
dkubb: heh, I think most tool developments are pain driven
<mbj>
And git does not create enough pain :D
<Gibheer>
the only other system is darcs, which is used by the haskell devs
snusnu has joined #rom-rb
<Gibheer>
it is not based on branches but on patches, which may rely on other patches
<Gibheer>
the one thing worse than cvs is the microsoft foundation server
<dkubb>
the basic thing we want is the ability to take a process, sandbox it somehow, then mutate the code within the sandbox and then run the tests with the mutated code
<Gibheer>
thought Process.fork was platform independent?
<xybre>
There's also fossil
<dkubb>
what we do right now in mutant is fork a process, then it runs the specs in the fork. the mutations are isolated to the fork so they can't affect anything in the parent
<xybre>
Windows doesn't have fork
<dkubb>
does it have any kind of code sandboxing system?
<dkubb>
somewhere where you could change a specific method but not have it affect anything outside it?
<dkubb>
Gibheer: the docs for Process.fork say it's not available for Windows :(
<dkubb>
the only problem is that it appears to only be able to run an external process. what we'd have to do is, for each mutation, write a file somewhere with the mutated code and then spin up another ruby process to require everything plus this mutation, then run the specs. it could work, but it'd be really slow I think
<dkubb>
at a certain point I'd be more likely to just tell people to not use windows for mutation testing
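A bare-bones sketch of the fork-based isolation described above, not mutant's real internals; apply_mutation and run_specs are placeholder names:

  # the mutation only ever exists inside the child process
  def run_isolated(mutation)
    pid = Process.fork do
      apply_mutation(mutation)   # e.g. redefine the targeted method (placeholder)
      exit!(run_specs ? 0 : 1)   # report the spec outcome via the exit status
    end
    _pid, status = Process.waitpid2(pid)
    status.success?              # the parent's code was never touched
  end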
<solnic>
dkubb: re docs for priv methods, I don't dig that concept, when I look at a method I can tell by its name and actual code whether or not it should stay
<solnic>
I find myself writing docs like # Build mapper; def build_mapper; end
<solnic>
which is super annoying for me
<solnic>
besides, the name tells you in 99% of cases what a method does
<solnic>
not even mentioning "# initialize a new Foo instance; def initialize; end;
<dkubb>
solnic: are you seeing lots of dependency warnings?
<solnic>
dkubb: yeah esp when running mutant
<solnic>
dear lord I love you mutant
<dkubb>
solnic: I think it's ok as long as your own code runs with warnings on. I wish it were possible to tell ruby *what* you want to run with warnings on
<mbj>
solnic: "dear lord I love you mutant" :D
<solnic>
I missed a comma there ;)
travis-ci has joined #rom-rb
<travis-ci>
[travis-ci] rom-rb/rom-session#64 (master - 2019bdf : Piotr Solnica): The build has errored.
<solnic>
unkilled mutation was that if somebody broke the implementation so that #delete returns nil, this spec would pass
<mbj>
solnic: wow, nice!
<solnic>
because nil.to_a returns an empty array
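A tiny reconstruction of that situation; the method and spec below are invented to show the mechanism, not the actual rom-session code:

  # hypothetical implementation under test
  def delete(object)
    tracked.delete(object)
    self                # mutant mutates this return value to nil
  end

  # a spec that only inspects #to_a cannot kill that mutation,
  # because nil.to_a == [] looks exactly like an emptied collection
  expect(session.delete(user).to_a).to eql([])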
<mbj>
xybre: we already have elasticsearch, mongo and arangodb adapters
<mbj>
All could be polished a bit :D
<solnic>
btw I think nosql adapters will be ready sooner than sql one heh
<dkubb>
maybe
<mbj>
solnic: They are easier
<dkubb>
it depends on who wants to do them. the people who are working on the sql ones are probably already working on other parts of ROM ;)
<solnic>
:)
<solnic>
ok gotta run, good night!
<mbj>
solnic: have fun!
<mbj>
dkubb: Can we exclude instances of Class from being deep frozen?
<mbj>
dkubb: I love immutable object trees, just like you
<mbj>
dkubb: But when an instance of ::Class ends up somewhere and its ivars get frozen you cannot declare memoized methods anymore :D
<dkubb>
mbj: yeah, although we probably have to do it on a project by project basis. axiom-types uses frozen classes
<dkubb>
mbj: atm I'm not sure how we can do it per project though, because including it in something axiom-types uses could turn off freezing of those classes.. we've talked before though about changing the ice nine interface so it allows configuration
solnic has quit [Quit: Leaving...]
<xybre>
dkubb: Redis and Mongo mainly (since those are the ones I'm familiar with)
<xybre>
Mongoid v2 was a really weak ORM. Huge problems, messy codebase, commented out tests. Made development really painful. I like the datastore, but the ORM was awful to work with.
* xybre
stars the repo
<dkubb>
besides mongoid and mongomapper are there any other mongo orms?
<xybre>
Those are the two I'm familiar with, but my last foray into it was almost a year ago now. the landscape might have changed.
<mbj>
When each db vendor tries to raise market share via their own single-store ORM, we cannot expect quality
<xybre>
Yeah, it really cuts down on the number of people who can have eyes on it, and those that do are usually in a bind.
<xybre>
So they can't spend time to fix it, they just monkeypatch it internally, or throw a hack up that might work for them and no one else.
<xybre>
I'm going to pick on Octopus here, because I really feel like the same thing has happened to them. As a result I've built a completely unique sharding/multiple db solution for my company here.
<dkubb>
interesting
<xybre>
I feel like that keeps happening. People will build a special use case solution, but ignore that they'll have to maintain it forever themselves, since no one else will ever be able to use it.
<dkubb>
I believe we may be able to work towards supporting sharding at some point
<xybre>
Yet, there's a gem out there built by one of our employees that right now has some random guy building Rails 4 support for it, because we open sourced it. It's like companies don't realize the power of generalized solutions and open source.
<dkubb>
not with the primary project, but when I was designing axiom I realized I could do things like union relations from multiple datastores and present them as one single relation. right now we do have the ability to distribute writes to each relation across a union, but it would probably need more testing to make it a proper solution for that space
<xybre>
dkubb: I would MUCH rather work with ROM to build a sharding gem than AR. AR gave me nightmares. So convoluted and no documentation. I just sat and read source code for 3 weeks before I could even begin to address the problem.
<mbj>
dkubb: I see the following solutions: hook freeze on adamantium-infected classes to set @memoized_methods, or change ice_nine not to freeze Classes.
<dkubb>
mbj: or define freeze on the class to return self, or mixin Adamantium::Mutable ?
<mbj>
dkubb: yeah
<dkubb>
mbj: obviously not a long term solution
<mbj>
but mixing Adamantium::Mutable into so many classes (I hit that one often!) seems like a smelly solution.
<dkubb>
mbj: especially if it's infecting classes outside of our control. I would not want to monkey patch that in
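For reference, a small sketch of the two stop-gaps mentioned above; SomeRegistry and SomeBuilder are placeholders, and treating Adamantium::Mutable as a no-op freeze on the class object is an assumption:

  require 'adamantium'

  class SomeRegistry
    # option 1 from the log: make freeze on the class object a no-op so a
    # deep freeze still leaves room for memoization ivars later
    def self.freeze
      self
    end
  end

  class SomeBuilder
    # option 2 from the log: pull Adamantium::Mutable onto the class object
    extend Adamantium::Mutable
  end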
<dkubb>
mbj: do you inject classes into your methods often? I've found I inject instances far more often
<mbj>
dkubb: Yeah, I do the same.
<mbj>
dkubb: But substation uses "class injection" often.
<dkubb>
I read something awesome comparing those two approaches and now I can't find it
<mbj>
heh
<dkubb>
the gist of it was to favour instance injection over class injection, but of course that drops the whole pro/con argument which I don't want to repeat.. I'll look for that article
<mbj>
dkubb: One could argue classes are instances also in ruby :D
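A neutral illustration of the two injection styles being contrasted; Mailer and Signup are invented names:

  class Mailer
    def deliver(user); end
  end

  # "class injection": the collaborator class is passed in and instantiated inside
  class Signup
    def initialize(mailer_class = Mailer)
      @mailer_class = mailer_class
    end

    def call(user)
      @mailer_class.new.deliver(user)
    end
  end

  # "instance injection": a ready-built collaborator instance is passed in
  class SignupWithInstance
    def initialize(mailer = Mailer.new)
      @mailer = mailer
    end

    def call(user)
      @mailer.deliver(user)
    end
  end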
<xybre>
Ouch, gems that pin to specific versions of other gems are a pain.
<xybre>
It's not the recommended way to go, and there are reasons for that.
<mbj>
dkubb: I commented
<mbj>
I like relation.one
<mbj>
without args
<dkubb>
mbj: yeah, solnic and I discussed that a few days ago but he may have forgotten. I think #one should do only one single thing
<dkubb>
and not do restriction on top. there's no reason we can't chain a #one to a restriction
<mbj>
dkubb: mbj-mapper did have that interface :D
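A sketch of the interface difference being discussed; users and restrict are stand-ins for a relation and its restriction method, not a confirmed API:

  users.one(id: 1)            # the overloaded form: #one also restricts
  users.restrict(id: 1).one   # the preferred composition: restrict, then take exactly one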
<dkubb>
in DM1 I made the mistake of overloading #first and #last
<dkubb>
they have like 3 modes, and it kind of sucks for the internals
<mbj>
yeah
<dkubb>
actually they have way more than that if you want to get technical
<mbj>
heh
<mbj>
I still like to read dm-1 code to answer detailed questions
<mbj>
But I can feel it was painful to make it.
<dkubb>
heh
<dkubb>
I didn't know how much at the time
<mbj>
Especially the relationship part seems to have been a major source of pain in development
<dkubb>
it was, especially many to many relationships
<dkubb>
before I got to rewrite the code it was even worse though ;)
<dkubb>
I added shared specs for collections and then worked on all the collection objects to bring them in line. that ended up being a huge improvement
<dkubb>
it's still not perfect, obviously
<mbj>
dkubb: Wrong abstraction
<mbj>
dkubb: initially
<mbj>
dkubb: But this is very very easy to say from today's POV
<mbj>
dkubb: At the time it was the best you could come up with, no need to say sorry!
<dkubb>
mbj: yeah, today I probably would've approached it similarly to ROM and broken out the query generation into something separate, and have the models handle mapping the relations to their instances .. which, on the model side, is kind of what happens
<dkubb>
it's just that the query composition and sql generation is tightly coupled to the models
<dkubb>
(in DM1 I mean)
<mbj>
yeah
<mbj>
dkubb: All those small iterations of improvement are embedded into bigger ones :D
<dkubb>
a simple example is you have two relations, one with a restriction of records 1 through 100, and another with a restriction of records 101 through 200. assuming they are unioned together into a single relation, and if you insert a tuple in with an id of 155 it'll get propagated to the second relation only
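A rough sketch of that example with axiom-style relations; the constructor and insert signatures here are written from memory and should be treated as assumptions:

  require 'axiom'

  header = [[:id, Integer], [:name, String]]
  people = Axiom::Relation.new(header, [[1, 'Dan'], [150, 'Markus']])

  low  = people.restrict { |r| r.id.lte(100) }   # "records 1 through 100"
  high = people.restrict { |r| r.id.gt(100) }    # "records 101 through 200"

  union = low.union(high)
  union.insert([[155, 'Piotr']])   # the new tuple only satisfies high's predicate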
<xybre>
Neat. We're using 64 bit IDs with the most significant digits operating as the ID of the shard it was created on (but not necessarily the one it's currently on) and having a column in a global table that specifies the shard for a given class of records (in this case, per user).
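A back-of-the-envelope sketch of that ID layout; the 12-bit shard width and helper names are assumptions, since the log does not give them:

  SHARD_BITS = 12                        # assumed width of the shard portion
  LOCAL_BITS = 64 - SHARD_BITS

  def compose_id(shard_id, local_id)
    (shard_id << LOCAL_BITS) | local_id  # shard of origin lives in the top bits
  end

  def origin_shard(id)
    id >> LOCAL_BITS                     # where the record was created,
  end                                    # not necessarily where it lives now

  origin_shard(compose_id(3, 42))        # => 3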