<lunarkitty>
Thing is a lot of those companies fund development to some extent too
rocx has joined #crystal-lang
rocx has quit [Remote host closed the connection]
rocx has joined #crystal-lang
<FromGitter>
<wyhaines> FWIW, I use a Windows 10 computer for my primary workstation. I do most of my work in Ubuntu under it, but do most of the graphical stuff (except for my terminal apps) via the Windows side. It works every bit as well, so far, as 12+ years of using OSX laptops for work.
<FromGitter>
<wyhaines> Also, I have a question. Is it expected, or surprising that on some relatively simple network IO related tasks that I am playing with, the Ruby version is faster than the Crystal version (built with --release)?
<FromGitter>
<Blacksmoke16> Got some example code? I'd make sure you're using fibers and maybe try MT
<FromGitter>
<wyhaines> It's really simple. For this test case, both the Crystal and the Ruby programs are the simplest possible clients (and written nearly identically) to a simple server written in Crystal (which I have not written a Ruby version for, yet). ⏎ ⏎ The client is just pounding a bunch of IO down a socket, so it looks like this: ⏎ ⏎ ```code paste, see link``` [https://g
<FromGitter>
<wyhaines> It just surprised me that the Ruby one is faster, given how dominant Crystal generally tends to be in performance. Generally around 3.6 seconds for Ruby to 4.7 seconds or so for Crystal.
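The original paste link is truncated above; a minimal Crystal client along the lines described (pounding messages down a socket in a loop) might look like the following sketch. The host, port, and iteration count are assumptions, not the original code:

```crystal
require "socket"

# Hypothetical reconstruction of the simple benchmark client described
# above; the endpoint and message count are illustrative.
client = TCPSocket.new("localhost", 9000)
iterations = 1_000_000
count = 0
while count < iterations
  client.puts "This is message ##{count}"
  count += 1
end
client.close
```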
<FromGitter>
<Blacksmoke16> what happens if you add a line like `TCPSocket.sync = true`
<FromGitter>
<Blacksmoke16> before the bench
<FromGitter>
<Blacksmoke16> er `client.sync = true`
<FromGitter>
<wyhaines> Give me a moment....
<FromGitter>
<wyhaines> That
<FromGitter>
<wyhaines> That's pretty indistinguishable in performance.
<FromGitter>
<wyhaines> Still about 4.7 seconds.
<FromGitter>
<Blacksmoke16> what about `client.flush_on_newline = false`
<FromGitter>
<wyhaines> That also didn't seem to change anything.
<FromGitter>
<Blacksmoke16> Gotcha
<FromGitter>
<Blacksmoke16> it's possible it's not using any fibers internally
<FromGitter>
<Blacksmoke16> also you could do `iterations.times do |idx| ...`
<FromGitter>
<wyhaines> I went down this path because the msgpack implementation appears to be massively slower than the Ruby one, so I wanted to see what the best case performance was if I took msgpack out of the equation and simplified everything else. Sure re #times. One minute.
<FromGitter>
<Blacksmoke16> it's not going to affect perf, just a bit cleaner syntax...
<FromGitter>
<Blacksmoke16> not super familiar with `TCPSocket` specifically, but maybe handle each message within its own fiber or something? :shrug:
<FromGitter>
<wyhaines> I can try it and see what happens.
<FromGitter>
<wyhaines> The overhead of creating that many fibers slows it down a lot. :)
<FromGitter>
<Blacksmoke16> Rip, well I'm out of ideas
<FromGitter>
<wyhaines> *nod* It is enough right now just to learn that this difference is surprising. I have a feeling that if I rewrote the server side to be Ruby, too, the difference will be even greater. ⏎ ⏎ I have some software that, essentially, is a simple client/server that mostly passes messages from the client to the server -- written in Ruby. On my laptop, for 10 byte messages it does 280k/second using msgpack, and
<FromGitter>
... 230k/second for 100 byte messages, and that includes a bunch of extra processing overhead. I'm only barely beating that speed with my simple Crystal test case server if I don't use msgpack, and then only because the Crystal one isn't doing anything except receiving the message. ⏎ ⏎ It is something that I would love to dig in ... [https://gitter.im/crystal-lang/crystal?at=5edda2c1225dc25f54d1de5b]
<FromGitter>
<Blacksmoke16> Could try the http server versus tcpserver
<lunarkitty>
Does crystal have something like ri for ruby, for viewing the docs in the console?
<FromGitter>
<Blacksmoke16> I doubt it
<FromGitter>
<Blacksmoke16> Web API docs are your best bet
<FromGitter>
<Daniel-Worrall> For spawning, I'd recommend you have a set number of workers in an array; then, on an enumerated list, you can use `i % length` to get your worker index. Much better than spawning for every request
<FromGitter>
<Daniel-Worrall> Also, ultimately the bottleneck will be the IO outside of the language
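The fixed-pool idea above can be sketched as follows; the channel-per-worker layout and all names are illustrative, not a specific library's API:

```crystal
# Spawn a fixed number of worker fibers up front, then route job i to
# worker i % worker_count instead of spawning a fiber per request.
worker_count = 8
channels = Array.new(worker_count) { Channel(String).new }

channels.each do |ch|
  spawn do
    while job = ch.receive?
      # ... handle the job ...
    end
  end
end

100.times do |i|
  channels[i % worker_count].send("job #{i}")
end
channels.each(&.close)
```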
<raz>
wyhaines: that does seem strange. perhaps try adding a `client.close` after the while loop. my feeling is it might have to do with buffering behavior rather than the actual I/O.
<raz>
(i.e. perhaps ruby terminates without actually flushing the socket while crystal does? - just a guess)
<yxhuvud>
Opening a topic at the forum might be a good idea.
DTZUZU has joined #crystal-lang
HumanG33k has quit [Ping timeout: 246 seconds]
HumanG33k has joined #crystal-lang
HumanG33k has quit [Ping timeout: 256 seconds]
<FromGitter>
<cuteghost> Hi everyone. I have an issue related to installing and updating shards. I use an ssh key on GitHub, and when I run `sudo shards install` I get the error `Fetching git@github.com:... ⏎ Failed to update git@github.com:...`. ⏎ If I use `sudo -E shards install` the issue is solved. I am interested in why I need to use `sudo` at all when installing & updating shards.
<FromGitter>
<cuteghost> On another system I have the app set up to work properly, so it doesn't need `sudo` when I do `shards install`, but after a few days when I do `shards install` again it asks for a passphrase. I was thinking it could be that the passphrase isn't activated or something.
<raz>
wyhaines: try `client.write "This is message ##{count}\n".to_slice` in the crystal version. this beats ruby for me. (write faster than puts)
<raz>
i think it has to do with blocking vs non-blocking (`puts` probably translates to non-blocking in ruby but to blocking in crystal, but i haven't tested that part)
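The suggested change amounts to replacing `puts` with a single buffered `write` of the byte slice, roughly as below (the endpoint and loop count are illustrative):

```crystal
require "socket"

client = TCPSocket.new("localhost", 9000) # hypothetical endpoint
1000.times do |count|
  # One write of a Slice per message; unlike `puts`, this appends the
  # newline ourselves and avoids any per-line flushing behavior.
  client.write "This is message ##{count}\n".to_slice
end
client.close
```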
<FromGitter>
<ImAHopelessDev_gitlab> @kinxer Just to add: The "normal-person" OS part goes even deeper. I think there is a large majority that grew up using Windows and are heavily accustomed to it. It's not even about Windows vs Linux at that point. It eventually boils down to "why change!?". My analogous view: Driving an automatic car to work for 20 years, then all of a sudden a co-worker says omg dude, why are you not driving a
<FromGitter>
... manual? It's way better! No, I don't want to learn how to drive a manual, I got my driver's license and my car works fine to and from work.
deavmi has quit [Quit: Eish! Load shedding.]
deavmi has joined #crystal-lang
<FromGitter>
<asterite> kirk: try setting sync to false, then flushing manually
<FromGitter>
<asterite> sync = true has no effect because sync is already true for sockets
<FromGitter>
<asterite> same as ruby, though
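asterite's suggestion, sketched: turn off per-write syncing and flush manually at the end. The endpoint and counts are illustrative assumptions:

```crystal
require "socket"

client = TCPSocket.new("localhost", 9000) # hypothetical endpoint
client.sync = false # buffer writes instead of syncing on every write
1000.times do |i|
  client.puts "message #{i}"
end
client.flush # push all buffered bytes out in one go
client.close
```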
<FromGitter>
<xmonader> if i have an object that is json serializable how can i do to_h on it directly?
<yxhuvud>
asterite: do you understand why it is slower than the ruby version though? Do ruby flush automatically there or what is going on?
<raz>
as said above, if you use `.write` instead of `.puts` the crystal version is faster
<raz>
crystal puts seems to behave differently from ruby puts
deavmi has quit [Ping timeout: 264 seconds]
deavmi has joined #crystal-lang
alexherbo2 has joined #crystal-lang
<FromGitter>
<Blacksmoke16> @xmonader would have to define that method, just because something can be serialized into `JSON` doesn't mean you can automatically convert it to a hash
<oprypin>
@Blacksmoke16: but you have to realize that .to_json can be imagined as .to_h.to_s (it's really not, but all the same conversion logic applies)
<oprypin>
so sure, you can claim this, but surely in that context it'd be very redundant to use from_json(.to_json)
<oprypin>
as would reimplementing the first part of the logic completely separately
<FromGitter>
<wyhaines> Awesome. re @FromIRC (raz) -- the difference between write and puts with a slice is substantial. That solves mystery 1. Thank you. That is a good piece of information.
<FromGitter>
<Blacksmoke16> oprypin: right, but `JSON::Serializable` doesn't implement that
<FromGitter>
<Blacksmoke16> so would have to define it yourself,
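One way to hand-roll such a `to_h` is to round-trip through `JSON.parse`, which yields a `JSON::Any` whose `as_h` is a `Hash(String, JSON::Any)`. A sketch, with an illustrative `User` type:

```crystal
require "json"

class User
  include JSON::Serializable
  getter name : String
  getter age : Int32

  def initialize(@name, @age); end

  # Not provided by JSON::Serializable: a hand-rolled to_h that
  # round-trips through the JSON representation.
  def to_h : Hash(String, JSON::Any)
    JSON.parse(to_json).as_h
  end
end

User.new("Alice", 30).to_h["name"].as_s # => "Alice"
```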
<FromGitter>
<asterite> yxhuvud: I ran the benchmark, I get similar times in Ruby and Crystal
<FromGitter>
<asterite> also puts does send two messages to the socket, and with sync = true it might be slower than Ruby... but I'm getting similar times in both cases, so I don't know what they are benchmarking or why they are getting different results
<FromGitter>
<asterite> Actually, with sync = false it's a bit slower... I don't know why
zorp_ has joined #crystal-lang
duane has joined #crystal-lang
JuanMiguel has joined #crystal-lang
JuanMiguel has quit [Client Quit]
<FromGitter>
<7sidedmarble_twitter> Hey guys, could someone help me with a macro question?
<FromGitter>
<Blacksmoke16> prob
<FromGitter>
<7sidedmarble_twitter> Macros are still the one thing I'm just not good with
<FromGitter>
<7sidedmarble_twitter> I'm working on this caching tool for a web framework
<FromGitter>
<7sidedmarble_twitter> and I want to write a method that will get the filename of the file it's called in (or included in via a module, whatever works, I don't really care), and then upon being called it'll MD5-hash the contents of that source code file
<FromGitter>
<7sidedmarble_twitter> I know I need to get the filename at build time, so I know I need to use a macro
<FromGitter>
<7sidedmarble_twitter> but I just cannot figure out how to do it. I know theres a `filename` method on AST node
<FromGitter>
<Blacksmoke16> filename of the file the macro is used in?
<FromGitter>
<7sidedmarble_twitter> yeah
<FromGitter>
<Blacksmoke16> `__FILE__`?
<FromGitter>
<7sidedmarble_twitter> oh shit I didn't know that was there lol.
<FromGitter>
<Blacksmoke16> not really a macro but yea
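As a quick illustration: `__FILE__` is a compile-time magic constant, and when used as a default argument value it expands at the call site, which gives you the caller's filename:

```crystal
# __FILE__ expands to the path of the current source file.
puts __FILE__

def caller_file(file = __FILE__) : String
  # As a default argument, __FILE__ is expanded where the method is
  # *called*, so this returns the caller's file, not this one.
  file
end
```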
<FromGitter>
<7sidedmarble_twitter> This is the full problem in case you have any input
<FromGitter>
<7sidedmarble_twitter> I need a method to compare the md5 hash of the contents of the source code, and I also need to know all the other html templates called from said template, because I need to include them in the digest
<FromGitter>
<7sidedmarble_twitter> luckily in the framework I'm using, I know there's only one way to call another template from a template, and its a method called `mount` in Lucky
<FromGitter>
<Blacksmoke16> and you want to cache templates or?
<FromGitter>
<7sidedmarble_twitter> kind of
<FromGitter>
<7sidedmarble_twitter> it eventually is used in the process of caching, but the digest is purely used for busting caches actually
<FromGitter>
<7sidedmarble_twitter> the idea is you compare the md5 hash of the source code when the cache was set to when you retrieve it, and if they differ you bust the cache
<FromGitter>
<7sidedmarble_twitter> the idea being the developer has changed the template file and now the cache should be invalidated
<FromGitter>
<7sidedmarble_twitter> this used to be done by hand by setting version flags on caches, but I think Rails was the first framework to have the idea of automating it by digesting the file
<FromGitter>
<Blacksmoke16> but isnt crystal a bit diff since its compiled
<FromGitter>
<7sidedmarble_twitter> indeed
<FromGitter>
<Blacksmoke16> i.e. wouldnt your templates be included in the binary
<FromGitter>
<Blacksmoke16> so whats there to bust?
<FromGitter>
<7sidedmarble_twitter> yeah this is assuming that the source code in the same directory
<FromGitter>
<7sidedmarble_twitter> I don't think thats a bad assumption though
<FromGitter>
<Blacksmoke16> im just saying what can you actually cache since the binary would need to be recompiled on any change anyway
<FromGitter>
<7sidedmarble_twitter> the caching is done outside
<FromGitter>
<7sidedmarble_twitter> in redis
<FromGitter>
<Blacksmoke16> oh i see
<FromGitter>
<7sidedmarble_twitter> a lot of people do not clear their redis cache between deploys
<FromGitter>
<7sidedmarble_twitter> in theory you shouldn't have to if the template hasnt changed or the data inside the cache hasnt changed
<FromGitter>
<7sidedmarble_twitter> but yeah if anyone has any ideas on how to accomplish more parts of this at build time that'd be sweet
<FromGitter>
<7sidedmarble_twitter> I don't suppose there's a way to read the contents of a file into a macro as a StringLiteral node is there? lol
<FromGitter>
<Blacksmoke16> yea there is actually
<FromGitter>
<Blacksmoke16> `read_file`
<FromGitter>
<7sidedmarble_twitter> that would be awesome
<FromGitter>
<7sidedmarble_twitter> then it would work even if the binary is deployed without source code
<FromGitter>
<Blacksmoke16> is also `read_file?` which returns nil if the file doesnt exist
<FromGitter>
<7sidedmarble_twitter> ugh that would be great
<FromGitter>
<Blacksmoke16> to be clear im saying these are already things
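Putting the two together, a compile-time read plus a digest might look like this. The path and constant names are hypothetical, and note the MD5 itself is computed at program start here, not at compile time:

```crystal
require "digest/md5"

# read_file is a macro method: the file's contents become a StringLiteral
# baked into the binary at compile time.
TEMPLATE_SOURCE = {{ read_file("#{__DIR__}/my_template.cr") }}
TEMPLATE_DIGEST = Digest::MD5.hexdigest(TEMPLATE_SOURCE)
```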
<FromGitter>
<7sidedmarble_twitter> there we go sorry
<FromGitter>
<7sidedmarble_twitter> this module gets included in the page class
<FromGitter>
<Blacksmoke16> i mean all of this happens at runtime, I don't think you have an easy way to get what you want
deavmi_ has joined #crystal-lang
deavmi has quit [Read error: Connection reset by peer]
<FromGitter>
<Blacksmoke16> unless you write some crazy ass regex to parse this out of the string literal
<FromGitter>
<Blacksmoke16> or something equally hacky
<FromGitter>
<7sidedmarble_twitter> I need an instance var declared in here that will then append the component name
<FromGitter>
<7sidedmarble_twitter> I don't think we need to parse the file at all
<FromGitter>
<Blacksmoke16> but you want this data at compile time in a macro right?
<FromGitter>
<7sidedmarble_twitter> this part doesn't need to be no
<FromGitter>
<Blacksmoke16> oh
<FromGitter>
<7sidedmarble_twitter> I'm fine with this being at run time
<FromGitter>
<7sidedmarble_twitter> I mean it would be awesome if there was a way to do it in a macro, but I dont see how
<FromGitter>
<7sidedmarble_twitter> I think If I just put an instance var in this module, and every time mount is called append the classname to it, I can then figure out the filename
<FromGitter>
<Blacksmoke16> then yea would prob just want to keep track of what `self` is, and each `mount` call within `self`
<FromGitter>
<7sidedmarble_twitter> hmmmmm
<FromGitter>
<Blacksmoke16> granted idk enough about lucky to know exactly how all this works
<FromGitter>
<7sidedmarble_twitter> what do you mean keep track of self within the mount call?
<FromGitter>
<Blacksmoke16> like given any component you would know what other components it mounted
<FromGitter>
<Blacksmoke16> essentially what you said ⏎ ⏎ > I think If I just put an instance var in this module, and every time mount is called append the classname to it
<FromGitter>
<Blacksmoke16> so would end up with like `@mounted_components : Array(Component)`
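That bookkeeping could be sketched like this; the module name and the delegation are illustrative, not Lucky's actual API:

```crystal
# Track every component class mounted by this page so a later digest can
# cover all the files involved.
module MountTracking
  getter mounted_components = [] of String

  def mount(component_class : Class)
    @mounted_components << component_class.name
    # ... delegate to the framework's real mount here ...
  end
end
```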
<FromGitter>
<7sidedmarble_twitter> oh yeah
<FromGitter>
<7sidedmarble_twitter> hmm
<FromGitter>
<7sidedmarble_twitter> I didn't even think of storing the components
<FromGitter>
<7sidedmarble_twitter> I was thinking storing the classnames
<FromGitter>
<7sidedmarble_twitter> I don't know, ultimately what I really need is the filenames
<FromGitter>
<7sidedmarble_twitter> is there a good way to go from the Class to its filename?
<FromGitter>
<Blacksmoke16> no
<FromGitter>
<7sidedmarble_twitter> hmm
<FromGitter>
<7sidedmarble_twitter> actually wait
<FromGitter>
<Blacksmoke16> there isnt anything like autoloading like PHP/composer has
<FromGitter>
<7sidedmarble_twitter> If the first file/read/digest method is included in all pages and all components
<FromGitter>
<7sidedmarble_twitter> then all I have to do is just call it
<FromGitter>
<7sidedmarble_twitter> then I don't need to know filenames at runtime
<FromGitter>
<Blacksmoke16> could be some edge cases
<FromGitter>
<Blacksmoke16> like for one off pages that dont use the base layout of the main application
deavmi_ has quit [Quit: Eish! Load shedding.]
<raz>
i didn't follow all details, but my belly feeling is this could miss changes in other files. (what if the template is unchanged but calls a method from another file that changed?)
<FromGitter>
<Blacksmoke16> going back a step, do you know this will actually help perf?
<FromGitter>
<Blacksmoke16> > We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
<FromGitter>
<Blacksmoke16> seems like a lot of work for an unsure amount of benefit
deavmi has joined #crystal-lang
<FromGitter>
<7sidedmarble_twitter> Raz, thats correct, in rails this issue is solved by having an explicit list you can add to
<FromGitter>
<7sidedmarble_twitter> for example if you call a helper or something
<FromGitter>
<7sidedmarble_twitter> it doesn't go look at those, unless you tell it to
<raz>
7sidedmarble: sounds like that will be the part that's constantly forgotten :p
<FromGitter>
<7sidedmarble_twitter> well the alternative is bumping cache key version integers by hand
<FromGitter>
<7sidedmarble_twitter> so it's better than doing nothing to help the user bust caches lol
<raz>
i can sympathize with premature optimization tho.
* raz
wrote a distributed in-process pg cache that gets change notifications via listen/notify.
<FromGitter>
<7sidedmarble_twitter> as for whether it's premature, I don't think *writing* it as a tool for people to use is premature. Maybe *using* it too much when you don't have to, yes
<raz>
saves me 1 lookup on each page-load (for the User in the session) which amounts to ~20ms vs a redis lookup
<raz>
no one will ever notice. :( but boy am i fond of it!
<FromGitter>
<7sidedmarble_twitter> but I'm just putting it out there for people. They might have a big template that takes a long time which this is great for, they might not
<raz>
yea, caching is good. esp. when it's client-side.
<raz>
sounds like you're basically going the russian doll path that DHH blogged about a while ago
<raz>
whether users will notice is a different story :D
deavmi has quit [Client Quit]
<FromGitter>
<7sidedmarble_twitter> it is russian doll yes
<FromGitter>
<7sidedmarble_twitter> it works *really* well in rails with this digest system
<FromGitter>
<7sidedmarble_twitter> as long as you're a little careful it's hard to bite yourself
<raz>
however blacksmoke has a point. the thing is that client-side/edge caching basically doesn't work when pages contain dynamic content (which most pages in a typical webapp do).
<FromGitter>
<7sidedmarble_twitter> you need to include that dynamic content into the cache key
deavmi has joined #crystal-lang
<FromGitter>
<7sidedmarble_twitter> and check `updated_at` timestamps
<FromGitter>
<7sidedmarble_twitter> if the associated models have changed, you bust the cache
<FromGitter>
<7sidedmarble_twitter> you also want to check the current_user if thats being used in the template
<raz>
yea, but for that you don't need to know anything about the template
<FromGitter>
<7sidedmarble_twitter> as long as you do that it works pretty well
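A common shape for such a fragment cache key includes the record's class, id, and `updated_at` alongside the template digest; a sketch with illustrative names:

```crystal
# Any change to the record bumps updated_at and thus the key; any change
# to the template source changes the digest.
def fragment_cache_key(record, template_digest : String) : String
  "views/#{record.class.name}/#{record.id}-#{record.updated_at.to_unix}/#{template_digest}"
end
```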
<FromGitter>
<Blacksmoke16> is there an issue in Lucky for this? If not might want to make one to at least discuss it. Wouldn't be great to do all this work and the idea/implementation ends up being shot down
<raz>
if the page has dynamic content you can't cache it on the client. unless you're willing to tolerate serving stale content every now and then
<FromGitter>
<7sidedmarble_twitter> you don't, but the problem people had was that someone would go to change the source code of the template, then the cache would be bad, and it wouldn't be busted
<FromGitter>
<7sidedmarble_twitter> so people started appending numbers to their caches, and theyd have to remember to bump them upon any changes
<FromGitter>
<7sidedmarble_twitter> this digest makes that unneeded
<FromGitter>
<7sidedmarble_twitter> and yeah there is a lucky issue
<FromGitter>
<7sidedmarble_twitter> it got a good response so far
<FromGitter>
<7sidedmarble_twitter> the lucky orm's sql cache isn't even done
<FromGitter>
<7sidedmarble_twitter> which kinda sucks
<raz>
would rather focus on the orm side
<FromGitter>
<7sidedmarble_twitter> but if you are fragment caching you can kinda get around it
<raz>
that's where the big gains are
<FromGitter>
<7sidedmarble_twitter> I'm definitely going to take a crack at it myself after this is done
<FromGitter>
<7sidedmarble_twitter> but I thought it would be harder
<raz>
fragment caching makes sense in rails because ruby is slower than a snail on xanax
* FromGitter
* Blacksmoke16 thinks it looks like a lot of work/added complexity to save 10-20ms or whatever
<raz>
yea... crystal doesn't break a sweat to put some strings together
<FromGitter>
<7sidedmarble_twitter> if you fragment cache you also avoid even calling the DB however.
<FromGitter>
<7sidedmarble_twitter> assuming there's a call in the template
<FromGitter>
<7sidedmarble_twitter> doesn't help you all the time but a little bit
<FromGitter>
<7sidedmarble_twitter> I would love to tackle the sql caching though too
<raz>
well both _can_ make sense. but if the db lookups themselves are cached then the templating should rarely matter
<FromGitter>
<Blacksmoke16> that would prob be more helpful, as it wouldnt be just for templates
<FromGitter>
<7sidedmarble_twitter> anyone know if any of the other popular crystal ORMs for PG have caching done?
<FromGitter>
<7sidedmarble_twitter> for reference
<FromGitter>
<Blacksmoke16> maybe clear? cant say i looked in a while but i doubt it
<raz>
i've started wrapping my own around jennifer but it's not done yet. and it's veeeery tricky.
<raz>
i think essentially hibernate with its identity map is the reference. but that quote about invalidation being hard isn't a lie...
<FromGitter>
<7sidedmarble_twitter> maybe I should just read activerecord code at this point lol
<FromGitter>
<7sidedmarble_twitter> see this is why I didn't want to write it lol
<raz>
i'd say in 10 out of 10 cases caching is best solved on a case-by-case basis. generic caches work well for _some_ cases, but the big boost comes when you can cache at a high level where you have all the information about when/how to invalidate.
<raz>
e.g. generic fragment caching doesn't do you much good when it suppresses the SQL query that you need to figure out if something has changed
<FromGitter>
<7sidedmarble_twitter> true
<FromGitter>
<7sidedmarble_twitter> although having children `touch` their parents `updated_at` can help at least to a degree
<raz>
yup, but that's still very special casey
<FromGitter>
<Blacksmoke16> i feel like an API would be easier to cache
<FromGitter>
<7sidedmarble_twitter> yeah a general purpose smarter cache for the ORM in Lucky would be way better
<FromGitter>
<Blacksmoke16> since its just JSON, plus you got some stuff built into the browsers
<FromGitter>
<7sidedmarble_twitter> I just don't know if I'm smart enough for that lol
<raz>
i don't think anyone is smart enough for it. but working on it is still a good way to get smarter :)
<FromGitter>
<7sidedmarble_twitter> it is one of the hardest problems after all
<raz>
it's kinda the essence of chicken/egg problems.
<raz>
"how can i know something has changed without looking at it"
<FromGitter>
<Blacksmoke16> ETAGs :0
<FromGitter>
<Blacksmoke16> and TTL
<raz>
blacksmoke: you only get those after looking. that's cheating :p
<FromGitter>
<Blacksmoke16> but then you dont have to look at it again :P
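For HTTP responses that second look can indeed be skipped; a minimal ETag check with Crystal's `HTTP::Server` (the body and hashing scheme are illustrative):

```crystal
require "http/server"
require "digest/sha256"

server = HTTP::Server.new do |context|
  body = %({"status":"ok"})
  etag = %("#{Digest::SHA256.hexdigest(body)}")
  context.response.headers["ETag"] = etag
  if context.request.headers["If-None-Match"]? == etag
    # The client already has this version: no body needed.
    context.response.status_code = 304
  else
    context.response.content_type = "application/json"
    context.response.print body
  end
end
# server.bind_tcp 8080
# server.listen
```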
<FromGitter>
<7sidedmarble_twitter> does active record actually do anything smart to check if a query result has changed between calls per-route?
<FromGitter>
<7sidedmarble_twitter> per request I mean
<FromGitter>
<7sidedmarble_twitter> or does it just blindly cache it
<FromGitter>
<Blacksmoke16> no idea
<FromGitter>
<7sidedmarble_twitter> the source code is very enlightening
<raz>
e.g. latency in AWS (same AZ) is typically in the single digit ms range (4-7ms)
<raz>
where and whether that matters depends on what you're doing. it usually doesn't for a single query. it starts to add up when you do many queries for a page-load (that's why combining multiple fetches into one is often the lowest hanging fruit to start optimizations)
<FromGitter>
<7sidedmarble_twitter> right
<raz>
to make it even more exciting these things also change over the years. for the longest time the common wisdom was to prefer multiple simple queries over a single crazy JOIN. because db's lived on spinning rust and those IOs were expensive.
<FromGitter>
<7sidedmarble_twitter> Well, another thought I had: since crystal has such nice concurrency structures, would it be feasible to cache repeated queries per-request a la activerecord and return from the cache, but spin off a fiber to actually perform the query later and bust the cache if the results don't match up?
<FromGitter>
<7sidedmarble_twitter> I don't know how feasible that is though
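That serve-stale-then-revalidate idea could be sketched like this; `CACHE` and `run_query` are hypothetical, and a bare Hash like this would need locking under multithreading:

```crystal
CACHE = {} of String => String

def cached_query(sql : String) : String
  if hit = CACHE[sql]?
    # Serve the cached value now; re-check in a background fiber and
    # refresh the entry if the result changed.
    spawn do
      fresh = run_query(sql)
      CACHE[sql] = fresh if fresh != hit
    end
    hit
  else
    CACHE[sql] = run_query(sql)
  end
end
```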
<raz>
nowadays you can put millions of IOPS into a box the size of a shoe, and it's often better to use a few of them rather than doing multiple roundtrips
<FromGitter>
<Blacksmoke16> still seems like premature optimization to me :P
<FromGitter>
<7sidedmarble_twitter> I'm talking about as a strategy to implement SQL-level caching in Lucky's ORM
<FromGitter>
<7sidedmarble_twitter> cause yeah, the problem is you can't know if the cached values have changed before you read them
<FromGitter>
<7sidedmarble_twitter> so maybe you should just give out the cache and check later
<FromGitter>
<7sidedmarble_twitter> then when the next request comes it'll hopefully be accurate
<raz>
well, everything that can save roundtrips (to the db or even better: to the user) is good
<FromGitter>
<7sidedmarble_twitter> I mean it kinda makes sense
<FromGitter>
<7sidedmarble_twitter> every caching strategy is weighing risking giving out old data
<FromGitter>
<7sidedmarble_twitter> the more sure you want to be the more time it takes
<raz>
blacksmoke: shhh, we're long past that point! the fun in engineering is not about needs but about wants :D
<FromGitter>
<Blacksmoke16> i want a program that writes my programs for me
<FromGitter>
<7sidedmarble_twitter> yeah this is just indulgence lol
<FromGitter>
<didactic-drunk> I thought Rails checked `model.updated_at` for partial caching, never returning stale data.
<raz>
Blacksmoke16: that was just to underline how much I/O you can get out of a db nowadays (i.e. JOINs aren't as scary as they used to be)
<FromGitter>
<Blacksmoke16> ah gotcha
<FromGitter>
<Blacksmoke16> i've been wanting to get one of those nvme SSDs
zorp_ has quit [Ping timeout: 258 seconds]
<raz>
a few years back that stuff was measured in like the 100k/s ballpark before it got prohibitively expensive. nowadays you can buy yourself a box that pushes 20MM IOPS for little more than the price of a macbook
<raz>
(actually i lied, 20MM would still be expensive. but i think 8MM is prosumer budget now - two of the cards that linus shows in that video)
deavmi has quit [Ping timeout: 258 seconds]
deavmi has joined #crystal-lang
<jhass>
Stephie: straight-shoota WorksOnArm would love more than one applicant listed in the application, can I add one (or both) of you? :)
baweaver has joined #crystal-lang
<FromGitter>
<xmonader> > @xmonader would have to define that method, just because something can be serialized into `JSON` doesn't mean you can automatically convert it to a hash ⏎ ⏎ including Crinja::Object fixed it ;)
<FromGitter>
<Blacksmoke16> 👍
<Stephie>
jhass: yep
<jhass>
Stephie: great, it's public and asks for an email address, I'll pick the one from your GH profile unless you have another preference :) Also say if you want to proofread
<FromGitter>
<xmonader> how do i initialize a nested field in json in json serializable?
<FromGitter>
<Blacksmoke16> is that property in the initializer?
<FromGitter>
<xmonader> yes, but it's custom object
<FromGitter>
<xmonader> and defining it as property didn't work
<FromGitter>
<xmonader> @Blacksmoke16 that's what im trying to do and when i did intialize and force profile to be set to Profile.new, I couldn't see it updated ` @profile = Profile.from_json("{}")`
<FromGitter>
<Blacksmoke16> does the json you're creating this from have a `profile` object in it?
<FromGitter>
<xmonader> yes
<FromGitter>
<Blacksmoke16> so whats the problem then? Assuming both those types have `JSON::Serializable` included it should just work
<FromGitter>
<Blacksmoke16> like how location works in that example
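For reference, the nested case needs no defaults or custom `initialize` at all when both types include `JSON::Serializable`; a minimal illustration:

```crystal
require "json"

class Profile
  include JSON::Serializable
  getter bio : String
end

class User
  include JSON::Serializable
  getter name : String
  getter profile : Profile # nested object, deserialized automatically
end

user = User.from_json(%({"name":"x","profile":{"bio":"hi"}}))
user.profile.bio # => "hi"
```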
<FromGitter>
<xmonader> @Blacksmoke16 problem is I can't define ⏎ `property profile = Profile.new` ⏎ as default, and when I define it in the initialize it doesn't get mapped to the actual values :(
<FromGitter>
<Blacksmoke16> why does it need to have a default?
<FromGitter>
<Blacksmoke16> if its coming from the json
<FromGitter>
<xmonader> even without the default I get empty values instead :)
<FromGitter>
<kinxer> Given that I don't think anyone uses any of my open-source software yet (which mostly consists of one shard), it shouldn't be disruptive to change my GitHub handle, right?
<FromGitter>
<Blacksmoke16> github sets up redirects anyway
<FromGitter>
<Blacksmoke16> should still work
<FromGitter>
<Blacksmoke16> assuming no one takes your old one ofc
<FromGitter>
<kinxer> A'ight, cool.
<FromGitter>
<Blacksmoke16> so id be sure to update links and stuff just in case, but in the mean time it'll be fine
mistergibson has joined #crystal-lang
<ryanprior>
iirc GitHub sets up redirects when you change a repo name, but not when you change your username.
<jhass>
it does setup up for a username
<jhass>
but only one level, if you change again the initial hop is broken
<FromGitter>
<kinxer> I'm not a web dev, so I'm not as familiar with that side of the standard library.
<jyc>
ah, thanks! I think it got renamed to URI.encode. Thanks for checking!
_ht has quit [Quit: _ht]
<FromGitter>
<kinxer> No problem. I'm glad you've found what you need.
Human_G33k has joined #crystal-lang
HumanG33k has quit [Ping timeout: 260 seconds]
zorp_ has quit [Ping timeout: 258 seconds]
duane has quit [Ping timeout: 260 seconds]
jyc has quit [Ping timeout: 245 seconds]
<FromGitter>
<Daniel-Worrall> ```code paste, see link``` ⏎ ⏎ Just did this to handle % download on large files. Does anyone have a progress bar shard recommendation? Bonus points for it replacing the old message and handling multiple bars [https://gitter.im/crystal-lang/crystal?at=5edec8299da05a060a5d8110]