rubydoc has quit [Remote host closed the connection]
rubydoc has joined #ruby
kristian_on_linu has quit [Remote host closed the connection]
rubydoc has quit [Remote host closed the connection]
rubydoc has joined #ruby
BSaboia has joined #ruby
ChmEarl has joined #ruby
Akem has quit [Ping timeout: 240 seconds]
fercell has quit [Ping timeout: 258 seconds]
Akem has joined #ruby
BSaboia has quit [Quit: This computer has gone to sleep]
BSaboia has joined #ruby
teardown has joined #ruby
teardown has quit [Client Quit]
BSaboia has quit [Quit: This computer has gone to sleep]
teardown has joined #ruby
fercell has joined #ruby
BSaboia has joined #ruby
Frankenlime has quit [Quit: quit]
also_uplime has joined #ruby
imode has joined #ruby
cthulchu has joined #ruby
Rudd0 has joined #ruby
BSaboia has quit [Quit: This computer has gone to sleep]
TCZ has quit [Quit: Leaving]
BSaboia has joined #ruby
teardown has left #ruby [#ruby]
teardown has joined #ruby
cd has joined #ruby
danielk43[m] has joined #ruby
maryo has joined #ruby
teardown has quit [Quit: leaving]
fercell has quit [Quit: WeeChat 2.9]
BSaboia has quit [Quit: This computer has gone to sleep]
BSaboia has joined #ruby
BSaboia has quit [Client Quit]
cnsvc has joined #ruby
regedit has joined #ruby
bmurt has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
imode has quit [Quit: WeeChat 2.9]
Emmanuel_Chanel has quit [Ping timeout: 265 seconds]
BSaboia has joined #ruby
banisterfiend has joined #ruby
maryo has quit [Quit: Leaving]
BSaboia has quit [Read error: Connection reset by peer]
vondruch has quit [Ping timeout: 258 seconds]
Emmanuel_Chanel has joined #ruby
bmurt has joined #ruby
burgestrand has quit [Quit: burgestrand]
Guest7181 has joined #ruby
Emmanuel_Chanel has quit [Read error: No route to host]
Emmanuel_Chanel has joined #ruby
ellcs has joined #ruby
phaul has joined #ruby
jwr has joined #ruby
<jwr>
Can anybody tell me why bundler is complaining about a lack of credentials for contribsys when those credentials have seemingly already been set up? https://pastebin.com/raw/fwieycfS
bmurt has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
imode has joined #ruby
moeSizlak has joined #ruby
<moeSizlak>
sequel is using N'string' quotes, and it's noticeably slower than just 'string'
<moeSizlak>
how can i fix that
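A hedged sketch of one fix, assuming Sequel's Microsoft SQL Server (tinytds) adapter, which is where N'...' unicode string literals come from; connection details here are hypothetical:

    require 'sequel'

    # hypothetical connection; the tinytds adapter targets SQL Server
    DB = Sequel.connect(adapter: 'tinytds', host: 'localhost',
                        database: 'mydb', user: 'sa', password: 'secret')

    # N'...' quoting comes from unicode string literals (on by default);
    # turning them off makes Sequel emit plain 'string' literals
    DB.mssql_unicode_strings = false

Plain literals avoid the implicit conversion against varchar columns, which is typically where the slowdown comes from.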
<jwr>
oh, i see, i had a `.bundle/config` in the root of my repo, which is also where i ran my `bundle install`, but since I was using `--gemfile=gemfiles/Gemfile`, I also needed a config at `gemfiles/.bundle/config`. got it.
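A sketch of that layout, with a hypothetical credentials variable; Bundler resolves the local .bundle/config relative to the Gemfile in use, so each Gemfile root needs its own entry (or use the environment variable form, which applies everywhere):

    # gems.contribsys.com is the Sidekiq Pro gem server
    bundle config set --local gems.contribsys.com "$CONTRIBSYS_CREDENTIALS"
    (cd gemfiles && bundle config set --local gems.contribsys.com "$CONTRIBSYS_CREDENTIALS")

    # alternative: one environment variable covers every Gemfile location
    export BUNDLE_GEMS__CONTRIBSYS__COM="$CONTRIBSYS_CREDENTIALS"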
weaksauce has joined #ruby
maryo has joined #ruby
burgestrand has joined #ruby
cgfbee has joined #ruby
kneefraud has quit [Remote host closed the connection]
bvdw has quit [Remote host closed the connection]
bvdw has joined #ruby
bvdw has quit [Remote host closed the connection]
bvdw has joined #ruby
bvdw has quit [Remote host closed the connection]
bvdw has joined #ruby
bvdw has quit [Remote host closed the connection]
bvdw has joined #ruby
bvdw has quit [Remote host closed the connection]
bvdw has joined #ruby
bvdw has quit [Remote host closed the connection]
bvdw has joined #ruby
dionysus69 has quit [Ping timeout: 272 seconds]
BSaboia has joined #ruby
bvdw has quit [Remote host closed the connection]
bvdw has joined #ruby
bvdw has quit [Remote host closed the connection]
rippa has joined #ruby
mwlang has joined #ruby
<mwlang>
I am working on releasing a new gem and have a general question about logging. Is it more common to provide a default Logger to STDOUT or to /dev/null (silent logging) with the expectation that the user must override to configure something more useful? Or another way to view this question: Which is your own preference and why?
maryo87 has joined #ruby
<havenwood>
mwlang: Yeah, default Logger to STDOUT with a configurable option to swap in your own logger. systemd assumes logging to STDOUT, as do many other modern tools, so that makes sense as a default. If logging is a real consideration, making it configurable can be worth it so more folks can integrate their own logging.
maryo has quit [Ping timeout: 264 seconds]
bmurt has joined #ruby
<mwlang>
havenwood: thanks for that feedback. I’m definitely making it configurable and it’s expected the user will simply supply their own instantiated, self-configured Logger vs. me implementing all those things within the gem itself.
moeSizlak has left #ruby ["Leaving"]
<clemens3>
if it is a gem, why would anybody but you want to see the log output?
ruurd has quit [Read error: Connection reset by peer]
ruurd has joined #ruby
teardown has joined #ruby
teardown has quit [Client Quit]
rippa has quit [Read error: Connection reset by peer]
rippa has joined #ruby
<havenwood>
clemens3: A gem is just a library, so I imagine this is one that has a logging aspect. If a gem does something you want to keep a written record of, you log it.
teardown has joined #ruby
<adam12>
I'd prefer a null logger with the option to configure one (since I'd probably pass in my global application logger).
<adam12>
Unless it's a CLI tool, which probably should configure a logger out of the box.
<mwlang>
For what it’s worth, it’s an API wrapper library. The logging is primarily for debugging purposes. It’s expected (at least by me) that it’s useful to see log output on STDOUT during development while you’re composing your API calls with simple Ruby scripts, and then to add your own logger at the point you’re ready to build a real solution suitable for deployment.
teardown has quit [Quit: leaving]
banisterfiend has quit [Ping timeout: 246 seconds]
banisterfiend has joined #ruby
banisterfiend has quit [Ping timeout: 260 seconds]
davispuh has joined #ruby
<adam12>
mwlang: In that case, maybe a default logger to stdout but with the level set to warn. Then allow someone to crank it down to debug.
<adam12>
In reality I'm not sure it matters much. I'd probably pass in my own logger anyways.
<mwlang>
that’s exactly what I had in mind. :-)
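A minimal sketch of that pattern, with a hypothetical gem namespace: STDOUT at WARN by default, fully swappable by the user:

    require 'logger'

    module MyGem # hypothetical namespace
      class << self
        # let users inject their own instantiated, self-configured logger
        attr_writer :logger

        # default: STDOUT at WARN; crank down to DEBUG while developing
        def logger
          @logger ||= Logger.new($stdout, level: Logger::WARN)
        end
      end
    end

    # a user passing in their global application logger:
    MyGem.logger = Logger.new('log/app.log', level: Logger::DEBUG)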
roshanavand has joined #ruby
burgestrand has quit [Quit: burgestrand]
banisterfiend has joined #ruby
banisterfiend has quit [Remote host closed the connection]
banisterfiend has joined #ruby
teardown has joined #ruby
phaul has quit [Ping timeout: 240 seconds]
phaul has joined #ruby
burgestrand has joined #ruby
s2013 has joined #ruby
banisterfiend has quit [Quit: banisterfiend]
mwlang has quit [Quit: mwlang]
rippa has quit [Read error: Connection reset by peer]
rippa has joined #ruby
bmurt has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
teardown has quit [Quit: leaving]
BSaboia has quit [Quit: This computer has gone to sleep]
ur5us has joined #ruby
hiroaki has joined #ruby
BSaboia has joined #ruby
lucasb has joined #ruby
maryo87 has quit [Quit: Leaving]
JayDoubleu has quit [*.net *.split]
Mutsuhito has quit [*.net *.split]
arekushi has quit [*.net *.split]
podman has quit [*.net *.split]
podman has joined #ruby
JayDoubleu has joined #ruby
Mutsuhito has joined #ruby
rippa has quit [Read error: Connection reset by peer]
rippa has joined #ruby
teardown has joined #ruby
rippa has quit [Read error: Connection reset by peer]
teardown_ has joined #ruby
teardown_ has quit [Client Quit]
rippa has joined #ruby
teardown has quit [Quit: leaving]
teardown has joined #ruby
teardown has quit [Client Quit]
teardown has joined #ruby
teardown has quit [Client Quit]
teardown has joined #ruby
rippa has quit [Quit: {#`%${%&`+'${`%&NO CARRIER]
clinth has quit []
clinth has joined #ruby
noizex has quit [Remote host closed the connection]
noizex has joined #ruby
teardown has quit [Quit: leaving]
m_antis has joined #ruby
m_antis has quit [Client Quit]
teardown has joined #ruby
teardown has quit [Client Quit]
s2013 has quit [Ping timeout: 264 seconds]
teardown has joined #ruby
sarmiena_ has joined #ruby
<sarmiena_>
I'm a pretty big newb when it comes to compression... but I'd like to use it for large text and store it in elasticsearch. However, i notice when i use ActiveSupport::Gzip.compress('compress me!').bytesize it's actually larger than the original text
<sarmiena_>
how should i go about thinking about this?
chouhoulis has joined #ruby
<leftylink>
indeed, it's a common thing with compression algorithms: compressing a small input doesn't shrink it at all, it actually makes it larger, since the format's fixed header overhead outweighs any savings
<leftylink>
to find the smallest size at which compression can be achieved, we might write some code that looks like this
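Along these lines, a minimal sketch using the standard library's Zlib:

    require 'zlib'

    # grow a highly repetitive input until gzip actually shrinks it
    (1..200).each do |n|
      input = 'a' * n
      compressed = Zlib.gzip(input)
      if compressed.bytesize < input.bytesize
        puts "gzip first wins at #{n} bytes (#{compressed.bytesize} compressed)"
        break
      end
    end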
<leftylink>
we can also try some degenerate cases where the input is incompressible; the result should be obvious, but it's worth checking anyway just to make sure we haven't accidentally messed something up
<havenwood>
sarmiena_: If you really have strings that short you'd like to compress, you might look at a compression algo meant for small strings like Shoco or Smaz.
<sarmiena_>
havenwood i'm compressing emails, which may or may not be small
<sarmiena_>
also the emails might have embedded images in them, so not sure how that's going to play out either
DTZUZU has quit [Read error: Connection reset by peer]
<havenwood>
sarmiena_: I'd guess emails are roughly 500 bytes of text, on average.
DTZUZU has joined #ruby
<havenwood>
sarmiena_: I'm seeing about a 50% compression with gzip and 90 random words.
<sarmiena_>
ok that's good
<havenwood>
words.bytesize #=> 600
<havenwood>
gzip.bytesize #=> 306
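A sketch of that measurement, assuming the usual Unix word-list path:

    require 'zlib'

    # sample 90 random dictionary words and compare raw vs gzipped size
    words = File.readlines('/usr/share/dict/words', chomp: true).sample(90).join(' ')
    puts words.bytesize             # ~600 in havenwood's run
    puts Zlib.gzip(words).bytesize  # ~306, roughly 50% compression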
<sarmiena_>
running this on my db instance: select id, length(email_content) from sent_mails where email_content is not null order by length(email_content) desc limit 500;
<sarmiena_>
40 million records worth 250GB
<sarmiena_>
gonna take a while hah
stoffus_ has joined #ruby
<havenwood>
sarmiena_: How about using gzip at the NGINX (or equivalent) layer and having Elasticsearch use best_compression for index.codec?
<havenwood>
I guess that deflates/inflates twice, but ¯\_(ツ)_/¯
<havenwood>
Seems nice to let Elasticsearch handle its own compression.
<sarmiena_>
havenwood my man. haha i think we're on the same brainwave. so i already did it on ES with index: false on the email_content. and i'm seeing in my dev environment it went from 1.5MB to 3ishMB for 243 records
<sarmiena_>
so i just got a little concerned
<sarmiena_>
and my next venture was to do the NGINX thing and store it to disk and let NGINX be the gatekeeper
burgestrand has quit [Quit: burgestrand]
stoffus has quit [Ping timeout: 260 seconds]
<sarmiena_>
problem is that storing it in the DB is horrible because of the page size limit per row, and PG does this TOAST stuff as well, which creates problems for IO
<havenwood>
sarmiena_: Can you just use the Accept-Encoding header? Then let Elasticsearch deflate?
<sarmiena_>
havenwood hmm not sure i follow. i read that ES automatically deflates the data without having to do anything else?
<havenwood>
sarmiena_: It compresses with LZ4 by default, but `best_compression`, which uses DEFLATE, has to be opted into.
<havenwood>
sarmiena_: "The default value compresses stored data with LZ4 compression, but this can be set to best_compression which uses DEFLATE for a higher compression ratio, at the expense of slower stored fields performance. If you are updating the compression type, the new one will be applied after segments are merged."
<sarmiena_>
ah hah!
<havenwood>
sarmiena_: I'm not quite sure I follow your case, but it seemed to me like HTTP Accept-Encoding: deflate would handle it over the wire and Elasticsearch best_compression would handle it on disk.
niceperl has joined #ruby
ellcs has quit [Ping timeout: 260 seconds]
<sarmiena_>
for the accept-encoding, is that assuming i deflate at the ruby level, then post to ES with that encoding? then ES would expand it, then deflate it again?
<havenwood>
sarmiena_: You can do deflation via Accept-Encoding at the Ruby level (Rack middleware, usually), but it's more often handled at the NGINX layer.
<havenwood>
sarmiena_: Yeah, that'd be asking for a deflated version via HTTP, inflating it, and letting Elasticsearch deflate it again.
<havenwood>
sarmiena_: You could use gzip over the wire and deflate on disk or deflate for both.
<sarmiena_>
i don't care too much about the traffic, tbh. it's all private network and i'm making requests directly to ES from the web node (which isn't exposed)
<sarmiena_>
just want it deflated on the disk so it doesn't use so much space
<sarmiena_>
and out of the PG instance
<havenwood>
yeah, then i'd try using the Elasticsearch configuration to have it handle compression itself
<sarmiena_>
right. makes sense
<havenwood>
sarmiena_: often a good idea to just have NGINX do it: gzip on;
<sarmiena_>
gotcha
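For reference, a minimal form of that NGINX setup (the extra directives and values here are illustrative):

    gzip on;
    gzip_types text/plain application/json;
    gzip_min_length 256;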
<sarmiena_>
gonna see about allowing best_compression
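A hedged sketch of opting in, assuming a local Elasticsearch node and reusing sent_mails as a hypothetical index name; index.codec is set at index creation (or takes effect on existing data only as segments merge):

    require 'net/http'
    require 'json'

    # create an index whose stored fields use DEFLATE instead of the LZ4 default
    uri = URI('http://localhost:9200/sent_mails')
    request = Net::HTTP::Put.new(uri, 'Content-Type' => 'application/json')
    request.body = { settings: { index: { codec: 'best_compression' } } }.to_json
    response = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(request) }
    puts response.body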
RickHull has joined #ruby
<RickHull>
what's a good way to iterate over an array that consumes the array? just use #each and then reassign to an empty array?
<RickHull>
for example, doing a one-time operation that returns the most common element
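A sketch of the consuming approach via Array#pop, using the most-common-element example (names here are hypothetical):

    # tally while emptying the array in place, then return the modal element
    def most_common!(arr)
      counts = Hash.new(0)
      counts[arr.pop] += 1 until arr.empty?
      counts.max_by { |_, count| count }&.first
    end

    nums = [1, 2, 2, 3, 2]
    p most_common!(nums) #=> 2
    p nums               #=> []

Note that reassigning the variable after #each only rebinds that one reference; #pop (or #clear) actually empties the array that other holders see.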
teardown has joined #ruby
<RickHull>
also, maybe thanks to MS, the GitHub gist link goes to /discover, which doesn't make it obvious how to post a gist