amyers has quit [Remote host closed the connection]
amyers has joined #sandstorm
rafaelmartins has quit [Read error: Connection reset by peer]
rafaelmartins has joined #sandstorm
prettyvanilla has quit [Quit: Konversation terminated!]
frigginglorious has joined #sandstorm
tdfischer has quit [Read error: Connection reset by peer]
tdfischer has joined #sandstorm
asmyers has joined #sandstorm
amyers has quit [Ping timeout: 258 seconds]
wolcen has joined #sandstorm
sh_smith has joined #sandstorm
bodisiw has joined #sandstorm
asmyers has quit [Ping timeout: 264 seconds]
sydney_u1tangle has quit [Remote host closed the connection]
asmyers has joined #sandstorm
asmyers has quit [Read error: Connection reset by peer]
asmyers has joined #sandstorm
asmyers has quit [Remote host closed the connection]
asmyers has joined #sandstorm
wolcen has quit [Ping timeout: 264 seconds]
prettyvanilla has joined #sandstorm
jemc has joined #sandstorm
<dwrensha>
i,i "s&st|m"
wolcen has joined #sandstorm
jemc has quit [Read error: Connection reset by peer]
afuentes has joined #sandstorm
jemc has joined #sandstorm
pie_ has joined #sandstorm
pie_ has quit [Changing host]
pie_ has joined #sandstorm
wolcen_ has joined #sandstorm
* asheesh
waves, morning
frigginglorious has quit [Quit: frigginglorious]
Telesight has joined #sandstorm
<phildini>
Hello lovely people! If I wanted to spin up a sandstorm instance for 234 civic technologists, how big an instance would I need and what would be the quickest way to do that?
<asheesh>
ohai
<asheesh>
Quick notes on capacity planning:
<asheesh>
# active grains * 100MB = RAM needed, as a ballpark
<asheesh>
You could start with a 4GB RAM instance, which works out to roughly 40 active grains, and see how that fares.
<asheesh>
People have very few active grains per user since we auto-scale the grains down to 0 processes when not being used.
<asheesh>
You should try to have some kind of way to detect high memory use conditions.
<asheesh>
This is assuming that cost is a factor. Oh and DigitalOcean is a fine way to get all that.
<asheesh>
If cost is no factor, then get a high-RAM VM somewhere (64GB) and then don't worry ever.
<asheesh>
(640GB RAM? Who knows)
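[A minimal sketch of the ballpark math above, assuming the ~100MB-per-active-grain figure; the constant, function name, and printed numbers are illustrative, not measured.]

```typescript
// Back-of-the-envelope capacity estimate: active grains * 100MB ≈ RAM needed.
// The 100MB figure is a rough average assumption; real apps range from <5MB to >500MB.
const MB_PER_ACTIVE_GRAIN = 100;

function ramNeededMB(activeGrains: number): number {
  return activeGrains * MB_PER_ACTIVE_GRAIN;
}

// e.g. a 4GB (4096MB) instance leaves headroom for roughly 4096 / 100 ≈ 40 active grains
console.log(ramNeededMB(40)); // 4000
```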
<phildini>
oh interesting.
<phildini>
and that RAM-per-grain is a starting constant, I'm guessing?
<phildini>
like, having 10 users using a grain is like 110MB and not 1000MB, yes?
<phildini>
Cost is totally a factor.
<phildini>
We want low cost, but we also want it to be as highly available as possible for members.
<dwrensha>
phildini: right, once a grain is running, having many simultaneous users does not typically cause it to need much more memory
<phildini>
ok. is there a way to batch-provision sandstorm users yet?
<dwrensha>
apps have a wide distribution of memory requirements
<dwrensha>
e.g. GitLab takes >500MB for an open grain, GitWeb takes ~10MB
<dwrensha>
Collections takes < 5MB
<dwrensha>
I think we have some basic batch-provisioning features in Sandstorm For Work
<mokomull>
I'm curious as an outsider - does RAM tend to be the limiting factor over, say, disk I/O with
<mokomull>
I'm curious as an outsider - does RAM tend to be the limiting factor over, say, disk I/O when you end up with multiple databases running simultaneously?
<mokomull>
(one of these days I'm going to learn that ^M is enter, and M is next to W in Dvorak)
<zarvox>
mokomull: right now I'd say that RAM is the limiting factor, largely due to 1) language runtimes using comparatively large amounts of RAM even when idle and 2) the usage patterns of current apps
<zarvox>
on average, grains don't hold enough data or do enough I/O for I/O to be a major consideration
<zarvox>
that could change with the right set of apps, though!
<mokomull>
zarvox: And I suppose, with a small resident set for each process, I/O operations are "fast" by -- using RAM! :)
<zarvox>
Yeah, the amount of RAM used by e.g. a Python application to just idle with the code loaded is usually way more than the total size of /var for that grain
<mokomull>
cPython's memory management still gives me terrible memories.
<Lord>
was there an update about mail?
<Lord>
my sandstorm instance can't send mail anymore
<Lord>
in fact my mail server is in another container, and sandstorm used to talk to it through a local IP address
<dwrensha>
Lord: yep
<Lord>
but the tls cert doesn't match the ip address
<dwrensha>
we're apparently more strict about certain things now
<Lord>
i use port 25 but sandstorm does STARTTLS
<Lord>
ok
<Lord>
so there is no way to explicitly disable starttls or disable cert verification?
<dwrensha>
hm. according to the nodemailer README: "options.secure if true the connection will only use TLS. If false (the default), TLS may still be upgraded to if available via the STARTTLS command."
<dwrensha>
and then there is another option: "options.ignoreTLS if this is true and secure is false, TLS will not be used (either to connect, or as a STARTTLS connection upgrade command)."
<dwrensha>
I wonder if we should somehow expose that `ignoreTLS` option
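[A hedged sketch of what exposing that option might look like on the nodemailer side; `secure` and `ignoreTLS` are the real nodemailer transport options quoted above, but the host, port, and addresses here are placeholders, and this is not necessarily how Sandstorm wires up its transport.]

```typescript
// Hypothetical transport that never negotiates TLS: no implicit TLS, no STARTTLS upgrade.
import * as nodemailer from "nodemailer";

const transport = nodemailer.createTransport({
  host: "10.0.3.5",   // internal mail container (placeholder IP)
  port: 25,
  secure: false,      // don't start the connection in TLS mode
  ignoreTLS: true,    // ...and don't upgrade via STARTTLS even if the server offers it
});

// Placeholder message, just to exercise the transport.
transport.sendMail(
  {
    from: "sandstorm@example.com",
    to: "admin@example.com",
    subject: "outbound mail test",
    text: "Sent without STARTTLS.",
  },
  (err, info) => console.log(err ?? info.response)
);
```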
<Lord>
an option would be interesting
<Lord>
for now i'm stuck :-(
<sknebel>
Lord: can you turn off the STARTTLS announcement for localhost in your mail-daemon?
<zarvox>
Lord: eeep, sorry for the breakage :(
<zarvox>
If you can make your mail daemon not announce STARTTLS in the HELO/EHLO reply, that ought to fix things for now, as sknebel suggested.
<Lord>
i have to read postfix doc
<mokomull>
Or giving the internal IP a name that is in the cert (e.g. via subjectAltName) should help :)
<mokomull>
</unhelpful-security-guy-perspective>
<zarvox>
Lord: in main.cf, smtpd_use_tls = no, if you can afford to disable STARTTLS for all inbound mail
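[A hedged sketch of the two Postfix approaches discussed above: the global main.cf knob zarvox mentions, and a per-client variant of sknebel's idea using Postfix's EHLO keyword filtering so only the Sandstorm container stops seeing the STARTTLS offer. The CIDR range and map path are placeholders; run `postfix reload` after changing either file.]

```
# /etc/postfix/main.cf
# Option 1 (global): stop offering STARTTLS to any inbound client.
smtpd_use_tls = no

# Option 2 (per-client): keep STARTTLS for everyone else, but hide the
# STARTTLS keyword from the Sandstorm container's network only.
smtpd_discard_ehlo_keyword_address_maps = cidr:/etc/postfix/sandstorm_ehlo.cidr

# /etc/postfix/sandstorm_ehlo.cidr (placeholder network; adjust to your setup)
10.0.3.0/24    silent-discard, starttls
```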
<Lord>
hooo mokomull : good idea ! /etc/hosts
<mokomull>
Lord: I didn't suggest that directly because I'm not so sure how easy that is to wedge into your Sandstorm container. Other equivalents depend on how your local DNS is set up, I suppose.
<zarvox>
Either way, I'll bring up what we should do here at our meeting in 3 minutes. Ideally, we'd like to have the ability to require STARTTLS, and to opportunistically encrypt where possible, but we'd also like to not break people's existing setups. :S
<mokomull>
zarvox: ISTR Google did something equivalent: they opportunistically upgrade *but* if you can't agree on TLS parameters then it outright fails to deliver rather than falling back to plaintext.
<mokomull>
zarvox: Hm, looking back at the bounce I received, it looks like Google issued a STARTTLS and the *remote* end responded with "454 TLS currently unavailable", so Google took that to mean an outright delivery failure.
<mokomull>
Lord: That's typically a good choice, yes :) You could create a mail.internal.example.com that resolves to your internal IP, and add that as a subjectAltName to the certificate, though.
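[A sketch of that naming approach, with placeholder names and addresses: resolve a dedicated hostname to the internal IP (via local DNS, or an /etc/hosts entry wherever the Sandstorm shell resolves names, subject to mokomull's container caveat above), then reissue the mail server's certificate with that hostname as a subjectAltName.]

```
# /etc/hosts (or a local DNS record): placeholder name and IP
10.0.3.5   mail.internal.example.com

# openssl request config fragment used when reissuing the cert
[ v3_req ]
subjectAltName = DNS:mail.internal.example.com
```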
<mokomull>
*puts on ops-guy hat* naming things is preferable to IP addresses anyway