kentonv changed the topic of #sandstorm to: Welcome to #sandstorm: home of all things sandstorm.io. Say hi! | Have a question but no one is here? Try asking in the discussion group: https://groups.google.com/group/sandstorm-dev | Public logs at https://botbot.me/freenode/sandstorm/
cbaines has quit [Quit: bye]
indiebio has quit [Remote host closed the connection]
indiebio has joined #sandstorm
<ocdtrekkie> TIL zarvox really likes sed.
rolig has quit [Ping timeout: 268 seconds]
rolig has joined #sandstorm
jemc has joined #sandstorm
pie___ has quit [Ping timeout: 264 seconds]
jemc has quit [Ping timeout: 240 seconds]
samba_ has joined #sandstorm
samba_ has quit [Ping timeout: 268 seconds]
blueminder has joined #sandstorm
samba_ has joined #sandstorm
samba_ has quit [Ping timeout: 276 seconds]
ripdog has quit [Ping timeout: 240 seconds]
ripdog has joined #sandstorm
pie_ has joined #sandstorm
AZero has quit [Ping timeout: 264 seconds]
xet7 has joined #sandstorm
pie_ has quit [Ping timeout: 240 seconds]
pie_ has joined #sandstorm
pie_ has quit [Ping timeout: 260 seconds]
pie_ has joined #sandstorm
pie_ has quit [Ping timeout: 260 seconds]
pie_ has joined #sandstorm
ogres has joined #sandstorm
pie_ has quit [Ping timeout: 248 seconds]
pie_ has joined #sandstorm
keturn has quit [Ping timeout: 256 seconds]
jemc has joined #sandstorm
keturn has joined #sandstorm
samba_ has joined #sandstorm
pie_ has quit [Ping timeout: 260 seconds]
ogres has quit [Quit: Connection closed for inactivity]
keturn has quit [Ping timeout: 240 seconds]
benharri is now known as bhh
samba_ has quit [Ping timeout: 248 seconds]
pie_ has joined #sandstorm
samba_ has joined #sandstorm
jemc has quit [Ping timeout: 260 seconds]
tg has quit [Ping timeout: 240 seconds]
Telesight has joined #sandstorm
dreamcatch22 has joined #sandstorm
<dreamcatch22> Quick question. Why does Sandstorm not have a LAMP stack app already installed, like Cloudron?
dreamcatch22 has quit [Client Quit]
dreamcatch22 has joined #sandstorm
<TimMc> strange question
<dreamcatch22> haha sorry. I guess without context it is a very strange question. I am not a developer but kinda tech geeky. Just don't know how to install a PHP script off of codecanyon on a Sandstorm grain
<TimMc> Oh, you're still here, good!
<TimMc> I thought you'd left
<TimMc> dreamcatch22: Have you found the grain packaging example?
<dreamcatch22> No. I have not. But that makes sense.
<TimMc> Yep, although it might be better read in the published context: https://docs.sandstorm.io/en/latest/vagrant-spk/packaging-tutorial/
<dreamcatch22> But this is way over my expertise. That is why I was wondering why such a popular stack wouldn't just come pre-packaged
<TimMc> Sandstorm's packaging mechanism, as I understand it (and I am by no means an expert), works by having you run your app in a virtual machine; it then grabs all the data and binaries the app touches and bundles those up as a container.
<dreamcatch22> Interesting. thanks for that link. Much easier on the eyes
<TimMc> So you basically bring your own LAMP stack, if you're using LAMP.
<dreamcatch22> Yes. That is what I was picking up. Once you create it, you can use it over and over again
<TimMc> It looks like there's a built-in LEMP stack that you can use as a template, actually. (E = nginx)
<bhh> so lemp is lamp where s/apache/nginx?
<bhh> how do you get e from nginx?
<mokomull> pronounced like "engine ecks"?
<bhh> oh right...
<bhh> i still pronounce it n-jinx in my head
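For reference, a rough sketch of the vagrant-spk workflow TimMc is describing, based on the packaging tutorial linked above; exact stack names and options may vary between vagrant-spk versions, and the output filename here is just an example.

```sh
# From the directory containing your app's source:

# Create a dev VM preconfigured with the LEMP stack (nginx/MySQL/PHP).
vagrant-spk setupvm lemp

# Boot the VM and provision the stack inside it.
vagrant-spk vm up

# Generate a sandstorm-pkgdef.capnp describing the app.
vagrant-spk init

# Run the app in dev mode; Sandstorm traces which files it touches.
vagrant-spk dev

# Bundle the traced files into a distributable package.
vagrant-spk pack my-app.spk
```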
ocdtr_web has joined #sandstorm
dreamcatch22 has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
<ocdtr_web> dreamcatch22 TimMc: It's probably worth noting that the lemp stack currently has "issues".
<ocdtr_web> I am slightly too late, as it appears.
<ocdtr_web> I also really want a "lamp stack app", I've mentioned it before. Because I have one-off web apps I'd put on Sandstorm that don't warrant or make sense as app packages themselves.
<TimMc> What would that look like? A generic LAMP package you could drop scripts into?
<ocdtr_web> Yeah.
<ocdtr_web> So like, if you have shared hosting on a web host like HostGator or GoDaddy, you can't install any software.
<TimMc> Got it. I was having trouble understanding their original question, but it makes more sense now.
<ocdtr_web> You get a home directory, PHP and MySQL are already installed, and you can upload some PHP files, point them at your SQL database, and you're cookin'.
Zarutian_PI has quit [Read error: Connection reset by peer]
<ocdtr_web> I have a lot of stuff on traditional web hosting because it's just not that easy to grab that stuff and put it on Sandstorm.
Zarutian_PI has joined #sandstorm
<ocdtr_web> Once we get the LEMP stack fixed again, it probably wouldn't be too hard to make a generic LEMP app. You'd need to add an uploader and have it pre-create a database for your code to connect to, and that's pretty much it.
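A hypothetical sketch of what such a generic app's grain startup script might do. This is illustrative only: the paths, service names, and database name are assumptions, not the actual lemp stack's launcher.

```sh
#!/bin/bash
# Hypothetical grain launcher for a "generic LEMP" app (illustrative only).
set -euo pipefail

# Grain-writable storage lives under /var inside a Sandstorm grain.
mkdir -p /var/lib/mysql /var/www

# Start MySQL with its data directory inside the grain.
mysqld_safe --datadir=/var/lib/mysql &

# Wait for MySQL, then pre-create a database for uploaded scripts to use.
until mysqladmin ping >/dev/null 2>&1; do sleep 1; done
mysql -e "CREATE DATABASE IF NOT EXISTS app;"

# Start PHP-FPM and nginx; uploaded PHP files would be served from /var/www.
php-fpm &
exec nginx -g "daemon off;"
```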
<TimMc> Hmm. I think in such a case I'd have a hard time seeing why I'd not just use... a LAMP stack.
<TimMc> rather than Sandstorm
<TimMc> It's just a few minutes to set up on a generic Linux box.
<ocdtr_web> Security.
<ocdtr_web> Nobody would accuse my legacy PHP apps of being "well-written".
<TimMc> hah
<TimMc> OK, point.
* TimMc nervously shoves his old PHP code under the tablecloth
<ocdtr_web> Throw each one in a sandbox in a Sandstorm grain, and now I have both happy isolation, and the ability to one-click take a backup of the current state of the app, both in terms of files and data.
<ocdtr_web> (I have scripts that take regular backups of the databases. I don't generally do that with files, but 98% of the files associated with any given PHP app I have are the PHP files themselves, which I keep copies of on my PC.)
<TimMc> Mine are in a version-controlled directory on my laptop, and get rsync'd out to my shared hoster.
<ocdtr_web> My website code generally isn't version controlled, but it is backed up on five hard drives which are not all located in the same location.
<TimMc> ++
<ocdtr_web> Not gonna lie, I do get jealous of Cloudron's app availability.
<TimMc> All of my non-cloud backups are currently located in my house, which is making me nervous.
<ocdtr_web> At least you have cloud backups.
<ocdtr_web> But yeah, my hard drives don't get backed up to any cloud locations, so multi-site is kinda a must.
<TimMc> Yes, but where do I have backups of the passwords and keys necessary to retrieve them? :-)
<ocdtr_web> Ah, that would be a problem. Though passwords and keys are small and generally don't change often.
<TimMc> I need to do a backup availability audit on myself at some point to make sure my clever scheme isn't too circular.
<ocdtr_web> Drop your backups on a couple flash drives, and maybe look at a safety deposit box?
<TimMc> Yeah. Currently I have one passphrase-protected key in a flash drive, and another printed out in ASCII, on 2.3 pages.
<ocdtr_web> Since I'm backing up my actual data somewhere, I've got to have an actual computer offsite with Internet connectivity. Which is significantly more of a pain in the rear.
<ocdtr_web> Especially when I lose connectivity to it and have to drive over there to find out what stupid thing it did this time.
<ocdtr_web> (Once I had a failure of the OS drive, had to reinstall the OS, set everything up, drove back home... and a few minutes after I got back home, it went back offline. I realized I forgot to disable sleep, so I had to drive back.)
<TimMc> haha ouch
<bhh> i've been working on getting tarsnap set up
<TimMc> bhh: Yeah, that's my main one. :-)
<TimMc> bhh: It's a little complicated to figure out how to make effective use of read/write/delete/nuke keys, by the way.
<bhh> good deal :)
<bhh> oof yeah
<TimMc> What I'm working on setting up is a way for my Sandstorm box to back itself up, but not have the ability to delete its own backups.
<TimMc> Which is easy enough until you want a different machine to perform rotations, since after you do a delete, the Sandstorm box's cache directory is out of sync.
<TimMc> I think the trick is to always run tarsnap --fsck before a backup, just in case a delete has happened in the meantime. :-)
<TimMc> So my Sandstorm box has a rw key, and the management box has the full key.
<bhh> right
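Roughly what that key split looks like with tarsnap's own tooling. The key file paths, machine name, archive names, and the /opt/sandstorm backup target are assumptions for the example.

```sh
# On the management box: generate the full key (this one can delete/nuke).
tarsnap-keygen --keyfile /root/tarsnap-full.key \
    --user you@example.com --machine sandstorm-box

# Derive a restricted key with only read and write permissions for the
# Sandstorm box, so it can create backups but never delete them.
tarsnap-keymgmt --outkeyfile /root/tarsnap-rw.key -r -w /root/tarsnap-full.key

# On the Sandstorm box: re-sync the local cache in case the management box
# deleted archives since the last run, then take a backup.
tarsnap --keyfile /root/tarsnap-rw.key --fsck
tarsnap --keyfile /root/tarsnap-rw.key -c -f "sandstorm-$(date +%F)" /opt/sandstorm

# On the management box: rotate old archives with the full key.
tarsnap --keyfile /root/tarsnap-full.key -d -f sandstorm-2017-01-01
```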
<bhh> i also have most of my personal files in a syncthing dir that is shared across all of my devices
<bhh> so i could theoretically lose all but one
<ocdtr_web> I've never figured out where to draw the line on personal files at the scale that I can afford to sync them to all devices.
<bhh> documents, configs, some photos
<bhh> have you used syncthing?
<TimMc> ocdtr_web: I finally decided that it was worth syncing my 50 GB or whatever of photos and videos, despite the monthly cost.
<TimMc> They aren't replaceable, after all.
<ocdtr_web> (I sync less than a terabyte of data, but that's the order of magnitude I operate in.)
<ocdtr_web> Photos and videos, mind you, are rarely a problem: your backup software should realize they haven't changed and never resend them.
<bhh> damn, my main syncthing dir is only ~800mb
<bhh> photos is less than 5g
<ocdtr_web> My biggest challenge is that, say, I might have my personal PHP code in one subfolder of where I keep my websites (important), and an expanded copy of a WordPress install in another subfolder (mostly flotsam), and what really murders sync performance is large numbers of small files.
<ocdtr_web> Like a bloody WordPress.
<bhh> wordpress is so bad
<ocdtr_web> I think WordPress has like 15,000 files.
<ocdtr_web> So when you're resyncing, WordPress's 15 MB is 15000 times worse to sync than a single 300 MB video.
<bhh> i think in the case of wordpress, you'd want to keep that in version control outside of your sync software
<ocdtr_web> Note this isn't "where I host my websites". This is "a folder on my computer where I store stuff from websites I deal with".
<ocdtr_web> The ideal case is that I should either dump my copies of WordPress after uploading them to my server, since WordPress changes them anyway, or zip it up in a single file so my sync program only has to care about it once, not 15,000 times.
<ocdtr_web> But sitting there and optimizing my working folders for backup sync is not something I generally have the time or patience for. :P
<bhh> ahh ok, that's totally fair
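For the zip-it-up approach, something this simple keeps the sync tool looking at one file instead of 15,000; the paths are made up for the example.

```sh
# Collapse an expanded WordPress copy into a single archive for syncing.
tar czf websites/example.com/wordpress.tar.gz websites/example.com/wordpress/

# Optionally drop the unpacked tree afterwards so the sync tool only
# tracks the archive (only do this once the live server has the files).
rm -rf websites/example.com/wordpress/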
<TimMc> ocdtr_web: Tarsnap does deduplication, but of course deduplication takes time...
<TimMc> I was also using duplicity for a while, way back when. DIY tarsnap, sort of.
<ocdtr_web> I live in the Windows world. :P
<ocdtr_web> I've played with Windows Server's deduplication, mind you, and it does unholy things to your file system. Unholy. Things.
<ocdtr_web> I like to assume that if I take a given backup copy of mine, and plug it into a PC, that PC will be able to interpret it. So I worry a bit about overcomplicating it.
<ocdtr_web> (For that reason, for instance, I drastically prefer to back up to RAID1.)
<TimMc> Tarsnap and duplicity don't do deduplication of the original, just the backups.
<ocdtr_web> But my point is, like... if I lose the original, cuz my house burns down, maybe I pick up a new machine and drive to my backup site.
<ocdtr_web> How hard is it for me to retrieve the data, at that point?
<TimMc> ah yeah
<TimMc> Tarsnap has a package in the Debian repos, so that's just a minute's work; finding the key where you stashed it might take longer.
<TimMc> And then it's just a matter of listing backups and picking one to restore. Then go to bed. :-P
<TimMc> takes quite a while...
<ocdtr_web> But without dedupe, I can grab my backup, plug it in, and open the file I need then and there.
<ocdtr_web> Rather than a "restore" process.
<TimMc> I don't use deduplication for local backups, just for remote.
<TimMc> No wait, that's not true.
<TimMc> One of my local backups uses deduplication using hardlinks, but it's still as simple as one rsync call to restore it all.
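That hardlink scheme is typically done with rsync's --link-dest; a minimal sketch, with the directory names assumed for the example.

```sh
# Take today's snapshot, hardlinking unchanged files against yesterday's
# snapshot so duplicates cost no extra space.
rsync -a --delete --link-dest=/backup/2017-06-01/ /home/ /backup/2017-06-02/

# Restoring is a single plain rsync back out of any snapshot directory.
rsync -a /backup/2017-06-02/ /home/
```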
<ocdtr_web> Anyways, I don't personally see dedupe as a big benefit. Storage is cheap, but there are only 24 hours in a day. I'd rather have performance than small size. :P Also, YMMV with different dedupe schemes, but with Windows Server, it played heck with differential backups.
<simpson> Deduplication can be a problem: https://tahoe-lafs.org/hacktahoelafs/drew_perttula.html
<ocdtr_web> Oh, I remember, specifically it horrified our replication scheme when we tried it.
<ocdtr_web> Minor changes to raw files led to much larger changes in our dedupe storage at a block level, which made bad things happen to replication performance.
AZero has joined #sandstorm
<TimMc> I should clarify that I'm using deduplication to store something like 30 historical snapshots of my filesystem on the same drive.
<TimMc> I'm not doing deduplication *within* a snapshot.
<TimMc> And all of this is inside a LUKS partition, so it's block level encryption instead of file-level.
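For context, that LUKS layer is standard cryptsetup; a rough sketch, with the device name and mount point as placeholders.

```sh
# One-time: format the backup drive as a LUKS container (destroys its data).
cryptsetup luksFormat /dev/sdX1

# Each backup run: unlock, mount, write snapshots, then lock again.
cryptsetup open /dev/sdX1 backupdrive
mount /dev/mapper/backupdrive /mnt/backup
# ... write snapshot directories under /mnt/backup ...
umount /mnt/backup
cryptsetup close backupdrive
```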
samba_ has quit [Remote host closed the connection]
samba_ has joined #sandstorm
nicoo has quit [Remote host closed the connection]
nicoo has joined #sandstorm
pie_ has quit [Ping timeout: 248 seconds]
samba_ has quit [Ping timeout: 255 seconds]
samba_ has joined #sandstorm
tg has joined #sandstorm
taktoa has quit [Remote host closed the connection]
Telesight has quit [Quit: Leaving.]
pie_ has joined #sandstorm
ocdtr_web has quit [Quit: Page closed]
samba_ has quit [Ping timeout: 255 seconds]
samba_ has joined #sandstorm
<ocdtrekkie> Fixed half of my vagrant-spk stack woes! \o/
<ocdtrekkie> My LESP PR should now work, and the same change will fix half the error messages on LEMP.
<ocdtrekkie> Now just need to deal with MySQL.