asheesh changed the topic of #sandstorm to: Welcome to #sandstorm: home of all things sandstorm.io. Say hi! | Channel glossary: "i,i" means "I have no point, I just want to say". b == thumbs up. | Public logs at https://botbot.me/freenode/sandstorm/ & http://logbot.g0v.tw/channel/sandstorm/today
<ocdtrekkie> You'd think people would realize by now that piping all of your keyboard input to a cloud service might end in Bad Things(TM). http://www.theverge.com/2016/7/29/12326152/swiftkey-bug-backup-sync-down-error-prediction
<digitalcircuit> Good thing Windows doesn't do that by default.. wait. (And Google Keyboard for that matter)
* digitalcircuit 's glad such things won't be an issue with Sandstorm - separation and whatnot.
<ocdtrekkie> digitalcircuit: Yeah, I have to disable a lot of things.
<ocdtrekkie> If only Sandstorm was my operating system too.
<digitalcircuit> Qubes OS, sorta?
<digitalcircuit> Not quite as friendly though.
<ocdtrekkie> I'm kinda sad kentonv is on vacation because when I see a serialization protocol announcement post on HN, I expect to see kentonv's opinion of it.
<GauntletWizard> jparyani: Did you break that again? I'm suddenly getting a failure "remote exception: expected iter != bridgeContext.sessions.end(); Session ID not found; id = 0"
<GauntletWizard> I swear this worked yesterday
<GauntletWizard> Ah, because it actually *does* care about the session ID; it just counts sessions from zero
* asheesh waves
<asheesh> Ah hah GauntletWizard, you're Ted, great to e-see you.
<GauntletWizard> Good to see you too
<GauntletWizard> I am unfrustrated!
<GauntletWizard> And almost ready to actually package this thing up
<asheesh> Egad
<jparyani> Ya sorry was afk, but ya you're correct. I guess it does care about sessionId :)
<GauntletWizard> I misinterpreted what you meant way back; thought that 0 was a fake session id, not the obvious start to real ones
<GauntletWizard> is session-guessing a security concern? I would have assumed them to be randomized, but eh, why?
<asheesh> FWIW, it's being run by the app so I see it as very similar to UNIX file descriptors.
<asheesh> The app could decide to try to read from stdout or something, or write debug data to a network socket it opened rather than to stderr, and it's up to the app to do that if it really wants.
<asheesh> But yeah, I had assumed they'd be random too before I saw code that used them and saw the numbers.
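The sequential-versus-random distinction being discussed can be sketched in a few lines. This is purely illustrative, not Sandstorm's actual bridge code; the point is that a counter is fine when only the app itself ever sees the IDs, much like UNIX file descriptors:

```python
import itertools
import secrets

# Sequential IDs, like a bridge-style counter: predictable by design.
# Acceptable when only the trusted app sees them (cf. fds 0, 1, 2, ...).
sequential_ids = itertools.count()

def next_session_id():
    return next(sequential_ids)

# Random IDs, what you'd reach for if guessing had to be prevented
# (e.g. IDs exposed to untrusted parties):
def random_session_id():
    return secrets.token_hex(16)

first, second = next_session_id(), next_session_id()
print(first, second)        # 0 1 -- trivially guessable
print(random_session_id())  # 32 hex chars, effectively unguessable
```

The trade-off is the same one asheesh draws: predictability is only a vulnerability when an attacker sits on the other side of the identifier.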
<GauntletWizard> yeah, it makes sense
<GauntletWizard> Argh! What'd I miss; it's broken on my live server.
<asheesh> Perhaps a problem where the pack'd SPK doesn't have all the needed files
<asheesh> ?
<asheesh> If so, a dev server should be able to reproduce the issue.
<asheesh> (so long as you upload the SPK to the dev server)
<GauntletWizard> point
<GauntletWizard> huh
<GauntletWizard> getting a "Grain's package not installed" error
<asheesh> When you ^C spk dev, that happens
<asheesh> At that point, you need to restart (vagrant-)spk dev, or upload the SPK file.
<asheesh> You have to reload the grain page to trigger it re-checking for the Grain's package.
<asheesh> This is about the only element of Sandstorm that's not reactive.
<ocdtrekkie> o7
<asheesh> OH RIGHT HI ocdtrekkie
<ocdtrekkie> Hi
<asheesh> We were going to video-chat etc but I forgot but I can do that now, or nearly-now in 5 min or so.
mnutt has joined #sandstorm
<asheesh> Hi mnutt : )
<mnutt> hey asheesh!
<mnutt> anyone have the link to the wekan board with the app ideas / progress?
<asheesh> I'm sure ocdtrekkie does
<mnutt> that would be it! thanks
<ocdtrekkie> I kinda wish there was a somewhat more structured/persistent way to put out such community grains.
<mnutt> totally agreed
<mnutt> I was thinking of porting http://tympanus.net/Development/GammaGallery/ and adding a backend/simple management UI
<ocdtrekkie> Things in both IRC (faster) and the group (slower) get buried by new items in discussions. No sorting or stickying like a forum currently.
<ocdtrekkie> Interesting
<ocdtrekkie> With static publishing, I assume?
<mnutt> yeah. you'd upload a bunch of files, choose a few options (light/dark theme, etc) and then hit 'publish'. It would use vipsthumbnail or something to generate a bunch of different sizes, then publish those + the index.html statically
<mnutt> lychee is great, but the publishing is a bit awkward
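The resize-then-publish step mnutt describes could shell out to vipsthumbnail roughly like this. The size presets and output naming are made up for illustration; only the `--size` and `-o` flags are real vipsthumbnail options:

```python
import pathlib
import subprocess

# Hypothetical width presets for the gallery's responsive images.
SIZES = [320, 640, 1280]

def build_thumb_cmds(image: pathlib.Path, out_dir: pathlib.Path):
    """Build one vipsthumbnail invocation per target size."""
    cmds = []
    for width in SIZES:
        out = out_dir / f"{image.stem}-{width}{image.suffix}"
        # A bare number tells vipsthumbnail to fit the image
        # within a width x width box, preserving aspect ratio.
        cmds.append(["vipsthumbnail", str(image),
                     "--size", str(width), "-o", str(out)])
    return cmds

def publish(image: pathlib.Path, out_dir: pathlib.Path):
    for cmd in build_thumb_cmds(image, out_dir):
        subprocess.run(cmd, check=True)
```

After generating the size variants, the app would write them plus the gallery's index.html into the grain's static publishing directory.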
<asheesh> This is pretty cool.
<asheesh> I still think someone should write a "Make a static site" plugin to GNU MediaGoblin, but I'd also be excited to use something like GammaGallery.
<asheesh> ... turns out Kenton Varda has already struck! https://github.com/socketio/socket.io/pull/2603
<ocdtrekkie> mnutt: At the fear of suggesting piling too much on one codebase, Davros probably is still the best file uploading UI we got.
<dwrensha> asheesh: I was just going to link to that
<mnutt> ocdtrekkie: in the future I would love for davros to provide the file list via powerbox
<ocdtrekkie> How close are we to that now? With where Collections and stuff is?
<mnutt> the one unfortunate thing about GammaGallery is that their readme says "licensed under MIT" and then goes on to add a bunch of restrictions. I opened an issue to see if they'd relicense as Apache or GPLv3 or something; really anything other than MIT + incompatible restrictions would work
<ocdtrekkie> asheesh: Hah. Apparently, between when I first couldn't get this to work, and today, kentonv suggested they fix it. But it looks like they haven't done it. :/
<mnutt> I need to take a look at collections
<asheesh> Right now, the Powerbox can let you choose a grain, but it can't let you launch a custom chooser UI to pick an object _within_ a grain.
<asheesh> Additionally, the Powerbox doesn't let an app read data from the chosen grain's storage yet, "merely" let you open the grain as if you clicked on a link to it (open it in a new tab).
<asheesh> So I'd say we're 1 month away in terms of technical effort, plus any scheduling delay.
<asheesh> AFK a little bit.
<mnutt> asheesh: the workflow I'm imagining is that powerbox would let you choose a grain, and in doing so you'd be granting that grain's capabilities (or some subset thereof) to your current grain. I _think_ this would mean it could eventually be possible to expose Davros' webdav API to other grains. At that point, my thinking was that I would have some prebuilt webdav.js libraries available to other apps for choosing a file,
<mnutt> writing new files, etc.
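The file-listing half of that idea is concrete enough to sketch: a WebDAV PROPFIND returns a multistatus XML document, and a client helper like the webdav.js mnutt imagines would boil it down to paths. The sample response below is made up; it is not Davros's actual API:

```python
import xml.etree.ElementTree as ET

DAV = "{DAV:}"  # WebDAV's XML namespace

def list_paths(multistatus_xml: str):
    """Extract the href of each resource from a PROPFIND response."""
    root = ET.fromstring(multistatus_xml)
    return [resp.find(DAV + "href").text
            for resp in root.findall(DAV + "response")]

# A minimal, made-up multistatus body, as a WebDAV server might
# return for PROPFIND with "Depth: 1" on /files/.
SAMPLE = """<?xml version="1.0"?>
<d:multistatus xmlns:d="DAV:">
  <d:response><d:href>/files/</d:href></d:response>
  <d:response><d:href>/files/photo.jpg</d:href></d:response>
</d:multistatus>"""

print(list_paths(SAMPLE))  # ['/files/', '/files/photo.jpg']
```

A powerbox grant would then supply the HTTP endpoint and token this parsing sits behind.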
<ocdtrekkie> I dunno if you'd need to select files inside the Davros grain, asheesh. You'd probably just say "This gallery should be the files from this Davros grain". But yeah, you would need access to grain storage.
<mnutt> I'd probably lean towards letting the grain expose an API rather than accessing the grain's files directly off of the disk
<mnutt> or I guess there may be implications around having dependent grains running and using up memory?
<mnutt> my local davros is currently using about 35MB. could be better, but it's not too terrible
<asheesh> I think you're right that RPC to the grain via something like HTTP API support is great if possible.
<asheesh> A thing I'm concerned about is apps that are written to expect files on disk, but maybe we just plan to support them "later" (and then hope that we never need to).
isd has joined #sandstorm
<mnutt> oh, so you'd actually be able to somehow mount another grain's storage inside your grain? that could be interesting
<asheesh> Yeah, but that's vaporware dreamy-asheesh-land stuff with no particular plan as to how it'd work.
<mnutt> or, you could just run owncloudcmd inside your grain :)
<mnutt> that would double up on the file storage though
<asheesh> Sandstorm doesn't currently hard-require root so we can't "just" mount --bind, I think
<asheesh> We could play games with mount namespaces though, maybe, since I do know we do that... hmm maybe it's easier than I think? But I imagine RPC between apps is something Kenton will be happier with anyway.
<isd> Does anyone know what the FUSE driver's security track record is like?
<asheesh> Luckily we don't use FUSE for running grains, but only for the spk dev filesystem stuff.
<ocdtrekkie> asheesh: Did you see the tag about Wekan and Meteor 1.4?
<asheesh> I did briefly, yeah.
<mnutt> is sandstorm shell planning to move to meteor 1.4?
<asheesh> Yes, though ETA unknown, 0-6 months. zarvox did some prep work this week for it.
<asheesh> The future of meteor-spk is interesting, as is the future of Meteor, just in different ways.
<isd> asheesh: I guess what I was saying was more, how bad of an idea would it be to let grains mount fuse filesystems?
<isd> If you did that, you could get the mount feature without much intervention of sandstorm itself; maybe a capnp protocol for fs access, but then you can just have a fuse fs that talks to that protocol
<isd> but it's something where I'd want to stare pretty hard at fuse's security track record
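The "capnp protocol for fs access" isd mentions might start from a schema along these lines. This is entirely hypothetical; Sandstorm defines no such interface as of this conversation:

```capnp
# Hypothetical read-only filesystem protocol a FUSE shim could speak.
interface Node {
  stat @0 () -> (size :UInt64, isDir :Bool);
}

interface File extends(Node) {
  read @0 (offset :UInt64, count :UInt32) -> (data :Data);
}

interface Directory extends(Node) {
  list @0 () -> (entries :List(Entry));
  open @1 (name :Text) -> (node :Node);

  struct Entry {
    name @0 :Text;
    isDir @1 :Bool;
  }
}
```

A grain granted a `Directory` capability via the powerbox could then back a FUSE mount (or an LD_PRELOAD shim) with these RPCs, never touching the other grain's storage directly.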
<ocdtrekkie> Anyone know how much space updating Ubuntu 14.04 -> 14.04.1 will take up?
<mnutt> fuse doesn't require root, right?
<isd> mnutt: correct
<isd> /dev/fuse needs to exist, and you need to own the mount point I believe
<mnutt> with this hypothetical davros/powerbox integration you could use the fuse webdav driver
<isd> I haven't actually checked, but my guess is sandstorm doesn't expose /dev/fuse, as it (sensibly) tends to take a whitelisting approach
<isd> but it would ease a lot of things to be able to make stuff look like filesystems.
<isd> It also does some weird stuff with root's ability to interact with the mountpoint, but namespaces should make that moot.
<asheesh> Yeah, it doesn't expose /dev/fuse, but you could imagine proxying requests involving /dev/fuse to Sandstorm first for it to be the one to be using FUSE.
<mnutt> on a totally different topic, a while back there was some discussion about sandstorm analytics. are there any thoughts on exposing anonymized app usage data (total installed / monthly uniques, or something) to either app creators, or possibly just everyone?
<mnutt> I don't have any pressing need for the data, it would just be interesting
<isd> asheesh: that gets really weird, because sandstorm would have to be running in the same fs namespace for the grain to actually see it.
<mnutt> or even on the same node
<asheesh> Could it pass an FD for FUSE into the grain somehow?
<asheesh> Or /dev/fuse is a socket, not a mknod'd thing, and things in the grain aren't aware Sandstorm is proxying the requests
<asheesh> ocdtrekkie: I would estimate 200MB extra space.
<asheesh> (most of which gets reclaimed when you run: 'sudo apt-get clean' afterward)
<ocdtrekkie> Okay. I have 3 GB of space on that VM. Is there any reason I should be hesitant to do it? (For that matter, is there any reason I should do it?)
<ocdtrekkie> mnutt: How much of that is Sandstorm already collecting if people opt into that on their server, I guess? Then how much does the team want to share with the class?
<isd> Probably would have to look at the implementation of libfuse; if it uses ioctls we're probably screwed.
<isd> I'm assuming ioctl isn't on the seccomp whitelist...
<isd> Could also LD_PRELOAD open and friends :P
<isd> Which might not even require changes to sandstorm?
<asheesh> Yeah; I don't think we make liberal enough use of LD_PRELOAD, honestly, but I might be crazy. Plus it's a weekend so don't trust me to make professional judgments today, maybe.
<isd> I mean, I don't think there's anything that prevents grains from using it now?
<isd> It falls over with stuff like Go that doesn't link against libc, but it's probably a solid strategy in many cases.
<asheesh> Yeah, agreed that nothing stops grains from using it now.
<asheesh> I must AFK a little bit again!
<isd> an LD_PRELOAD library that let you spoof the filesystem would be very useful in general.
<isd> I give it fair odds the plan9 community has built something similar.
<TC01> So, my sandstorm app that I was discussing the other day works, but vagrant-spk dev is failing attempting to upload the file list with the following output: https://paste.fedoraproject.org/398376/99204991/; anyone have any ideas?
<TC01> I should note here that my vagrant-spk setup is somewhat... nonstandard. I followed https://github.com/sandstorm-io/vagrant-spk/blob/master/HOWTO-libvirt.md to set up libvirt and also my vagrant image is Fedora, not Debian