* ckocagil
wonders how jenkins could be ported to sandstorm
<paulproteus>
Probably reasonably easily. Auth would be fun, and I mean that non-sarcastically.
<paulproteus>
The main difficulty is that people's build scripts want to do non-Sandstormy things often.
<paulproteus>
I think a good way to address that is to have the Jenkins master be a Sandstorm app and have it spin up slaves that could even be remote EC2 or GCE machines.
<paulproteus>
That's not perfect (at all) for isolation, though.
<jparyani>
or run qemu workers…
<ckocagil>
why not docker?
<paulproteus>
But it's pretty reasonable. You could fake it right now by using the outbound HTTP support within HackSession.
<paulproteus>
Well, it wouldn't be fake; it's just that you'd have to give your EC2/GCE credentials to Jenkins then.
<paulproteus>
re: why not docker: if someone's build script wants to use sudo, but Sandstorm apps can't get to root, then Docker or not you're still in a bind.
<paulproteus>
I suppose if you could build a Docker image outside the isolated world, and tell Jenkins-in-isolation to spin up e.g. a Docker container to run commands in, then maybe you could avoid needing to be root.
<ckocagil>
hm. isn't it safe to give root access in a docker container?
<ckocagil>
yeah that's what I was thinking. docker instances could be requested from sandstorm.
<kentonv>
ckocagil: docker is not a sandbox, especially when the guest has root
<paulproteus>
Agreed, ckocagil, a VM could work OK.
<paulproteus>
jparyani's idea of a qemu VM inside the Jenkins environment could be a good fit for that.
<paulproteus>
I think one question mark for me is performance.
<jparyani>
haha there’s no question, it will be poor :)
<jparyani>
probably around a 2-3x slowdown, if not more
<ckocagil>
why is it 2-3x slower? due to the overhead of hardware virtualization?
<paulproteus>
Yeah, so it's not clear I could convince people to ever switch to that, hence my GCE/EC2 suggestion.
<kentonv>
ckocagil: qemu is entirely software virtualization (I think)
<kentonv>
which is why it would work inside Sandstorm
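Concretely, the difference kentonv is pointing at, in qemu's own terms (a grain has no /dev/kvm, so only the software path is available):

```sh
# Hardware-accelerated virtualization: requires /dev/kvm, which the
# Sandstorm sandbox does not expose to grains.
qemu-system-x86_64 -machine accel=kvm -m 1024 disk.img

# Pure software emulation (TCG): needs no special devices, so it works
# inside a grain, at the cost of the slowdown discussed above.
qemu-system-x86_64 -machine accel=tcg -m 1024 disk.img
```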
<zarvox>
oooh, you could make a capnproto interface for libvirt, and then have vms-in-a-grain
<ckocagil>
^
<paulproteus>
vms in the membrane
<zarvox>
(that first part might be somewhat involved)
<paulproteus>
vms in the grane!
<zarvox>
and then you could run sandstorm in those VMs
<zarvox>
and nest until you run out of RAM or storage or sanity
<ckocagil>
zarvox: even a very simple API would work. spin up this vm. execute this on the vm. get rid of the vm. etc.
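A minimal Cap'n Proto sketch of the kind of interface ckocagil and zarvox are describing (every name here is hypothetical; nothing like this exists in Sandstorm or libvirt):

```capnp
# vm.capnp -- purely illustrative sketch of the "spin up / execute /
# tear down" lifecycle described above.
@0xc0de1234abcd5678;

interface VmDriver {
  # Capability a grain (e.g. a Jenkins master) would be granted.
  spawn @0 (image :Text) -> (vm :Vm);
}

interface Vm {
  exec @0 (command :Text) -> (exitCode :Int32, output :Text);

  destroy @1 ();
  # Alternatively, dropping the last reference to the Vm capability
  # could tear the VM down automatically.
}
```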
<kentonv>
I do think Blackrock should someday support spinning up VMs
<kentonv>
probably not soon though
<kentonv>
and that doesn't help self-hosters
<paulproteus>
I still think my GCE/EC2 plan is reasonable, doubly so if the default network for those things routed all traffic back into the grain or something.
<kentonv>
I guess what's really desired here is a GCE or EC2 driver
<paulproteus>
Ya
<paulproteus>
: D
<zarvox>
woohoo testing checklist caught a bug in the payments code!
<paulproteus>
Wow.
<paulproteus>
Mega woo-hoo!
<paulproteus>
i,i it pays to test
<zarvox>
(payments code needs to be updated for the new identities/profile object structure)
<zarvox>
Some sequence of events involving SPK uploads, upgrades, and reuploads appears to have lost the "grain icon" static asset for Framadate.
<dwrensha>
is that the same as the app icon?
<dwrensha>
I think I've hit a similar thing
<dwrensha>
but I was also manually messing around with StaticAssets, so I thought it might not be possible to make it happen just from the UI
<zarvox>
yeah
<zarvox>
the app icon that shows in the navbar
<zarvox>
I seem to be unable to trivially reproduce it
<dwrensha>
I remember it happening when I was trying to get the RoundCube tests to pass
<dwrensha>
and I was upgrading and downgrading the RoundCube package
<dwrensha>
so right now I'm going through those motions to try to reproduce it
<zarvox>
I think uninstall is a necessary action in the flow
<zarvox>
okay, I reproduced it!
<zarvox>
it's tricky because the caching makes it look like the icon's still there when it's gone
<zarvox>
oh wow, now I've gotten it to where I trigger a 500 every time I try to create a Framadate grain
<dwrensha>
!
<paulproteus>
Shake it 'til you break it
<paulproteus>
The opposite of
<paulproteus>
Fake it 'til you make it
<paulproteus>
zarvox++
<paulproteus>
repro-ing can be hard work!
<zarvox>
I uninstalled other unrelated packages and refreshed and now the 500s are gone. What.
<zarvox>
(so are the static assets, though)
<dwrensha>
I got it to happen!
<dwrensha>
the icon disappeared as soon as I uninstalled
<dwrensha>
but I still had a grain around
<zarvox>
I also appear to have some weird mongo behavior
<zarvox>
oh never mind, that's just me misunderstanding the difference between mongo's findOne() and meteor's findOne()
<zarvox>
so uninstall might wipe out assets that are still referenced?
<dwrensha>
I didn't know that mongo shell had a findOne
<zarvox>
it does, but you have to specify a {} query or nothing at all
<zarvox>
passing a string will not find by key
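For reference, the difference looks roughly like this (collection name and ID invented for illustration):

```js
// mongo shell: findOne() takes a query document, or nothing at all.
db.packages.findOne()                  // first document in the collection
db.packages.findOne({_id: "abc123"})   // finds by key
db.packages.findOne("abc123")          // a bare string does NOT find by _id

// Meteor: findOne() accepts a bare string as shorthand for {_id: ...}.
Packages.findOne("abc123")             // same as Packages.findOne({_id: "abc123"})
```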
<dwrensha>
it's troubling that uninstalling the app and deleting its grains does not fix the problem... after reinstalling, the icon is still gone
<dwrensha>
huh
<dwrensha>
restarting the server seems to help
<dwrensha>
like, maybe an old image url is being cached in memory?
<zarvox>
that's surprising to me, since the only caching done for static assets is at the HTTP Cache-Control layer...
<zarvox>
maybe an old package subscription is being cached in memory?
<zarvox>
ahhh, so if you install a package, uninstall it, and install it again, then the packageId is the same for the two installs, but the asset IDs will be unique
<dwrensha>
right, asset IDs are random
<zarvox>
so this is possibly the result of the userPackages publish caching too aggressively
<zarvox>
and asset ID regeneration on reinstall making the assumption behind that caching invalid
<zarvox>
it was originally a bit more complicated in order to track things more granularly, and in the great post-launch performance tuning of 2015 we made it simpler and added heavy caching
<zarvox>
might have to make it invalidate cache on uninstall
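A rough sketch of the failure mode zarvox is describing, with purely illustrative names (this is not Sandstorm's actual publish code):

```js
// Hypothetical manifest cache keyed on packageId.
const manifestCache = new Map();

function getCachedManifest(packageId) {
  // BUG: a reinstall reuses the same packageId but regenerates every
  // asset ID, so a cache hit here serves stale asset IDs.
  if (!manifestCache.has(packageId)) {
    manifestCache.set(packageId, Packages.findOne(packageId).manifest);
  }
  return manifestCache.get(packageId);
}

// The suggested fix: invalidate on uninstall, so the next install
// repopulates the cache with the freshly generated asset IDs.
function onPackageUninstalled(packageId) {
  manifestCache.delete(packageId);
}
```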
<dwrensha>
Yeah, the documentation for the Packages collection is misleading
<dwrensha>
it sounds like the `manifest` field just contains a JSON-encoded Manifest
<dwrensha>
as defined in package.capnp
<dwrensha>
but really the image fields are replaced with asset IDs
<dwrensha>
which may change on a reinstall
<dwrensha>
maybe they should use a SHA256 hash of the content, rather than the _id of StaticAssets
<kentonv>
dwrensha: Using a hash makes it easy to probe for which assets are present on the server, which is possibly bad
<kentonv>
with the ID, you don't find out the ID unless you had some reason to have access
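kentonv's distinction, sketched with Node's crypto module (URL shapes invented for illustration):

```js
const crypto = require("crypto");
const iconBytes = Buffer.from("...icon content...");

// Content-addressed: anyone who has (or can guess) the content can
// derive the ID, so GET /static/<hashId> lets them probe whether the
// server hosts that asset.
const hashId = crypto.createHash("sha256").update(iconBytes).digest("hex");

// Random ID: unguessable, so knowing the URL implies something with
// legitimate access handed it to you.
const randomId = crypto.randomBytes(16).toString("hex");
```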
<zarvox>
dwrensha: can you file a bug with your specific repro procedure? I'm having trouble causing the failure again
<dwrensha>
eh, just jiggle the handle for a bit...
<dwrensha>
I'll see if I can get a simple procedure
<zarvox>
okay, I'm going to leave that in your hands for now and continue down the testing checklist
<zarvox>
Looks like showing webkeys is broken on testrock? I made a Gitweb repository and used the API to push a commit to it, but I'm not seeing the token under either Share access->See who has access or webkeys.
<zarvox>
and the "who has access" icon is misaligned, but that's less important
<paulproteus>
I presume there's a subscription missing somewhere.
<zarvox>
dwrensha: I just wanted to say: your Game of Y package is excellent for testing sharing & permissions!
<ckocagil>
Is there an easy way to drop to a shell in a grain's context?
<zarvox>
ckocagil: are you hoping to be confined by the same permissions, or just explore the FS as seen by the grain?
<ckocagil>
zarvox: former
<zarvox>
well, keep in mind that if your package does not include /bin/sh, you can't spawn it inside your grain's context
<zarvox>
in dev mode, the main filesystem is available, so you can just cd /opt/app/.sandstorm && sudo nsenter --target $(pidof sandstorm-http-bridge) --wd --mount --net --ipc --uts --pid
<ckocagil>
great, thanks
<zarvox>
I think that may give you a shell as host-root in all the other namespaces
<zarvox>
Note that anything you touch while in that shell will get added to your package, so you may want to throw away the sandstorm-files.list that you produced from that "spk dev"
<zarvox>
It's plausible that we'd want a cleaner way to inspect the grain or the runtime environment from the inside. What I suggested is not quite the same as the existing grain's environment - it's missing the seccomp filters, for instance.