<DanC>
"As a demonstration, I'm porting ZeroVault to CloudABI using a FreeBSD vagrant box VM. It's pretty fun since Ed fixes my issues within a few hours of when I report them."
harish_ has quit [Ping timeout: 240 seconds]
<mokomull>
DanC: bus1 looked interesting, but the patchset as of Linux Plumbers Conference was pretty buggy. I should re-test my test case and submit some bugs - do you know if they prefer Github or email?
<DanC>
no, I don't know anything about how they work
<isd>
I look at bus1 and my reaction is "why are these not file descriptors?"
<zarvox>
isd: I think the authors intended to have something that would better handle zillions of objects per process, where fds, with their dense small-integer allocation, tend to fall over. Additionally, something where the owner of the cap can forcibly close it, whereas fds are refcounted and only close in the kernel when the last ref is dropped
<zarvox>
there are a few other differences in semantics that make bus1 better than plain fds for capability-based IPC
<zarvox>
also I think you get causal consistency from the kernel, which makes for simpler application code?
<zarvox>
the design looked quite good/worthwhile to me; can't comment on the implementation (but mokomull if you wind up doing something with that testcase, let me know how it turns out!)
<mokomull>
isd: I've heard the same thought and *somewhere* I've got written notes from that being presented at Linux Plumbers Conference. If I come across them I'll dump them somewhere for #sandstorm to look at.
<isd>
I've seen a more detailed discussion of it, and yeah, it's not exactly trivial. You'd have to make it possible to have more fds per process, certainly. But it's one of these things where I feel like every time there's a problem that needs solving, we grow a whole new subsystem & interface instead of trying to extend/adapt existing concepts.
<isd>
So we end up with a monumental API.
<isd>
The force-close thing seems like your standard revoker pattern. You'd probably need some help from the kernel to make that efficient, but it doesn't need to be exposed to anyone who isn't poking at it.
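As a rough illustration of the revoker pattern isd mentions (and of the forced-close semantics zarvox described for bus1, in contrast to refcounted fds), here is a minimal TypeScript sketch using ECMAScript's `Proxy.revocable`. It has nothing to do with bus1 or the kernel; the `Storage` interface and all names are invented for the example.

```typescript
// Generic revoker pattern: whoever holds `revoke` can cut off access for
// everyone holding the proxy, no matter how many references to it exist.
// Purely illustrative; not bus1 or kernel fd code.

interface Storage {
  read(key: string): string | undefined;
  write(key: string, value: string): void;
}

const backing = new Map<string, string>();
const realStorage: Storage = {
  read: (key) => backing.get(key),
  write: (key, value) => { backing.set(key, value); },
};

// Hand out `proxy` as the capability; keep `revoke` as the revoker.
const { proxy, revoke } = Proxy.revocable(realStorage, {});

proxy.write("greeting", "hello");
console.log(proxy.read("greeting")); // "hello"

// The owner forcibly closes the capability. Every holder of `proxy`
// loses access at once; nothing waits for a last reference to be dropped.
revoke();

try {
  proxy.read("greeting");
} catch (e) {
  console.log("capability revoked:", (e as Error).message);
}
```

The point is just that the revoker cuts off every holder immediately, which is the semantic difference from refcounted fd close that zarvox points out above.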
<isd>
But it's late and I'm getting curmudgeonly. I'm going to try to sleep.
<isd>
(I seem to remember reading a detailed analysis of the challenges in making this work with fds, and it was mostly a question of getting from here to there, rather than what there needed to look like).
<isd>
okay, really, going to bed. 'night all.
<ill_logic>
Well this is rich. I get "Grain no longer in use; shutting down" in the middle of an upload. 13 gigs in, it decides that it's no longer happening.
<ill_logic>
And then it starts back up, but I guess by then, jquery-file-upload doesn't want to continue.
<ill_logic>
This thing just doesn't seem to want to happen.
<ill_logic>
Any ideas what's causing this?
<dwrensha>
ill_logic: how is the client accessing the grain? Through Sandstorm's usual framed UI or through an API token?
<ill_logic>
framed UI
<dwrensha>
weird. your browser should be sending keepalives to the grain then
<ill_logic>
how often does it need them?
<dwrensha>
I suppose it's possible that it is so busy sending the upload that it doesn't get a chance to send the keepalives
<dwrensha>
approximately every minute
<ill_logic>
I could try smaller chunks then.
<ill_logic>
how often does it need keepalives?
<dwrensha>
you could try keeping the grain open in a separate tab
<dwrensha>
so that the other tab would be responsible for sending keepalives too
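To make the timing hazard dwrensha is describing concrete, here is a small TypeScript sketch of a keepalive running on a one-minute timer alongside heavy upload work. This is not Sandstorm's actual keepalive mechanism (that rides on the framed UI's own session traffic), and the endpoint below is a placeholder; the sketch only shows how a saturated event loop can make the timer fire late enough to miss the supervisor's window.

```typescript
// Sketch of a timer-based keepalive being starved by upload work on the
// same event loop. Endpoint and interval are illustrative assumptions.

const KEEPALIVE_INTERVAL_MS = 60_000; // "approximately every minute"
const keepaliveUrl = "/hypothetical-keepalive-endpoint"; // placeholder only

let lastKeepalive = Date.now();

setInterval(() => {
  // If heavy synchronous work (chunk slicing, hashing, etc.) blocks the
  // event loop for longer than the supervisor's timeout, this callback
  // simply fires late and the grain may already have shut down.
  const lateBy = Date.now() - lastKeepalive - KEEPALIVE_INTERVAL_MS;
  if (lateBy > 0) {
    console.warn(`keepalive is running ${lateBy} ms late`);
  }
  lastKeepalive = Date.now();
  void fetch(keepaliveUrl, { method: "POST", keepalive: true });
}, KEEPALIVE_INTERVAL_MS);
```

Running the timer in a second tab (as suggested above) or in a dedicated Web Worker keeps it off the thread that is busy slicing and sending the upload.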
<ill_logic>
I don't like that from a usability standpoint.
<ill_logic>
But I suppose it could at least help debug it.
<dwrensha>
do you have a reverse proxy running in front of sandstorm?
<ill_logic>
I don't think so. I just did a normal Sandstorm installation.
<ill_logic>
(I'm developing an application so I'd like to make this work without trouble for users, eventually)
<dwrensha>
I know that nginx likes to buffer uploads, so that the backend of the app doesn't even see them until the client is done sending
<dwrensha>
hm... I suppose it's also possible that the supervisor gets so overwhelmed with forwarding the data that it does not get a chance to service the keepAlive() call
<dwrensha>
I thought we had decent flow control for that nowadays, but maybe it's buggy
<dwrensha>
in any case, this sounds like a bug in Sandstorm, not in your app
<dwrensha>
to debug, it might be interesting to watch activity with `top` while the upload is happening
<ill_logic>
I actually am doing that, though just to check whether it gets overwhelmed.
<ill_logic>
What should I look for?
<ill_logic>
(I don't know what to look for)
<ill_logic>
(I just have it open, heh)
<ill_logic>
And, should I make a new thread on the dev list?
<dwrensha>
yeah, that or open a github issue
<ill_logic>
right.
<ill_logic>
How often does Sandstorm need to be kept alive?
<dwrensha>
each grain supervisor kills itself if it goes too long without receiving a keepalive
<ill_logic>
Do you know the period?
<dwrensha>
I think the period is something like 90 seconds
<ill_logic>
I see.
<dwrensha>
you can find it in src/sandstorm/supervisor.c++
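For readers who want the shape of that logic without digging into the C++, here is a minimal TypeScript sketch of the watchdog idea: the supervisor shuts the grain down unless keepAlive() arrives within the timeout. The real implementation lives in src/sandstorm/supervisor.c++, and the 90-second figure is dwrensha's recollection above, used here only as an assumption.

```typescript
// Minimal sketch of the keepalive watchdog idea; not the actual
// supervisor.c++ code. Timeout value is an assumption from the discussion.

class KeepaliveWatchdog {
  private timer: ReturnType<typeof setTimeout> | undefined;

  constructor(
    private readonly timeoutMs: number,
    private readonly onExpired: () => void,
  ) {
    this.reset();
  }

  // Called whenever a keepalive arrives from the frontend.
  keepAlive(): void {
    this.reset();
  }

  private reset(): void {
    if (this.timer !== undefined) clearTimeout(this.timer);
    this.timer = setTimeout(this.onExpired, this.timeoutMs);
  }
}

// Usage: go ~90 seconds without a keepalive and the grain shuts down,
// which matches the "Grain no longer in use; shutting down" message above.
const watchdog = new KeepaliveWatchdog(90_000, () => {
  console.log("Grain no longer in use; shutting down");
});
// watchdog.keepAlive();  // the frontend would call this periodically
```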
<ill_logic>
So I could see the potential for a problem if you miss even a single keepalive.
<ill_logic>
I'll watch the sandstorm logs too. I see errors in there but it could be from me messing with things earlier.
<ill_logic>
also what if sandstorm upgrades in the middle of all this?
<dwrensha>
I suppose that arguably your app should be breaking up huge uploads into small chunks
<dwrensha>
upgrades of sandstorm do not kill supervisor processes
<dwrensha>
but they do require reconnects
<ill_logic>
reconnects between what? I mean, would the same grain-restart thing happen?
<dwrensha>
it might, but not necessarily
<dwrensha>
a new sandstorm frontend process will start and will need to reconnect to the grains
<ill_logic>
would it drop existing uploads?
<dwrensha>
yeah, I think in-progress uploads will fail
<ill_logic>
okay. so that's bad news for my application.
<ill_logic>
hmm... jquery-file-upload has a resume option. I'm gonna look into this.
<ill_logic>
Chunks that are too small would mean high latency, I think.
<ill_logic>
I mean, it would stack up latency.
Marcelg has joined #sandstorm
bodisjw has joined #sandstorm
bodisiw has quit [Ping timeout: 255 seconds]
jemc has joined #sandstorm
<Marcelg>
Would anyone be able to help me figure out the DNS for my Hugo grain? I'm making the entries it says to make, but when I visit the URL I get an error indicating that the host must have exactly one TXT record. I have added the one TXT record that the Hugo grain said to add in the DNS. If someone familiar with this is around, could we talk and maybe figure this out?
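Since the error Marcelg describes complains about the number of TXT records, a quick lookup can narrow things down. The sketch below uses Node's DNS resolver; the `sandstorm-www.` prefix is an assumption based on Sandstorm's usual web-publishing setup, so substitute whatever hostname the Hugo grain's instructions actually named.

```typescript
// Quick diagnostic for the "host must have exactly one TXT record" error.
// Assumption: the record is looked up at sandstorm-www.<your-domain>;
// check against the instructions the Hugo grain actually showed, and
// replace the placeholder domain below.

import { resolveTxt } from "node:dns/promises";

const host = "sandstorm-www.example.com"; // placeholder domain

async function checkTxt(): Promise<void> {
  const records = await resolveTxt(host);
  // Each record comes back as an array of character-string chunks.
  const flattened = records.map((chunks) => chunks.join(""));
  console.log(`${host} has ${flattened.length} TXT record(s):`, flattened);
  if (flattened.length !== 1) {
    console.log("Sandstorm expects exactly one TXT record here;",
                "stale or duplicate records will trigger the error above.");
  }
}

checkTxt().catch((err) => console.error("lookup failed:", err));
```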
jemc has quit [Client Quit]
jemc has joined #sandstorm
<ill_logic>
(ah, jquery-file-upload has retries built in. I'll see if this solves it.)
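For reference, here is a hedged sketch of how chunking plus retries might look with blueimp's jQuery-File-Upload, the library ill_logic is using. `maxChunkSize` is the library's chunking option; the retry logic is hand-rolled in the `fail` callback (roughly the pattern the library's documentation suggests), and the endpoint, chunk size, and retry counts are illustrative, not recommendations.

```typescript
// Sketch of chunked upload with retry for blueimp jQuery-File-Upload.
// Numbers and the /upload endpoint are illustrative assumptions.

declare const $: any; // jQuery + jquery.fileupload assumed to be loaded

const MAX_RETRIES = 5;
const RETRY_DELAY_MS = 2000;

$("#fileupload").fileupload({
  url: "/upload",                 // hypothetical app endpoint
  maxChunkSize: 10 * 1024 * 1024, // 10 MB chunks: small enough to retry
                                  // cheaply, big enough to avoid stacking
                                  // too much per-chunk latency
  fail: function (_e: unknown, data: any) {
    const retries = (data.retries || 0) + 1;
    if (retries <= MAX_RETRIES) {
      data.retries = retries;
      // Re-submit the failed upload after a short delay.
      setTimeout(() => data.submit(), RETRY_DELAY_MS);
    } else {
      console.error("upload failed after", MAX_RETRIES, "retries");
    }
  },
});
```

With chunking enabled, a retried submit can resume from the last completed chunk if the server side supports it, so a grain restart mid-upload costs at most one chunk rather than 13 gigabytes.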
Marcelg has left #sandstorm [#sandstorm]
funwhilelost has joined #sandstorm
DanC has joined #sandstorm
yeehi has quit [Quit: Konversation terminated!]
rolig has quit [Quit: Quit]
rolig has joined #sandstorm
Zarutian has joined #sandstorm
<isd>
ill_logic: you could probably hack around this by getting a wake lock from sandstorm. Ideally it would just see that there's activity and handle that correctly, but as a workaround...