<avsm>
the 'yes-jbuild.txt' list is growing impressively -- 93 packages have been ported in around a month
<avsm>
of the remaining no- ones, a significant number are currently in the process of being released, over at https://github.com/mirage/mirage-dev
<avsm>
that includes tcp, conduit, cohttp, dns, charrua
<avsm>
so that then leaves around 50 dependencies which aren't yet ported, but we're well over the hump
<avsm>
of the ones that aren't, it's a few clusters: lwt (which has been ported in its trunk), dbuenzli's libraries (he's looking at it), and then mirage-entropy/platform and a few tricky C-stubs ones
<rgrinberg>
Big ones remaining are Daniel's packages and the tls dependency cone
<avsm>
yeah, the tls dependency cone is tricky -- i'm starting on x509/nocrypto next
<avsm>
mirage-entropy is currently holding us back for some reason, so it definitely needs an update as well
<rgrinberg>
First make sure hannes is on board before you do work
<rgrinberg>
I might have rushed in to port tls :)
<avsm>
yeah. either way I'd like to check the port to make sure it works. Nocrypto seems to have a lot of outstanding PRs, so will need to ping David Kaloper as well
<avsm>
did you do a tls port?
<hannes>
well, I raised several questions in the PR, and got no real answer to them.
<avsm>
aha, i missed that one!
<avsm>
which PR is it?
<hannes>
(one of them was whether the documentation is the same or not -- the underlying issue is that we still don't have an API for TLS, and we're not willing to expose all the private things. that is why it is in its current state.)
<rgrinberg>
hannes: which answers do you think are unsatisfactory? In my opinion it's only the bisect one which is lacking
<hannes>
rgrinberg: all of them: what is the difference in docs? what is the difference in binaries?
<avsm>
the jbuilder documentation is pretty much only odoc
<rgrinberg>
You wanna see some sample sizes for binaries? Or the result of ocamlobjinfo?
<rgrinberg>
I think I misunderstood what you were getting at
<dinosaure>
o/
<hannes>
size and contents - and evaluation of what is different, and why (you partially answered that)
<avsm>
the substantive questions in that PR seem to be about preserving the current warning set (which can be done cleanly with jbuilder) and ensuring the compilation scheme is equivalent
<avsm>
and coverage testing is also important to have
<avsm>
the massive immediate advantages of jbuilder are compilation times (4-6x faster in various packages) and boilerplate removal (no META or merlin maintenance)
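[editor's note: to illustrate the boilerplate point above, here is a minimal jbuilder build file of the kind these ports add. The library name and dependencies are purely illustrative, not taken from any actual MirageOS repo; jbuilder generates the META and .merlin files from this, so neither needs to be maintained by hand.]

```
;; hypothetical jbuild for a library "mylib" -- names and deps
;; are illustrative only
(jbuild_version 1)

(library
 ((name mylib)
  (public_name mylib)
  (libraries (lwt cstruct))))
```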
<hannes>
I'll look into that PR once david commented.
<avsm>
and also the subdirectory builds, which I'll come to next
<rgrinberg>
What do you mean by ensuring the compilation scheme is equivalent? It has to change to at least get rid of packs
<avsm>
so we're hitting a library design question as part of the port now
<hannes>
well, if compilation time is the only advantage, I don't care too much... the time I spend thinking about code is much longer than the time I spend compiling.
<avsm>
i.e. it's not just a simple build system switch
<avsm>
hannes: the "only advantage" is not consistent with what I just said. I listed two other things above in the same sentence block.
<hannes>
if it is a preserving build system switch, it is simple. but writing a tls.mli requires some effort.
<avsm>
the overall diffstat from porting to jbuilder has been a massive removal of code from our repos
<avsm>
much like the oasis->topkg ports were
<hannes>
avsm: the diffstat in that PR in question is +16 lines. the metadata duplication is immense since now there are 3 opam files which contain 95% the same information.
<avsm>
and the ability to do a single-directory build is transformative for the overall project
<hannes>
(but I don't see any value in arguing about this longer here, see my message from 16:14)
<avsm>
Fair -- I'll ping David as well to see if we can make progress on this.
<avsm>
djs55 is looking into the stub design in general -- that's also an area where the jbuilder workspace method needs to be clarified
<avsm>
once that's done, we'll have a better view into all the different mechanisms
<avsm>
it would be _really_ nice to purge oasis though -- otherwise we'll end up with a combination of oasis, ocamlbuild/topkg, jbuilder when the dust settles
ricarkol has quit [Ping timeout: 260 seconds]
<avsm>
and part of that is figuring out answers to questions like what tls.mli and public APIs for more libraries should look like. a good idea overall to do this regardless of build system :)
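[editor's note: a sketch of what a narrowed public tls.mli could look like, following the discussion above. Every name and type here is hypothetical, not the actual Tls API -- the point is only that internals stay abstract while the engine-level operations are exposed.]

```ocaml
(* Hypothetical sketch of a narrowed tls.mli; all names are
   illustrative, not the real ocaml-tls interface. *)

type state   (* abstract: private fields stay hidden *)
type config

val client : config -> state
val server : config -> state

(* Feed incoming ciphertext; get the new state, any decrypted
   application data, and any response bytes for the peer. *)
val handle_input :
  state -> Cstruct.t ->
  (state * Cstruct.t option * Cstruct.t option, string) result

(* Encrypt outgoing application data. *)
val send : state -> Cstruct.t -> (state * Cstruct.t, string) result
```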
<djs55>
I'll do some experiments with stubs and see if I can propose something
<avsm>
to quickly cover the rest of the work : I'm moving stable packages into a metarepo at https://github.com/mirage/mirage-stable that contains a "mirage-core" package that only has dependencies and version constraints. This will be how doc generation happens soon, so you can just add packages there to show up on docs.mirage.io
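[editor's note: a sketch of what a dependencies-only "mirage-core" opam file could look like, per the description above. The package names, maintainer address, and version bounds are illustrative assumptions, not the actual mirage-stable contents.]

```
# Hypothetical opam sketch for a dependencies-only metapackage;
# names and version constraints are illustrative only.
opam-version: "1.2"
name: "mirage-core"
maintainer: "mirageos-devel@lists.xenproject.org"
depends: [
  "mirage-types-lwt" {>= "3.0.0"}
  "mirage-runtime"   {>= "3.0.0"}
  "lwt"              {>= "2.7.0"}
]
```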
<djs55>
I'd like to look at the flow disconnect/close (half-close) issue again soon — if I can make a prototype change across a bunch of libraries at once using that repo it would be very handy
<avsm>
ok -- i'll see if i can use 'git subtree' to import the yes-jbuild packages into one tree that builds
<avsm>
it works locally -- just haven't published it yet
<avsm>
stedolan also suggested that he could build a single AFL binary from all the library tests
<avsm>
which would be a very efficient way of fuzz testing every library in the tree
<avsm>
but before all that, i'll do some measurements of build and binary sizes with jbuilder and post on that, to make sure we can quantify the benefits
<avsm>
any other queries jbuilder related?
<avsm>
let's move on! thanks for all the efforts from everyone so far -- in particular, our ppx story is far far cleaner now and we are not accidentally linking it into the resulting unikernels :)
<avsm>
next up: TImada has an update on solo5/netmap!
<avsm>
(since you joined just after TImada posted the url)
<yomimono>
nice, thanks :)
<djs55>
where do you think the biggest bottlenecks in the network stack are now?
<TImada>
avsm: No, you only need to compile the netmap and NIC drivers you want to use.
<djwillia>
did you implement the netmap and NIC driver stuff inside ukvm?
<TImada>
No, compiling the Netmap driver and patched-NIC drivers for the Linux host
<TImada>
patched NIC drivers are included in the Netmap source code package.
<avsm>
the numbers still seem to be CPU limited overall
<avsm>
as they cap out at 300Mbps for the 1 guest case
mort___ has quit [Quit: Leaving.]
ricarkol has joined #mirage
<avsm>
to be clear, this is going through the mirage udp stack and then through solo5?
<djwillia>
so ukvm uses some kernel api's to talk to the netmap, instead of the tap?
<djs55>
I wonder if we have GC problems. Perhaps some spacetime profiling would be interesting
<TImada>
avsm: 300MB/s with 1460 sender buffer size!
<avsm>
ahh!
<TImada>
over 2Gbps!
<djs55>
nice — I misread that! :)
<TImada>
however, one problem is performance degradation in the 6 send/recv pairs case
<TImada>
shown on page 7
<avsm>
I'm wondering if that is a general netmap problem
<avsm>
it may not have been designed for multitenancy -- so it could degrade under scheduling pressure
<avsm>
need to re-read the netmap paper
<TImada>
I don't think it is a netmap problem, as shown on page 8
mort___ has joined #mirage
<TImada>
Netmap-related manipulations are included in solo5_net_write_sync(). However, its execution time was not extended by much.
<avsm>
so it might be CPU pressure on the mirage stack?
<avsm>
that's the only thing left :-)
<djwillia>
i'm a bit confused about where things are implemented... does netmap_guest mean that the packets are showing up directly in or above solo5 so it doesn't need to exit for every packet? Or are the shared buffers with ukvm, with the same ukvm/solo5 API that there was before (1 per packet)?
<TImada>
djwillia: I implemented shared buffers with ukvm
<djwillia>
i see, so there is still an exit per packet for net_read_sync?
<djwillia>
oh wait i see what you mean
<djwillia>
you have shared buffers between solo5 and ukvm, right?
<TImada>
Yes, shared buffers between ukvm-bin(host side) and the guest kernel (guest side)
<avsm>
page 3 shows the batching
<djwillia>
another question: is iperf a mirage app? I'm guessing so because of avsm's comments, but can someone point me to the iperf repo? It would be super useful to have
<djwillia>
or did you take the iperf c code and stick it on solo5 directly?
<avsm>
i don't have any immediate insights into the 6 vm degradation, except that bisecting down through the pieces leads to the mirage udp stack
<avsm>
it could be cpu contention -- the only way to verify is to get a 16-core machine and try the 6-guest throughput there to see if more cores help
<TImada>
Actually I have no idea on this issue now, so I will continue to investigate it.
<TImada>
avsm: thanks for your comments!
<avsm>
sounds good. Also, is there a tree where we can try the netmap branch out? I've not personally used it for a few years
<djwillia>
you might try telling Linux to pin cores to ukvm instances in case some weird scheduling is happening
<hannes>
TImada: maybe take the iperf C code, and see whether the performance is better than when using MirageOS?
<TImada>
avsm: netmap source itself?
<mort___>
(sorry, gtg— bye!)
<djwillia>
that's a good idea, you could do a mirage-unix iperf vs. a c iperf and rule out a lot of higher-level issues
<avsm>
TImada: just generally how to try your whole setup out so others can try to repro
<djwillia>
although the implementations are probably sufficiently different that it might be a rathole
<TImada>
djwillia: I have already tested pinning cpu cores ...
<djwillia>
good luck with it TImada, it's really cool work
<avsm>
this definitely needs more detailed investigation. TImada: a post to the devel list would also be most useful to share instructions/pointers to trees. It's very very encouraging work though -- I am keen to see the performance improvements in mainline mirage!
<avsm>
but if there are no other comments immediately, it's time to move to AOB for a minute and otherwise wrap up
<TImada>
avsm: I'm asking my colleague in Japan if I can make my repository available on the Web. Please wait a while.
<avsm>
i promise i'll do that this evening :-) been travelling...
<hannes>
djwillia: any plans to rebase + merge your solo5-hypervisor.framework?
<hannes>
I've been rather busy developing a solution for deploying MirageOS (well, ukvm) unikernels on hardware
<djwillia>
hannes: it's a bit stalled at the moment... I wanted to wait until the ARM stuff was all in after the big change that mato did for FreeBSD and ARM
<hannes>
i.e. some unix process(es) which gather statistics, allow authorised deployments, read console output, ...
<djwillia>
but I haven't had time to rebase to it
<hannes>
I'm happy to announce that this allows far easier resource sharing (as in: I can give you a token which allows you to run 2 virtual machines with 3GB memory on my hardware)
<djwillia>
there will still be a few things that will not fit well into the structure that were nasty hacks and I'm not sure yet how to go about fixing them
<djwillia>
but the main problem is that it's fallen too low down my priority list with other things going on :(
<hannes>
release/announce has been delayed since I wanted to get some properties right... but I just finished the revocation bits! :)
<avsm>
really looking forward to seeing this :-)
<kensan>
hannes: Is that code up somewhere?
<kensan>
(Hi everyone btw :)
<djwillia>
hannes: did you see that ricarkol ported includeOS onto solo5/ukvm?
<hannes>
sure, it'll be online by tomorrow, together with some article describing it
<hannes>
it's basically encoding policies about resources into authenticated tokens! (and i use my ukvm-bin, as in you provide only your vm image + config (which network devices, block devices, etc.))
<hannes>
djwillia: that sounds great!
<avsm>
i gotta head off to another meeting, so heading out. Thanks everyone!