<lazka>
oh, it's in experimental, I somehow missed that
<lazka>
mattip, thanks, works nicely
jacob22__ has quit [Ping timeout: 250 seconds]
xcm has quit [Remote host closed the connection]
xcm has joined #pypy
<mattip>
antocuni: I pushed a draft release note, and a release branch for py3.6
<mattip>
antocuni: up to you, but it would be nice to put a linux64 alpha out there
<tumbleweed>
lazka: debating getting it into unstable in the next few days. But there are still a bunch of issues, and I don't have a whole lot of time for them :(
jacob22__ has joined #pypy
<antocuni>
mattip: sorry, real life issues prevented me from working on this
<lazka>
tumbleweed, no prob, I'm happy with experimental
<antocuni>
I suppose we can release a 3.6-alpha together with 2.7 and 3.5
<mattip>
+1
<mattip>
what should we do about missing ARM build infrastructure? Release without it? Try to get a VM going somehow?
<mattip>
tumbleweed: fwiw, we are getting closer to releasing 7.0
<tumbleweed>
mattip: \o/
<antocuni>
I think when the same thing happened in the past with freebsd and windows, we simply released without them
<tumbleweed>
probably too late for me to get it into Debian Buster, but we can backport it later.
<mattip>
tumbleweed: ok, and thanks for taking care of all that
<tos9>
Where's a good place for me to read about what I should expect from PyPy's support for intrinsics?
<tos9>
I am messing around with replicating https://jvns.ca/blog/2014/05/12/computers-are-fast/ via PyPy, and I can see in the perf report I get at the end that a decent amount of time was actually spent running movdqu, movntd, and movdqa instructions, but around 12% was also spent running regular ol' mov
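A rough sketch of how a perf profile like the one described above might be collected, assuming Linux perf is available; wordcount.py is a hypothetical stand-in for the script being measured, not a file from the conversation.
    # collect a call-graph profile of the PyPy run, then drill into the
    # annotated disassembly to see per-instruction time (movdqu, movdqa, ...)
    import subprocess

    subprocess.check_call(["perf", "record", "-g", "pypy", "wordcount.py"])
    subprocess.check_call(["perf", "annotate"])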
<tos9>
Probably my next step should be grabbing a JIT log and looking concretely at what was emitted and when, I guess
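A minimal sketch of capturing that JIT log, assuming the PYPYLOG environment variable and the jit-log-opt / jit-backend categories described in the PyPy docs (not verified against a specific release); wordcount.py is again a hypothetical stand-in.
    import os
    import subprocess

    # run the script under PyPy with JIT logging enabled
    env = dict(os.environ, PYPYLOG="jit-log-opt,jit-backend:jit.log")
    subprocess.check_call(["pypy", "wordcount.py"], env=env)
    # jit.log now contains the optimized traces and the machine code the
    # backend emitted, which is where the mov*/SIMD instructions come from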
<kenaan>
mattip default 07c721ed19aa /pypy/doc/release-v7.0.0.rst: first draft of release note
<kenaan>
mattip py3.6 614f05464dbb /: merge py3.5 into branch
<kenaan>
mattip release-pypy3.6-7.x a51d929d674b /pypy/module/: version -> 7.0.0 alpha
<mattip>
... adding the parameter 'vec=1' to the JitDriver
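A minimal sketch of what turning the vectorizer on might look like from application code, assuming PyPy's pypyjit module and its set_param(); the 'vec' parameter name is taken from the conversation, and the default value may differ between releases.
    import pypyjit

    # ask the JIT to vectorize eligible trace loops
    pypyjit.set_param("vec=1")
    # equivalently, from the command line:  pypy --jit vec=1 script.py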
<tos9>
I *think* from what I see the default must have changed since then?
<tos9>
Because (and I know very little about this stuff, but...) I think movdqu, movntd, and movdqa *are* intrinsics?
<mattip>
there is something funky about compilers using those without actually parallelizing anything
<mattip>
so you get the mov* but not the simd calls
Zaab1t has joined #pypy
<mattip>
my knowledge is also very limited :(
themsay has quit [Ping timeout: 245 seconds]
Zaab1t has quit [Quit: bye bye friends]
<tos9>
Yeah :/
<tos9>
The "higher level" question I'm trying to answer at the minute is that, on that exercise itself, it looks like I can do it in ~7 seconds with PyPy, so I want to figure out where the extra 6 seconds are being spent
<tos9>
er, I guess extra 4 seconds
<mattip>
io?
<tos9>
I'm trying to learn the available performance tools a bit better, so maybe, but mostly I want to know how to measure (and this is about as simple a code exercise as is meaningful, I guess)
toaderas has joined #pypy
<tos9>
vmprof at least claims that 25% of the time is spent actually reading from disk if I understand its output correctly
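A minimal sketch of a vmprof run like the one behind those numbers, assuming the vmprof package is installed; the workload is a trivial placeholder for the word-count exercise and the output file name is made up.
    import vmprof

    def workload():
        # stand-in for the actual exercise being profiled
        total = 0
        for i in range(10 ** 7):
            total += i
        return total

    with open("wordcount.prof", "w+b") as fd:
        vmprof.enable(fd.fileno())   # start sampling into the file
        try:
            workload()
        finally:
            vmprof.disable()
    # inspect afterwards with:  vmprofshow wordcount.prof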
<kenaan>
arigo cffi/cffi 73a16cc62771 /c/: Issue #362 Add "thread canary" objects which are deallocated if the PyThreadState is explicitly deallocated by C...
<arigato>
fijal: sorry for the delay :-)
oberstet has quit [Remote host closed the connection]
<kenaan>
arigo cffi/cffi e851dbe5757a /c/ffi_obj.c: Windows compilation fix
<kenaan>
arigo cffi/cffi e2f85d257915 /testing/cffi0/test_function.py: Backed out changeset 7a76a3815340 On Windows, there is no lround() or (as far as I can find) any math function r...