cfbolz changed the topic of #pypy to: PyPy, the flexible snake (IRC logs: https://quodlibet.duckdns.org/irc/pypy/latest.log.html#irc-end ) | use cffi for calling C | if a pep adds a mere 25-30 [C-API] functions or so, it's a drop in the ocean (cough) - Armin
jacob22_ has joined #pypy
jacob22 has quit [Read error: Connection reset by peer]
<cfbolz>
mattip: we should maybe still clarify the docs, if it's a Python 2-only concern
gracinet has left #pypy [#pypy]
<arigato>
cfbolz: note that exec("", d) still puts a key '__builtins__' in d
<arigato>
on python 3
<arigato>
I think that the statement still makes sense then
kingsley has joined #pypy
<arigato>
what the statement tries to say is that writing __builtins__={...} doesn't mean that the new dictionary will be used as the builtins for code running in this context
<arigato>
in pypy, as far as I remember, the builtins dictionary actually used is always the same one
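[editor's note] The first half of arigato's observation can be checked directly; a minimal sketch in plain Python 3, runnable on CPython or PyPy:

```python
# exec() with an explicit globals dict: the interpreter inserts a
# '__builtins__' key into that dict, even for empty source code.
d = {}
exec("", d)
print('__builtins__' in d)  # True on Python 3
```

Whether a *caller-supplied* '__builtins__' value is then honoured as the actual builtins namespace is an implementation detail, which is the part the docs statement is warning about.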
<kingsley>
I benchmarked floating-point multiplications.
<kingsley>
On my ol' AMD Athlon computer, pypy3 became faster than python2 and python3 when at least 32768 multiplications were done in each batch.
<kingsley>
I'd be happy to receive reports of its results on other peoples' computers.
<kingsley>
Ideally when run with python2, python3 and pypy3.
<arigato>
kingsley: that's really a benchmark of a lot of things but not really float multiplication. The actual operation turns into one machine instruction, the same as C would use. That's really hard to measure meaningfully, but you still get interesting results along the lines of "how long does it take for the JIT to wake up and compile a simple piece of code"
<kingsley>
Another interesting result was that pypy3 was evidently about 80X faster.
<arigato>
(note that if you do a float multiplication but don't use the result, it's likely that the machine instruction is actually not emitted, too, but again it's hard to measure just one instruction on modern CPUs)
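[editor's note] A rough illustration of the warm-up effect arigato describes; the function names and batch sizes below are arbitrary, and the timing is dominated by loop overhead, timer resolution, and (on PyPy) JIT warm-up rather than the multiply itself:

```python
import time

def batch(n):
    # n float multiplications in a tight loop; each multiply compiles to
    # a single CPU instruction once PyPy's JIT has traced the loop.
    x = 1.0
    for _ in range(n):
        x = x * 1.0000001
    return x

def time_batch(n):
    # Wall-clock time for one batch: what this actually measures is
    # "how long until the JIT wakes up", not raw multiplication speed.
    t0 = time.perf_counter()
    batch(n)
    return time.perf_counter() - t0

for n in (1024, 32768, 1048576):
    print(n, time_batch(n))
```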
<kingsley>
arigato: Is there any chance you'd be willing to run my benchmark with various versions of python on your computer, and relay the results, along with the performance specs of your hardware?
<arigato>
sorry, I'm trying to explain why I don't believe you're really doing something interesting here
<energizer>
arigato: if kingsley changed it to `x = n * 1.2345; return x` would that be more informative?
<energizer>
or probably return max(1.2 * i for i in range(N))
<arigato>
(1) no, the JIT inlines functions; (2) that would add a lot of operations that take far longer to run than a mere CPU multiplication
<energizer>
kingsley: the suggestion is to change the benchmark so that it is more informative
<arigato>
float multiplication is an extreme example: you could try to benchmark it by writing C code; at least then you have a better idea of what you're really testing, but it is still a hard exercise
<arigato>
the point in pypy is that any float multiply turns into just one CPU instruction after a lot of JIT-compiling takes place
xcm has quit [Remote host closed the connection]
<energizer>
is there a threshold for when jitting kicks in, on number of iterations over a certain code path?
xcm has joined #pypy
<kingsley>
energizer: Maybe the threshold is about 32768 multiplications.
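[editor's note] PyPy's JIT thresholds are tunable and documented by the interpreter itself; a sketch of the command-line interface (exact defaults vary by release, and `myscript.py` is a placeholder):

```shell
# Print all tunable JIT parameters and their defaults, including the
# loop-iteration threshold at which tracing starts.
pypy --jit help

# Example: lower the tracing threshold so the JIT kicks in sooner.
pypy --jit threshold=100 myscript.py
```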
agronholm has quit [Ping timeout: 248 seconds]
agronholm has joined #pypy
<kenaan>
mattip default 42337a484364 /pypy/doc/: add jit help to documentation
<kenaan>
mattip default 96a1c0a30e47 /pypy/doc/: clean up "make html" warnings, remove a few blank whatsnew files
CrazyPython has joined #pypy
xcm has quit [Remote host closed the connection]
xcm has joined #pypy
<cfbolz>
my draft of the json blog post is here, will post tomorrow
<mattip>
it might have been hard to find the information previously since I just now created that page via "pypy --jit help > jit_help.rst"
<energizer>
mattip: perfect, thanks
CrazyPython has quit [Ping timeout: 240 seconds]
lritter has joined #pypy
xcm has quit [Remote host closed the connection]
xcm has joined #pypy
xcm has quit [Read error: Connection reset by peer]
xcm has joined #pypy
<mattip>
I added a test to the _siphash24-collecting branch that has enough of the function to trigger the "can trigger collection" error
ssbr` has joined #pypy
ssbr` has quit [Quit: Leaving]
ssbr` has joined #pypy
antocuni has joined #pypy
<kenaan>
mattip _siphash24-collecting 158a54d11ca5 /rpython/memory/gctransform/test/test_framework.py: simplify test but it still passes (it should fail...)
<mattip>
it seems some combination of bit-twiddling and an inline function confuses the no_collect check
<mattip>
the CollectAnalyzer finds that the inlined function allocates a tuple4 for the return value, even though it is inlined?