02:53 jcea has quit [Ping timeout: 268 seconds]
03:32 jacob22 has quit [Read error: Connection reset by peer]
03:36 jacob22 has joined #pypy
03:39 oberstet has quit [Remote host closed the connection]
06:09 <nulano> Is there a way to run the benchmark buildbot on a branch and compare the results somewhere? Running json_bench (chosen arbitrarily) shows a clear 1% improvement on my linux64 system, but I don't want to run all tests manually.
06:23 nimaje has quit [Ping timeout: 260 seconds]
06:27 nimaje has joined #pypy
06:29 <mattip> you can kick the benchmark buildbot on a branch
06:29 <mattip> the "benchmark repo branch" should remain "default" - that is the branch of the benchmark repo, not of pypy
07:53 <mattip> I think it is supposed to show up on the "comparison" page, if not ping me
07:54 <mattip> we used to see branches there but I don't see any now, maybe something is wrong with the site
07:55 <nulano> ah ok, I'll check when it's done
07:56 <mattip> 1% is very hard to see on a single benchmark run, there is a lot of noise in the measurement
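A minimal sketch of what that noise looks like in practice: repeat the timing many times and compare the spread against the ~1% effect being looked for. The workload function below is a hypothetical stand-in, not the actual json_bench code.

    # Sketch: quantify run-to-run noise by repeating a measurement and
    # comparing the standard deviation against the ~1% effect of interest.
    # "workload" is a hypothetical placeholder, not the real benchmark body.
    import statistics
    import time

    def workload():
        sum(i * i for i in range(100000))

    def measure(repeats=30):
        samples = []
        for _ in range(repeats):
            start = time.perf_counter()
            workload()
            samples.append(time.perf_counter() - start)
        mean = statistics.mean(samples)
        stdev = statistics.stdev(samples)
        # if stdev/mean is already a few percent, a 1% change cannot be
        # distinguished from noise in a single run
        print("mean %.6fs  stdev %.6fs  (%.1f%% noise)"
              % (mean, stdev, 100.0 * stdev / mean))

    if __name__ == "__main__":
        measure()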
07:58 <mattip> this is the graph for json_bench of pypy3.7-jit, it varies quite a bit
08:02 <mattip> huh, looking at that there is an across-the-board slowdown in commit 081e3124a4f1,
08:03 <mattip> err, sorry, this link (then click on "benchmarker" under environment)
08:05 <mattip> .. and the only diff between the commits is some cpyext stuff, which should not impact benchmarks at all :(
08:21 ranpoom has joined #pypy
09:01 <nulano> good news: the run shows up on the comparison page; bad news: I don't know how to interpret the results
09:13 <mattip> because there is no significant change?
09:16 <mattip> the effect of the change is probably swamped by other factors
09:17 <mattip> if you really want to show there is less code after the change, you could add a pypy.module.pypyjit.test_pypy_c test
09:17 <mattip> but it is probably overkill
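For context, the tests under pypy/module/pypyjit/test_pypy_c run a small Python function under the JIT and assert on the operations left in the trace; a rough sketch of the shape such a test takes, assuming the usual BaseTestPyPyC helpers, with a made-up test name and an illustrative (not real) expected trace:

    # Rough sketch of a pypy.module.pypyjit.test_pypy_c style test; the class
    # name and the matched operations are illustrative, not copied from an
    # actual test file.
    from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC

    class TestHypotheticalChange(BaseTestPyPyC):
        def test_loop_has_less_code(self):
            def main(n):
                total = 0
                i = 0
                while i < n:
                    total += i
                    i += 1
                return total

            log = self.run(main, [1000])
            loop, = log.loops_by_filename(self.filepath)
            # the match fails if extra operations (e.g. residual calls)
            # show up in the trace
            assert loop.match("""
                i9 = int_lt(i7, i5)
                guard_true(i9, descr=...)
                i10 = int_add_ovf(i8, i7)
                guard_no_overflow(descr=...)
                i11 = int_add(i7, 1)
                --TICK--
                jump(..., descr=...)
            """)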
09:19 <nulano> because I don't know what is significant for the different benchmarks
09:20 <nulano> a pypyjit test would be in python code right? I'm pretty sure the change is in compiled code
09:20 <nulano> *for python code
09:24 <nulano> I'll probably take the translated C code and write a microbenchmark for the one function, as I'm actually curious how it behaves
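If the function ends up exposed from a small shared library, one way to time it from Python is through ctypes, along these lines; the library name, function name and signature below are hypothetical, and for a ~1% effect the FFI call overhead matters, so doing the loop directly in C (as suggested above) is likely the better measurement.

    # Sketch of a microbenchmark for one C function called through ctypes.
    # "./libfoo.so" and "the_function" are hypothetical placeholders; note
    # that ctypes call overhead can easily dominate a small function, which
    # is why benchmarking the loop directly in C is usually preferable.
    import ctypes
    import time

    lib = ctypes.CDLL("./libfoo.so")               # hypothetical library
    lib.the_function.argtypes = [ctypes.c_long]    # hypothetical signature
    lib.the_function.restype = ctypes.c_long

    def bench(calls=1000000, repeats=10):
        best = None
        for _ in range(repeats):
            start = time.perf_counter()
            for i in range(calls):
                lib.the_function(i)
            elapsed = time.perf_counter() - start
            best = elapsed if best is None else min(best, elapsed)
        print("best of %d: %.6fs for %d calls" % (repeats, best, calls))

    if __name__ == "__main__":
        bench()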
11:24 wleslie has quit [Quit: ~~~ Crash in JIT!]
11:25 <mattip> the failure in interpreter/test/test_typedef.py::TestTypeDef::()::test_destructor for win32 is due to using pypy as a host to run tests
11:25 <mattip> I get the same failure if I run the test on linux64 using a pypy host
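The usual way a pypy host changes behaviour in destructor tests is that __del__ only runs when the GC collects, not when the last reference disappears; a small illustration of that difference, assuming this is indeed what test_destructor is tripping over:

    # Illustration of the host difference: on a refcounting CPython the
    # destructor runs as soon as the last reference is dropped, while on a
    # pypy host (tracing GC) it may only run after an explicit gc.collect().
    import gc

    class Victim:
        def __del__(self):
            print("destructor ran")

    obj = Victim()
    del obj                  # immediate on CPython, usually deferred on pypy
    print("after del")
    gc.collect()             # forces pending finalizers to run
    print("after gc.collect()")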
11:28 oberstet has joined #pypy
14:33 jcea has joined #pypy
17:15 <mattip> microsoft has an extension to C for structured exception handling (SEH)
17:16 <mattip> cpython ctypes tests make sure it works when calling a function improperly, and our implementation crashes
17:16 ranpoom has quit [Quit: Connection closed for inactivity]
17:16 <mattip> in test_SEH
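The CPython test in question essentially passes an invalid pointer to a Windows API call and expects the access violation to surface as an OSError instead of killing the process; roughly this shape, paraphrased from memory rather than quoted:

    # Roughly the shape of CPython's ctypes test_SEH (paraphrased, not an
    # exact copy): calling a function with an invalid pointer should raise
    # OSError via SEH rather than crash the interpreter.  Windows-only.
    import sys
    import unittest

    class SEHTest(unittest.TestCase):
        @unittest.skipUnless(sys.platform == "win32", "Windows-specific test")
        def test_access_violation_is_trapped(self):
            from ctypes import windll
            # 32 is not a valid LPCSTR; with working SEH support ctypes
            # turns the access violation into an OSError
            self.assertRaises(OSError, windll.kernel32.GetModuleHandleA, 32)

    if __name__ == "__main__":
        unittest.main()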
18:04 jacob22 has quit [Read error: Connection reset by peer]
18:05 jacob22 has joined #pypy
18:10 swills has joined #pypy
19:17 ronan has joined #pypy
21:06 CountryNerd has joined #pypy
21:08 CountryNerd has quit [Client Quit]
22:43 EWDurbin has quit [Ping timeout: 260 seconds]
22:43 nimaje has quit [Ping timeout: 260 seconds]
22:43 fijal has quit [Ping timeout: 260 seconds]
22:44 fijal has joined #pypy
22:45 EWDurbin has joined #pypy
22:45 nimaje has joined #pypy