cfbolz changed the topic of #pypy to: PyPy, the flexible snake (IRC logs: https://botbot.me/freenode/pypy/ ) | use cffi for calling C | "the modern world where network packets and compiler optimizations are effectively hostile"
rokujyouhitoma has quit [Ping timeout: 246 seconds]
vkirilic_ has joined #pypy
vkirilichev has quit [Ping timeout: 240 seconds]
Rhy0lite has joined #pypy
adamholmberg has joined #pypy
vkirilichev has joined #pypy
Tiberium has quit [Ping timeout: 255 seconds]
antocuni has joined #pypy
vkirilic_ has quit [Ping timeout: 240 seconds]
Tiberium has joined #pypy
<kenaan>
rlamy cpyext-leakchecking 5bba19e669b0 /pypy/module/cpyext/test/test_cpyext.py: Filter out C functions
<kenaan>
rlamy cpyext-leakchecking 438c0c9af393 /pypy/module/cpyext/: Be more careful with refcounts in array.c
realitix has joined #pypy
santagada_ has joined #pypy
<antocuni>
njs: I am getting this error message when trying to auditwheel repair a wheel (I tried both pypy and cpython 3.6): "ValueError: Could not find soname in numpy/.libs/libm-2-e51d9a45.12.so"
<antocuni>
do you have any clue what it is?
Tiberium has quit [Remote host closed the connection]
<antocuni>
uhm, this is what I get when I do 'readelf -d libm-2-e51d9a45.12.so'
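(A minimal way to reproduce the check auditwheel is failing on, assuming binutils' readelf is on PATH; the path is the one from the error above:)

    # Sketch: look for a DT_SONAME entry in the dynamic section, which is
    # what auditwheel could not find; readelf prints a "(SONAME)" line if set.
    import subprocess

    def has_soname(path):
        out = subprocess.check_output(["readelf", "-d", path])
        return b"(SONAME)" in out

    print(has_soname("numpy/.libs/libm-2-e51d9a45.12.so"))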
<holdsworth>
In #Python, people told me that eval and the JIT are not friends, but no one elaborated further
<antocuni>
holdsworth: do you restart the process every time? Or do you have a process that repeats the same computation again and again, where sometimes it takes 40 ms and sometimes 1000?
<holdsworth>
the latter, thanks
<dash>
holdsworth: sometimes GC has to be done :)
<antocuni>
and yes, in general eval doesn't play well with the JIT, because it causes the JIT to recompile the same code again and again
<antocuni>
it is expected that some iterations can be longer than others, because sometimes the GC and the JIT kick in; however, 40 ms vs 1000 ms sounds like a bit too much
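(A minimal sketch of why eval on a string defeats the JIT: each call compiles a fresh code object, so the JIT never sees stable code; compiling once and reusing the code object avoids that. Names here are illustrative:)

    # Slow pattern: eval() compiles the string anew on every iteration,
    # so the JIT keeps encountering what looks like brand-new code.
    source = "x * 2 + 1"
    for x in range(100000):
        y = eval(source)

    # Better: compile once and reuse the code object, so the JIT can warm up.
    code = compile(source, "<expr>", "eval")
    for x in range(100000):
        y = eval(code)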
<antocuni>
holdsworth: what is returned by "construct(iteration)"?
<kenaan>
plan_rich default cb8f734c831d /rpython/rlib/rvmprof/src/shared/: remove old files
<kenaan>
plan_rich default ac3af78f56db /pypy/module/_vmprof/: remove write_all_code_objects, this method is not called when it does not exist
<holdsworth>
the thread containing the function exits after the function ends, so I don't understand the GC-related comment. I must say that I am not familiar with the architecture of the GC in Python, sorry
rokujyouhitoma has joined #pypy
<holdsworth>
antocuni: just a second
<holdsworth>
antocuni: construct(iteration) returns a statement
<holdsworth>
I am sorry
<holdsworth>
it returns a string
TheAdversary has quit [Remote host closed the connection]
<holdsworth>
sorry again
<holdsworth>
it returns a function
<antocuni>
holdsworth: in pypy, the memory is managed by the GC; every time you allocate an object (even a temporary one you don't even know exists), the GC does it very quickly; from time to time it needs to free memory, and when that happens you see a small pause in the program
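(A rough way to observe such pauses, assuming a steady allocation-heavy workload; the threshold and sizes below are arbitrary:)

    # Sketch: time identical iterations and report outliers, which is where
    # GC (and JIT compilation) pauses tend to show up.
    import time

    def work():
        return [i * i for i in range(10000)]  # allocates, so the GC has work to do

    for i in range(100):
        t0 = time.time()
        work()
        elapsed_ms = (time.time() - t0) * 1000.0
        if elapsed_ms > 5.0:  # arbitrary threshold for a "spike"
            print("iteration %d took %.1f ms" % (i, elapsed_ms))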
TheAdversary has joined #pypy
<antocuni>
holdsworth: you cannot pass a function to eval(); it must be either a string or a code object
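(For reference, a quick illustration of what eval() accepts:)

    # eval() takes a string or a code object; a function object is rejected.
    eval("1 + 1")                             # OK: a string
    eval(compile("1 + 1", "<expr>", "eval"))  # OK: a code object

    def f():
        return 1 + 1

    eval(f)  # raises TypeError: eval() arg 1 must be a string or code object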
vkirilichev has quit [Remote host closed the connection]
<holdsworth>
antocuni: it is a code object
<nimaje>
holdsworth: in most cases, if you use eval you should think again about what you want to do and whether you really need eval
<holdsworth>
nimaje: I am aware of it; it's code written by my associate, and we are reviewing and benchmarking it
<holdsworth>
First we will try to remove the eval and see whether the benchmark results improve, and then decide what to do next
rokujyouhitoma has quit [Ping timeout: 260 seconds]
<holdsworth>
Is there any Linux distro you would recommend for running PyPy in near-real-time applications? Thanks
<antocuni>
holdsworth: yes, getting rid of the eval sounds like the best plan
<nimaje>
and in that case you can probably move the eval to the outer loop (unless construct() changes global state and you really need to call it in the inner loop)
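(A sketch of that suggestion, with hypothetical names standing in for holdsworth's code, and assuming construct() depends only on the outer iteration:)

    # Hypothetical: compile the generated source once per outer iteration
    # instead of calling eval() on a string inside the hot inner loop.
    def construct(iteration):  # stand-in for the real code generator
        return "iteration * 2"

    for iteration in range(10):
        code = compile(construct(iteration), "<generated>", "eval")
        total = 0
        for _ in range(1000):  # the inner loop reuses a single code object
            total += eval(code)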
<plan_rich>
fijal, hi
vkirilichev has joined #pypy
<antocuni>
as for real time, note that pypy and real time do not play well together: as I said, the GC and the JIT can cause spikes at random points
<fijal>
plan_rich: can you sort out the upload size?
<fijal>
and explain to me how
<plan_rich>
unsure why it does not work. nginx does not seem to reject it.
<plan_rich>
(I assume you restarted nginx)
<antocuni>
holdsworth: on a system I worked on, we mitigated the issue by manually calling gc.collect() at points in which we knew we could afford a pause, and by disabling the JIT after a while
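(Roughly, that mitigation could look like this; pypyjit exists only on PyPy, and where to call these is application-specific:)

    # Sketch: trigger a collection at a point where a pause is affordable,
    # then stop the JIT from compiling further traces once the code is warm.
    import gc

    def end_of_frame():  # hypothetical "safe point" in the application
        gc.collect()

    try:
        import pypyjit
        pypyjit.set_param("off")  # PyPy-only: disable further JIT compilation
    except ImportError:
        pass  # on CPython there is no JIT to turn off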
vkirilichev has quit [Remote host closed the connection]
<fijal>
antocuni: gc is not a problem for a while
<holdsworth>
thanks antocuni, sounds like good advice
<antocuni>
fijal: I know it's incremental, but it still causes small pauses here and there. If we are talking about hard real time, it might still be a problem
<antocuni>
(if it's hard real time, pypy is probably not a good solution anyway)
<fijal>
well anything is a problem for hard real time
<Alex_Gaynor>
fijal: it's not that __del__ should add memory pressure, it's that if you're allocating memory that is freed by an __del__ you should add memory pressure
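(A sketch of that pattern at app level, assuming a Linux libc; __pypy__.add_memory_pressure exists only on PyPy:)

    # The GC cannot see memory allocated via malloc(), so an object whose
    # __del__ frees such memory should report it as extra pressure.
    import ctypes, ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    libc.malloc.restype = ctypes.c_void_p
    libc.free.argtypes = [ctypes.c_void_p]

    class CBuffer(object):
        def __init__(self, n_bytes):
            self._ptr = libc.malloc(n_bytes)  # invisible to the GC
            try:
                import __pypy__
                __pypy__.add_memory_pressure(n_bytes)  # hint: collect sooner
            except ImportError:
                pass

        def __del__(self):
            libc.free(self._ptr)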
<Cheery>
here's the same situation, except that this time it continued further.
antocuni has joined #pypy
<Cheery>
g_initialstub hmm..
<fijal>
Alex_Gaynor: right
<kenaan>
rlamy cpyext-leakchecking 82d95ae3d2c8 /pypy/module/cpyext/test/: A failing test that explains why test_subclass() leaks the class Sub
<ronan>
mattip: ^^^ Note that it fails on nightly but not on 5.6.0, so it looks like a regression
<pjenvey>
fijal: the cffi docs point this out now for app level -- I'm asking about rpython lltype.malloc + rpython __del__. It's a similar situation
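(At the RPython level the analogous pattern presumably looks something like this; a sketch against PyPy's own source tree, not app-level code, assuming rgc.add_memory_pressure is the RPython-level counterpart:)

    # Raw memory from lltype.malloc(..., flavor='raw') is outside the GC's
    # accounting, so rgc.add_memory_pressure() can report it explicitly.
    from rpython.rtyper.lltypesystem import lltype, rffi
    from rpython.rlib import rgc

    class W_Buffer(object):
        def __init__(self, size):
            self.ptr = lltype.malloc(rffi.CCHARP.TO, size, flavor='raw')
            rgc.add_memory_pressure(size)

        def __del__(self):
            lltype.free(self.ptr, flavor='raw')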
<mattip>
ronan: nice. Maybe subtype_dealloc is too magical?
<Cheery>
I seem lucky, as this looks like a segmentation fault due to a load from address zero
<ronan>
mattip: here the issue is that it's not called
<ronan>
Sub gets the tp_dealloc of array instead, which doesn't know about heap types
<mattip>
on cpython -A that test passes
<Cheery>
Hey. My thing seems to crash because stacklet_thread_s g_source is NULL
<Cheery>
thrd->g_source != NULL
<Cheery>
in g_initialstub
tbodt has joined #pypy
<Cheery>
but it does not make sense
<Cheery>
it should crash much sooner
vkirilichev has quit [Remote host closed the connection]
<Cheery>
I think I can guess what it is.
<Cheery>
it is an uncaught exception of some class
<Cheery>
yep. There's the culprit.
rokujyouhitoma has joined #pypy
rokujyouhitoma has quit [Ping timeout: 240 seconds]
<mjacob>
mattip: on the mailing list there's someone complaining about the pypy version number in wheel file names
<mjacob>
mattip: wouldn't it make sense to include just the major version number in the file name and increase it every time we break compatibility?
gutworth has quit [Ping timeout: 260 seconds]
gutworth has joined #pypy
<antocuni>
mjacob: I think it's the 'wheel' package which chooses what to include in the filename
<antocuni>
pypy reports only the ABI version, which is pypy_41. This is probably wrong for the opposite reason, i.e. it claims to be backward compatible even when it's not
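(For reference, the pieces under discussion can be inspected at runtime; the values shown are illustrative for a 2017-era PyPy:)

    # sys.version_info is the Python language version implemented,
    # sys.pypy_version_info (PyPy-only) is the PyPy release,
    # and SOABI is what feeds the ABI tag.
    import sys, sysconfig

    print(sys.version_info[:2])               # e.g. (2, 7)
    print(sys.pypy_version_info[:2])          # e.g. (5, 7)
    print(sysconfig.get_config_var("SOABI"))  # e.g. 'pypy-41' -> ABI tag 'pypy_41'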
<mattip>
somehow the actual version number is being included in the wheel name, e.g. "pp257-pypy41"
<mjacob>
so our python tag would be pp{major version of implemented python version}?
<tumbleweed>
+ pypy version
<mjacob>
i'd vote for setting the python tag to pp27, pp35, pp36 etc.
<tumbleweed>
err how does that help?
<mjacob>
and setting the abi tag to something like pypy5, pypy6, pypy7 and increase pypy's major version every time we break ABI compatibility
<tumbleweed>
right
<tumbleweed>
if you're actually tracking ABI breaks
<tumbleweed>
the current (lazy) option is pypyX.Y, bumping when you know you've made a break
<tumbleweed>
err XY
<tumbleweed>
(which apparently isn't happening)
<mattip>
mjacob: +1, that would mean whoever is fishing out the xx in "ppxx-pypyzz" should be using sys.version_info, not sys.pypy_version_info
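(A sketch of the two derivations being contrasted; PyPy-only, since it touches sys.pypy_version_info:)

    import sys

    def proposed_python_tag():
        # tag tracks the Python version implemented, e.g. 'pp27'
        return "pp%d%d" % (sys.version_info[0], sys.version_info[1])

    def old_style_tag():
        # roughly what pip/wheel produced at the time, e.g. 'pp257'
        # (pp + python major + PyPy major.minor)
        return "pp%d%d%d" % (sys.version_info[0],
                             sys.pypy_version_info[0],
                             sys.pypy_version_info[1])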
<mattip>
tumbleweed: the zz is from the SOABI and a whole nother kettle of fish, let's not open that now
<dstufft>
I'm not sure that's right
<dstufft>
well
<dstufft>
sort of
<tumbleweed>
mattip: are you saying that we currently say pp27 for python 3 too?
<mjacob>
tumbleweed: ok, then I think we have to tell the guy on the mailing list "too bad, you need new wheels every few months"
<dstufft>
pip/wheel started doing pp2XY and pp3XY because PyPy didn't provide an ABI so we had to be conservative
<dstufft>
er didn't provide the ABI variable*
<tumbleweed>
dstufft: I fixed that (providing the variable) at some point
<dstufft>
yea
<tumbleweed>
if we're breaking ABIs he needs new wheels, that's simple enough :P
<dstufft>
I would have to think more about it, because I have a nagging feeling that even at the python level, PyPy can be incompatible between versions
<dstufft>
(e.g. CFFI differences and what not)
<tumbleweed>
CFFI has its own compatibility variables
Rhy0lite has quit [Quit: Leaving]
<tumbleweed>
but of course those aren't exposed in wheel tags
<mattip>
dstufft: currently we handle cffi with the SOABI value
<mattip>
so IMO the python version should reflect the "python" (interpreter + stdlib) implemented, i.e. 2.7, 3.5, 3.6
<tumbleweed>
yes, that's what the spec says
<mattip>
in my example ppXXX-pypyZZ, XXX is currently the pypy version, not the "python" version
<mattip>
(of course I am ignoring the ZZ value for the sake of simplicity right now)
<tumbleweed>
right, +1
<mjacob>
so we have different levels at which incompatibilities could arise: new/changed bytecodes, cffi and cpyext
<mjacob>
an ideal solution would require wheel rebuilds only when *really* required
<tumbleweed>
oh, right, that's not what the spec says: "The version is py_version_nodot. CPython gets away with no dot, but if one is needed the underscore _ is used instead. PyPy should probably use its own versions here pp18, pp19."
<dstufft>
the compatibility tag spec isn't worded the most wonderfully
<tumbleweed>
it's pretty explicit on that, although it does say "should probably"
<dstufft>
the way to specify something that is python interpreter + stdlib is with the `py` prefix
<mattip>
dunno if that sentence ever came up on a pypy discussion, we don't have strong representation on these things
<dstufft>
py27 for instance works for anything that implements Python 2.7
<tumbleweed>
right
<dstufft>
pp258 is for something whose python level is specific to PyPy
<tumbleweed>
but as soon as you're in c extension land, you use an implementation prefix, and it assumes the ABI changes on every release because that's the case for cpython
yuyichao_ has quit [Read error: Connection reset by peer]
yuyichao has joined #pypy
<mjacob>
for cpyext extensions we could have the following: cp27-pypy57-linux_x86_64
<dstufft>
mattip: so if pypy_41 is not an accurate ABI tag, then the first thing that needs to be done is to fix the ABI tag; that logic currently assumes there is only one ABI per running python process (get_config_var("SOABI")), but that's just software, so it can change
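(Roughly what that one-ABI-per-process logic looks like, simplified from pip's tag handling of the time:)

    # Derive the ABI tag from SOABI, normalising '.' and '-' to '_';
    # on PyPy an SOABI of 'pypy-41' becomes the tag 'pypy_41'.
    import sysconfig

    def abi_tag():
        soabi = sysconfig.get_config_var("SOABI")
        if soabi:
            return soabi.replace(".", "_").replace("-", "_")
        return "none"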
<dstufft>
after that, probably the best route is to get pip updated so it starts accepting both the pp(2|3)XY and the new proposed ones
<dstufft>
and then update wheel so it starts producing the new ones
<mattip>
dstufft: +1 for the SOABI issue, it is now part of the release document so hopefully it will receive consideration before the next release
tbodt has joined #pypy
<mattip>
dstufft: if we can help with "get pip updated .. update wheel..." let us know here or on pypy-dev
<mjacob>
i still don't understand the point of the pp27 tag
<dstufft>
mattip: at a minimum issues would be great, PRs would be even better though :) but you should at least open issues so we remember