<mjacob>
mattip_: re heptapod: at the moment it's not ready for pypy, mainly because it expects every branch to have a single head (including closed ones). i'll attend the mercurial conference on tuesday + the sprint on wednesday, and can ask about the current status of this missing feature.
<Remi_M>
arigato: I don't think waiting for a new hardware design is the right approach :)
<arigato>
well, just saying, imho the future of Python as a seriously parallelizable language rests on this kind of thing
<mattip>
the bad news is pypy2-HEAD is slower than cpython-2.7.11 on the bm_mdp and sphinx benchmarks
<antocuni>
arigato: do you feel like giving a 30-second summary of why a new design would help? (I haven't read the paper)
<arigato>
antocuni: sure, the paper is about changing the caches (L1, L2, L3...) so that instead of fixed-size blocks (like 64 bytes) they handle variable-size "objects", with GC references between them marked explicitly
<arigato>
every level becomes a GC generation, too, so that when one level (e.g. L1) is full, it gets "collected" and only surviving objects get moved to L2
<arigato>
and in the end, only what gets collected out of the last level really gets an actual memory address
<arigato>
in other words it's doing a standard generational GC approach, but using the cache levels as the generations, with the hardware collecting between them
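(A rough software analogy of the scheme arigato sketches, purely illustrative: the paper's design is in hardware, and the level capacities and reachability test below are made up.)

    # Each cache level acts as a GC generation: when a level overflows,
    # it is "collected" and the survivors are promoted to the next level;
    # only objects evicted from the last level get a real memory address.
    class Level:
        def __init__(self, capacity):
            self.capacity = capacity
            self.objects = []

    def allocate(levels, obj, reachable, assign_real_address):
        levels[0].objects.append(obj)
        for i, level in enumerate(levels):
            if len(level.objects) <= level.capacity:
                break                        # this level still has room
            survivors = [o for o in level.objects if reachable(o)]
            level.objects = []               # the rest is garbage, dropped
            if i + 1 < len(levels):
                levels[i + 1].objects.extend(survivors)    # promote
            else:
                for o in survivors:
                    assign_real_address(o)   # only now: actual memory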
<antocuni>
so the GC collection would be basically done in hardware?
<arigato>
yes, apart from the "major" collection of the memory itself
<antocuni>
ok, but how does it help STM? I can somehow imagine how to implement transactions as long as all the objects stay in cache, but as soon as you write to a main-memory object you would still need an approach like the current one, wouldn't you?
<arigato>
what is interesting is that GC references are marked specially and can move (i.e. actually change) when objects move between cache levels
<antocuni>
and they don't have a "real" address as long as they are in cache?
<kenaan>
andrewjlawrence py3.6 9bf9185580c6 /pypy/module/_socket/test/test_sock_app.py: Fix test
<arigato>
yes
<antocuni>
interesting
<arigato>
so we have much more control: it's never possible to directly "write" to main memory, because such a write first loads the object into L1, and then the write happens in L1
<dayton>
Hi, is there a special way to declare variadic functions in cffi?
<dayton>
When I write the declaration like I would in a C header file, the wrapper builds fine, but at import time in Python it throws an error about missing symbols for the variadic function
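(The question goes unanswered in this part of the log; for reference, cffi's documented pattern is to declare the variadic function with a literal "..." in the cdef, and then pass the extra arguments as explicitly-typed cdata. A minimal ABI-mode sketch against libc's printf:)

    from cffi import FFI

    ffi = FFI()
    # Declare the variadic function with a literal "...", as in the C header:
    ffi.cdef("int printf(const char *format, ...);")
    C = ffi.dlopen(None)   # ABI mode: load the standard C library

    # Arguments beyond the fixed ones need explicit C types, because
    # cffi cannot apply the default argument promotions for you:
    C.printf(b"%d bottles of %s\n",
             ffi.cast("int", 99), ffi.new("char[]", b"beer"))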
<_aegis_>
I have a weird workflow where I call a pypy function during a crash to dump all python threads
<_aegis_>
but I've run into a case where the pypy lock is held during the crash and the app just hangs
<_aegis_>
(I think maybe the crash happened in a pypy thread, possibly a pypy -> cffi -> C call)
<_aegis_>
is there something I can do about this? I'd be fine not dumping the thread state in this case
<mattip>
what would you like to happen?
asmeurer has quit [Quit: asmeurer]
EWDurbin has quit [Ping timeout: 276 seconds]
graingert has quit [Ping timeout: 252 seconds]
antocuni has joined #pypy
DRMacIver has quit [Ping timeout: 252 seconds]
cadr_ has quit [Ping timeout: 276 seconds]
dayton has quit [Ping timeout: 252 seconds]
asmeurer_ has joined #pypy
<alawrence>
mattip: Do we have an equivalent of PyUnicode_FSDecoder in PyPy?
azrdev has joined #pypy
<azrdev>
hi! I have a scikit-learn extension which I want(ed) to try on pypy (possibly to optimize it for execution speed), but it seems like scipy, and by extension sklearn, are currently broken on pypy3-7.x
<azrdev>
http://packages.pypy.org/##scipy complains about missing libraries, but I instead get "ImportError: [...]venv/site-packages/numpy/core/_multiarray_umath.pypy3-71-x86_64-linux-gnu.so: undefined symbol: PyStructSequence_InitType2"
dayton has joined #pypy
<azrdev>
File "/home/azrael/dev/pypy/env/site-packages/numpy/core/overrides.py", line 6, in <module>
<azrdev>
from numpy.core._multiarray_umath import (
EWDurbin has joined #pypy
graingert has joined #pypy
DRMacIver has joined #pypy
cadr_ has joined #pypy
<mattip>
azrdev: you need to use a HEAD of pypy3.6
<mattip>
or numpy<1.16.3
<_aegis_>
dunno, I either want pypy to not hang, or to be able to time out the lock or something on the call from C -> pypy during the crash
<_aegis_>
(if the signal handler can't get the lock within say 250ms I want to just give up)
<_aegis_>
I assume I'd have the same hang if I called sigaction from within pypy
<_aegis_>
(right now I'm calling it from C)
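(One way to get the "give up after 250ms" behaviour at the Python level is to guard the dump with an explicit lock that supports a timeout. This is only a sketch: it does not address the GIL re-entry from the C signal handler itself, and all the names are hypothetical.)

    import sys
    import threading
    import traceback

    dump_lock = threading.Lock()

    def dump_all_threads():
        # Give up rather than hang if another dump (or the crashing
        # thread) already holds the lock:
        if not dump_lock.acquire(timeout=0.25):
            return
        try:
            for tid, frame in sys._current_frames().items():
                print("Thread %s:" % tid)
                traceback.print_stack(frame)
        finally:
            dump_lock.release()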
<mattip>
alawrence: we have interpreter.unicodehelper.fsdecode()
<mattip>
which calls runicode.str_decode_mbcs
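(At the Python level, the rough equivalent of what PyUnicode_FSDecoder produces is os.fsdecode; a sketch for orientation, not PyPy's actual cpyext implementation:)

    import os

    # bytes are decoded with the filesystem encoding, using
    # surrogateescape to round-trip undecodable bytes:
    os.fsdecode(b"some\xffpath")
    # str arguments pass through unchanged:
    os.fsdecode("already-a-str")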
<azrdev>
mattip: this is pypy3 from archlinux packages, calling itself Python 3.6.1 (784b254d669919c872a505b807db8462b6140973 / PyPy 7.1.1-beta0)
<mattip>
so `pip install numpy==1.16.2` should work
<kenaan>
andrewjlawrence winconsoleio c795c20ce4b8 /: Initial implementation of winconsoleio
dddddd has joined #pypy
<kenaan>
rlamy optimizeopt-cleanup 5e0d762fe4fd /rpython/jit/metainterp/: Don't fallback to jumping to the preamble in compile_retrace(), since optimize_peeled_loop() already ta...
<kenaan>
rlamy optimizeopt-cleanup 3df8ad2225a4 /rpython/jit/metainterp/: Don't pass the optimizer around unnecessarily
alawrence has quit [Ping timeout: 256 seconds]
altendky has quit [Quit: Connection closed for inactivity]