cfbolz changed the topic of #pypy to: PyPy, the flexible snake (IRC logs: https://quodlibet.duckdns.org/irc/pypy/latest.log.html#irc-end ) | use cffi for calling C | if a pep adds a mere 25-30 [C-API] functions or so, it's a drop in the ocean (cough) - Armin
mcon has quit [Quit: mcon]
ctismer has quit [Ping timeout: 260 seconds]
EWDurbin has quit [Ping timeout: 260 seconds]
fijal has quit [Ping timeout: 260 seconds]
ctismer has joined #pypy
EWDurbin has joined #pypy
fijal has joined #pypy
graingert has quit [Ping timeout: 260 seconds]
agronholm has quit [Ping timeout: 240 seconds]
graingert has joined #pypy
agronholm has joined #pypy
lritter has joined #pypy
Taggnostr has quit [Remote host closed the connection]
Taggnostr has joined #pypy
lritter has quit [Ping timeout: 265 seconds]
lritter has joined #pypy
jcea has quit [Ping timeout: 260 seconds]
oberstet has quit [Remote host closed the connection]
oberstet has joined #pypy
fijal has quit [Ping timeout: 265 seconds]
fijal has joined #pypy
_whitelogger has joined #pypy
lritter has quit [Quit: Leaving]
* cfbolz waves
dustinm has quit [Quit: Leaving]
dustinm has joined #pypy
otisolsen70 has joined #pypy
<LarstiQ>
heya cfbolz, feeling better?
<cfbolz>
still this weird in-between, but yes
<cfbolz>
the kid-induced sleep deprivation isn't helping ;-)
<cfbolz>
LarstiQ: thanks for asking :-)
synaps3 has joined #pypy
<synaps3>
hi, how come i get slower results when using pypy than regular python3?
<cfbolz>
synaps3: I wonder whether it's our ssl implementation
otisolsen70 has quit [Quit: Leaving]
otisolsen70 has joined #pypy
mcon has joined #pypy
ronan has quit [Remote host closed the connection]
ronan has joined #pypy
<cfbolz>
anybody want to guess what's the maximum number of bridges a single loop has when translating pypy?
<fijal>
1500?
<fijal>
I don't remember, but I checked at one stage, so a bit of cheating on my side
<fijal>
(but I really don't know)
<synaps3>
what are bridges cfbolz
<synaps3>
i opened issue btw :)
<cfbolz>
fijal: yes, pretty close: 1556
<cfbolz>
synaps3: thanks
<fijal>
cfbolz: did I really remember or I just guessed?
<fijal>
we will never know! associative memory
<cfbolz>
fijal: lol, only you can know that
<fijal>
no! that's the point
<fijal>
we have associative memory, which also means it'll lie to us
<fijal>
it's great, but lossy
<cfbolz>
synaps3: we create our machine code in a piece by piece fashion. A bridge is a piece that's added later to an existing machine code chunk
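A toy model of that idea (hypothetical, nothing to do with PyPy's real machinery): a "compiled loop" covers the common case behind a guard, and a "bridge" is a new piece of code attached later when the guard keeps failing:

```python
# Toy illustration of a JIT bridge: the fast path is guarded; once the
# guard fails often enough, a bridge is compiled and glued onto the
# guard's failure path instead of falling back every time.
class CompiledLoop:
    def __init__(self, guard, fast_path, fallback):
        self.guard = guard          # condition the fast path relies on
        self.fast_path = fast_path  # stand-in for the hot machine code
        self.fallback = fallback    # "interpreter" used before a bridge exists
        self.bridge = None
        self.failures = 0

    def run(self, x):
        if self.guard(x):
            return self.fast_path(x)
        self.failures += 1
        if self.bridge is None and self.failures >= 3:
            # attach a bridge: a later-compiled piece for the cold case
            self.bridge = self.fallback
        return (self.bridge or self.fallback)(x)

# common case: ints take the fast path; floats eventually get a bridge
loop = CompiledLoop(guard=lambda v: isinstance(v, int),
                    fast_path=lambda v: v * 2,
                    fallback=lambda v: v * 2.0)
```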
<fijal>
cfbolz: I *think*, and again, I don't know, that marking bridge-producing functions as "don't bother tracing or creating bridges" led to some real speedups
<cfbolz>
fijal: yes, plausible
<cfbolz>
fijal: this is triggered by me investigating docutils
<cfbolz>
Where a single small function leads to most of the machine code
<cfbolz>
By most, I mean only 30%
<cfbolz>
But still
<cfbolz>
And I wonder how to fix it
<cfbolz>
My current plan is to stop promoting stuff after we made a chain of some number (20) of guard_values
<cfbolz>
In the hope that the resulting trace is more general and covers the remaining cases
<fijal>
wasn't that the plan with guard_value_something_something?
<fijal>
that the last bridge after X guard values becomes a more general one?
<cfbolz>
fijal: guard_compatible?
<cfbolz>
No, that was a lot more high tech
<cfbolz>
Annoyingly it wouldn't at all solve the docutils problem :-/
<cfbolz>
(and I never got it to help anyway)
<fijal>
right
<fijal>
but yes, having at most 20 guard values seems like it might help something
<fijal>
(and make some things worse of course)
<cfbolz>
Yes, the number needs to be tuned
<fijal>
always
<fijal>
when are we getting machine learning algorithms for perfect combination of magic numbers?
<cfbolz>
I mean, another thing to consider is to switch to a binary search, not a linear one for such a long chain of guard_values
<fijal>
would that make a meaningful difference?
<fijal>
probably in some cases
<cfbolz>
Who knows
<fijal>
that, I remember, was a little hard with a changing number of cases
<fijal>
but if we fix the upper number, then it's easier
<cfbolz>
Armin implemented binary search with growable cases for guard_compatible
<cfbolz>
But yes, it's a mess
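The tradeoff under discussion, sketched in plain Python (not JIT code): a linear chain of guard_values costs one comparison per previously-seen constant, while a binary search over the sorted constants costs log2(N) probes:

```python
import bisect

# Linear chain: like a chain of guard_values, compare against each
# previously-seen constant in turn.
def dispatch_linear(cases, x, default=None):
    for key, target in cases:
        if x == key:
            return target
    return default

# Binary search over the sorted constants: log2(N) probes instead of N.
def dispatch_binary(keys, targets, x, default=None):
    i = bisect.bisect_left(keys, x)
    if i < len(keys) and keys[i] == x:
        return targets[i]
    return default

# 20 seen constants, matching the cap floated above
cases = [(k, "case%d" % k) for k in range(20)]
keys = [k for k, _ in cases]
targets = [t for _, t in cases]
```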
<cfbolz>
(another interesting fact: translation spends 9% of the total runtime in the JIT)
<fijal>
that is, not so bad?
<fijal>
and interpreted?
<cfbolz>
No clue
<fijal>
vmprof could possibly tell you that? I'm not sure if it would work
<fijal>
but also be careful - JIT does stuff like residual calls etc.
<cfbolz>
I should try
<cfbolz>
(I am getting majorly sidetracked, this isn't at all my task for today)
tsaka__ has quit [Read error: Connection reset by peer]
tazle has quit [Ping timeout: 260 seconds]
tsaka__ has joined #pypy
mjacob has quit [Ping timeout: 260 seconds]
ulope has quit [Ping timeout: 260 seconds]
tazle has joined #pypy
mjacob has joined #pypy
ulope has joined #pypy
jcea has joined #pypy
<mcon>
I want to construct bindings for a rather large library (ell; dozens of small functions). What is the best practice for partitioning the lib so I have multiple packages (ell.misc, ell.settings, ell.ecc, ...)?
<LarstiQ>
mcon: is there a natural division coming from the library? Are you catering more to people who are already familiar with that library's interface, or more to Python users who want to use the functionality?
<LarstiQ>
is there any upstream documentation that might be confusing if you do things differently?
<LarstiQ>
it's more a library design question than so much cffi related
<LarstiQ>
mcon: although one piece of advice is to do a lower level binding and then a layer of a more pythonic approach on top
danchr_ has joined #pypy
danchr has quit [Disconnected by services]
danchr_ is now known as danchr
ronan_ has joined #pypy
ronan has quit [Remote host closed the connection]
<mcon>
LarstiQ: Yes, the main ell.h is just a list of other #includes. That is the main reason why I think it is a good idea to partition the bindings.
<mcon>
LarstiQ: If I understand your last comment correctly, I should do separate cffi bindings for each "sub-library" and provide a "wrapper" module including all sub-modules, right?
<mcon>
LarstiQ: the wrapper could include some interface functions to hide C-specific details (like allocation/deallocation of structures, proper Exception generation, etc.)
dmalcolm has joined #pypy
<LarstiQ>
mcon: a closer-to-the-C and a layer on top is one way to handle it, people who know the C library and know what they need can fall back to the lower level if they're not happy with how you decided to (re)organize things
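A sketch of that two-layer approach (all names here are hypothetical stand-ins; `_raw` plays the role of the cffi-generated "lib" object): a thin low-level layer mirrors the C API, and a pythonic layer on top hides allocation and deallocation:

```python
# Hypothetical stand-in for the cffi-generated module for ell; in a real
# binding these would be the raw C functions exposed by ffi/lib.
class _raw:
    @staticmethod
    def l_settings_new():
        return {"open": True}      # would be a C pointer in the real binding

    @staticmethod
    def l_settings_free(handle):
        handle["open"] = False

# Pythonic layer on top: a context manager instead of manual new/free.
class Settings:
    def __init__(self):
        self._h = _raw.l_settings_new()

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        _raw.l_settings_free(self._h)
```

Users who already know the C library can still drop down to the `_raw` layer if they dislike how the pythonic layer reorganizes things.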
ronan_ is now known as ronan
<mcon>
Is it possible to cross-compile the generated whatever.c? I am getting: ".../host/include/python3.9/pyport.h:741:2: error: #error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)."" along with other errors apparently caused by a mismatch between the cffi runtime environment and the cross-compilation environment.
<mcon>
How can I tell cffi to generate code for "another machine"?
brechtm has joined #pypy
brechtm has quit []
<ammar2>
mcon: can you specify your cross-compilation flags in extra_compile_args?
<mcon>
ammar2: I surely can, but I'm unsure about what exactly needs to be put there. Another thing I'm unsure about is how to specify compiler. I normally use something like: "PATH=/path/to/my/compiler:$PATH CC=mipsel-linux-gcc ..."
<mcon>
ammar2: problem is not *how* to call compiler, but *which* compiler to call.
forgottenone has joined #pypy
Gustavo6046 has quit [Ping timeout: 240 seconds]
otisolsen70_ has joined #pypy
otisolsen70 has quit [Ping timeout: 246 seconds]
<mcon>
Another problem I have: I actually have two things to build into my package, the cffi bindings proper and a wrapper module making things a bit more pythonic. My attempt at setup.py gives both the same name (ell_wrapper) and I am currently unable to generate the cffi part as plain "ell". Can someone help?
synaps3 has quit [Remote host closed the connection]
synaps3 has joined #pypy
otisolsen70_ has quit [Quit: Leaving]
jacob22 has joined #pypy
jacob22 has quit [Client Quit]
<mattip>
mcon: cffi should generate a standard C file, you should be able to cross compile it just like you do any other C file
<mattip>
you need to tell it where the target platform python include files are with a -I directive
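Putting the two suggestions together, a cross-build might look like this (every path here is a placeholder for your toolchain and target sysroot, not a real location):

```shell
# Hypothetical cross-compile of the cffi-generated C file: select the
# MIPS toolchain, then build against the *target* Python headers via -I
# rather than the host's, which is what triggers the LONG_BIT error.
export PATH=/opt/mipsel-toolchain/bin:$PATH
export CC=mipsel-linux-gcc

mipsel-linux-gcc -shared -fPIC \
    -I/path/to/target-sysroot/usr/include/python3.9 \
    whatever.c -o whatever.so
```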
jacob22 has joined #pypy
<mattip>
as for building a package, you can take a look at how PyPy does it, there are a handful of *build*.py scripts in the lib_pypy directory
dnshane has quit [Ping timeout: 272 seconds]
<mcon>
mattip: The problem is that it makes assumptions about int size, and I have a 32/64 mismatch on pointer size and integer size (e.g.: expected ‘size_t *’ {aka ‘unsigned int *’} but argument is of type ‘long unsigned int *’)
<mattip>
is map-improvements close to being able to merge?
synaps3 has quit [Remote host closed the connection]
<antocuni>
cfbolz: wow, that's impressive
<antocuni>
although I wonder whether it's an error of the benchmarks
<cfbolz>
antocuni: the C++ code comes from a Nature paper
<cfbolz>
So it should be good
<antocuni>
so maybe the error is in the python code :)
<antocuni>
although I'm not sure how to interpret that graph
<cfbolz>
Possible, but not likely
<cfbolz>
antocuni: just ignore the y axis
<antocuni>
ok, but then c++ is basically the slowest of all?
<cfbolz>
No, pythran and regular PyPy are slower
void_ has joined #pypy
<cfbolz>
mattip: it's getting there
<cfbolz>
I should write a jit test
<antocuni>
yes, but pypy just became faster, and pythran not-naive is faster
<cfbolz>
antocuni: right
<antocuni>
anyway, apart from those two, it's weird to see c++ so much slower than fortran or julia
<cfbolz>
Anyway, in the original paper there are many more languages
<cfbolz>
All slower than this group
<antocuni>
ah ok
<cfbolz>
Including cpython, perl and Lua, the slowest
<cfbolz>
antocuni: but yes, I suspect they should optimize the c++ code
<cfbolz>
However, the original paper basically said 'python is terrible, it's immoral to use it, because it costs so much CO2'; that's why Pierre is focusing on showing that it can be plenty fast, I think
<cfbolz>
He is writing a rebuttal
<antocuni>
aaaah ok
void_ has quit [Quit: Leaving]
<cfbolz>
antocuni: honestly, I am super happy to be even in the same order of magnitude as c++
<antocuni>
of course
<antocuni>
I'm surprised to see this result
<antocuni>
did you measure your branch on other benchmarks?
<cfbolz>
It doesn't seem to change much
<cfbolz>
It's really only useful if you have lots of int or float instance fields
<antocuni>
true
<cfbolz>
I hope we can get Pierre to write a blog post too
<antocuni>
ideally, it would be nice to be able to optimize instance fields as well, e.g. in the case where they exist only in the parent (think of a Rectangle having two Points). But I have no idea how to reconcile this with Python semantics :)
<cfbolz>
antocuni: yes, it's hard
<cfbolz>
antocuni: we can do that with user annotations
<antocuni>
yes, I was thinking of something like that
<antocuni>
in which the user declares that this "class" doesn't have identity-semantics or something like that
<cfbolz>
antocuni: yes, many languages have value classes
<antocuni>
uhm, I wonder whether it is possible to implement it even on top of CPython with some custom metaclass and/or decorator
<cfbolz>
antocuni: probably. But unclear it's a win there
<antocuni>
I agree, but at least you would have the same semantics on both interpreters (and it would be faster only on pypy)
<cfbolz>
Anyway, I am still not a big friend of that kind of optimization
<antocuni>
why?
<cfbolz>
antocuni: right
<cfbolz>
antocuni: because I want every code to be faster ;-)
<antocuni>
I tend to agree, but I fear that with python semantics there is an upper limit
<cfbolz>
Yes
<antocuni>
and consider that when people are looking for speed, they start to use cython, numba and all sort of tricks
<cfbolz>
Indeed
<antocuni>
so from this POV, using a @valueclass decorator is better and less invasive
<antocuni>
so, please go and implement it 😂
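One possible shape for such a decorator on CPython (entirely invented here, not an existing API): rebuild the class with `__slots__` for its annotated fields and give it structural equality, approximating "no identity semantics":

```python
# Hypothetical @valueclass decorator: fixes the field set via __slots__
# and replaces identity-based equality/hashing with value-based ones.
def valueclass(cls):
    fields = tuple(cls.__annotations__)

    def __init__(self, *args):
        for name, value in zip(fields, args):
            object.__setattr__(self, name, value)

    def __eq__(self, other):
        return (type(other) is type(self) and
                all(getattr(self, f) == getattr(other, f) for f in fields))

    def __hash__(self):
        return hash(tuple(getattr(self, f) for f in fields))

    ns = {"__slots__": fields, "__init__": __init__,
          "__eq__": __eq__, "__hash__": __hash__}
    # rebuild the class so __slots__ takes effect at class creation time
    return type(cls.__name__, cls.__bases__, ns)

@valueclass
class Point:
    x: int
    y: int
```

On CPython this mostly buys predictable semantics; the idea above is that a JIT could additionally use the declaration to store such objects unboxed.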
<cfbolz>
Heh
<cfbolz>
If I run out of other ideas ;-)
<antocuni>
what are your next ideas?
<cfbolz>
antocuni: docutils is slow, because of huge chains of promotions
<cfbolz>
We should stop after some number
<antocuni>
I thought we already do that?
<antocuni>
ah no, maybe it's only for guard_value?
<cfbolz>
antocuni: no, we don't have that
<cfbolz>
We'll just make more and more bridges
<antocuni>
ok, I remember wrongly then
<antocuni>
probably we only discussed solutions many times without implementing them :)
<cfbolz>
Indeed
ctismer has quit [Ping timeout: 265 seconds]
graingert has quit [Ping timeout: 272 seconds]
jaraco has quit [Ping timeout: 264 seconds]
michelp has quit [Ping timeout: 264 seconds]
idnar has quit [Ping timeout: 264 seconds]
EWDurbin has quit [Read error: Connection reset by peer]
cfbolz has quit [Ping timeout: 272 seconds]
michelp has joined #pypy
cfbolz has joined #pypy
jaraco has joined #pypy
idnar has joined #pypy
ctismer has joined #pypy
EWDurbin has joined #pypy
graingert has joined #pypy
graingert has quit [Excess Flood]
graingert has joined #pypy
Gustavo6046 has quit [Read error: Connection reset by peer]
Gustavo6046 has joined #pypy
tsaka__ has quit [Ping timeout: 265 seconds]
tsaka__ has joined #pypy
tsaka__ has quit [Ping timeout: 264 seconds]
Hexxeh has joined #pypy
Hexxeh has left #pypy [#pypy]
<_aegis_>
how is that different from slots?
<cfbolz>
_aegis_: it's not just slots
<cfbolz>
We had automatic slots forever
<_aegis_>
yeah, I'm wondering what the distinction is
<cfbolz>
_aegis_: it stores the unboxed value of the integers and floats fields more directly in the object
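A rough illustration of that storage difference in plain CPython (just to show the model, not the branch's actual layout): with `__slots__` each field is still a pointer to a boxed object, whereas unboxed storage keeps the raw machine value inline:

```python
from array import array

# Boxed: a list stores pointers; each float is a separate heap object
# with its own header, which is what slotted instance fields hold too.
boxed = [1.5, 2.5, 3.5]

# Unboxed: an array of C doubles stores the raw 8-byte values inline,
# roughly the layout the branch gives int/float instance fields.
unboxed = array("d", [1.5, 2.5, 3.5])
```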