cfbolz changed the topic of #pypy to: PyPy, the flexible snake (IRC logs: https://botbot.me/freenode/pypy/ ) | use cffi for calling C | mac OS and Fedora are not Windows
<felix34>
The website says "PyPy's x86 version runs on several operating systems, such as Linux (32/64 bits), Mac OS X (64 bits), Windows (32 bits), OpenBSD, FreeBSD." Does that mean PyPy builds without issue on *BSD?
<felix34>
I see no *BSD binaries...
<simpson>
felix34: Which BSD in particular? It seems like you'll be grabbing/building from a ports tree.
marvin_ has quit [Remote host closed the connection]
marvin has joined #pypy
<felix34>
simpson: openbsd. yes, I investigated further and indeed, a ports tree.
<felix34>
I suppose the 8GB RAM requirement applies?
<simpson>
Yeah, for 64-bit builds with JIT enabled, that sounds like a good amount of RAM.
<felix34>
simpson: The last time I built PyPy there was a minimum RAM requirement to build. Maybe it has changed
<fijal>
felix34: it should work, but we sometimes don't actively maintain it
<fijal>
I'm not sure what the status of the buildbots is
<cfbolz>
fijal: feel like taking a look at my previous commit?
<fijal>
Not at the moment, but yes, remind me later?
<cfbolz>
ok
forgottenone has quit [Quit: Konversation terminated!]
antocuni has joined #pypy
tayfun26 has joined #pypy
tayfun26 has quit [Quit: tayfun26]
nunatak has joined #pypy
dddddd has joined #pypy
dmalcolm has joined #pypy
nunatak has quit [Read error: Connection reset by peer]
antocuni has quit [Ping timeout: 268 seconds]
forgottenone has joined #pypy
lritter has joined #pypy
antocuni has joined #pypy
dmalcolm has quit [Ping timeout: 268 seconds]
cpyprog has joined #pypy
alexband has joined #pypy
cpyprg has joined #pypy
cpyprg has left #pypy [#pypy]
cpyprg has joined #pypy
cpyprg has quit [Client Quit]
cpyprog has quit [Quit: Page closed]
jacob22__ has joined #pypy
jcea has joined #pypy
alexband has quit [Remote host closed the connection]
alexband has joined #pypy
arigato has joined #pypy
alexband has quit [Ping timeout: 244 seconds]
speeder39_ has joined #pypy
arigato has quit [Quit: Leaving]
<cfbolz>
antocuni: feel like taking a brief look at expose-gc-time?
<antocuni>
cfbolz: sure
<antocuni>
cfbolz: it looks good
<antocuni>
do you also plan to pass this info to the GC hooks?
<cfbolz>
antocuni: what do you mean by GC hooks?
<cfbolz>
the GC module? yes
<cfbolz>
see next commit ;-)
<antocuni>
no I mean pypy.module.gc.hook
<cfbolz>
ah. I wasn't planning to, no
<antocuni>
I think that API-wise it's a bit weird to have this stat in gc._get_stats() but not in the hooks, but I'm not going to force you to implement that :)
<cfbolz>
antocuni: well, what would the API on the hooks look like?
<antocuni>
I would expect an extra field which tells me how much time I spent doing this minor collection / step / complete collection
<antocuni>
wait
<antocuni>
I see that there is *already* a "duration" field :)
<cfbolz>
yes, I am seeing that now too ;-)
<cfbolz>
so now we have two ways to do this, which is bad
<cfbolz>
antocuni: what's the unit of duration? whatever weird thing read_timestamp returns?
<antocuni>
yes
<cfbolz>
so we should decide whether get_stats returns time in the same weird unit
<antocuni>
I planned to write some pip installable module to help people to convert this unit into seconds, but never did
<antocuni>
it's the most precise way we have to measure time, so I suppose we should use it
<antocuni>
cfbolz: on linux64, to convert this into ms I divide by FREQUENCY
<antocuni>
FREQUENCY = cpuinfo.get_cpu_info()['hz_advertised_raw'][0] / 1000.0 # KHz
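A minimal sketch of the conversion antocuni describes, assuming the pip-installable py-cpuinfo package; as in his line above, the first element of the 'hz_advertised_raw' tuple is taken to be the advertised frequency in Hz (the key name varies between py-cpuinfo releases):

    import cpuinfo  # the pip-installable "py-cpuinfo" package

    # Mirrors antocuni's FREQUENCY line: advertised Hz divided by 1000
    # gives ticks per millisecond (kHz).
    FREQUENCY = cpuinfo.get_cpu_info()['hz_advertised_raw'][0] / 1000.0

    def duration_to_ms(duration_ticks):
        # duration_ticks is a raw read_timestamp delta (TSC ticks on x86)
        return duration_ticks / FREQUENCY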
<cfbolz>
ok, but it's also completely meaningless :-P
<antocuni>
cfbolz: see also pypy/doc/gc_info.rst
<antocuni>
and look at the "GC Hooks" section; there is an explanation of that value
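A minimal sketch of installing one of the hooks from pypy.module.gc.hook that antocuni refers to, following the gc_info.rst description (only the duration field is shown; the single-stats-argument callback signature is assumed from that document):

    import gc  # on PyPy, gc.hooks exposes the callbacks discussed here

    def on_minor(stats):
        # stats.duration is in the raw read_timestamp unit and, as
        # gc_info.rst says, covers *all* minor collections since the
        # last time the hook fired.
        print('minor collection(s) took %d ticks' % stats.duration)

    gc.hooks.on_gc_minor = on_minor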
<cfbolz>
wait, I am confused
<cfbolz>
this sounds weird: ``stats.duration`` contains the **total** time spent by the GC for this specific event since the last time the hook was called.
<cfbolz>
ah, is that if we do more than one minor collect, but only call the hook once?
<antocuni>
yes
<antocuni>
I saw that it actually happens in practice
<cfbolz>
ok
<cfbolz>
antocuni: hm, if we use the timestamp, how quickly will the total time overflow?
<antocuni>
if I did the computations correctly, it should be ~109 years on my machine
<cfbolz>
antocuni: no wait, it will be returned in the unconverted unit
<cfbolz>
We can't assume that hz_advertised_raw is there on all OSes
<antocuni>
yes, but I was answering "how long it will take to overflow"
<cfbolz>
Ah, right
<cfbolz>
It's a lot less on 32 bit systems of course
<antocuni>
I think that the TSC is guaranteed to be 64 bit
<antocuni>
and the "duration" field is annotated as r_longlong (see e.g. GcMinorHookAction.fix_annotation)
<cfbolz>
antocuni: yes but I can't use long long for the stats, because right now all other stats are ints, and they need to be the same type
<antocuni>
ouch :(
<antocuni>
I suppose the correct way to solve this would be to have a saner API for rgc.get_stats
<antocuni>
alternatively, you could have TOTAL_TIME_LOW and TOTAL_TIME_HIGH, which return the two halves :)
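The TOTAL_TIME_LOW / TOTAL_TIME_HIGH names are antocuni's joke, not an existing API, but splitting a 64-bit total into two machine-sized ints and reassembling it would look roughly like this:

    # Hypothetical: expose a 64-bit tick total as two 32-bit halves.
    def split_halves(total_ticks):
        return total_ticks & 0xFFFFFFFF, total_ticks >> 32

    def join_halves(low, high):
        return (high << 32) | low

    low, high = split_halves(3441000000000)
    assert join_halves(low, high) == 3441000000000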
<cfbolz>
antocuni: honestly, I am more tempted to keep what I have. The timestamps might be a little bit more precise, but the fact that turning them into microseconds is completely platform-dependent is quite annoying to me
<antocuni>
I agree it's ugly
<antocuni>
the other reason I didn't use time.time() originally was that I was worried about potentially doing a syscall on every minor collection
<antocuni>
I think that on modern Linuxes time.time() is fast; not sure about other OSes
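One way to check antocuni's worry about the per-call cost is to measure time.time() with timeit; on modern Linux the call usually goes through the vDSO rather than a real syscall, but the numbers are OS- and libc-dependent:

    import timeit

    n = 10**6
    per_call = timeit.timeit('time.time()', setup='import time', number=n) / n
    print('time.time(): %.1f ns per call' % (per_call * 1e9))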
jacob22__ has quit [Ping timeout: 260 seconds]
<cfbolz>
antocuni: another option would be to bite the bullet and write rpython code to query the conversion on various platforms
<antocuni>
yes, apart from the fact that it's non-trivial. I couldn't find any way to do it on Linux without parsing /proc/cpuinfo
<antocuni>
and I searched a lot :)
<antocuni>
but I agree it would be the best solution
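A sketch of the /proc/cpuinfo parsing antocuni mentions; note that the "cpu MHz" field reports the current core frequency, which on modern CPUs can differ from the (invariant) TSC rate, which is part of why there is no clean solution:

    def read_cpu_mhz():
        # Linux-only; returns the "cpu MHz" value of the first core, or None.
        with open('/proc/cpuinfo') as f:
            for line in f:
                if line.startswith('cpu MHz'):
                    return float(line.split(':', 1)[1])
        return None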
<cfbolz>
antocuni: and then we could even make the hooks return a result in seconds
<antocuni>
yes
<cfbolz>
antocuni: I think we parse cpuinfo in PyPy already anyway, to find the size of the cache?
<antocuni>
ah, could be
<cfbolz>
OK, let's see how motivated I am about this
<antocuni>
go go go :)
<fijal>
\o/ /o\ \o/ /o\
<fijal>
how am I as a cheerleader?
<cfbolz>
fijal: does the plan make sense?
<fijal>
from the 100-meter view, yes :)
<fijal>
but I don't remember much details
<cfbolz>
antocuni: fwiw, the pip module exists ;-)
<antocuni>
cfbolz: yes, it's what I used in the example above. What I meant was a PyPy-specific module which takes the value of __pypy__.debug_get_timestamp_unit() and does the proper thing
nunatak has joined #pypy
<cfbolz>
antocuni: just saying that this: "I planned to write some pip installable module to help people to convert this unit into seconds, but never did" is easy given the existence of the module I linked
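A sketch of the PyPy-specific helper antocuni has in mind, combining __pypy__.debug_get_timestamp_unit() (mentioned above) with py-cpuinfo; the 'tsc' and 'ns' strings checked here are assumptions about what that function reports:

    import __pypy__
    import cpuinfo  # py-cpuinfo

    def duration_to_seconds(duration):
        unit = __pypy__.debug_get_timestamp_unit()
        if unit == 'ns':    # assumed: value is already in nanoseconds
            return duration / 1e9
        if unit == 'tsc':   # assumed: raw TSC ticks, divide by CPU frequency
            return duration / float(cpuinfo.get_cpu_info()['hz_advertised_raw'][0])
        raise ValueError('unknown timestamp unit: %r' % (unit,))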
forgottenone has quit [Quit: Konversation terminated!]
lritter has quit [Ping timeout: 264 seconds]
speeder39_ has quit [Quit: Connection closed for inactivity]
nunatak has quit [Quit: Leaving]
lritter has joined #pypy
lritter has quit [Remote host closed the connection]