cfbolz changed the topic of #pypy to: PyPy, the flexible snake (IRC logs: https://quodlibet.duckdns.org/irc/pypy/latest.log.html#irc-end ) | use cffi for calling C | if a pep adds a mere 25-30 [C-API] functions or so, it's a drop in the ocean (cough) - Armin
<mattip>
maybe it is messing with the register allocation expected in the JIT backend?
* mattip
moving on to fixing ssl, this is beyond my depth
<tos9>
random odd question which is maybe slightly related to revdb -- can the way revdb is implemented be used to diff two program executions?
<tos9>
I assume revdb basically serializes the whole world?
<tos9>
If I have then two programs and I want to know all the objects that differ between them, is that something borrowable from how revdb works?
<tos9>
(and then once I have those objects probably I want to heuristically suggest some that are worth looking at -- the use case is also debugging, but where I don't want to trawl through two debuggers looking for the place their execution diverges)
<mattip>
since birdseye writes to a sql database, you might be able to use the ast and write some sql to find the first divergence
<tos9>
cfbolz / energizer: cool, thanks
junna has joined #pypy
<cfbolz>
tos9: do you have a difference at the end of the run?
<tos9>
cfbolz: yes
<tos9>
cfbolz: I'm reimplementing a section of a library, and get different results at the end
<cfbolz>
If yes, you could set a watchpoint for the differing result, then say bcontinue and observe where it changes
<tos9>
so I want quick answers to "tell me the first time within some set of stack frames that an object was different"
<tos9>
cfbolz: (if I understand that idea properly) -- I want the intermediate object, not the end one
<cfbolz>
tos9: and there are tons of intermediate new objects created all the time?
<tos9>
cfbolz: I have f(foo) -> ... -> g() -> ... -> X != Y
<tos9>
And I implement g
<tos9>
and then for some code paths (arguments to f), g() returned some wrong result
<tos9>
So I want to know, if I have g and originalG, how the objects between f|g and X differ from the ones between f|originalG and Y
<tos9>
and there are lots of them
<cfbolz>
Right
<tos9>
so somewhere between the ... after g() I get some different objects (ones outside the call stack of g()) and I want to see them basically
<tos9>
which right now I'm finding manually in a debugger
<tos9>
I'm sure I could script that though, yeah -- so I guess that just scripting a regular debugger to do it is equally tenable
<tos9>
(the annoying bit in my case is that I switch back and forth between git branches and python interpreters to do it, so that's a bit of extra annoyance)
<cfbolz>
tos9: can you maybe do something simpler like hash the objects at every step, and print the hashes?
<cfbolz>
Or just a trace hook that prints events and then you diff the logs
<tos9>
(thinking whether that would have caught the last bug I manually tracked down)
<tos9>
probably worth a shot I guess
<tos9>
the trace hook I mean -- for hashing, I think the issue would still be that I don't know which objects to look for?
<tos9>
Uh although I guess the idea is to instrument both implementations of g's return values, sorry, now I guess I understand
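A minimal sketch of the trace-hook idea cfbolz suggests: run each implementation under `sys.settrace`, record the (event, function, line) stream, and diff the two logs to find the first divergence. The functions `g_old`/`g_new` and the helpers are hypothetical stand-ins, not from the actual library being debugged.

```python
import sys

def trace_log(func, *args):
    """Run func under sys.settrace, recording (event, function name, lineno)."""
    log = []

    def tracer(frame, event, arg):
        log.append((event, frame.f_code.co_name, frame.f_lineno))
        return tracer  # keep tracing line/return events in this frame

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return log

def g_old(x):          # hypothetical original implementation
    return x * 2

def g_new(x):          # hypothetical reimplementation with a subtle bug
    if x > 10:
        return x + x
    return x * 3       # diverges for small x

def first_divergence(log_a, log_b):
    """Index and pair of the first differing trace entries, or None."""
    for i, (a, b) in enumerate(zip(log_a, log_b)):
        if a != b:
            return i, a, b
    return None

divergence = first_divergence(trace_log(g_old, 3), trace_log(g_new, 3))
```

Diffing raw line numbers is crude (the two implementations live at different source lines), so in practice one would log something more semantic, e.g. return values hashed per frame, as suggested above.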
<mattip>
that will put you inside the docker image with a root shell, you will want to cd to the pypy checkout and su as the user you set the UID for when building
<mattip>
su buildslave
<mattip>
cd /build_dir
<mattip>
then you are good to test, debug, whatever
<mattip>
python2 pytest.py ...
<mattip>
the -v flags map directories outside to directories inside; -it means interactive, and maps a tty for I/O
Ai9zO5AP has joined #pypy
junna has left #pypy [#pypy]
<mattip>
maybe the UID thing is not clear. The docker build creates a user inside named buildslave with the UID that you fed in during "docker build"
<mattip>
so when you do "su buildslave", effectively you map buildslave (inside the docker) to the user with the UID outside the docker
<mattip>
which is why setting it to $UID does magic
adamholmberg has quit [Remote host closed the connection]
squeaky_pl has joined #pypy
<squeaky_pl>
mattip, about the SSL patch: you didn't cover the case when the certificate store is not present on the platform. Sometimes people install portable PyPy on a bare-bones Ubuntu that doesn't have one, so it was practical to ship a certificate store inside the build as a last resort for those people.
<mattip>
yes, thanks for the explanation. I was wondering what the use case is for that.
<mattip>
What does CPython do in that case?
<squeaky_pl>
Nothing, CPython always relies on "system SSL" and it's essentially broken when there is no certificate store.
<squeaky_pl>
Well, it's up to you to decide, but a bare-bones Ubuntu docker image does not come with an SSL store.
<mattip>
which image? I will try it out
<squeaky_pl>
Let me check
marky1991 has joined #pypy
<mattip>
maybe we could raise an exception with a nicer message telling them how to get an SSL store
<mattip>
although poking around in the pip sources I see they vendor a cacert.pem in
<mattip>
which does not seem to discriminate between static or dynamic linking
<mattip>
I can try out the scenario that tripped up the reporter in issue 9, maybe now that libpypy itself does not link to openssl the problem has gone away
<squeaky_pl>
mattip, it can be entirely true that it is no longer valid in 2019
<mattip>
squeaky_pl: thanks for looking this over, it needs a good review
<squeaky_pl>
it's questionable of course whether this is a valid use case, but I got people complaining; there are some people that strip their docker images to the bare minimum
<Dejan>
I build lots of my Docker images from scratch
<squeaky_pl>
but of course you need to draw a line somewhere; I considered including the cert store in the build not too much of a hassle, so I went for it
<mattip>
since we have ensurepip, which has a vendored store so pip can function, people can do "pip install certifi"
<Alex_Gaynor>
arigato: I assume it's currently failing for you with "ptrace: operation not permitted", if it's failing with something else, I have no idea. that's how gdb usually fails for me with docker though
<Alex_Gaynor>
arigato: try making it the first argument after `run`?
<arigato>
arigo@baroquesoftware:~/hg/buildbot$ docker run --add-caps=SYS_PTRACE -v/home/arigo/pypysrc:/build_dir -v/tmp:/tmp buildslave_i686 /bin/bash
<arigato>
unknown flag: --add-caps
<arigato>
See 'docker run --help'.
<Alex_Gaynor>
arigato: ah, it's `--cap-add`, not `add-caps`, sorry
<arigato>
thanks
<arigato>
yes, works better
adamholmberg has quit [Remote host closed the connection]
<Alex_Gaynor>
cool
<kenaan>
rlamy py3.6 8e5e71e1a26e /pypy/objspace/std/: Return W_IntObject from float.__round__() when possible. This should speed up all calculations involving int(round(<...
olliemath has quit [Remote host closed the connection]
olliemath has joined #pypy
olliemath has quit [Remote host closed the connection]
xyz111112 has quit [Remote host closed the connection]
squeaky__ has quit [Ping timeout: 265 seconds]
<arigato>
mattip: it's entirely obscure
<arigato>
it seems that the segfault disappears if I replace the value 0xAAAAAAAAAAAAA with a smaller value that actually fits inside 32 bits
<arigato>
ah!
<arigato>
for some reason, this "long" is implicitly typed as SignedLongLong
<arigato>
so what occurs is that the assembly code is called with the wrong arguments
<arigato>
argh this is bound to create similar obscure issues
<arigato>
I'll try to raise when lltype.typeOf(0xAAAAAAAAAAAAA) is called on 32-bit, instead of deciding that returning SignedLongLong makes sense
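The fix arigato describes can be illustrated in plain Python (this is a sketch of the idea, not the actual RPython `lltype.typeOf` code): on a 32-bit build, refuse to silently type an integer literal that does not fit a machine word, instead of inferring SignedLongLong and calling assembly with mismatched argument widths.

```python
def type_of_int(value, word_bits=32):
    """Name the lltype of an integer constant; raise instead of silently
    widening to a long long, mimicking the new behaviour on 32-bit."""
    lo = -(2 ** (word_bits - 1))
    hi = 2 ** (word_bits - 1) - 1
    if lo <= value <= hi:
        return "Signed"
    # Previously this would have implicitly become "SignedLongLong";
    # now the caller must ask for a long long explicitly.
    raise OverflowError(
        "%#x does not fit a %d-bit Signed; use r_longlong explicitly"
        % (value, word_bits))
```

So `type_of_int(0xAAAAAAAAAAAAA)` raises on a 32-bit word size (the literal needs 52 bits) but types as `Signed` with `word_bits=64`, matching the commit that follows.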
olliemath has joined #pypy
<olliemath>
hi - I'm trying to run the tests for a pypy3.6 branch, but without success - I wondered if you had any tips?
<olliemath>
currently ./run_pytest.py lib-python/3/test/test_datetime.py fails in both py2 and py3 venvs
<kenaan>
arigo default 56cb51f3c081 /rpython/: Prevent lltype.typeOf(2<<32) from returning SignedLongLong on 32-bit just because 2<<32 doesn't fit into a regular ...
<ronan>
olliemath: run 'python -m test.test_datetime' in a pypy3 venv
squeaky__ has joined #pypy
<ronan>
also what's run_pytest.py??
<olliemath>
@ronan that worked - thanks!
<olliemath>
it's a script in the top level of the repo
<olliemath>
I assumed it was doing something specific to the repo (e.g. hacking the pythonpath)