cfbolz changed the topic of #pypy to: PyPy, the flexible snake (IRC logs: https://botbot.me/freenode/pypy/ ) | use cffi for calling C | the secret reason for us trying to get PyPy users: to test the JIT well enough that we're somewhat confident about it
nimaje has joined #pypy
jcea has joined #pypy
marr has quit [Ping timeout: 260 seconds]
antocuni has quit [Ping timeout: 255 seconds]
[Arfrever] has joined #pypy
julius has left #pypy ["Leaving"]
tbodt has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
tbodt has joined #pypy
<kenaan> rlamy py3tests 64d59e349b56 /pypy/module/_demo/test/test_import.py: fix test
jcea has quit [Quit: jcea]
tbodt has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
tbodt has joined #pypy
tbodt has quit [Ping timeout: 260 seconds]
tbodt has joined #pypy
RemoteFox has quit [Ping timeout: 268 seconds]
jerith has quit [Remote host closed the connection]
tbodt has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
dfee has joined #pypy
<dfee> what kind of madness is this pypy interpreter? https://paste.ofcode.org/VCKuQuigVXuuGRrb9b2Hb8
Graypup_ has quit [Quit: ZNC 1.6.1 - http://znc.in]
asmeurer__ has joined #pypy
<njs> dfee: currently pypy is compatible with python 3.5
<njs> dfee: those trailing commas only started to be legal in 3.6
<njs> dfee: bizarrely enough
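(editor's note: the original paste is gone, but presumably it contained a trailing comma after *args or **kwargs in a function signature, which only became legal in Python 3.6; a minimal, hypothetical illustration:)
    def f(*args, **kwargs,):     # trailing comma after **kwargs:
        return args, kwargs      # SyntaxError on CPython/PyPy 3.5, accepted from 3.6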
asmeurer__ has quit [Quit: asmeurer__]
asmeurer__ has joined #pypy
jacob22 has quit [Quit: Konversation terminated!]
dddddd has quit [Remote host closed the connection]
asmeurer__ has quit [Quit: asmeurer__]
asmeurer__ has joined #pypy
<alcarithemad> dfee: that does in fact work in the 3.6 branch as of at least a couple months ago
njs has quit [Quit: Coyote finally caught me]
njs has joined #pypy
jamesaxl has quit [Quit: WeeChat 2.1]
illume has joined #pypy
dfee has quit [Ping timeout: 256 seconds]
dmalcolm has quit [Ping timeout: 248 seconds]
Hasimir has quit [*.net *.split]
asmeurer__ has quit [Quit: asmeurer__]
asmeurer_ has joined #pypy
asmeurer_ has quit [Client Quit]
asmeurer__ has joined #pypy
lritter has joined #pypy
asmeurer__ has quit [Ping timeout: 255 seconds]
dmalcolm has joined #pypy
dfee has joined #pypy
<dfee> njs: alcarithemad i had no idea those were illegal in 3.5. i think i used them in cpython 3.5?
asmeurer has joined #pypy
asmeurer has quit [Ping timeout: 264 seconds]
energizer has joined #pypy
jerith has joined #pypy
tayfun26 has joined #pypy
<arigato> dfee: no, cpython 3.5 behaves in the same way as pypy 3.5
<arigato> at least cpython 3.5.3. I guess it didn't change in minor versions but you never know
<arigato> ...in micro version, I think they are called
rubdos has quit [Quit: WeeChat 2.0.1]
rubdos has joined #pypy
dfee has quit [Ping timeout: 264 seconds]
wleslie has quit [Quit: ~~~ Crash in JIT!]
rubdos has quit [Quit: WeeChat 2.0.1]
rubdos has joined #pypy
asmeurer__ has joined #pypy
asmeurer__ has quit [Client Quit]
RemoteFox has joined #pypy
marself has joined #pypy
dfee has joined #pypy
dfee1 has joined #pypy
dfee has quit [Ping timeout: 246 seconds]
wleslie has joined #pypy
marr has joined #pypy
dfee1 has quit [Ping timeout: 255 seconds]
inad924 has joined #pypy
dfee1 has joined #pypy
illume has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
asmeurer has joined #pypy
asmeurer has quit [Client Quit]
asmeurer has joined #pypy
asmeurer has quit [Client Quit]
asmeurer__ has joined #pypy
asmeurer__ has quit [Client Quit]
asmeurer__ has joined #pypy
asmeurer__ has quit [Client Quit]
asmeurer__ has joined #pypy
asmeurer__ has quit [Client Quit]
asmeurer has joined #pypy
asmeurer has quit [Client Quit]
dfee1 has quit [Ping timeout: 240 seconds]
asmeurer_ has joined #pypy
Hasimir has joined #pypy
asmeurer_ has quit [Client Quit]
wleslie has quit [Ping timeout: 264 seconds]
dfee1 has joined #pypy
asmeurer_ has joined #pypy
asmeurer_ has quit [Client Quit]
asmeurer_ has joined #pypy
asmeurer_ has quit [Client Quit]
jamesaxl has joined #pypy
agates has quit [Ping timeout: 256 seconds]
yuvipanda has quit [Ping timeout: 260 seconds]
dash has quit [Ping timeout: 255 seconds]
bendlas has quit [Ping timeout: 269 seconds]
asmeurer_ has joined #pypy
asmeurer_ has quit [Client Quit]
asmeurer__ has joined #pypy
asmeurer__ has quit [Client Quit]
asmeurer__ has joined #pypy
ceridwen has quit [Ping timeout: 265 seconds]
antocuni has joined #pypy
Hasimir has quit [*.net *.split]
asmeurer__ has quit [Ping timeout: 248 seconds]
<antocuni> arigato: ping
wleslie has joined #pypy
mcyprian has joined #pypy
Hasimir has joined #pypy
wleslie has quit [Quit: ~~~ Crash in JIT!]
energizer has quit [Disconnected by services]
dash has joined #pypy
energizer has joined #pypy
inad923 has joined #pypy
inad924 has quit [Ping timeout: 240 seconds]
marr has quit [Ping timeout: 255 seconds]
marself has quit [Ping timeout: 268 seconds]
marself has joined #pypy
Eran has joined #pypy
bendlas has joined #pypy
yuvipanda has joined #pypy
agates has joined #pypy
<Eran> Hi, we have a problem in production. We are running PyPy code that forks processes, and it seems like one of the forked processes is hanging.
energizer has quit [Ping timeout: 260 seconds]
<Eran> How can we debug this issue when it happens? Any ideas?
jcea has joined #pypy
ceridwen has joined #pypy
antocuni has quit [Ping timeout: 250 seconds]
ceridwen has quit [Ping timeout: 256 seconds]
ceridwen has joined #pypy
marr has joined #pypy
jcea has quit [Quit: jcea]
jcea has joined #pypy
marky1991 has joined #pypy
jcea has quit [Quit: jcea]
inad924 has joined #pypy
inad923 has quit [Ping timeout: 255 seconds]
jcea has joined #pypy
dfee1 has quit [Ping timeout: 268 seconds]
lritter has quit [Quit: Leaving]
marky1991 has quit [Remote host closed the connection]
dddddd has joined #pypy
antocuni has joined #pypy
user24 has joined #pypy
marky1991 has joined #pypy
mcyprian has quit [Ping timeout: 240 seconds]
raynold has quit [Quit: Connection closed for inactivity]
<arigato> antocuni: pong
<arigato> Eran: if the subprocess hangs for unknown reasons, try to attach a gdb to it (or a MSVC debugger on Windows)?
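(editor's note: a minimal sketch of what arigato suggests, assuming Linux and a made-up pid; gdb only gives C-level backtraces, which is usually enough to see whether the process is blocked in a syscall or on a lock:)
    $ gdb -p 12345                # attach to the hung forked worker (12345 is hypothetical)
    (gdb) thread apply all bt     # C-level backtrace of every thread
    (gdb) detach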
mcyprian has joined #pypy
dfee1 has joined #pypy
<antocuni> arigato: so, I have some questions about the GC
<antocuni> the first is about PYPY_GC_INCREMENT_STEP: looking at the code, it seems it does something different than what is documented in the docstring
<antocuni> in particular, "The minimum is set to size that survives minor collection * 1.5 so we reclaim anything all the time."
<antocuni> if I read the logic at incminimark.py:2316 correctly, I think that what happens is that "estimate" is always at least nursery_size*2; i.e., it always depends on the size of the whole nursery, not on the size of the surviving objects
dfee1 has quit [Ping timeout: 260 seconds]
<arigato> "estimate_from_nursery" is based on self.nursery_surviving_size
<arigato> not self.nursery_size
<antocuni> oh, right
<arigato> I think the numbers in the docstring are wrong
<antocuni> but then at least the magic numbers in the docstring look wrong?
tayfun26 has quit [Read error: Connection reset by peer]
<arigato> default size appears to be 4 * nursery size, and minimum is 2 * surviving_minor_collection
<antocuni> ok, but then it means that by default, it will always be based on the nursery_size, instead of nursery_surviving_size?
<antocuni> because estimate == 4*nursery_size, which is always > nursery_surviving_size*2
<arigato> no, I think that "nursery_surviving_size" includes the large young objects
<antocuni> ah
<arigato> ...no?
<antocuni> I see this line, "self.nursery_surviving_size += raw_malloc_usage(totalsize)"
<antocuni> which probably means you are correct
<arigato> where? it appears several times but AFAICT always on nursery objects
<antocuni> right
<arigato> no, it's self.size_objects_made_old I had in mind:
<arigato> see the long comment before line 360
<antocuni> ok. But then my original remark about "estimate" is correct
<arigato> yes
<arigato> it's just that self.size_objects_made_old is used elsewhere, to sometimes force more than one step of the major gc to occur
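(editor's note: a hedged paraphrase of the marking-step budget being discussed, reconstructed from this conversation rather than copied from incminimark.py; attribute names are the ones used above:)
    estimate = self.gc_increment_step              # PYPY_GC_INCREMENT_STEP, default 4 * nursery_size
    estimate_from_nursery = self.nursery_surviving_size * 2
    if estimate_from_nursery > estimate:
        estimate = estimate_from_nursery
    # with the default increment step, the budget is effectively always
    # 4 * nursery_size -- which is antocuni's point above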
bremner has quit [Quit: Coyote finally caught me]
<antocuni> a bit of context: I started to dig into this because I saw that on a real-world application, collect-step seemed to make long pauses
<antocuni> up to ~100 ms
<antocuni> so I investigated a bit, and found that this machine has a very large L2 cache (25MB), so the nursery is big, and "estimate" as well
<arigato> ah, argh
<arigato> then just setting PYPY_GC_NURSERY=4MB should help?
<antocuni> if I understand things correctly (which is unlikely :)), I think that we could improve the incrementality of our GC by making sure that "estimate" is always based on the surviving size, instead of the full nursery; is it ever useful to base it on nursery_size, after all?
<antocuni> arigato: yes, PYPY_GC_NURSERY=6M helped, as well as PYPY_GC_INCREMENT_STEP=2M
<antocuni> (although I'm not sure why the latter helps, looking at the code)
<arigato> probably because the default is 4 * PYPY_GC_NURSERY, which is still a bit too much
<antocuni> no, I'm saying that both settings improve the situation, even when used separately
marky1991 has quit [Remote host closed the connection]
<antocuni> ah right, I see
<arigato> ha
<antocuni> when I set PYPY_GC_INCREMENT_STEP, I'm saying: "collect either 2MB, or surviving_size*2, the largest"
marky1991 has joined #pypy
<antocuni> so if I have a situation in which many objects die young, surviving_size*2 is smallish
<antocuni> and since the nursery is so big, it is likely that they die young
bremner has joined #pypy
<arigato> right
<arigato> it's a bit unclear why we use nursery_size in that estimate
<antocuni> maybe because when you wrote it, you didn't have nursery_surviving_size yet?
<arigato> no, I think it's some fear I have but never made very concrete:
<arigato> what occurs if the memory usage from the program grows fast
<arigato> if the incremental gc is running too slowly, it means that memory usage grows faster than that
<arigato> (where "memory usage" means the really used, reachable memory)
<antocuni> but it cannot grow more than nursery_surviving_size, can it?
<antocuni> ah, maybe it can if you count young-but-large objects?
<arigato> yes, something like that
<arigato> self.size_objects_made_old is a later fix
<antocuni> but then it can grow indefinitely even if you use nursery_size*2
<arigato> I think the problem is fixed with self.size_objects_made_old
<antocuni> so, something like: estimate=size_objects_made_old * k, where k>1 ?
<arigato> but I'm not 100% sure, and I wasn't at the time either, so I preferred to keep a largish estimate
<antocuni> one way to handle it is that if we are wrong, eventually someone will report a memory leak :)
<arigato> meh :-)
* antocuni looks at how size_objects_made_old is computed
<arigato> I *think* that we could check what occurs even if we have estimate=smaller-than-nursery_surviving_size
<arigato> if I'm reading the loop at line 783 correctly
<antocuni> I think that size_objects_made_old doesn't help for computing estimate: the comment says that it's the size since the last major collection, but we want the size since the last MINOR one
<antocuni> and I'm not sure I get what you are saying about line 783
<arigato> I'm saying that the global invariant that we try to maintain is that the GC major steps do sufficient work so that after a full cycle, we have only 50% more memory used than at the start of the full cycle, or something
<antocuni> "full cycle" == "a complete major collection"?
<arigato> yes
<antocuni> uhm, I think I start to grasp the logic. Let's forget about large objects for now
<antocuni> at each minor collection, we increment the used memory by at most nursery_size (if everything survives)
<antocuni> but to collect nursery_size of memory, we need at least two steps: one for marking, and one for sweeping
<antocuni> so we mark an amount which is twice the nursery
<antocuni> or something along these lines?
<arigato> well, sweeping uses different thresholds anyway
<arigato> but yes
<arigato> the idea is to have marking progress "fast enough"
<antocuni> right, I see that sweeping is somehow based on 3*nursery_size
<antocuni> so, basically: I think that we should compute estimate from "size_objects_made_old_since_the_last_minor_collection"
<arigato> I also think the loop at :783 is printing, confusingly, that two major gc steps occurred, but the real program couldn't run between them
<antocuni> and possibly let the user change the factor using an env variable, so that they can tweak the incrementality
<antocuni> arigato: speaking of that, I have some interesting real-world chart to show, based on PYPYLOG
<arigato> antocuni: not quite, the current logic is probably better
<antocuni> why?
<arigato> because :783 achieves the same result, but is more general: it works for example even if the major gc is in another phase than marking
mcyprian has quit [Ping timeout: 248 seconds]
<antocuni> ah, you mean that it works even if we make estimate too small?
<arigato> yes, but also, it works also to speed up sweeping, for example
<antocuni> ok, so I guess we can safely say "estimate = self.nursery_surviving_size * k"
<arigato> yes, I think the conclusion is that we can use here an estimate that is generally good, and not worry about rare cases
<antocuni> cool, I'll try to implement this in a branch. What about killing PYPY_GC_INCREMENT_STEP and introducing PYPY_GC_INCREMENT_FACTOR (default=2)?
<arigato> yes, or maybe default = more than 2 because that's a big change from the current situation of 4 times nursery_size
<antocuni> true, but I think that 4*nursery_size is really "wrong", especially on high-end machines such as the one I found
<arigato> yes
<arigato> I still think we should keep a minimum value
<arigato> otherwise estimate might be very small
<arigato> and a call to major_collection_step() doesn't do anything at all
<antocuni> like, min_estimate = self.nursery_size / 8 or so?
<antocuni> do we have any statistics about the average ration of surviving_size/nursery_size?
<antocuni> s/ration/ratio
<arigato> it's low, is all I know
<arigato> maybe 20%?
<antocuni> it's probably useful to print it in the pypylog
<arigato> right
<antocuni> here are the instructions to see my pypylog, if you are interested
<arigato> so yes, to avoid changing "too much" at once, I would go with a minimum of nursery_size/2
<antocuni> (the PYPYLOG viewer which I wrote is generally useful, I think)
<arigato> the reason is that we use nursery_size/2 in "self.threshold_objects_made_old += r_uint(self.nursery_size // 2)
<arigato> "
<antocuni> ok, I suppose it makes sense; and if we add enough env variables, we can still do experiments to see whether we find better defaults
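(editor's note: a rough sketch of the change antocuni proposes, not existing code; PYPY_GC_INCREMENT_FACTOR does not exist at this point, and read_float_from_env is only a hypothetical stand-in for however the GC reads its env variables:)
    factor = read_float_from_env('PYPY_GC_INCREMENT_FACTOR') or 2.0   # proposed default = 2
    estimate = int(self.nursery_surviving_size * factor)
    min_estimate = self.nursery_size // 2     # the floor arigato suggests keeping
    if estimate < min_estimate:
        estimate = min_estimate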
<antocuni> arigato: in particular, the pypylog I linked to shows another collect-step problem which I think is unrelated to what we are discussing now: a huge spike near the end of every single major collection
<antocuni> up to ~55ms
<arigato> right, but we're never careful enough in the GC: ideally we must never allow a corner case to do bad things
<arigato> that's why I'm not sure what occurs if we set estimate < nursery_size
<arigato> and in particular, <= nursery_size / 2
<antocuni> ok, it's likely that I am too optimistic because I have not been bitten by these problems yet :)
<arigato> antocuni: likely, the spike is caused by the non-incremental steps of finalize and cpyext
mcyprian has joined #pypy
<antocuni> ah
<antocuni> finalize is "calling the __del__" ?
<arigato> yes
<arigato> well, not exactly doing the call to the __del__
<antocuni> I assume it's deal_with_objects_with_finalizers
<arigato> because that is not inside the pypylog reports for the gc
<arigato> yes
<antocuni> is there any fundamental reason why it's not incremental, or it's just that it has never been done?
<arigato> looks messy, but not fundamentally so
<antocuni> (in case it's not clear: I'm trying very hard to reduce the maximum observable pause from the user point of view)
<antocuni> calling the __del__s is also probably bad from this point of view, I didn't think about them
lazka has joined #pypy
Rhy0lite has joined #pypy
<antocuni> arigato: actually, looking inside the log shows that these spikes are in the SWEEPING phase
<antocuni> (in case you are not using my pypylog viewer, here is a screenshot: http://antocuni.eu/misc/img/tD0DZyrV.png)
<antocuni> and here a zoomed-out screenshot of the whole log: http://antocuni.eu/misc/img/5rpDzaUV.png
inad923 has joined #pypy
inad924 has quit [Ping timeout: 260 seconds]
<arigato> how do you know it's sweeping?
<antocuni> I dug into the log and searched for the associated timestamp
<antocuni> it says: "starting gc state: SWEEPING
<antocuni> "
<antocuni> stopping, now in gc state: SWEEPING
<arigato> just one?
<arigato> or many that are bunched together?
<antocuni> in the chart, each pypylog section is one point
<antocuni> this specific one lasts 0.56 ms
<antocuni> but as you can see from the zoomed-out log, every collection follows a similar pattern
<arigato> unsure what you're saying
<arigato> did you find a single SWEEPING log section that is 55ms?
<antocuni> yes
<arigato> ok
<antocuni> moreover, each full cycle seems to have a single sweeping log section which is much slower than the others
<arigato> near the middle or always the last one?
<antocuni> look at the screenshots, it's easier :)
<arigato> I don't see that info in the screenshots
<antocuni> the blue line represents the gc-collect-steps
<antocuni> there is a big spike near the end
<arigato> but how am I supposed to know which of these steps is SWEEPING or something else?
dfee1 has joined #pypy
<antocuni> yes, you can't know from the screenshot this particular piece of info
<antocuni> but I assure you it's sweeping, I just checked :)
<arigato> *what* is sweeping? sorry, I don't understand you
<arigato> my question is: when does sweeping start, and when does it end?
<antocuni> each point of the graph is a single gc-section: on the X axis there is the "start", on the Y axis the "delta"
<arigato> ah, every point is a full cycle?
<antocuni> there is a section which starts at 716.45 seconds and ends at 716.51
<antocuni> every point is a gc-collect-step
<arigato> ok
<antocuni> the whole blue line of the zoomed-in screenshot is a full cycle
<arigato> ok
<antocuni> if you look at the larger screenshot, you can see that at every full cycle (in blue), the memory drops (in green)
inad924 has joined #pypy
<arigato> then, some of these gc-collect-steps are for MARKING and some are for SWEEPING and a few for other things
<arigato> my question is: which ones?
<antocuni> yes
dfee1 has quit [Ping timeout: 265 seconds]
<antocuni> I suppose the first ones are marking, the last ones are sweeping; I don't know precisely where the border is, I can try to dig it out of the log
<arigato> yes, I'd like to know if the very-slow sweeping is the first, the last, or in the middle of the sweeping steps
<antocuni> ah ok, now I get your question
<antocuni> let me try to hack something
<antocuni> arigato: the phase of a gc-collect-step section is the "ending" phase, right?
inad923 has quit [Ping timeout: 240 seconds]
<antocuni> i.e., if a step starts in marking and ends in sweeping, should I consider it marking or sweeping?
<arigato> yes, it's the ending phase
<arigato> ah
<arigato> it prints both
<arigato> so no, you should read the first one
<arigato> if it starts in marking, then it is marking
<antocuni> ok
inad924 has quit [Quit: Leaving]
Taggnostr has quit [Ping timeout: 255 seconds]
<antocuni> the points marked in yellow are SWEEPING
<antocuni> so it seems to be the very first sweeping phase
<antocuni> and indeed, digging in the log confirms
exarkun has joined #pypy
<arigato> uh
<arigato> ok what
exarkun has left #pypy [#pypy]
<arigato> the first sweeping step, it walks and frees a number of rawmalloced objects which is likely to be far too large
<antocuni> what is small_request_threshold?
<arigato> a number like 134816 in your case
<arigato> small_request_threshold = 35*8
<antocuni> I'm not sure to follow the logic
<antocuni> limit is a number of objects or bytes?
<arigato> number of objects
<arigato> it happens to be the number such that, if all the objects are exactly 35*8 bytes long, then it'll sweep 3 nursery_sizes in bytes
<arigato> but the objects can be much larger
<antocuni> and a larger object takes a longer time to sweep
<arigato> in theory, no
<arigato> in practice, yes, because the headers of objects are on completely different pages
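(editor's note: a hedged reconstruction of the per-step sweeping budget arigato describes, not the verbatim source; with this machine's large nursery it works out to roughly the "number like 134816" quoted above:)
    small_request_threshold = 35 * 8                         # 280 bytes
    # headers visited per sweeping step: sized so that if every object were
    # exactly 280 bytes, one step would sweep about 3 nursery_sizes of memory
    limit = 3 * self.nursery_size // small_request_threshold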
<antocuni> I don't understand whether 3*nursery is arbitrary or necessary to ensure termination
<arigato> it's a number that is sure to be large enough to ensure termination
<arigato> probably any number greater than 1 would do
<antocuni> is it? I can have arbitrarily many rawmalloced objects, without touching the nursery
<arigato> the major GC is started because the memory pressure has grown too much since the last GC
<arigato> so that means you can't think only about the nursery
<arigato> in this case, I think the nursery is used because one major gc step occurs because the nursery was full
<arigato> so, like during MARKING, we could instead base the number on nursery_survived_size instead
<antocuni> ok, so in the worst case I am sweeping some objects but I am allocating nursery_size more objects
<antocuni> right, it's exactly what I was about to suggest
<arigato> to be honest, I am not completely sure about any of these reasonings
<antocuni> do we have any test which checks whether the algo terminates / doesn't leak?
<arigato> maybe not
<antocuni> "good"
<antocuni> I am about to go afk soon; I'll try to implement these ideas in a branch and see what happens. But we surely need to think more before merging
<arigato> the basic idea is probably still this loop at :783, which guarantees that there is at least one major gc step for every (nursery_size/2) allocated bytes outside the nursery
Taggnostr has joined #pypy
<arigato> so if every MARKING and every SWEEPING does its job on more than nursery_size bytes, then it should guarantee that we mark and sweep faster than we allocate
<antocuni> nursery_size or nursery_surviving_size?
<arigato> these two numbers of (nursery_size/2) and nursery_size are arbitrary, and they are related to the nursery size only because it looks like a good idea to make them so
<arigato> no, always nursery_size
<arigato> so maybe indeed it would be an idea to make them related to nursery_surviving_size instead, but then all of them, not just half
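(editor's note: a hedged reconstruction of the invariant around incminimark.py:783, pieced together from the line quoted earlier; a paraphrase of "at least one major gc step per nursery_size/2 bytes made old", not the actual loop:)
    while self.size_objects_made_old > self.threshold_objects_made_old:
        self.threshold_objects_made_old += r_uint(self.nursery_size // 2)
        self.major_collection_step()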
<antocuni> M-x replace string nursery_size -> nursery_surviving_size and we are done :)
<antocuni> arigato: leaving now, thanks for the help
<arigato> well, that's obscure. maybe we should instead ask major_collection_step() to make some progress, but it would return how much progress it really did, and we use that
Eran has quit [Quit: Page closed]
<arigato> antocuni: bye
jcea has quit [Read error: Connection reset by peer]
jcea has joined #pypy
user24 has quit [Remote host closed the connection]
lazka has quit [Quit: Leaving]
mcyprian has quit [Ping timeout: 260 seconds]
kanaka has quit [Ping timeout: 240 seconds]
kanaka has joined #pypy
kanaka has joined #pypy
kanaka has quit [Changing host]
exarkun has joined #pypy
xorAxAx has quit [Remote host closed the connection]
xorAxAx has joined #pypy
xorAxAx has quit [Remote host closed the connection]
dfee1 has joined #pypy
illume has joined #pypy
tbodt has joined #pypy
tbodt has quit [Client Quit]
tbodt has joined #pypy
marky1991 has quit [Ping timeout: 240 seconds]
<antocuni> arigato: "we use that" to do what? To compute how much to do at the next step? Or to continue calling major_collection_step in a loop until we reach a certain threshold?
illume has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
user24 has joined #pypy
raynold has joined #pypy
antocuni has quit [Ping timeout: 240 seconds]
dfee1 has quit [Ping timeout: 265 seconds]
xorAxAx has joined #pypy
tbodt has quit [Quit: Textual IRC Client: www.textualapp.com]
tbodt has joined #pypy
exarkun has left #pypy [#pypy]
marky1991 has joined #pypy
mcyprian has joined #pypy
mcyprian has left #pypy [#pypy]
tbodt has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
necaris has joined #pypy
marky1991 has quit [Remote host closed the connection]
<necaris> hey folks
<necaris> quick question
<necaris> what is the state of the `py3.6` branch?
<necaris> would love some tips on the best thing i could do to help on it
necaris has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
tbodt has joined #pypy
<cfbolz> necaris: hey Rami!
<cfbolz> How are things?
void__ has joined #pypy
user24 has quit [Ping timeout: 240 seconds]
demonimin has quit [Remote host closed the connection]
_aegis__ has quit [Ping timeout: 255 seconds]
mcyprian has joined #pypy
mcyprian has quit [Client Quit]
mcyprian has joined #pypy
tbodt has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
tbodt has joined #pypy
mcyprian has quit [Ping timeout: 265 seconds]
mcyprian has joined #pypy
<bbot2> Started: http://buildbot.pypy.org/builders/own-linux-x86-64/builds/6747 [Carl Friedrich Bolz-Tereick: force build, py3.6]
<bbot2> Started: http://buildbot.pypy.org/builders/pypy-c-app-level-linux-x86-64/builds/3956 [Carl Friedrich Bolz-Tereick: force build, py3.6]
<cfbolz> necaris: I don't actually know the answer to this question, but I kicked off a test run on the branch, which should give us a clue what is still missing
_aegis_ has joined #pypy
dfee1 has joined #pypy
void__ has quit [Ping timeout: 256 seconds]
lauren has joined #pypy
<lauren> is jitpy dead
<cfbolz> lauren: what is jitpy?
<cfbolz> Ah
<cfbolz> No clue, fijal?
<cfbolz> It looks more like an experiment to me
<lauren> dang
<lauren> I had it in some really old notes
<lauren> ...I'm actually not even sure which really old notes. oh maybe my old twitter posts
<cfbolz> lauren: do you have a use case in mind?
dfee1 has quit [Ping timeout: 256 seconds]
<lauren> yeah, web thing. I'm depending on this package list: {pyramid,sqlalchemy,psycopg2-binary,pyramid_jinja2,bcrypt,twisted,alchimia,SQLAlchemy-Utc,pytz,blessed,lxml, stripe} -
<lauren> psycopg and lxml are the painful ones, right?
Rhy0lite has quit [Quit: Leaving]
<bbot2> Failure: http://buildbot.pypy.org/builders/pypy-c-app-level-linux-x86-64/builds/3956 [Carl Friedrich Bolz-Tereick: force build, py3.6]
<ronan> lauren: everything should work, I think
<lauren> damn really? including psycopg2-binary?
<ronan> yes
<lauren> wow
<lauren> what a world
<lauren> how is it that that would work
<simpson> cpyext keeps getting buffs.
<lauren> ah
<lauren> will it be fast enough to be usable?
<ronan> it's probably a bit slower than CPython but not horribly so
<ronan> as usual, perf depends a lot on what your app actually does
tbodt has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
tbodt has joined #pypy
dfee1 has joined #pypy
<ronan> lauren: well, you need psycopg2, actually, as there are no wheels for pypy
<lauren> ah
<lauren> reasonable re perf
<lauren> is psycopg2-cffi production ready
<lauren> or even a real thing rather than me misremembering
<ronan> I know that some people use it, but I can't tell how good it is
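(editor's note: for reference, psycopg2cffi ships a small compatibility shim so code written against psycopg2 keeps working unmodified; a minimal sketch, assuming the package is installed from PyPI:)
    # pip install psycopg2cffi
    from psycopg2cffi import compat
    compat.register()        # registers psycopg2cffi under the name 'psycopg2'

    import psycopg2          # now resolves to psycopg2cffi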
<bbot2> Failure: http://buildbot.pypy.org/builders/own-linux-x86-64/builds/6747 [Carl Friedrich Bolz-Tereick: force build, py3.6]
devwatchdog has joined #pypy
tbodt has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
tbodt has joined #pypy
asmeurer__ has joined #pypy
illume has joined #pypy
asmeurer__ has quit [Quit: asmeurer__]
mcyprian has quit [Quit: Leaving.]
devwatchdog has quit [Quit: Leaving]
dfee1 has quit [Ping timeout: 265 seconds]
dfee1 has joined #pypy
asmeurer has joined #pypy
zmt00 has quit [Quit: Leaving]
zmt00 has joined #pypy
wleslie has joined #pypy
jamesaxl has quit [Quit: WeeChat 2.1]
dfee1 has quit [Ping timeout: 264 seconds]
dfee1 has joined #pypy
illume has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
asmeurer has quit [Quit: asmeurer]
asmeurer__ has joined #pypy
tbodt has quit [Ping timeout: 240 seconds]
dfee1 has quit [Ping timeout: 256 seconds]
asmeurer___ has joined #pypy
dfee1 has joined #pypy
froztbyt1 has joined #pypy
froztbyte has quit [Ping timeout: 240 seconds]
[Arfrever] has quit [Ping timeout: 240 seconds]
dmalcolm has quit [Ping timeout: 240 seconds]
dmalcolm has joined #pypy
asmeurer__ has quit [Ping timeout: 260 seconds]
[Arfrever] has joined #pypy
wleslie has quit [Quit: ~~~ Crash in JIT!]
marself has quit [Ping timeout: 260 seconds]
asmeurer___ has quit [Quit: asmeurer___]
asmeurer_ has joined #pypy
asmeurer_ has quit [Quit: asmeurer_]