<drmeister>
FYI - I'm giving the opening technical talk at the 2018 Bay Area LLVM Developers' Meeting in October.
<beach>
That's very exciting.
<beach>
It means that they will be aware that there are languages other than C++ out there.
<drmeister>
Oh yes - that has always been the case.
<drmeister>
C++, C, D, Rust - all the myriad languages.
<beach>
I don't know about Rust, but the others are in the same family as C++.
<drmeister>
Yeah - so's Rust - that was part of the joke.
<beach>
So I hope you get lots of remarks such as "But isn't Common Lisp an interpreted language?", "Lisp is intrinsically slow, isn't it?", and "Nobody uses Common Lisp anyway, right?".
<drmeister>
They have some gizmo in their compiler called a "borrow checker" that drives programmers mad - apparently. But it still looks like C.
<drmeister>
Nah - I'll slap up the summary from the "Energy Efficiency of Programming Languages" paper from Google.
<beach>
I vaguely remember that.
<drmeister>
Lisp is the fastest, most memory efficient and most energy efficient dynamic language by a long shot.
<flip214>
drmeister: that paper has "Lisp" in "VM" languages, not compiled?!
<beach>
Nice!
<drmeister>
I'm going to want to talk about Cleavir and how we fused it to llvm. We can talk about that in a couple of weeks.
<beach>
Sure.
<beach>
Table 3 is devastating for Python.
<drmeister>
flip214: Where is that?
<flip214>
drmeister: figure 4 has "Lisp" in the middle column, "VM"
<flip214>
left is compiled, right interpreted
<drmeister>
I downloaded the lisp code - it's SBCL code - and the lisp programmer used some pretty low level tricks to get performance. All's fair in love and programming.
<beach>
flip214: Can you explain your example with (let ((=var= ...)) (defun ...)) please?
<flip214>
beach: I'm trying to do some performance optimizations for a state variable that's needed in several functions. a DEFPARAMETER needs to check for thread-local bindings,
<flip214>
and a (LET ((=var= ...)) (defun ...)) binding around the functions does lookup via the symbol and some indirections.
<flip214>
(I invented =var= for a kind-of-global variable.)
<flip214>
I still have to try sb-ext:defglobal.
<flip214>
Basically, I'd hoped to find something that keeps the address in a register and then just does direct access of that...
<flip214>
perhaps I should just make a local binding of the global variable, that might achieve nearly the same effect?!
<beach>
flip214: What makes you say that this example looks up via the symbol?
<beach>
The compiler completely eliminates the symbol for lexical variables.
<flip214>
the disassembly
<beach>
That would be very surprising. It would mean that every time you use =var= as a lexical variable, they would share the location?
<beach>
Sounds like incorrect semantics to me.
<flip214>
well, it's a closure binding, not a lexical variable - the LET is outside of the DEFUNs.
<flip214>
but never mind... I'll retry with DEFGLOBAL and some more ideas
<beach>
It is still lexical.
<beach>
And I still don't see how the symbol would be involved. Certainly, that example could not use any kind of value slot of the symbol.
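(For reference, a minimal sketch of the three approaches being compared; the variable names are made up for the example, and sb-ext:defglobal is SBCL-specific:)

  ;; 1. A special variable: every access goes through the symbol's
  ;;    (possibly thread-local) dynamic binding.
  (defparameter *special-state* 0)
  (defun bump-special () (incf *special-state*))

  ;; 2. A lexical variable closed over by the DEFUNs: the compiler
  ;;    eliminates the symbol; access goes through the shared closure cell.
  (let ((=closed-state= 0))
    (defun bump-closed () (incf =closed-state=)))

  ;; 3. SBCL's global variable: never rebound per thread, so access can
  ;;    compile down to a plain load from a known location.
  (sb-ext:defglobal *global-state* 0)
  (defun bump-global () (incf *global-state*))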
<beach>
Anyway, I think I have a good solution in SICL for the method-combination conundrum that the Common Lisp HyperSpec and the MOP introduce.
<beach>
DEFINE-METHOD-COMBINATION will create a method-combination TEMPLATE that has one or more VARIANTs. Those variants are instances of the class METHOD-COMBINATION. FIND-METHOD-COMBINATION takes additional arguments that determine what variant to use.
<beach>
If no variant exists that corresponds to the arguments given, then a new variant is created. The way I determine whether an existing variant corresponds to the additional arguments is as follows: I take the ordinary lambda list of DEFINE-METHOD-COMBINATION, parse it and obtain a list of all the variables that it introduces. I then create a function that looks like this: (lambda <the-lambda-list> (list v1 v2 ... vn)) where v1...vn are the variables introduced by the lambda list.
<beach>
Running this function returns a SIGNATURE which is a list that determines the variant to use and it is stored with the variant.
<beach>
The "expansion" function that DEFINE-METHOD-COMBINATION creates looks like this: (lambda (methods v1 v2 ... vn) <body>) where the body contains the code to categorize all the methods and the code from the body of DEFINE-METHOD-COMBINATION.
<beach>
To run that function, given a particular variant, just do (apply <function> methods <signature>).
<beach>
Of course, the templates with all the variants will be stored in first-class global environments as usual.
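(A concrete sketch of the scheme just described, assuming a hypothetical combination whose DEFINE-METHOD-COMBINATION lambda list is (&optional (order :most-specific-first)); this is an illustration, not SICL's actual code:)

  ;; Signature function: built from the ordinary lambda list; calling it
  ;; with the extra arguments given to FIND-METHOD-COMBINATION returns
  ;; the list of variable values, which identifies the variant.
  (defparameter *signature-function*
    (lambda (&optional (order :most-specific-first))
      (list order)))

  ;; "Expansion" function: takes the methods plus the variables
  ;; introduced by the lambda list; the real body would categorize the
  ;; methods and contain the body of DEFINE-METHOD-COMBINATION.
  (defparameter *expansion-function*
    (lambda (methods order)
      (declare (ignore order))
      methods))                          ; placeholder body

  ;; Given a variant whose stored signature is, say, (:most-specific-first):
  ;; (apply *expansion-function* methods '(:most-specific-first))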
<no-defun-allowed>
hi beach
<beach>
Hey no-defun-allowed.
<slyrus>
drmeister: regarding that benchmarking paper you posted earlier, I was disappointed to see that the "fasta" benchmark was really just writing a random "FASTA" file, not reimplementations of the actual FASTA alignment algorithm.
<shrdlu68>
phoe: (How) did you solve the fuzzy search problem?
<LdBeth>
What happened to EuLisp?
<razzy>
LdBeth: conference?
<jackdaniel>
it has 3 implementations and exists, what would happen to it?
<jackdaniel>
it is not widely adopted
<jackdaniel>
razzy: lisp language with a standard draft (somewhere in between CL and Scheme size-wise)
<no-defun-allowed>
Brexit happened cause Theresa May waned Scheme to be the national Lisp.
<razzy>
lol
<no-defun-allowed>
Irish independence was a thing actually because the IRA was fighting to get rid of R5RS.
<LdBeth>
Interestingly, T also fell into obscurity
<no-defun-allowed>
Back to the land of overpronounced R's, I think I could make oclcl and Common Lisp compile a small subset of CL.
<no-defun-allowed>
I realistically need let, aref and arithmetic.
<no-defun-allowed>
X, Y and C(hannel) can be automagically bound and all input images are named. The macro has an implicit setf which sets the appropriate pixel of the new image.
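(A purely hypothetical sketch of such a macro on plain CL arrays rather than oclcl, just to illustrate the idea; the name define-pixel-op and its calling convention are invented:)

  ;; X, Y and C are bound automatically, input images are named
  ;; parameters, and the value of the body is implicitly stored into the
  ;; corresponding pixel of the freshly allocated output image.
  (defmacro define-pixel-op (name (&rest inputs) &body body)
    `(defun ,name (,@inputs)
       (let ((output (make-array (array-dimensions ,(first inputs))
                                 :element-type (array-element-type ,(first inputs)))))
         (dotimes (x (array-dimension output 0) output)
           (dotimes (y (array-dimension output 1))
             (dotimes (c (array-dimension output 2))
               (setf (aref output x y c)
                     (progn ,@body))))))))

  ;; (define-pixel-op invert (image)
  ;;   (- 255 (aref image x y c)))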
<razzy>
picolisp is spartan :]
<razzy>
but still not spartan enough for me
<jackdaniel>
use ulisp then
<no-defun-allowed>
Is there anywhere to talk about GC design?
<galdor>
I'm curious, is there a reason for DEFVAR and DEFPARAMETER not to have an optional :TYPE keyword argument with errors signaled when setting a value of the incorrect type, the same way as for object slots?
<galdor>
I fail to see any downside to such behaviour
<jackdaniel>
galdor: both operators create special variables which may be bound inside (for instance) a function, so their location can't be resolved at compilation time
<jackdaniel>
each such binding would require checking whether the type is correct
<TMA>
galdor: there are several. first: the runtime performance might get hurt. second: it is totally permissible to ignore all type declarations elsewhere, this would be an exception. third: what jackdaniel says
<jackdaniel>
which has some penalty to it, so that may be the reason (or not)
<jackdaniel>
mind also, that lisp is a dynamically typed language, so enforcing *variable* type is against dynamic typing spirit
<jackdaniel>
values are typed, not variables. of course nothing prevents you from adding your own set function which will check whether the value is of the correct type
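(A small sketch of that suggestion; the macro name define-checked-variable and the generated setter name are invented for the example:)

  ;; The variable itself stays an ordinary special variable; all writes
  ;; are funneled through a setter that checks the value's type.
  (defmacro define-checked-variable (name initial-value type)
    `(progn
       (defparameter ,name ,initial-value)
       (defun ,(intern (format nil "SET-~A" name)) (new-value)
         (check-type new-value ,type)
         (setq ,name new-value))))

  ;; (define-checked-variable *counter* 0 (integer 0))
  ;; (set-*counter* -1)  ; signals a TYPE-ERROR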
<galdor>
oh ok
<galdor>
I did not think about the variable/value distinction
<galdor>
thank you for the explanation
<jackdaniel>
sure
<no-defun-allowed>
There aren't many digestible resources on GCs. One guy stepped through the process which helped me understand Cheney's algorithm but otherwise all the resources that aren't writeups and the GC Handbook are specific to Java or C# or something.
<drmeister>
Hello everyone
<no-defun-allowed>
Hi drmeister
<drmeister>
no-defun-allowed: Have you seen Ravenbrook's website on garbage collection?
<drmeister>
They created the Memory Pool System garbage collector that we use - the documentation for that is also excellent.
<jackdaniel>
/context hint: "we use (in clasp -red)"/
<no-defun-allowed>
I've got the gc handbook but it's quite tricky to understand without more research. ADHD doesn't help either.
<no-defun-allowed>
Do you develop the clasp environment, drmeister?
<jackdaniel>
no-defun-allowed: drmeister is clasp's creator
<flip214>
jackdaniel: in other words, drmeister is clasp's meister ;)
<galdor>
for GCs, "The Garbage Collection Handbook" is probably the best reference out there
<galdor>
it goes really deep and details various kinds of GCs
<no-defun-allowed>
Amazing.
<no-defun-allowed>
--Can I get your autograph please?--
<galdor>
oh that's the same book just above
<galdor>
sorry
<jackdaniel>
I have it on my bookshelf, but sadly didn't have time to give it a proper read yet (only skimmed through chapters)
<galdor>
I barely started too
<galdor>
I don't have a practical use case right now, either personally or for $DAYJOB, so it's hard to find some time to invest
<drmeister>
no-defun-allowed: Yes, I'm responsible for getting clasp started.
* drmeister
has no idea how to digitally sign an autograph :-)
<galdor>
drmeister: I loved your talk presenting clasp, it was very inspiring
<jackdaniel>
you start the clim-fig demo and draw with a "point" drawer using the mouse ;)
<jackdaniel>
save image will be implemented soon™
<drmeister>
galdor: Thank you.
<drmeister>
I'll next be talking at the llvm developers meeting in October - that will be more on Clasp and working with llvm - but it will be recorded and put up on YouTube.
<shrdlu68>
I too loved that talk.
<drmeister>
jackdaniel: You are implementing "save image" in ECL? How?
* drmeister
spits coffee through his nose.
<drmeister>
With boehm? Seriously - how?
<jackdaniel>
no, I'm implementing "Save image" in clim-fig demo in McCLIM
<jackdaniel>
sorry for the confusion
<Xach>
ha
<jackdaniel>
like png image
<Xach>
i have a library for that~~
<drmeister>
Oh ok - good grief. I overreacted. I'm used to impossible things before breakfast - but that would be beyond the pale.
<beach>
razzy: So you love the idea of my LispOS, except the most essential parts of it?
<jackdaniel>
Xach: afaik we take advantage of some of your libraries for raster image graphics
<jackdaniel>
or do you say that you have a library which takes a CLIM sheet and saves it as png?
<Xach>
no, sorry, just something that takes a bunch of samples and makes a png. you have to bring your own samples!
<drmeister>
We are working with Ravenbrook to implement image saving - and we also support Boehm and I've been thinking about image saving a lot. That's why thinking about it triggers me.
<jackdaniel>
we use cl-vectors, zpb-ttf, cl-paths-ttf and cl-aa for manipulating memory with images
<drmeister>
shrdlu68: Thank you.
<Xach>
fantastico
<jackdaniel>
and opticl for loading / saving them
<makomo>
hello \o
<jackdaniel>
drmeister: at least I know now how to make you spit your coffee :-)
<beach>
Hey makomo.
<jackdaniel>
either way these raster image parts of mcclim are undergoing heavy changes now, but the dependency-set doesn't change
<makomo>
drmeister: looking forward to that talk :-)
<razzy>
beach: maaaaaaybe? i like benefits that your view provide. i want to keep most of benefits, and add new ones :]
<jackdaniel>
less is a new more
<razzy>
jackdaniel: agreed, we should assimilate best parts, burn the rest :]
<phoe>
shka_: fuzzy search problem?
<shka_>
?
<phoe>
uh, sorry
<phoe>
shrdlu68: ^
<shka_>
no problem
<shka_>
drmeister: oh so Ravenbrook is now involved in clasp?
<shka_>
that sounds awesome
<shrdlu68>
phoe: The levenshtein distance thing.
<phoe>
shrdlu68: I haven't solved it yet
<phoe>
I haven't yet worked on that part of the spellchecker
<shka_>
phoe: it is not complicated at all as you can see
<phoe>
shka_: my only issue is, I have trees instead of words, so I cannot use your code directly
<shka_>
obviously
<phoe>
but nonetheless, that's a simple algorithm
<shka_>
yes
<shka_>
and probably you prefer loop over iterate
<shka_>
but otherwise, simple stuff
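(For reference, a minimal sketch of the algorithm being discussed - the standard two-row dynamic-programming edit distance, written with LOOP; this is an illustration, not shka_'s or phoe's actual code:)

  (defun levenshtein (a b)
    "Edit distance between the strings A and B."
    (let* ((n (length b))
           (previous (make-array (1+ n) :element-type 'fixnum))
           (current (make-array (1+ n) :element-type 'fixnum)))
      ;; Row 0: distance from the empty prefix of A to each prefix of B.
      (loop for j from 0 to n do (setf (aref previous j) j))
      (loop for i from 1 to (length a)
            do (setf (aref current 0) i)
               (loop for j from 1 to n
                     do (setf (aref current j)
                              (min (1+ (aref current (1- j)))   ; insertion
                                   (1+ (aref previous j))       ; deletion
                                   (+ (aref previous (1- j))    ; substitution
                                      (if (char= (char a (1- i))
                                                 (char b (1- j)))
                                          0 1)))))
               (rotatef previous current))
      (aref previous n)))

  ;; (levenshtein "kitten" "sitting") => 3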
<emaczen>
My understanding of #'bt:make-thread is that the spawning thread (parent thread) returns immediately but the child thread only returns once the function is finished
<emaczen>
I don't seem to be getting this behaviour since I put a sleep form in the function body of #'bt:make-thread and my spawning (parent thread) will not return until the sleep time is complete
<jkordani>
is your sbcl built with multithreading?
<jkordani>
er sorry I assumed sbcl
<jkordani>
emaczen: ^
<emaczen>
jkordani: I'm using CCL and yes it does multiple threads
<jkordani>
paste it?
<emaczen>
jkordani: Even if you just evaluate in the repl (bt:make-thread (lambda () (sleep 2))) it doesn't return immediately
<loke>
emaczen: sounds like you don't have full thread support (i.e. you're using some green-thrads system)
<emaczen>
jkordani: Actually it is.
<emaczen>
loke: for CCL?
<loke>
I don't know about CCL, but I can tell you that that is the behaviour you'd see with green threads
<emaczen>
loke: In the REPL it is working how I expected
<jackdaniel>
emaczen: is that exactly this form which you show?
<jackdaniel>
or something different is make-thread argument?
<jkordani>
emaczen: in the repl its working how you expected aka returning immediately?
<jackdaniel>
emaczen: because I can imagine, that you are doing something like: (defun foo () (sleep 2)) (bt:make-thread (foo))
<jackdaniel>
that will indeed sleep without spawning a thread
<jackdaniel>
make-thread accepts a function
<jackdaniel>
so a correct invocation would be (bt:make-thread #'foo)
<jackdaniel>
(as in arguments are evaluated, since it is a function)
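(A minimal sketch of the difference, assuming bordeaux-threads is loaded:)

  (defun foo () (sleep 2))

  ;; Wrong: FOO is called first, so the *calling* thread sleeps for two
  ;; seconds, and only FOO's return value is handed to MAKE-THREAD.
  ;; (bt:make-thread (foo))

  ;; Right: the function object itself is passed; MAKE-THREAD returns
  ;; immediately and FOO runs in the new thread.
  (bt:make-thread #'foo)

  ;; Equivalent, and common:
  (bt:make-thread (lambda () (sleep 2)))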
<emaczen>
jackdaniel: Yep, I'm getting there. I don't know what exactly I entered in the repl at first
<emaczen>
but it is behaving now
<emaczen>
maybe an unrealized after method
<shka_>
good evening
<AeroNotix>
shka_: o7
<shka_>
^5
<jasom>
CCL I believe is always OS threaded (its repl uses a separate thread even)
<jasom>
beach: the only thing in LispOS that I think is a bridge too far is treating RAM only as a cache for non-volatile storage, just because of the ordering issue.
<dim>
jasom: I like this idea a lot myself - the user not having to deal with a hierarchical file system, and also not having to remember to save a copy and remember where it was saved
<dim>
the argument that the computer should be dealing with that makes sense to me, somehow
<jasom>
dim: everyone likes that idea, and the OS graveyard is littered with failed solutions to it.
<jasom>
The proper solution might include just treating RAM as a cache for NV storage, but that's not a solution, it's a possible building block for one.
<dim>
agreed
<beach>
jasom: Can you describe the "order issue" a bit more?
<beach>
jasom: Better yet, point me to some documentation.
<White_Flame>
whartung: the transputers were programmed in Occam. It didn't auto-parallelize, it was CSP agent programming.
<whartung>
yea that’s it, occam
<whartung>
wasn’t there another language? Linda?? something like that?
<whartung>
how do you distinguish treating the ram as cache vs the disk simply backing the ram? by being able to have larger elements than what ram can hold?
<White_Flame>
that would be my opinion. "disk simply backing the ram" means disk footprint == ram footprint, and the rest of disk would need a separate interface
<whartung>
while I can appreciate the “ram as cache” concept, there’s also a reality of modern ram limitations (or lack of).
<whartung>
Consider I have a 4GB video to play.
<whartung>
you can load the entire thing, and then play it, or you can stream it from “storage”.
<beach>
whartung: How do you "load the entire thing" into RAM?
<White_Flame>
and if you just stream it, when you scrub back and forth does it make disk API calls?
<White_Flame>
another option is that you could mmap it
<whartung>
I dunno, seems to me that the goal is that there IS no “disk”. simply there’s no difference between disk and ram from the OS perspective, right? There’s just objects, and the System magically pages things in and out.
<dim>
a friend of mine had to optimize linux kernel to avoid pre-fetching an entire video file when serving VOD adult content, because it would be a waste, most buyers only ever watch the first 10 to 15 mins of the movie anyway, so he managed to have a kernel module that would only pre-fetch that much…
<beach>
whartung: Yes, and my question remains.
<p_l>
beach: on linux, mmap() the file
<p_l>
then trigger demand paging
<beach>
p_l: That does not load it into RAM.
<p_l>
beach: does once you put a "demand" on all pages
<beach>
I don't see how that would be different in a system where the RAM is a cache for the non-volatile memory.
<whartung>
what’s the resolution of objects? are they just addresses and byte ranges? And the OS simply loads them when it hits LDA #0x12345678?
<p_l>
mmap + mlock will load everything into memory and lock it there
<beach>
whartung: What system are you talking about now?
<whartung>
the mythical OS y’all are talking about
<beach>
p_l: I can't see how that would be impossible in a system where RAM is the cache of the disk.
<beach>
whartung: It would work just like an ordinary Common Lisp system.
<whartung>
how do ordinary common lisp systems page memory?
<p_l>
beach: I think the point was that they *didn't* want to have it all in memory
<beach>
whartung: Make a huge swap space.
<beach>
p_l: Oh, OK.
<beach>
whartung: The Common Lisp system won't "see" that level of abstraction.
<beach>
whartung: It will just work as if the memory is the size of the disk.
<p_l>
also, interesting thing to consider - while the APIs do not really match, unless you do certain (broken) tricks, most filesystems on Linux specifically act with "RAM is cache for disk space"
<beach>
whartung: If it tries to touch a page that is not in RAM, some pretty much ordinary paging logic will happen.
<beach>
p_l: Exactly. So RAM is actually disk-backed (swap space) and disk has a RAM cache.
<whartung>
so if I iterate across all of the frames of a 4GB video, odds are, it will eventually all be paged in to RAM.
<beach>
Go figure.
<beach>
whartung: Yes.
<p_l>
beach: however, the APIs being file-oriented mean that it's reasonably easy to get "memory" that you know will end up on disk
<whartung>
if your ram is persistent, then there’s no need to distinguish “disk based” or RAM based.
<whartung>
but it does potentially cause issues with differences in performance.
<beach>
whartung: Exactly.
<beach>
So you still have to do paging, considering RAM as a cache.
<whartung>
I mean, I understand the underlying concept — there are great advances in persistent ram, not even swapped, but fast, persistent memory
<White_Flame>
in many systems there are intent hints passed as parameters when allocating memory or dealing with files. I think that sort of thing is useful here
<whartung>
plus modern SSDs are silly fast anyway
<whartung>
it then comes down to the checkpointing process.
<beach>
whartung: Not nearly as fast as dynamic RAM.
<shka_>
it is more about simplicty
<White_Flame>
things like "I am reading this file as a stream" or "I am random-access writing to this existing file" etc
<White_Flame>
or "This memory is flushable if memory is low" etc
<beach>
Look, it has already been done. Multics did not have a separate abstraction for disk and RAM.
<beach>
So I know it can be done.
<White_Flame>
certainly one could rely on heuristics for a few of those
<whartung>
that’s important, White_Flame, particularly in database systems where you don’t want the table scan of a 100GB table to trash your cache, just because you’re counting all the rows.
<whartung>
that requires intimacy with the virtual memory manager.
<White_Flame>
because really this is all about optimization. Blind swap space would work for everything, it's just a really bad performance idea
<whartung>
right
<White_Flame>
well, I guess malloc persisting would be different than just swap space, for single address space architectures
<whartung>
what would be nice is an implementation that just did it naively so that it could be studied as to how good or bad the swap is.
<whartung>
the real problem, as witnessed running large java JVMs, is the garbage collector. you do NOT want to have your JVM heap swapped out — you’re in for a very bad day.
<White_Flame>
well, there's the OS graveyard to dig from again
<whartung>
honestly, the closest thing we have to this today is probably a modern DBMS.
<beach>
whartung: That may be right. A lot of it existed in the 1970s but we don't have it today anymore.
<beach>
Though there is a Multics emulator online that you can play with.
<beach>
I was able to run Multics Emacs.
<whartung>
And the pharo guys are probably the closest starting point if you wanted to try out such a system — the whole thing is written in smalltalk (though compiled down into C for the kernel).
<beach>
Well, the fact that there *is* a kernel makes it very different from what I am planning.
<beach>
And, just in case you think that an OS must have a kernel, again, Multics did not have one.
<jasom>
beach: w.r.t. the ordering issue: whenever you cache a secondary store, then operations that require valid ordering require extra manual work.
<beach>
So, really, what I am planning could be considered a merge of Multics and Genera.
<beach>
jasom: You mean so that disk is always consistent?
<jasom>
beach: correct
<beach>
jasom: That problem was solved by Eros.
<beach>
And the LispOS specification contains a variant of that technique.
<jasom>
there is also the secondary concern that under the current system rebooting clears any buggy state that is not on-disk.
<beach>
I am also willing to add a little routine so that, before a page is written to disk, it is checksum-verified.
<jasom>
beach: well I mean "consistent" not just in the sense of the metadata being correct, but consistent in the sense that the data is useful upon restart
<beach>
jasom: Yes, Eros solved that problem.
<shka_>
Eros?
<jasom>
beach: I'd be interested in reading how, if you have a link handy
<beach>
jasom: If this secondary concern is about buggy software, then I have no solution for you, since even the file manager in a Unix-like operating system can be buggy.
<beach>
OK, hold on...
<whartung>
pharo has a kernel because their JIT is not good enough to run their “kernel” efficiently. it’s more a runtime than a kernel in that sense.
<jasom>
shka_: it's a predecessor to Coyotos if you're familiar with that
<shka_>
i am not
<whartung>
that’s the biggest problem with persistent memory. You can’t turn the machine off and back on and have it necessarily start from a clean slate — harder to abandon your sins that way.
<beach>
jasom: Well, not quite. Coyotos was what they were forced to do when funding for Eros dried up.
<beach>
US academia for you.
<shka_>
ooooh capability system
<jasom>
beach: in terms of an absolute sense, the ram/disk separation doesn't help at all with buggy software, but IMO in a practical sense it does.
<shka_>
from 1999
<shka_>
i had no idea that it was a thing back then!
<beach>
jasom: I can't figure out a reasonable way of dealing with buggy software, so I am not going to try.
<shka_>
i guess there is really nothing new nowadays
<beach>
shka_: That's what I keep saying.
<jasom>
shka_: capability systems go back at least to the 70s I think
<beach>
But people act as if this is some imaginary thing, impossible to realize.
<shka_>
jasom: i had no idea
<whartung>
what’s new today is the VAST resources we have compared to back then
<beach>
whartung: Yes, so things are MUCH EASIER now.
<whartung>
planning out a computer with 128G of ram and 10T of “storage” isn’t a big deal.
<shka_>
ok, so we have tech from 70s
<shka_>
so in 2020 we will have tech from 80s? :D
<beach>
shka_: You really should read the Unix-haters handbook. It will tell you what we had in the 1970s and that was destroyed by Unix.
<shka_>
beach: i read it already
<White_Flame>
beach: "buggy software" includes playing around with the REPL and testing in development. So those should be real concerns for protection against poorly behaved bits of code
<beach>
OK, so no reason to be surprised.
<shka_>
but for some reason i was always more interested in 80s IT
<beach>
White_Flame: You missed the point.
<beach>
White_Flame: The claim was that it is easier to deal with buggy software in a system that separates RAM and disk.
<beach>
But the problem is the same.
<beach>
Buggy paging software in both cases will give bad disk blocks.
<whartung>
better said, it’s easier to deal with buggy software on a system that does not make your bugs persistent
<beach>
White_Flame: Buggy application programs will obviously not break the system in either case.
<jasom>
beach: bug-free software is IMO still too expensive to produce. capabilities and MAC help to contain the damage bugs do. I'm talking about application level bugs, not os service level bugs (or at least code that would be in one rather than the other in a Unixy system)
<beach>
I don't know what to say.
<jasom>
beach: buggy applications that have to explicitly save their state to disk can be recovered by killing them off and restoring
<White_Flame>
basically, it's an issue of the resolution of cleanup. Per-process isolation is fairly clean. Systems like the Amiga required a reboot, as any bad behavior pulled the whole RAM footprint's stability into question
<jasom>
every time I've had to hit "revert to saved document" or kill off a process and reload, I was able to recover an application from a buggy state because disk and ram are different.
<shka_>
jasom: you can have transactional model with checkpoints to revert
<White_Flame>
jasom: that's more a difference between "staging" and "committed". Both could persist
<beach>
So, am I supposed to say something like "Oh, OK, guys. I really didn't think about that. I guess I'll give up on the LispOS project"?
<White_Flame>
shka_: right
<jasom>
beach: not at all, but it worries me when you say you're not going to try to deal with buggy software.
<beach>
Or maybe "OK, thanks for the input. I think I'll put in another couple of decades of research before trying to write it"?
<beach>
jasom: What worries you about it?
<beach>
This is research.
<beach>
I don't expect anyone would want to use it anyway.
<jasom>
beach: fair enough
<beach>
And I am just one person.
<beach>
If there are remaining problems, perhaps someone else could try that path.
<beach>
I want to see whether the basic abstraction works.
<beach>
Because I think the user experience will be much improved.
<beach>
And I have history to back me up.
<shka_>
not only that
<jasom>
To me, it seems intuitive that it would work, but I think it's a mixed-bag for user experience.
<beach>
Multics did not have any particular problem with buggy application software.
<jasom>
but if intuition was always right, then we wouldn't need to do research :)
<beach>
I can't just accept a statement implying that "Multics was unusable for application programmers".
<beach>
Especially since I used it myself for several years.
<beach>
In a production environment.
<shka_>
jasom: well, if you can have transactional persistence you can get a lot of ultra cool features for free
<White_Flame>
not just transactional, but historical
<shka_>
yes, historical
<jasom>
shka_: yes you do, but you can get that already with mmap on a unix, and non-databases aren't written that way.
<shka_>
jasom: not quite as easy!
<jackdaniel>
saying that you already have that with mmap is like saying you already have a repl with dlopen, printf and scanf :-)
<jasom>
shka_: yes, it needs a custom runtime if you want your heap, stack &c. all to be persistent
<shka_>
and much more
<jasom>
the fact that some databases are written like this is a point in favor of it being possible.
<shka_>
that's why it is not used all that often
<shka_>
anyway, i think that this sounds like something that should be explored further
<shka_>
potential wins are huge
<jasom>
anyways, a kernel that does all this will be too bloated; MULTICS was over 100KB of code!
<jackdaniel>
I think that the whole point is that there is no kernel
<beach>
Oh, my.
<shka_>
if done well, it can be used to write a full blown database in a few hours
<jackdaniel>
your operating system is all userspace (drivers included)
<beach>
Multics did have processes though.
<beach>
The address space of the processor was still too small.
<shka_>
then it can be perhaps extended towards multiple machines
<shka_>
it sounds like the best thing ever
<whartung>
it’s a matter of lifecycle beach
<beach>
whartung: What is?
<whartung>
processes have an implicit lifecycle
<whartung>
this issue with buggy software
<razzy>
shka_: read heisig backlog
<beach>
whartung: I am sorry, but I am unable to understand your point.
<whartung>
consider the extreme case of “booting” a persistent system.
<whartung>
on a current system, memory is empty, code is loaded, and data is loaded.
<whartung>
there’s the whole start up: initialize the hardware, put everything into a "known state".
<jasom>
The idea of using a managed runtime rather than hardware memory protection is one I like though. No systems programmer ever said "if I could only turn this function call into a system call my software would be faster and easier to understand"
<whartung>
the “known state” is based on what is persisted on permanent storage.
<beach>
So, let me say this again: What I am planning is nothing new. Combine Multics, Genera, Eros, etc., and all the technology existed decades ago.
<beach>
Also, I am not going to even try to address every concern that people might have. Especially if those concerns are based on ignorance about the history of computing.
<beach>
But even some valid concerns I am not going to try to address, simply because I am only one person and I don't have the time to figure everything out by myself.
<jackdaniel>
whartung: it is like with continuations in scheme; you can go back in time, but external resources may change
<beach>
And, I don't care if my system will be used only by myself, so objections related to "nobody will use it" won't prevent me from going ahead.
<White_Flame>
whartung: again, that's just staging and versioning. Consider web browsers. They save your current state, but allow you to choose between starting over or restoring old state on startup
<White_Flame>
even if a bunch of broken or spammed out tabs are persisted
<whartung>
but if I kill -9 the browser, it may not save the state.
<beach>
All that said, I am perfectly willing to discuss my ideas and even debate them. But I don't want the discussions and debates to be based on ignorance.
<White_Flame>
whartung: they autosave every N minutes
<White_Flame>
plus, you can set your startup pages to something else
<whartung>
the mac does the same thing.
<White_Flame>
so it's not just "RAM vs disk"
<White_Flame>
it's simply current vs old versions vs reference versions
<White_Flame>
all of which can be persisted
<whartung>
all of which ARE persisted
<whartung>
there’s no choice in the matter
<White_Flame>
right... and a "persistent RAM" type of OS does the same :)
<whartung>
and then we’re talking high level objects here (like "documents"), not low level primitives - structures, arrays, etc.
<whartung>
in terms of versioning
<whartung>
RAM is not versioned
<White_Flame>
right, there's a scope to the versioning
<White_Flame>
and that's where transactions come in
<White_Flame>
(or checkpoints, or whatever)
<whartung>
that’s what Smalltalk does
<White_Flame>
so I think "RAM vs disk" is an oversimplification of the underlying concepts
<whartung>
every time you “do” something in smalltalk (which means you enter some bit of source code), it gets saved to the change log
<whartung>
I should say, you enter the source code and execute it
<whartung>
it gets saved to the change log
<on_ion>
changesets are awesome. wished a lisp image had it
<whartung>
it is trivial to wreck a Smalltalk image (not in the sense that you can bumble onto it, but if you know what you are doing — trivial)
<on_ion>
(sources are saved in the image though, too)
<whartung>
and the recovery process is to reload the OLD image, and then replay the change set up until the point you did the Bad Thing
<whartung>
but images are snapshots, and transactions are blocks of code
<whartung>
when the system is constantly snapshotting, always persisting, then it’s a different issue. You, as the user, may well not know which page write is the one you DON’T want to use
<whartung>
because stuff getting persisted includes things like TCP buffers and terminal sessions and just “work memory"
<whartung>
who knows which one of those stomped on some important global resource due to a bug, and when such global resource got paged back to disk so that when it was reloaded — you got to reload the corrupted global resource.
<whartung>
if I corrupt my ST image, I kill it, and reload from “backup”, essentially
<whartung>
then roll forward changes
<whartung>
so, I don’t know how Multics, or EROS or anything else deals with that problem.
<jackdaniel>
in the systems you mention you take the punch card, fix the broken memory with a tape and start over ;-)
<razzy>
White_Flame: whartung for single virtual memory composed of multiple real memories you need good optimization. that optimization would be different for every hardware system you build and user needs. this is good thing, because you could SW optimize user experience. machine learning methods only get better
<whartung>
there was a turntable stereo company that made a persistent Smalltalk — I can’t remember the name, but it was northern european so it had a lot of Ks in it.
<on_ion>
ST/X was awesome, german =)
<on_ion>
also it had inline C
<on_ion>
which inspired a GNU smalltalk + objective-c hybrid thing i worked on
<on_ion>
also, StepTalk
<whartung>
no, it wasn’t ST/X — afaik it never left the company and was purely an internal project
<whartung>
and….it’s Scottish — what do I know lol
<shka_>
80s :-)
<jackdaniel>
this conversation shifts closer to #lispcafe every minute
<beach>
jackdaniel: So what is your work on ECL these days?
<beach>
Are you still working on refactoring the compiler?
<jackdaniel>
I'm stuck with mcclim pattern transformations and the renderer for a 3rd week now
<jackdaniel>
so not much work on ecl lately
<beach>
Oh. :(
<jackdaniel>
when I'm done with that I'm focusing on timers, cltl2 interface and green threads
<beach>
Well, not :( I am happy that McCLIM is making excellent progress.
<jackdaniel>
then wrap things up with testing and release 16.2.0
<jackdaniel>
hopefully before 2019
<beach>
Sounds good.
<jackdaniel>
but ECL has another excellent contributor lately who did a lot of work on improving thread safety of ecl's internals
<beach>
Great!
<jackdaniel>
and I have an intern who works on cxx bridge (for C++ programmers to use ECL "their way")
<beach>
Interesting.
<beach>
Does that mean that Clasp could use ECL to bootstrap, rather than what they are now doing?
<beach>
jackdaniel: Speaking of McCLIM, I watched that Genera presentation someone posted a link to. It made me more convinced than ever that McCLIM is an essential part of future progress in our computing environment.
<jackdaniel>
as for refactoring the compiler, nothing tangible has changed in the source code, but I have a better understanding of it and of what I want to achieve
<beach>
OK, that's progress.
<jackdaniel>
I don't know if clasp could use ecl for bootstrap
<jackdaniel>
because I don't know what are clasp's requirements with this regard
<beach>
Right. It was mostly a rhetorical question.
<beach>
But if LLVM could be used from ECL that might become something to consider.
<jackdaniel>
ah, you mean the c++ part. well, this bridge is meant as a handicap for interfacing with CL from C++
<beach>
I see.
<jackdaniel>
not as a tool for CL programmers or something you could bootstrap with
<beach>
OK.
<jackdaniel>
adding llvm backend is blocked by compiler refactor which rather won't start before 2019 (unless some brave soul steps in :)
<jackdaniel>
not that I'm interested in writing such backend myself
<beach>
I completely understand.
<jackdaniel>
it's getting late for me, good night \o
<beach>
'night jackdaniel.
<beach>
Concerning recent SICL news, the latest progress on bootstrapping revealed a problem that I had swept under the rug, namely method combinations.
<beach>
I had failed to understand the use for MAKE-METHOD and CALL-METHOD and I could not fully understand the documentation for DEFINE-METHOD-COMBINATION.
<beach>
Interacting with the host Common Lisp system made the problem urgent to deal with. So I think I am now tackling the last major blob of code that must work before I can continue my work on bootstrapping.
<beach>
As I wrote this morning (UTC+2) I think I have a good solution to the problems discussed by Didier Verna in his ELS paper.
<emaczen>
How do you get a restart to be active when a condition is signalled from #'bt:make-thread?
<emaczen>
My confusion is that the call stack will be complete and therefore no restart (either defined by with-simple-restart or restart-case) is active
<Bike>
you mean from make-thread itself, or from the thread thunk?
<emaczen>
Bike: I'm signalling an error from the function passed to #'bt:make-thread
<emaczen>
The only method I see is *debugger-hook*?
<Bike>
the function passed to make-thread runs in its own thread, of course... it probably works like dynamic bindings
<Bike>
you can put a restart-case in the thunk itself
<emaczen>
Bike: The restart is going to call the same code that calls #'bt:make-thread
<Bike>
the restart makes a new thread?
<emaczen>
Bike: nvm I got it
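(A minimal sketch of that suggestion; the GIVE-UP restart name is made up. Restarts have dynamic extent, so they must be established inside the thunk that runs in the new thread:)

  (bt:make-thread
   (lambda ()
     (restart-case
         (error "Something went wrong in the worker.")
       (give-up ()
         :report "Give up and let the thread exit."
         nil))))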
<j`ey>
I just generated a 268MB file by PRINT-ing a large array to a file. now when I try to READ it back, I get out of memory errors. what can I try to do to make this work?
<beach>
j`ey: What implementation?
<j`ey>
sbcl
<beach>
Just increase the initial heap size.
<beach>
(setq inferior-lisp-program "/usr/local/bin/sbcl --dynamic-space-size 10000") is what I have in my .emacs
<j`ey>
what unit is that in, mb?
<beach>
Yeah.
<j`ey>
giving it a try
<beach>
Report back please!
<jasom>
It's also possible that SBCL is inefficient at READing large arrays, it would be interesting to see what the peak memory usage is when reading it back. 268MB isn't that big.
<j`ey>
hm, still the same issue
<beach>
Indeed.
<trittweiler>
I think it's printed as #(.....) and so lost its specialization
<Bike>
assuming the array is actually specialized, rather than just having integersin it
<jasom>
Bike: right
<jasom>
also, for very large arrays you probably want to use a serialization library anyways. cl-store or conspack, for example
<j`ey>
yeah, just wanted to try get this working with print/read for now
<whartung>
the reader is probably consing the heck during that.
<beach>
jasom: Right, because we don't have a unified API for primary and secondary storage.
<Bike>
if you have an array with integers between 0 and 99, it'll be two.something bytes per element...yow
<jasom>
beach: :)
<j`ey>
my integers are between 0..5mil
<whartung>
plus all the garbage from teh constantly growing array
<jasom>
beach: I'm so far behind in points with you I don't keep score anymore
<whartung>
as it’s extended
<jasom>
whartung: *print-readably* will fix that because it length-prefixes it
<j`ey>
whartung: yeah that's why I was hoping I could do something like tell it the size upfront
<j`ey>
ah
<whartung>
oh ok jasom
<trittweiler>
seems like sbcl's sb-impl::sharp-a reads in the contents as a list first
<jasom>
(this is all sbcl specific btw j`ey) *print-readably* does something useful on each implementation, but exactly what it does is not strongly specified
<whartung>
that wouldn’t be surprising trittweiler
<j`ey>
jasom: that's fine for now
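(A sketch of the round trip being discussed; the file name is made up, the array is kept small, and the exact printed form of specialized arrays is SBCL-specific:)

  (defparameter *data*
    (make-array '(8 6) :element-type '(unsigned-byte 31)))

  ;; WITH-STANDARD-IO-SYNTAX binds *PRINT-READABLY* to T, so SBCL prints
  ;; the dimensions and element type, e.g. #A((8 6) (UNSIGNED-BYTE 31) ...).
  (with-open-file (out "/tmp/data.sexp" :direction :output
                       :if-exists :supersede)
    (with-standard-io-syntax
      (print *data* out)))

  (defparameter *restored*
    (with-open-file (in "/tmp/data.sexp")
      (with-standard-io-syntax
        (read in))))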
<j`ey>
jasom: now I get errors actually writing out the array..
<whartung>
from the print-readably?
<j`ey>
yeah I think so. I also added :type 'fixnum to my array declaration, but I hope that shouldn't cause issues
<whartung>
no, that shouldn’t
<whartung>
Apparently, the “print-readably” isn’t a single pass change to the data.
<whartung>
I have no idea what it does in this case.
<beach>
jasom: I don't count points.
<j`ey>
maybe I can split the table into two
<j`ey>
not sure how to do that though
<beach>
jasom: I hope I am not sounding hostile. You give very valuable inputs for me to consider.
<whartung>
you should consider one of the options presented before, or writing your own serializer, j`ey
<j`ey>
whartung: yeah. I guess I have to
<_death>
never use fixnum.. you know the bounds, so use (integer 0 5000000)
<pjb>
beach: lispos -> clouseau ; boot it with the command: jacques
<beach>
Good one.
<j`ey>
_death: I guess that wouldn't actually change the text serialisation
<aeth>
I wonder how many implementations can understand (upgraded-array-element-type '(and fixnum (integer 0 *))) as an idiom for unsigned fixnum (SBCL makes that (unsigned-byte 62))
<aeth>
But you would want to specify (integer 0 5000000) so the tightest bounds can be used for storage. In this case, SBCL uses (unsigned-byte 31)
<aeth>
Half the memory
<j`ey>
oh I guess I was thinking about the memory of the file, not of when it actually serialised it back into objects
<aeth>
(upgraded-array-element-type '(integer 0 5000000)) ; I would expect (unsigned-byte 31) or (unsigned-byte 32) and mostly the latter
<j`ey>
so yeah, I've added the (integer..) now
<aeth>
I get ub32 everywhere but on SBCL, where it is ub31
<j`ey>
do I need the upgraded-array-element-type bit?
<aeth>
no, that's how you test what you get
<j`ey>
ah
<aeth>
you just put :element-type foo
<aeth>
Every implementation makes it (unsigned-byte 32) except for SBCL, which makes it (unsigned-byte 31)
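(For reference, the two forms being discussed; the values shown are examples and vary by implementation:)

  ;; What the implementation will actually use for storage:
  (upgraded-array-element-type '(integer 0 5000000))
  ;; => (UNSIGNED-BYTE 31) on SBCL, (UNSIGNED-BYTE 32) on most others

  ;; Declaring the element type when the array is created:
  (make-array '(5322240 6) :element-type '(integer 0 5000000))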
<_death>
it's a matter of correctness.. fixnum with these bounds may or may not work, depending on the implementation.. the integer spec will always work
<j`ey>
even with integer, print-readably fails :(
<pjb>
whartung: when you stream a video, linux still puts the whole video in RAM! If you stream it again, it won't access the disk again!
<whartung>
if you have enough disk buffer cache, yes it will do that.
<pjb>
(well, assuming a 4GB video, and 32GB RAM minimum as I have).
<j`ey>
at least it started to write out the file: #A((5322240 6) (UNSIGNED-BYTE 31)
<whartung>
but the application doesn’t load that 4G
<aeth>
_death: Types can provide performance, space efficiency, and/or correctness
<pjb>
whartung: it should not. It's more efficient if I/O is performed by the OS instead of the application.
<j`ey>
is there a simple way to split an array? so I can serialise across several files, and join them back up later?
<aeth>
j`ey: If you want to be semi-portable you could use the result of (upgraded-array-element-type '(integer 0 5000000)) instead of (unsigned-byte 31)
<whartung>
correct
<whartung>
to a point...
<pjb>
whartung: this is why we want to use the RAM as a cache for the file system: so the OS can do all the I/O.
<j`ey>
aeth: what I pasted was from the output of print-readably
<aeth>
j`ey: Generally, I think the idiomatic CL solution is to not split the array and to work directly with start/end indices
<aeth>
2D makes that more complicated, of course
<beach>
pjb: Am I detecting that you are supporting my point of view?
<j`ey>
aeth: Im only talking about splitting it, to be able to serialise it
<dlowe>
(I wish displaced arrays had a better interface :p)
<pjb>
beach: Yes, I like the EROS system :-)
<dlowe>
displaced arrays are way easier to deal with than messing about with start/end indices
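A minimal sketch of the displaced-array approach dlowe means, splitting a 2D table into two 1D halves for serialisation; the dimensions are taken from j`ey's #A(...) output above and the variable names are hypothetical. The views share storage with the original, so nothing is copied:

    ;; Hypothetical table with the (5322240 6) dimensions from the output above.
    (defparameter *table*
      (make-array '(5322240 6) :element-type '(integer 0 5000000)))

    ;; 1D views over the first and second halves of the rows (row-major order),
    ;; sharing storage with *TABLE*:
    (defparameter *first-half*
      (make-array (* 2661120 6)
                  :element-type '(integer 0 5000000)
                  :displaced-to *table*
                  :displaced-index-offset 0))
    (defparameter *second-half*
      (make-array (* 2661120 6)
                  :element-type '(integer 0 5000000)
                  :displaced-to *table*
                  :displaced-index-offset (* 2661120 6)))

Each half could then be written to its own file and read back into a fresh array of the same total size.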
<pjb>
and coyotos.
<beach>
pjb: got it.
<beach>
pjb: By the way, you are the one who made me reconsider crash-proofness. And I think you are right that it might be required. Not for crashes, but for the issue jasom is talking about.
it3ration has joined #lisp
<beach>
Add to EROS a check-sum verification before pages are written to disk and it should be fantastic.
<pjb>
IIRC, the disk was always consistent in EROS.
<beach>
Indeed.
<aeth>
j`ey: I would personally use a binary format. Perhaps it would be fast and simple enough to reconstruct ub32s from stored/read ub8s. (Or is there a way to read ub32s directly?)
<pjb>
The updates were written, and only when the write was complete, a flag was flipped on the disk to activate them.
<beach>
pjb: Provided RAM was not corrupted by cosmic radiation.
<pjb>
So if a crash occurred, it could only read the disk from one state or the other.
<j`ey>
aeth: yeah, seems like I'll have to take a look at CL-STORE
<aeth>
j`ey: You probably want something low level that can give you an ub32 abstraction since you're not doing anything elaborate. I'm not sure if CL-STORE is at that level.
<aeth>
It's a common enough problem that I'm sure someone has written something.
azimut has quit [Ping timeout: 252 seconds]
azimut_ has joined #lisp
<aeth>
Failing that, you can just (read-byte) 4 times and use bit magic to turn that into a ub32. It should be about 20-30 lines
<j`ey>
never used any CL libs before heh
<pjb>
As for garbage collection, in EROS it's performed over the whole disk (but lazily). It's the only way to erase a file (notice that this is already the case in unix file systems, with the unlink (not delete) syscall!)
<beach>
On an unrelated topic, I am disturbed by seeing people give style advice when they have not shared any systems of their own.
<pjb>
(only in unix it's a refcount-based collector :-))
it3ration has quit [Ping timeout: 244 seconds]
rumbler31 has joined #lisp
<j`ey>
aeth: actually true, maybe I'll just DIY. the size is fixed
<beach>
pjb: I think the EROS gc is an issue orthogonal to the main problem they solve.
<aeth>
j`ey: There's almost certainly a library for going to-and-from ub32 and ub8, but it's also very simple, maybe an hour or two
<pjb>
aeth: you can have binary files of (unsigned-byte 32). Only the file format is not specified, so it's implementation-dependent: not portable across systems or across implementations (or even across versions of the same implementation).
<pjb>
so for temp files it'd be ok. For persistent files we use (unsigned-byte 8), which is what POSIX provides.
<aeth>
pjb: Using byte functions (is it ldb? been a while) to go ub32->ub8 for writing and ub8->ub32 for reading would work best afaik
<j`ey>
how can I read a 32bit value then?
<pjb>
Yes, ldb, dpb.
<pjb>
You can also use ash and logior and logand; a good compiler should generate the same code.
<aeth>
That's my BF implementation, where I needed to do that because my BF uses ub8 but needs to support Unicode
<j`ey>
read-char
<aeth>
I might have the endianness wrong, though
<aeth>
(Well, not wrong, but not efficient)
<pjb>
or even arithmetic, a good compiler should notice the multiplications and truncates by powers of two …
<j`ey>
Im just going to write out 32bits, it's fine for what im doing
rumbler31 has quit [Ping timeout: 252 seconds]
vlatkoB_ has quit [Remote host closed the connection]
<aeth>
pjb: Standards have been raised. Apparently a good compiler now recognizes x * (2^n + 1) and x * (2^n - 1) and rewrites them as (x * 2^n + x) and (x * 2^n - x)
<_death>
there's a library called nibbles...
<aeth>
pjb: where x is exact and not a float, of course
<aeth>
Rules of algebra don't apply to floats :-p
<j`ey>
aeth: is there a read-32 equivalent to read-char?
<aeth>
j`ey: portably, streams only work on characters and on bytes, where bytes are ub8. If you read bytes you need to combine 4 of them into each ub32 (and for writing, split each ub32 into 4) using dpb/ldb/etc.
<aeth>
Alternatively ash and +
<aeth>
(format t "~B~%" (ash #b11111111 8))
<_death>
logior/logand.. + only "makes sense" if the bit ranges are mutually exclusive
<pjb>
aeth: more tricks can be done by compilers on integer arithmetic, notably when modulo arithmetic is used. like using a multiplication of the modulo inverse instead of a division.
dueyfinster has quit [Quit: My iMac has gone to sleep. ZZZzzz…]
<pjb>
_death: sometimes + makes sense even if the bit ranges are not exclusive. like when you're on a 8088 and converting a segment:offset address :-)
<_death>
pjb: ;)
<_death>
pjb: protected mode protects people from that
<aeth>
Something like this could work: (defun combine-bytes (byte-0 byte-1 byte-2 byte-3) (declare ((unsigned-byte 8) byte-0 byte-1 byte-2 byte-3)) (+ (ash byte-0 24) (ash byte-1 16) (ash byte-2 8) byte-3))
<j`ey>
aeth: can you write me a function to split them? :P
<aeth>
There you would need ldb
<_death>
aeth: that code is sitting on the fence.. logior/ash or +/*
<Bike>
wait, i think you can have streams with bit width bigger than 8 fine.
<aeth>
_death: logior and + have the same apparent speed and the same assembly size, just a different instruction in the asm
<aeth>
j`ey: Feel free to use logior instead of + if you want to be ideologically correct
<_death>
aeth: it's just a style remark
wigust has quit [Ping timeout: 245 seconds]
<_death>
like (list (car foo) (rest foo))
<aeth>
_death: Odds are that Intel recognizes this pattern and uses OR or something. Since asm isn't the lowest level anymore
rumbler31 has joined #lisp
<pjb>
Bike: well, POSIX doesn't explicitly specify 8. It specifies bytes and unsigned char. So it's actually CHAR_BIT.
<pjb>
Bike: on the other hand, hardware interfaces to hard disks (IDE, SCSI, etc) do specify 8-bit bytes.
Guest23783 has joined #lisp
<_death>
aeth: by "anymore" you're talking about the 80s right? :)
<aeth>
j`ey: the only thing to keep in mind is that whatever format you use, order the bytes so that you can reverse it.
<pjb>
Bike: so in practice, 8 is the portable bit width.
<j`ey>
aeth: I will try your code soon
<Guest23783>
Hi
<pjb>
Bike: (Internet protocols also specify 8-bit bytes).
<aeth>
j`ey: and either pjb's is the reverse or one of the two byte sequences needs to be reversed in order
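A sketch of the inverse of aeth's COMBINE-BYTES, using LDB as suggested, with the same most-significant-byte-first order (whether that matches pjb's ordering is left open, per the remark above):

    (defun split-bytes (ub32)
      "Return the four (unsigned-byte 8) parts of UB32, most significant first."
      (declare ((unsigned-byte 32) ub32))
      (values (ldb (byte 8 24) ub32)
              (ldb (byte 8 16) ub32)
              (ldb (byte 8  8) ub32)
              (ldb (byte 8  0) ub32)))

    ;; Round trip: (multiple-value-call #'combine-bytes (split-bytes #x12345678)) => #x12345678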
kajo has quit [Quit: From my rotting body, flowers shall grow and I am in them and that is eternity. -- E. M.]
figurehe4d has quit [Quit: Leaving]
<Guest23783>
I'm getting an error when I try to do division.
<aeth>
You probably want write-byte, writing the four bytes individually, because that temporary vector is probably going to cons a lot for a really big file
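A minimal sketch of that write/read loop, building on aeth's COMBINE-BYTES and the SPLIT-BYTES sketch above; the element type and the big-endian ordering are assumptions carried over from the discussion:

    (defun write-table (table pathname)
      "Write every element of TABLE to PATHNAME as four big-endian ub8s."
      (with-open-file (out pathname :direction :output
                                    :element-type '(unsigned-byte 8)
                                    :if-exists :supersede)
        (dotimes (i (array-total-size table))
          (multiple-value-bind (b0 b1 b2 b3)
              (split-bytes (row-major-aref table i))
            (write-byte b0 out) (write-byte b1 out)
            (write-byte b2 out) (write-byte b3 out)))))

    (defun read-table (pathname dimensions)
      "Read back a table written by WRITE-TABLE into a fresh array of DIMENSIONS."
      (let ((table (make-array dimensions :element-type '(integer 0 5000000))))
        (with-open-file (in pathname :element-type '(unsigned-byte 8))
          (dotimes (i (array-total-size table) table)
            (setf (row-major-aref table i)
                  (combine-bytes (read-byte in) (read-byte in)
                                 (read-byte in) (read-byte in)))))))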
<White_Flame>
Guest23783: defvar defines a variable, and only sets it if it's new
<White_Flame>
so you want defvar up above your code, and (setf n1 (read)) instead
<White_Flame>
the 2nd time you run your code, the defvar does NOT change the value of n1 and n2, because the variable is already defined
<j`ey>
aeth: write-byte is easier
Kundry_Wag has joined #lisp
<aeth>
I'm not sure if there is an advantage to matching the machine's endianness if you're doing it manually like this.
<White_Flame>
however, it's more lisp style to do (let ((n1 (progn (print "Number 1:") (read))) ....) so you have a scope in which the local variables n1 (and n2) exist. There's no need for them to be global
kajo has joined #lisp
<White_Flame>
Guest23783: there's also #clschool which is more appropriate for beginner questions
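A minimal sketch of the LET-based version White_Flame describes; the function name, the prompts, and the division are placeholders, since Guest23783's actual code isn't shown in this log:

    (defun divide-two-numbers ()
      ;; Local bindings instead of DEFVAR, so re-running re-reads both inputs.
      (let ((n1 (progn (print "Number 1:") (read)))
            (n2 (progn (print "Number 2:") (read))))
        (/ n1 n2)))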
<j`ey>
aeth: for now, I just want something simple :P
<pjb>
Guest23783: you have too many closing parentheses.
it3ration has joined #lisp
mindCrime_ has quit [Ping timeout: 252 seconds]
rumbler31 has quit [Remote host closed the connection]
Kundry_Wag has quit [Ping timeout: 252 seconds]
gravicappa has quit [Ping timeout: 272 seconds]
Kundry_Wag has joined #lisp
<Guest23783>
Thanks pjb and White_Flame.
it3ration has quit [Ping timeout: 246 seconds]
Kundry_Wag has quit [Ping timeout: 252 seconds]
Kundry_Wag has joined #lisp
kajo has quit [Ping timeout: 252 seconds]
kajo has joined #lisp
Kundry_Wag has quit [Ping timeout: 252 seconds]
<Demosthenex>
could someone recommend to me some reading on organizing a CL project? i've reached the point where i need to break up my program into multiple files, dump a few executables, etc.
igemnace has quit [Ping timeout: 252 seconds]
igemnace has joined #lisp
<White_Flame>
look at multi-file libraries, and how their .asd file is set up
<Demosthenex>
and other than (load...) i'm lost ;]
<Demosthenex>
(my google-fu is failing because i'm not using the right words to describe what i'm trying to do)
nbunjevac has quit [Quit: leaving]
<White_Flame>
do you use quicklisp? if so, then you have lots of small libraries to look at
<Demosthenex>
yeah, i do. and i was looking at roswell for dumping files. but my library management is really (load "../lib/x.lisp")
<Demosthenex>
i'm not creating libraries for use outside the project, just trying to break up my program for reuse, especially with the executables
<White_Flame>
~/quicklisp/dists/quicklisp/software/ will have the source code to the libraries you downloaded
ealfonso has joined #lisp
Bike has quit [Ping timeout: 252 seconds]
<Demosthenex>
part of what i'm asking is since these are internal libs... isn't ql/asdf overkill? is (load) enough, or am i missing something specific
it3ration has joined #lisp
<whartung>
if you can do it with a simple “project.lisp” that loads the files in their proper order, then do it. nbd.
<whartung>
if you want to learn about formal packages and systems and such, then do that instead. Your code, up to you.
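A sketch of the simple project.lisp whartung describes; the file names are hypothetical, and the only rule is that each file is loaded after the ones it depends on:

    ;; project.lisp -- load the project in dependency order
    (load "package.lisp")   ; DEFPACKAGE forms first
    (load "utils.lisp")
    (load "main.lisp")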
Copenhagen_Bram has quit [Ping timeout: 252 seconds]
Kundry_Wag has joined #lisp
<Demosthenex>
sounds like use load until i outgrow it ;]
<whartung>
much progress has been made by “doing what works, until it doesn’t — then change it"
Kundry_Wag has quit [Ping timeout: 240 seconds]
<pjb>
Demosthenex: definitely. Since this way you will understand why you need those sophisticated tools and how they work.
Copenhagen_Bram has joined #lisp
<whartung>
yup! There’s a lot to be said about understanding why you’re jumping through all those hoops to begin with, and you won’t know that until you start encountering some kind of pain point. (a point that you simply may not have reached yet)
jdz has quit [Ping timeout: 240 seconds]
shymega has quit [Quit: Ciao.]
Ober has joined #lisp
shymega has joined #lisp
bradcomp has quit [Ping timeout: 246 seconds]
Fare has quit [Ping timeout: 272 seconds]
foom2 is now known as foom
jdz has joined #lisp
<Demosthenex>
exactly.
LiamH has quit [Quit: Leaving.]
Kundry_Wag has joined #lisp
Copenhagen_Bram has quit [Ping timeout: 252 seconds]
Kundry_Wag has quit [Read error: Connection reset by peer]
<_death>
Demosthenex: an .asd file and a package.lisp file get you a long way..
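A minimal sketch of the .asd plus package.lisp pair _death means; the system name, the dependency, and the component files are hypothetical:

    ;; my-project.asd
    (asdf:defsystem "my-project"
      :depends-on ("alexandria")          ; whatever libraries the code actually uses
      :serial t                           ; compile/load components in the listed order
      :components ((:file "package")
                   (:file "utils")
                   (:file "main")))

    ;; package.lisp
    (defpackage #:my-project
      (:use #:cl))

Once the directory is visible (for example under ~/quicklisp/local-projects/), (ql:quickload "my-project") or (asdf:load-system "my-project") compiles and loads the files in order.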
<_death>
White_Flame: this is actually better than my version, since it's correct in the case n=0
rpg has joined #lisp
<aeth>
White_Flame, _death: I wrote an alternative not in regex because the regex doesn't run in cl-ppcre: https://gitlab.com/snippets/1751515
dale has quit [Read error: Connection reset by peer]
dale_ has joined #lisp
dale_ is now known as dale
Denommus has quit [Remote host closed the connection]
<aeth>
It looks like it even handles trailing whitespace for free
<aeth>
I'm surprised relying on handler-case didn't absolutely destroy the performance of the NIL results, at least in SBCL
<White_Flame>
I'd just put a handler-case aroudn the whole thing and not do the intermediate checks
kajo has joined #lisp
<aeth>
I can also get rid of the second checks for space-0 and space-1
<White_Flame>
that way the parse error would just abort out to the outer handler to return NIL, and not bother running through the AND checks and further fields
<aeth>
i.e. (and a b c (= c (+ a b))) will work just fine
<_death>
this is lisp... just (= c (+ a b)) would be fine ;)
<aeth>
_death: no because those are only defined on numbers and the type of those variables is (or null integer)
<White_Flame>
right, which would trip an error
<_death>
so add ignore-errors
<aeth>
I will edit it to just make it (and a b c (= c (+ a b))) instead of (and space-0 space-1 a b c ...)
<White_Flame>
get rid of the handler-cases and that means the final numerics are only reached if parse-integer succeeded
<_death>
what I actually aimed for was pointing out that in lisp we don't care about strings ;)
<aeth>
_death: No. In C/C++ people don't care about strings
<aeth>
In most other languages, people do :-p
<White_Flame>
none of your existing checks actually propagate numeric type information. They only ensure non-nil, so you're not gaining any speed with these checks
<White_Flame>
just let the builtins check & throw for you
<aeth>
White_Flame: the known type is (or null integer)
<White_Flame>
ah, hmm, I guess so
<aeth>
White_Flame: I could just put handler-case over the whole thing and return nil for a parse-error, though, if that's what you mean
<White_Flame>
yes
<White_Flame>
then you don't have to check for a/b/c at the end
<White_Flame>
your error handling is only pushing it forward to require more error handling, which is overall slower and more error-prone :-P
Lycurgus has quit [Quit: Exeunt]
<aeth>
White_Flame: It doesn't handle some edge cases, though, such as where b and c are NIL because space-1 doesn't exist
<White_Flame>
you could also bail after space-0 and space-1, by nesting ANDs
<White_Flame>
since you make redundant space-0/1 checks
<aeth>
I did initially nest LETs
<aeth>
it looked ugly
<White_Flame>
this looks uglier IMO :)
<aeth>
It's flat, though
<White_Flame>
flat is bad and non-lispy
<aeth>
PROGN is bad and non-lispy
<White_Flame>
I doubt this would need progn
<White_Flame>
though a when-let sort of thing would certainly be handy
<White_Flame>
to check for spaces
<White_Flame>
the integers could just throw to the outside
<aeth>
White_Flame: What I meant is that any form with an implicit PROGN is just as not-flat as LET*
<White_Flame>
what do you mean by "flat" specifically?
<aeth>
s/not-flat/flat/
<White_Flame>
I thought you were implying straight indentation at first
<White_Flame>
that doesn't really answer my question
Trystam has joined #lisp
aindilis has quit [Ping timeout: 260 seconds]
eschulte has quit [Ping timeout: 245 seconds]
Trystam is now known as Tristam
<aeth>
White_Flame: Doing everything all at one level can definitely be lispy if it fits the problem
lavaflow has quit [Ping timeout: 252 seconds]
arduo has quit [Ping timeout: 244 seconds]
<White_Flame>
I still don't know what "flat" implies
fikka has quit [Ping timeout: 246 seconds]
<aeth>
White_Flame: LET* is designed to avoid the nesting in (let ((x 42)) (let ((y 43)) ...)) and, yes, it's not a perfect fit if the LET is conditional
<_death>
the issue is separation of concerns.. you're mixing both parsing and testing.. (test (numbers string))
lavaflow has joined #lisp
<aeth>
_death: I'm not mixing parsing and testing. I parse each number, returning NIL if a parse fails, so the three numbers come out as (or null integer)s, and then I do the testing. That's also why I saw the handler-case as best done 3 separate times
<_death>
I feel we've been miscommunicating a lot today ;)
<aeth>
(and a b c (= c (+ a b))) is essentially (and parse-successful (= c (+ a b)))
<aeth>
I suppose the parsing could be done in a helper function that returns 3 values and only has one handler-case
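A sketch of that helper-function shape, with a single HANDLER-CASE around the parse as White_Flame suggests; the "A B C" line format and the names space-0, space-1, a, b, c are assumed from the discussion, since aeth's snippet itself isn't reproduced in this log:

    (defun parse-three-integers (string)
      "Return three integers parsed from an assumed \"A B C\" line, or NIL on failure."
      (handler-case
          (let* ((space-0 (position #\Space string))
                 (space-1 (position #\Space string :start (1+ space-0)))
                 (a (parse-integer string :end space-0))
                 (b (parse-integer string :start (1+ space-0) :end space-1))
                 (c (parse-integer string :start (1+ space-1))))
            (values a b c))
        ;; A missing space signals a TYPE-ERROR from (1+ NIL); a bad number
        ;; signals a PARSE-ERROR.  Either way, bail out with NIL.
        (error () nil)))

    (defun sum-line-p (string)
      (multiple-value-bind (a b c) (parse-three-integers string)
        (and a (= c (+ a b)))))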
<White_Flame>
I'm a bit surprised that read-into-list-until-eof isn't a builtin
<_death>
more efficient to use make-concatenated-stream
<_death>
of course there's also a looping solution
robotoad has quit [Quit: robotoad]
rumbler31 has joined #lisp
fikka has quit [Ping timeout: 272 seconds]
aindilis has joined #lisp
<White_Flame>
(read-delimited-list #\EOF)
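There is no #\EOF character in standard CL, so READ-DELIMITED-LIST can't be used that way; a sketch of the plain looping approach _death mentions, using READ with an explicit EOF value:

    (defun read-all-forms (stream)
      "Collect every form from STREAM into a list, stopping at end of file."
      (let ((eof (list nil)))              ; unique EOF marker
        (loop for form = (read stream nil eof)
              until (eq form eof)
              collect form)))

To read several split files back-to-back, one could pass (apply #'make-concatenated-stream list-of-streams) to the same function, which is where _death's earlier suggestion comes in.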
rumbler31 has quit [Remote host closed the connection]
Pixel_Outlaw has joined #lisp
kushal has quit [Remote host closed the connection]
SenasOzys has quit [Remote host closed the connection]
Kaisyu7 has quit [Quit: ERC (IRC client for Emacs 26.1)]
mange has joined #lisp
fikka has joined #lisp
danielxv_ has joined #lisp
danielxv_ has quit [Client Quit]
vertigo has quit [Ping timeout: 240 seconds]
kushal has joined #lisp
danielvu has joined #lisp
fikka has quit [Ping timeout: 252 seconds]
<aeth>
White_Flame: CL is built for efficiency.
<aeth>
alexandria:read-file-into-byte-vector and alexandria:read-file-into-string seem like obvious things that are missing, though
Kaisyu has joined #lisp
<_death>
it comes from a time where such things were obviously wrong
<aeth>
They're still usually wrong, especially for the byte-vector option
<aeth>
But it's weird that they're not there (as well as the rest of alexandria, for the most part)
<White_Flame>
aeth: I would say that the lack of some of those things is probably from small-memory-footprint thinking
<_death>
maybe make-array should be able to take a filespec for :displaced-to :)
<White_Flame>
that'd be nice :)
ebrasca has joined #lisp
<_death>
from what I've seen, sbcl assumes the (specialized) array's header immediately precedes the contents, so it's a bit tricky to implement
dented42 has joined #lisp
fikka has joined #lisp
asarch has quit [Quit: Leaving]
fikka has quit [Ping timeout: 272 seconds]
robotoad has joined #lisp
<jasom>
aeth: what does that even do
<jasom>
oops I was scrolled up, that was in response to the regex posted 3 hours ago
<jasom>
Demosthenex: I use ASDF for pretty much any program I write; it doesn't add much complexity and has the advantage of declaring your dependencies.
<jasom>
though I only do so because I did what pjb mentions (jumped through a bunch of hoops until I got tired of it).
Jesin has quit [Ping timeout: 252 seconds]
dale has quit [Quit: dale]
fikka has joined #lisp
it3ration has quit [Remote host closed the connection]
it3ration has joined #lisp
rpg has quit [Ping timeout: 245 seconds]
fikka has quit [Ping timeout: 245 seconds]
dented42 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]