00:02
pierpal has quit [Ping timeout: 252 seconds]
00:35
Natechip has joined #picolisp
00:37
Natechip has quit [Remote host closed the connection]
00:52
ubLIX has quit [Quit: ubLIX]
02:07
jibanes has quit [Ping timeout: 244 seconds]
02:09
jibanes has joined #picolisp
02:15
jibanes has quit [Ping timeout: 268 seconds]
02:15
jibanes has joined #picolisp
02:25
pierpa has quit [Quit: Page closed]
03:29
_whitelogger has joined #picolisp
04:05
aw- has quit [Quit: Leaving.]
05:39
macker15 has joined #picolisp
05:39
macker15 has quit [Remote host closed the connection]
05:50
orivej has quit [Ping timeout: 252 seconds]
07:39
<Regenaxer> Sure. List of lists
07:40
ubLIX has joined #picolisp
07:41
<razzy> Regenaxer: will picolisp swap them properly? it should
07:48
orivej has joined #picolisp
07:55
ubLX has joined #picolisp
07:57
ubLIX has quit [Ping timeout: 252 seconds]
08:07
<Regenaxer> swap? Have you looked at 'swap' or 'xchg'?
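A minimal sketch of the two built-ins mentioned above; the variables A and B are only illustrative:
   (setq A 1  B 2)
   (xchg 'A 'B)   # exchange the values of A and B: now A is 2 and B is 1
   (swap 'A 7)    # set A to 7 and return the previous value, 2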
08:16
<razzy> Regenaxer: i mean, would the long nested lists live in RAM? or would they get swapped to HDD if they are not needed
08:16
<Regenaxer> Lists always live in RAM
08:17
<Regenaxer> They get swapped if the OS does so
08:17
<Regenaxer> Again, preliminary worries!
08:17
<razzy> i am guilty
08:18
<Regenaxer> A list of one million elements is unusable for other reasons, but takes only 16 MiB
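The 16 MiB follow from the cell size: in pil64 a cell is two 64-bit words (16 bytes), and short numbers live directly in the pointer, so a list of one million small numbers needs one cell per element, i.e. roughly 16 MB. A rough way to watch it in a REPL (a sketch; the numbers reported by 'heap' vary per system):
   : (heap)                        # current heap size in megabytes
   : (setq L (range 1 1000000))    # allocate a million cells
   : (heap)                        # now roughly 16 MB larger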
08:26
orivej has quit [Ping timeout: 268 seconds]
08:35
<razzy> Regenaxer: a skiplist is something different than just a list of lists
08:36
<razzy> a skiplist has the same data in different nested lists
08:36
<Regenaxer> Still just lists of lists, no?
08:36
<Regenaxer> (in pil everything is a list anyway ;)
08:36
<razzy> yes, but i do not know how to handle the addressing
08:37
<razzy> it is ctl magic
08:37
<Regenaxer> ctl = control?
08:37
<razzy> i think it is some kind of pointer in picolisp lists
08:38
<Regenaxer> lists consist *only* of pointers
08:39
<Regenaxer> and thus s-exprs in general, with the exception of short numbers
08:40
<Regenaxer> doc64/structures
08:41
<Regenaxer> short numbers and the DIG part of big numbers
08:45
ubLX has quit [Quit: ubLX]
09:04
pierpal has joined #picolisp
09:23
pierpal has quit [Read error: Connection reset by peer]
09:40
orivej has joined #picolisp
10:08
<razzy> does "(NIL (< N 3) (printsp 'enough))" have special meaning everywhere in picolisp? or just as an escape in the 'for' function
10:11
<Regenaxer> It is a special syntax in 'for', but also in 'do' and 'loop'
10:12
<Regenaxer> NIL *looks* like a function call here, but it is not
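For illustration, the quoted expression in context (a sketch; the NIL clause runs its body and leaves the loop as soon as its condition evaluates to NIL):
   (for N 5
      (NIL (< N 3) (printsp 'enough))   # exit clause: leave the 'for' when N reaches 3
      (printsp N) )                     # prints "1 2 ", then "enough ", returning 'enough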
10:14
<razzy> but the picolisp interpreter could look for patterns similar to functions and make appropriate choices
10:14
<razzy> it could be useful to jump out of the current list
10:15
<Regenaxer> you cannot jump out of a list, only out of a loop
10:15
<Regenaxer> The above pattern does not work in 'while' or 'until', if you mean that
10:16
<razzy> maybe jump out of the current function? it is a clearer expression
10:17
<Regenaxer> Also not. There is no way to jump out of a function in pil
10:17
<Regenaxer> You can jump out of more deeply nested structures with 'throw'
10:18
<Regenaxer> So yes, with 'throw' you jump out of one or many functions
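A small sketch of that (the tag 'found and the data are made up for illustration):
   (catch 'found                  # 'throw' unwinds to the matching 'catch', across any functions in between
      (for X '(a b c d)
         (when (= X 'c)
            (throw 'found X) ) )  # leaves the 'for' immediately; the result is c
      NIL )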
10:18
<razzy> is it not basically the same?
10:18
<Regenaxer> The same as what?
10:18
<razzy> as jumping out of 'for'
10:18
<Regenaxer> I mean, 'throw' is the only way to jump in this sense
10:19
<razzy> i will read
10:19
<Regenaxer> 'for' is left with NIL or T in a more controlled way, I would say
10:19
pierpal has joined #picolisp
10:20
<Regenaxer> ah, and 'yield' is also like 'throw', ie "jumping"
10:37
<razzy> is (I . L) special notation, with I the line number of the cell?
10:40
<razzy> i guess it works only in the 'for' function
10:42
<Regenaxer> correct. Special syntax only in 'for'
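A tiny sketch of that syntax (the names I and X are arbitrary): when the first argument of 'for' is a pair, its CAR counts the iterations from 1 and its CDR is bound to the successive list elements:
   (for (I . X) '(a b c)
      (println I X) )   # prints: 1 a / 2 b / 3 c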
10:43
<razzy> but this one i find awesome :]
10:43
<Regenaxer> useful :)
10:45
<razzy> do coroutines in 64-bit pil work on several cores?
10:46
<Regenaxer> no, they run in the same process, so only on a single core at a given moment
10:49
pierpal has quit [Quit: Poof]
10:49
<razzy> but one core can process several lisp threads at one time, yes?
10:49
pierpal has joined #picolisp
10:51
<Regenaxer> One core can run one process at a time
10:51
<Regenaxer> or one thread, but pil has no threads in that sense
10:52
<Regenaxer> coroutines are logically parallel, not physically
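A sketch of what "logically parallel" means in practice (pil64 only; the function and tag names are made up): a coroutine keeps its own stack, so it can be suspended with 'yield' and resumed later, but it never runs at the same wall-clock moment as its caller:
   (de next3 ()
      (co 'next3             # first call creates the coroutine, later calls resume it
         (for N 3
            (yield N) ) ) )  # hand N back to the caller and suspend

   # (next3) returns 1, then 2, then 3 on successive calls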
10:54
<razzy> hmm, will it show in the debugger?
10:55
<razzy> i thought that you cram several lisp instructions into one real 64-bit CPU instruction
10:57
<razzy> idk how it works
10:57
<Regenaxer> I *planned* for PilOS to execute several primitives of a single lisp function in parallel on several cores
10:58
<Regenaxer> eg doing things on the stack and in the heap in parallel
10:58
<Regenaxer> not sure if this would be faster at all, as the bottleneck is the buses
10:58
<Regenaxer> ie memory access
10:59
<Regenaxer> There are no "lisp instructions"
10:59
<razzy> i do not understand coroutines; are not logically parallel instructions extremely rare?
10:59
<Regenaxer> and one lisp function consists of many, many 64-bit instructions
11:00
<Regenaxer> They are useful sometimes
11:00
orivej has quit [Ping timeout: 252 seconds]
11:01
<razzy> how does it recognise them?
11:02
<Regenaxer> A more typical example is traversing several trees at the same time, asynchronously
11:02
<Regenaxer> recursive traversal
11:03
<razzy> and you could cram more of them into one CPU core?
11:03
<Regenaxer> What do you mean?
11:04
<Regenaxer> Pil does not deal with cores, the OS does
11:04
<razzy> ok, i have a vague, probably wrong idea i am satisfied with
11:04
<Regenaxer> And there is nothing anywhere which crams something into cores (?)
11:05
<Regenaxer> A single core always executes at most one thread or process
11:07
<Regenaxer> An example for traversing trees in parallel:
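A rough sketch of the idea (not the example referred to above; names are made up): one coroutine per tree walks a nested list via 'recur'/'recurse' and yields one leaf per call, so several trees can be traversed in lockstep from the main routine:
   (de leaves (Tag Tree)
      (co Tag
         (recur (Tree)
            (if (atom Tree)
               (yield Tree)            # hand one leaf back and suspend
               (mapc recurse Tree) ) ) ) )

   # (leaves 'A '((1 2) (3 4))) -> 1, (leaves 'B '(5 (6))) -> 5, (leaves 'A) -> 2, ...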
11:07
x49F has joined #picolisp
11:11
x49F has quit [Remote host closed the connection]
12:39
<razzy> how do i get useful data from a forked picolisp
12:41
<Regenaxer> Use 'pipe'. Or IPC via 'tell', but this only *sends* to other processes
12:46
<Regenaxer> A convenient function based on 'pipe' is 'later'
12:51
<razzy> imagine parallel bubble sort.
13:14
<razzy> 'later' floats my boat a little
13:29
razzy has quit [Read error: Connection reset by peer]
13:30
razzy has joined #picolisp
13:33
<Regenaxer> parallel bubble sort is neither as easy nor as useful as you may think ;)
13:34
<Regenaxer> To do something in parallel, the data should either be independent, or you need to lock (synchronize) things, which makes parallelism meaningless
13:35
<razzy> Regenaxer: does picolisp have locks on variables?
13:36
<razzy> it has locks on file streams
13:36
<Regenaxer> Makes no sense
13:36
<Regenaxer> As there are no threads, variable locks are meaningless
13:37
<Regenaxer> The DB locks
13:37
<razzy> but if everything is a symbol, it should not be a problem to use a file lock on variables
13:38
<Regenaxer> no problem, but useless
13:38
<razzy> ah,.. i know now,..
13:38
<Regenaxer> No variable can be modified asynchronously by another process
13:39
<razzy> i cut the list and make independent bubblesorts
13:39
<Regenaxer> (tell <pid> 'setq '*Variable 12) will set a var in another process
13:39
<Regenaxer> but synchronously
13:39
<Regenaxer> you can do that with 'later'
13:40
<Regenaxer> The example in the ref executes something on all members of a list
13:40
<Regenaxer> a kind of parallel mapcar
13:42
<Regenaxer> The example is a bit meaningless though
13:42
<Regenaxer> No sense to parallelize (* N N)
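The 'later' example in the reference looks roughly like this (reproduced from memory, so the exact list may differ): each square is computed in a forked child, and the parent waits until every result cell has been filled in.
   (prog1
      (mapcan '((N) (later (cons) (* N N)))   # one child process per element
         (1 2 3 4 5) )
      (wait NIL (full @)) )                   # wait until all result cells are non-NIL
   # -> (1 4 9 16 25)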
13:43
<razzy> bubblesort is a good example to parallelize
13:44
<Regenaxer> hmm, I'm not sure. Can you distribute the data?
13:46
<razzy> Regenaxer: your example (* N N) is better, i just had to read it again
14:09
<razzy> Regenaxer: you have a better example, but it is not that readable. i had to (read) it to comprehend it. 'later' really floats my boat. i prefer this version of the example: https://ptpb.pw/3jSB
14:15
<razzy> how much overhead does (later) have?
14:18
<tankf33der> i've implemented map-reduce on coroutines
14:19
<tankf33der> razzy: check this out too
14:25
<razzy> i really think that coroutines and tasks should be in a separate library
14:27
<Regenaxer> coroutines and tasks cannot be in a separate library, they go deep into the interpreter core
14:34
<razzy> tankf33der: now i understand coroutines better
14:36
<razzy> does (later) use fork?
14:40
jibanes has quit [Ping timeout: 272 seconds]
14:40
<razzy> does (later) use (fork)? or (co)?
14:40
<razzy> i guess fork
14:41
jibanes has joined #picolisp
14:42
<Regenaxer> It uses 'pipe', which does fork() internally
14:42
<Regenaxer> : (pp 'later)
14:42
<Regenaxer> (de later ("@Var" . "@Prg")
14:42
<Regenaxer> (pipe (pr (prog . "@Prg")))
14:42
<Regenaxer> (setq "@Var" (in @ (rd)))
14:42
<Regenaxer> (task (close @)) ) )
14:42
<Regenaxer> "@Var" )
14:42
<Regenaxer> -> later
14:44
<razzy> (de pipe 724692000 . 724692000 )
14:44
<razzy> : (pp 'pipe)
14:44
<Regenaxer> : (vi 'pipe)
14:44
<Regenaxer> or (em 'pipe) I think
14:45
<Regenaxer> 724692000 means it is a function pointer to a built-in
14:46
<razzy> learning here
14:47
<Regenaxer> See doc/ref.html#ev
14:47
<Regenaxer> Under "What is an executable function?"
14:56
helloworld has joined #picolisp
14:58
<razzy> rule of thumb: how big should your code be to gain an advantage from running it through the (later) function?
14:59
<Regenaxer> Not the size of the code, but how long it takes to run it
14:59
<Regenaxer> I used it a lot for remote queries, with network delay etc
15:00
<razzy> useful yes,.. but not now :]
15:01
helloworld has left #picolisp [#picolisp]
15:01
helloworld has joined #picolisp
15:01
<Regenaxer> Take a look at misc/fibo.l
15:02
<Regenaxer> there is a parallelized version, using 'later'
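A rough sketch of the idea (not the actual contents of misc/fibo.l; the cutoff value is arbitrary): recurse in parallel only while the subproblem is large enough, since every 'later' costs a fork.
   (de fibo (N)                     # plain sequential version
      (if (> 2 N)
         1
         (+ (fibo (dec N)) (fibo (- N 2))) ) )

   (de fibo2 (N)                    # parallel version
      (if (> 24 N)
         (fibo N)                   # below the cutoff, forking costs more than it saves
         (let (A (later (cons) (fibo2 (dec N)))
               B (later (cons) (fibo2 (- N 2))) )
            (wait NIL (and (car A) (car B)))   # run the event loop until both children reported back
            (+ (car A) (car B)) ) ) )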
15:04
helloworld has left #picolisp [#picolisp]
15:07
<razzy> Regenaxer: imho it takes too much time to use 'later'
15:07
<Regenaxer> I think it is very efficient
15:08
<Regenaxer> or do you mean programmer's time?
15:08
<razzy> it starts a whole new pil process with full initialization, yes?
15:08
<Regenaxer> (this is what counts most imho)
15:09
<Regenaxer> New process yes, but no initialization
15:09
<Regenaxer> copy on write
15:09
<Regenaxer> Very fast in Unix
15:09
<Regenaxer> Not slower than threads
15:09
<razzy> copy on write?
15:09
<Regenaxer> (internally the same)
15:10
<Regenaxer> yes, nothing is copied unless modified
15:10
<Regenaxer> the example (* N N) copies nothing on the OS level
15:10
<razzy> aaaa, *clever girl*
15:11
<Regenaxer> Again, you worry too early. Just try it practically instead of theorizing
15:11
<razzy> i theorized most of my life, bad habit
15:12
<Regenaxer> Best is a good mixture of both
15:14
<razzy> how long does it take to build a thread?
15:15
<Regenaxer> No idea. The kernel copies the internal process structure (same as in fork())
15:16
<Regenaxer> Only a few bytes I think (a few hundred perhaps)
15:17
<Regenaxer> and a new entry in the process or thread table
15:17
<Regenaxer> For some reason people are unbelievably afraid of forks but not of threads
15:18
<razzy> it is magic :]
15:18
<Regenaxer> Forked processes are better in some regards, they have their private memory
15:18
<Regenaxer> "immutable", to stay with the buzzwords
15:18
<Regenaxer> well, not immutable, but protected
15:19
<razzy> you could move threads to another machine
15:19
<razzy> no magic involved
15:19
<Regenaxer> you can start processes on another machine
15:19
<Regenaxer> another advantage
15:20
<Regenaxer> threads must be on the same machine
15:20
<Regenaxer> I used distributed DBs with up to 70 processes plus children on several remote machines
15:21
<razzy> well, general knowledge says you are right
15:23
orivej has joined #picolisp
15:41
ubLIX has joined #picolisp
16:18
pierpal has quit [Quit: Poof]
16:18
<razzy> it is not so bad
16:18
pierpal has joined #picolisp
16:52
pierpal has quit [Quit: Poof]
16:52
pierpal has joined #picolisp
16:54
<razzy> i am surprised, my memory does not bloat when computing fibonacci
16:56
<Regenaxer> fibo uses almost no memory
16:56
<Regenaxer> or none
16:56
<Regenaxer> only stack for recursion
17:01
razzy has quit [Ping timeout: 268 seconds]
17:02
razzy has joined #picolisp
17:07
<razzy> i am impressed :] very clever interpreter. soooo, to make it properly bloat, let's allocate some lists :]
17:09
<Regenaxer> Good :)
17:11
<razzy> the interpreter looks like a monster on its own :]
17:12
<razzy> more like a slick, elusive sprite
17:41
<razzy> i had a feeling that the interpreter throws away code that is of no use to the result
17:42
<razzy> it would be scary
17:44
fireworks15 has joined #picolisp
17:49
fireworks15 has quit [Remote host closed the connection]
17:58
razzy has quit [Ping timeout: 264 seconds]
18:03
razzy has joined #picolisp
18:09
ubLIX has quit [Quit: ubLIX]
18:11
<Regenaxer> razzy: Where did you see that code was thrown away?
18:13
<razzy> Regenaxer: for example, when you are loading libraries, are you loading the whole code into RAM? or just what is needed
18:13
* razzy is trying to bloat his code, with little luck
18:25
<razzy> well, i am happy, i was able to make the work with memory 2 times faster
18:44
<Regenaxer> Sorry, we had guests
18:44
<Regenaxer> In 'load', the whole file is passed through a REPL
18:44
<Regenaxer> ie each expression is read, evaluated, and thrown away
18:45
<Regenaxer> so it is in fact not a REPL, but a REL
18:45
<Regenaxer> (no print, printing happens only in interactive mode)
18:46
<Regenaxer> interactive mode = stdin is a tty
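Conceptually (a rough sketch, not the actual 'load' implementation, which also handles command line arguments, prompts, etc.):
   (de myLoad (File)
      (in File                    # redirect the current input channel to the file
         (until (eof)
            (eval (read)) ) ) )   # read one expression, evaluate it, drop the result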
19:04
<razzy> no problems with the guests
19:05
<razzy> i consider IRC an asynchronous, not reliable, form of communication
19:06
<Regenaxer> relaxed
19:39
pierpal has quit [Quit: Poof]
19:39
pierpal has joined #picolisp
20:46
freemint has joined #picolisp
20:47
<freemint> will somebody be at froscon tomorrow?
21:27
freemint has quit [Quit: Leaving]
21:47
ubLIX has joined #picolisp
22:17
viaken has quit [Quit: WeeChat 2.1]
22:21
viaken has joined #picolisp
22:52
siniStar has joined #picolisp
22:54
siniStar has quit [Remote host closed the connection]
23:12
alexshendi has joined #picolisp