flux changed the topic of #ocaml to: Discussions about the OCaml programming language | http://caml.inria.fr/ | OCaml 4.00.1 http://bit.ly/UHeZyT | http://www.ocaml.org | Public logs at http://tunes.org/~nef/logs/ocaml/
<wmeyer> so ousado, it seems like you want more RAM in the rpi
<wmeyer> I experimented how to get it on AC100
<wmeyer> and was able to compile OCaml
<wmeyer> but I don't think I'll be able to wait for Coq until it's ready for proving ;;)
theWinner has joined #ocaml
<ousado> unfortunately I don't have one
<theWinner> does ocaml have a difference list data structure?
<theWinner> or is there an ocaml code repository I can search to find such a thing?
<theWinner> I'm trying to write a diff list for F#, and am hoping to use an existing one as a start
<wmeyer> theWinner: the answer is definitely not in Stdlib, but you might find it in Batteries or Core :-)
<wmeyer> you can also use the Set module from the stdlib for fast diffs. If you're talking about arrays implemented as diff lists, then just use Array
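A difference list is usually represented as a function from lists to lists, so appending is O(1) function composition. A minimal sketch in OCaml; the names here are illustrative and not taken from Batteries or Core:

```ocaml
(* Difference lists as functions 'a list -> 'a list: a dlist is "the list
   it represents, waiting for a tail". Append is function composition. *)
type 'a dlist = 'a list -> 'a list

let empty : 'a dlist = fun tail -> tail
let singleton x : 'a dlist = fun tail -> x :: tail
let append (d1 : 'a dlist) (d2 : 'a dlist) : 'a dlist = fun tail -> d1 (d2 tail)
let to_list (d : 'a dlist) : 'a list = d []

let () =
  let d = append (append (singleton 1) (singleton 2)) (singleton 3) in
  assert (to_list d = [1; 2; 3])
```

The payoff is that a long chain of appends costs O(1) per append and one O(n) pass in `to_list`, instead of repeated `@`.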
ontologiae has quit [Ping timeout: 264 seconds]
<theWinner> ok, i will check those
<wmeyer> but I can't point out the implementation now; first you have to specify what you want
<theWinner> I'm having trouble finding the core repository
<theWinner> do you know where I could find Core?
<theWinner> ah found it
<rks_> theWinner: opam install core
<rks_> (but otherwise http://janestreet.github.io/ )
darkf has joined #ocaml
<wmeyer> yes, I missed that completely rks_. theWinner: opam is our tool of choice for installing packages
astro73|mal has joined #ocaml
<astro73|mal> I'm compiling an ocaml project (the haxe compiler) on a Raspberry Pi (armhf). What do I do with assembler errors?
<astro73|mal> i'm using ocaml 4 (compiled myself) and ocamlopt
<astro73|mal> http://pastebin.com/M518hsjd is the build log
<wmeyer> astro73|mal: no idea at the moment. However, this might mean that the configure script detected the wrong architecture.
<wmeyer> ah that's a project, sorry astro73|mal, let's see
<wmeyer> which distro is it?
<astro73|mal> raspbian
<wmeyer> looks like gas doesn't accept a 32-bit shift
<wmeyer> pretty standard
<wmeyer> so a 32-bit shift doesn't exist in reality
<wmeyer> some assemblers accept it, some don't
<wmeyer> but then it should be reported in mantis, if the OCaml native compiler generates it
<wmeyer> or, as I said, the configure script might have detected the wrong configuration
<astro73|mal> does the configure script leave a log anywhere i can check it with?
<wmeyer> so it shows what kind of configuration will be hardcoded into ocamlopt
<astro73|mal> right
<astro73|mal> how do i check what configuration it used w/o rerunning configure?
<wmeyer> so I think you should wait for the compiler build again, and see what the configure script does
<astro73|mal> that would be "no, just rerun configure"
Nahra has quit [Remote host closed the connection]
<wmeyer> right, I am not sure what ARM means here; it should be armv6
<wmeyer> so it looks a bit like a compiler bug
theWinner has quit [Ping timeout: 256 seconds]
<wmeyer> please report it here: http://caml.inria.fr/mantis/my_view_page.php
Nahra has joined #ocaml
q66 has quit [Remote host closed the connection]
walter|rtn has quit [Quit: This computer has gone to sleep]
madroach has quit [Ping timeout: 248 seconds]
madroach has joined #ocaml
<astro73|mal> that issue is a blocker for me, but i can't do anything about it
<wmeyer> I assigned it to the current ARM backend maintainer, and he is usually quite fast.
<astro73|mal> well, if he has questions, i plan on sticking around here for a while
<wmeyer> he is not in your timezone :-)
<wmeyer> currently we need to wait I think.
<astro73|mal> i assume by that, you mean "he's not even a little bit near your timezone"
<wmeyer> assuming it's nearly 3 here and he is +1
<astro73|mal> yeah, that's going to be a bit of a wait
<wmeyer> i can see that you might need to wait for the answer a bit
<rks_> (it's 3:38 you mean)
<rks_> (anyway, good night!)
<wmeyer> rks_: good night!
walter|r has joined #ocaml
chrisdotcode has joined #ocaml
dsheets has quit [Ping timeout: 258 seconds]
emmanuelux has quit [Quit: emmanuelux]
trep has joined #ocaml
sysopfb has quit [Ping timeout: 256 seconds]
cdidd has joined #ocaml
walter|r has quit [Quit: This computer has gone to sleep]
walter|r has joined #ocaml
pango_ has joined #ocaml
pango has quit [Ping timeout: 240 seconds]
tane has joined #ocaml
groovy2shoes has quit [Quit: groovy2shoes]
tane has quit [Quit: Verlassend]
leoncamel has quit [Ping timeout: 240 seconds]
walter|rtn has joined #ocaml
walter|r has quit [Ping timeout: 248 seconds]
frogfoodeater has joined #ocaml
sysopfb has joined #ocaml
trep has quit [Ping timeout: 258 seconds]
trep has joined #ocaml
leoncamel has joined #ocaml
sysopfb has quit [Ping timeout: 256 seconds]
weie has quit [Quit: Leaving...]
<wmeyer> hi
<wmeyer> morning
awm22 has quit [Quit: Leaving.]
Snark has joined #ocaml
Yoric has joined #ocaml
Kakadu has joined #ocaml
frogfoodeater has quit [Ping timeout: 276 seconds]
ttamttam has joined #ocaml
chrisdotcode has quit [Ping timeout: 256 seconds]
chambart has joined #ocaml
ttamttam has left #ocaml []
ttamttam has joined #ocaml
chrisdotcode has joined #ocaml
UncleVasya has joined #ocaml
ttamttam has quit [Quit: ttamttam]
chambart has quit [Ping timeout: 246 seconds]
<adrien> o/
ggole has joined #ocaml
Trollkastel has quit [Quit: Brain.sys has encountered a problem and needs to close. We are sorry for the inconvenience.]
Trollkastel has joined #ocaml
Zerker has joined #ocaml
ttamttam has joined #ocaml
ttamttam has quit [Remote host closed the connection]
ttamttam has joined #ocaml
ttamttam has quit [Remote host closed the connection]
ttamttam has joined #ocaml
Yoric has quit [Ping timeout: 256 seconds]
pango_ is now known as pango
Arsenik has joined #ocaml
paolooo has joined #ocaml
mcclurmc has joined #ocaml
Neros has joined #ocaml
paolooo has quit [Quit: Page closed]
Fullma has quit [Read error: Connection reset by peer]
Fullma has joined #ocaml
anderse has joined #ocaml
weie has joined #ocaml
ulfdoz has joined #ocaml
astertronistic has joined #ocaml
tianon has quit [Read error: Operation timed out]
tianon has joined #ocaml
q66 has joined #ocaml
tane has joined #ocaml
ollehar has joined #ocaml
ulfdoz has quit [Ping timeout: 264 seconds]
ulfdoz has joined #ocaml
ulfdoz has quit [Ping timeout: 246 seconds]
jbrown has joined #ocaml
ulfdoz has joined #ocaml
frogfoodeater has joined #ocaml
Arsenik has quit [Ping timeout: 252 seconds]
invariant has joined #ocaml
<invariant> Are any of you using utop.el as installed via opam? It doesn't seem to work.
<invariant> (error "Autoloading failed to define function utop-setup-ocaml-buffer")
Arsenik has joined #ocaml
<invariant> M-x utop also returns: Process utop exited abnormally with code 2
<invariant> The reason it doesn't work is because it is completely wrong. Who would have guessed?
<invariant> I would, which is why I fixed it now.
frogfoodeater has quit [Ping timeout: 240 seconds]
UncleVasya has quit [Ping timeout: 256 seconds]
<rks_> invariant: you're the man ;)
<invariant> Except usability wise it kind of sucks.
<invariant> If you input '4' and press enter it says you made a mistake.
<invariant> It could just have automatically entered 4;;
<invariant> The whole point of such an interactive tool is that it isn't in your way.
<invariant> If you absolutely want to go to the next line you would just press another keybinding (which are also standard in Emacs).
groovy2shoes has joined #ocaml
Anarchos has joined #ocaml
talzeus has quit [Quit: Where is the love...]
emmanuelux has joined #ocaml
ttamttam has left #ocaml []
emmanuelux has quit [Max SendQ exceeded]
eni has joined #ocaml
tholu has joined #ocaml
<tholu> So, anybody in here using opam?
<tholu> #opam seems to be empty...
<orbitz> tholu: yes
<tholu> orbitz, cryptokit is required by websocket, but if I want to use cryptokit-sha512 it conflicts
<tholu> So how can I resolve this?
<tholu> Currently I can only get cryptokit-sha512 or (cryptokit and websocket)
<orbitz> tholu: I'll do my best but not sure I can help: are the cryptokit-* packages opam packages or OS packages?
<tholu> opam
tani has joined #ocaml
<tholu> like cryptokit and websocket as well
<orbitz> and websocket depends on cryptokit, not cryptokit-sha512?
<tholu> yes
<tholu> But I need websocket and cryptokit-sha512
<orbitz> and what is the conflict? ocamlfind name?
<tholu> When I try to install cryptokit-sha512 (which should be an extended cryptokit), I get: The package cryptokit-sha512.1.6.2 is in conflict with cryptokit.1.6.
<tholu> This is due to the following dependency chain(s): cryptokit.1.6 <- websocket.0.3
<invariant> tholu, if only there existed something starting with nix and ending in nix.
<orbitz> tholu: so is cryptokit-sha512 a proper superset?
<orbitz> invariant: hah, i think he wants access to both in this case though!
<tholu> invariant, sorry I don't get it, since I'm a bloody beginner :(
<tholu> orbitz, It should be, yes.
<orbitz> tholu: i believe he's referring to a package manager that lets you have multiple versions of conflicting libraries installed concurrently
<invariant> orbitz, nix also allows that.
<orbitz> invariant: it depends on the ocamlfind name
tane has quit [Ping timeout: 258 seconds]
<tholu> orbitz, invariant, that sounds like a solution.
<tholu> But the dependency chain in opam is broken imho.
<orbitz> tholu: how about trying this: clone opam-repository, modify websocket to depend on the cryptokit you want, then add the repo to opam and install that one?
<invariant> orbitz, are you saying that they implemented it wrong in Nix?
<orbitz> tholu: my guess is opam doesn't know cryptokit-sha512 is a superset of cryptokit
Yoric has joined #ocaml
<tholu> orbitz, I guess that is the case, but why did they add it then ;)
<orbitz> invariant: I'm saying his particular case depends on both versions of the library being accessible at compile time of his package, which is not the problem nix solves
<orbitz> tholu: i don't know who 'they' is, and people don't get things right first try
<tholu> orbitz, I don't know either who maintains the opam packages, sorry - as I said, bloody beginner in the OCaml world.
<invariant> orbitz, I thought it was possible to link two different versions of the same library in a single executable via nix. Is that wrong?
<invariant> orbitz, via gcc for example.
<orbitz> invariant: not as far as I know, perhaps I'm wrong. I was under the impression they solved having program X and Y installed, where X and Y depend on different versions of library Z
<invariant> orbitz, perhaps you are right.
<tholu> orbitz, invariant, best solution for me would be fixing the dependency chain of opam. Any hint how to do this?
<orbitz> tholu: what do you mean by dependency chain?
<orbitz> I'm not sure if opam has a concept of 'this is a superset of that'
frogfoodeater has joined #ocaml
<tholu> orbitz, "cryptokit.1.6 <- websocket.0.3"
<orbitz> tholu: might be worth mailing the opam mailing list
ontologiae has joined #ocaml
<tholu> orbitz, sure
<orbitz> tholu: turn around time might be a few days
<invariant> orbitz, their website does not advertise this extended featureset which we talked about, which seems to suggest that you are right.
<tholu> But perhaps someone here knows a solution as well ;) The information that websocket needs cryptokit must be stored somewhere.
<orbitz> invariant: I think the only way to get that is to have an intermediate library that is statically linked against the appropriate version
<orbitz> tholu: it is in the opam package
<orbitz> tholu: which is why I suggested cloning the repository, making changes, adding the repo to your opam, and installing
<tholu> orbitz, ok, sorry I missed that.
<orbitz> tholu: when you add a repo you can set its priority higher than the current one
<tholu> orbitz, I found the corresponding file but somehow the change is not recognized (live edit :)) ~/.opam/repo/default/packages/websocket.0.3/opam - any hint how to get opam to recognize the new dependency?
<orbitz> tholu: do it through the git repo
<orbitz> opam does some indexing
<tholu> I guess I broke something
<orbitz> I hope you learned your lesson
<tholu> orbitz, :)
<tholu> orbitz, no worries, just reinitialized the repo, but the change in the opam file is not enough I guess.. so I have to dig deeper
<tholu> orbitz, you meant like what's described at the bottom of http://opam.ocamlpro.com/doc/Advanced_Usage.html I guess ;)
<orbitz> tholu: git clone /whatever/to/github/opam-repository
<orbitz> make changes
<orbitz> opam repository add /path/to/repo
<orbitz> make sure the repo is higher in the list than the default one
<tholu> Thanks!
<orbitz> opam install websocket
<orbitz> I think 'add' takes a priority
<tholu> orbitz, seems to work flawlessly, thanks (after symlinking endian.h for cryptokit-sha512), is compiling right now
<orbitz> tholu: great!
<tholu> add took priority 10
<tholu> orbitz, :)
<tholu> I'm starting to like OCaml
<orbitz> good, it's a fantastic language
ontologiae has quit [Ping timeout: 252 seconds]
wwilly has quit [Read error: Operation timed out]
ocp has joined #ocaml
Trollkastel has quit [Quit: Brain.sys has encountered a problem and needs to close. We are sorry for the inconvenience.]
<invariant> It just has no parallelism which is not half-baked.
mcclurmc has quit [Ping timeout: 256 seconds]
<orbitz> Parmap!
<Kakadu> parmap is not a very common pattern of parallelism
<orbitz> Kakadu: in what way?
<Kakadu> The case where a lexer is tokenizing in one thread and putting tokens into another thread where they are evaluated by a parser is more interesting
<Kakadu> and it seems that it is not for parmap
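The lexer-to-parser pipeline Kakadu describes can be sketched with the `Event` module from OCaml's stdlib threads library (synchronous channels). Note these threads interleave rather than run in parallel, which is exactly the limitation being debated; compile with the threads package:

```ocaml
(* Producer/consumer sketch: one thread sends tokens over a synchronous
   channel, the main thread receives them. None marks end of stream. *)
let run () =
  let ch = Event.new_channel () in
  let producer =
    Thread.create
      (fun () ->
        List.iter (fun tok -> Event.sync (Event.send ch (Some tok)))
          ["let"; "x"; "="];
        Event.sync (Event.send ch None))
      ()
  in
  let rec consume acc =
    match Event.sync (Event.receive ch) with
    | Some tok -> consume (tok :: acc)
    | None -> List.rev acc
  in
  let toks = consume [] in
  Thread.join producer;
  toks

let () = assert (run () = ["let"; "x"; "="])
```

`Event.send`/`Event.receive` rendezvous at each `sync`, so the producer blocks until the consumer takes each token.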
Trollkastel has joined #ocaml
<orbitz> I guess it depends on your perspective, Parmap is pretty close to a poor man's Hadoop, which is pretty popular
frogfoodeater has quit [Ping timeout: 264 seconds]
<ousado> maybe this is just a terminology issue, but parallelism implies that the same operations are split up
<ousado> unless you use 'concurrent' and 'parallel' interchangeably
<invariant> Good parallelism would allow me to implement parallel quicksort without starting additional processes or needlessly copying.
<orbitz> Then I guess Erlang has terrible parallel support
<ousado> invariant: good parallelism would also make that safe
<orbitz> invariant: I think that specific use case is one Xavier is especially afraid of
<invariant> ousado, yes
<invariant> ousado, and perfect when it has been proven to be correct before it runs.
<invariant> Making your computer slower just because that is easier to put in a run-time system or because your users are stupid is a bad idea, imho.
<invariant> Message passing is useful when you are actually sending a message.
<ousado> invariant: message passing is the only sane concurrency model
<invariant> ousado, linear logic is more sane.
<ousado> whether you do it between threads or processes doesn't matter
<ousado> invariant: you seem to imply that linear types and message passing are not related
<ousado> but they are
<orbitz> So far it doesn't seem like linear logic has proven itself very practical (although I'd love to see it for some things!)
<ousado> it's very practical
<ousado> you can implement bullet-proof concurrency stuff in e.g. ATS
<invariant> ousado, you mean that you can generate a linear model from message passing code?
<invariant> ousado, concurrency or parallelism?
bsrk has joined #ocaml
<ousado> since one is more general than the other, both
<invariant> ousado, yes, I have the same understanding of those concepts as you.
<invariant> ousado, lots of other people don't which is why I asked for clarification.
<bsrk> Hi,
<bsrk> if I have a recursive function,
<bsrk> that calls itself at two places,
<bsrk> with one place where the result is returned immediately,
<bsrk> and the other where there is extra computation,
<bsrk> will it be tail call optimized for the call where it is possible to be optimized?
<orbitz> bsrk: if you write the function properly, sure
<adrien> the second call will prevent it
Zerker has quit [Quit: Colloquy for iPad - Timeout (10 minutes)]
<adrien> the one after which there are still computations
<adrien> but chances are you can turn the code differently
<ousado> bsrk: you can always rewrite it using CPS
<bsrk> It is a function on binary tree.
<bsrk> type 'a tree = Leaf | Branch of 'a * 'a tree * 'a tree;;
<bsrk> let rec in_order (accum : 'a list) : 'a tree -> 'a list = function
<bsrk> | Leaf -> accum
<bsrk> | Branch (v, lt, rt) -> in_order (v::(in_order accum rt)) lt;;
<bsrk> This is the function. So what is going to happen? How can I optimize it (via tail calls)?
<ggole> That contains one full call and one tail call.
<bsrk> ggole: yes
<ggole> So you should be fine.
<ggole> Although it may be a better idea to do something other than accumulate a list there
<bsrk> ggole: you mean the tail call will get optimized?
<bsrk> ggole: what do you mean?
<ggole> Yep
<ggole> It looks like you are constructing a list in order to walk the elements of the tree
<bsrk> Yes
<ggole> Which you could just do directly
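ggole's suggestion, walking the elements directly instead of materializing an intermediate list, can be sketched as an in-order fold over bsrk's tree type (illustrative names, not from the discussion):

```ocaml
type 'a tree = Leaf | Branch of 'a * 'a tree * 'a tree

(* In-order fold: visit left subtree, then the node, then right subtree,
   threading an accumulator instead of building a list. *)
let rec fold_in_order f acc = function
  | Leaf -> acc
  | Branch (v, lt, rt) ->
      let acc = fold_in_order f acc lt in
      fold_in_order f (f acc v) rt

let () =
  let t = Branch (2, Branch (1, Leaf, Leaf), Branch (3, Leaf, Leaf)) in
  (* Recover the in-order list by consing and reversing. *)
  assert (List.rev (fold_in_order (fun acc v -> v :: acc) [] t) = [1; 2; 3])
```

Like bsrk's `in_order`, the left-subtree call is a full call and the right-subtree call is a tail call, but no intermediate list is allocated when the fold does its work directly.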
<bsrk> What if I have a complex operation that is more suitable for lists?
<bsrk> for example, checking if the tree is a bst.
<bsrk> if I use the inorder traversal, it is easy
<bsrk> but it is hard to do on the tree itself.
<ggole> I don't think you need lists for that, but by all means do so if you prefer.
<bsrk> How can we do it with the tree itself?
<ggole> One moment, I'll write it.
<bsrk> ggole: I figured out a way too. :-)
<ggole> Ah :)
<ggole> What's your approach
<ousado> invariant: my point is, message passing allows for trivial control flow, so even if you have multiple threads (and hence get to pass big chunks of data around basically for free) you still have a deterministic system. As soon as you have multiple threads interlocking on datastructures, you either need non-trivial machinery to prove code that had better just work, or you have nondeterministic programs.
<bsrk> Write a function that checks if the given tree is a bst with values between two parameters
<invariant> ousado, what exactly are you afraid of then? That the hardware doesn't work?
<bsrk> f : 'a tree -> 'a -> 'a -> bool
<bsrk> now pattern match on the tree
<bsrk> | Leaf -> true
<ousado> invariant: I don't understand?
<ggole> Yep, that sounds right
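bsrk's approach can be sketched like this, specialized to int trees; the `min_int`/`max_int` sentinels and strict bounds are assumptions for the sketch (a polymorphic version would need `option` bounds):

```ocaml
type 'a tree = Leaf | Branch of 'a * 'a tree * 'a tree

(* Is every value in the tree strictly between lo and hi? Each recursive
   call tightens one bound with the parent's value. *)
let rec bst_between lo hi = function
  | Leaf -> true
  | Branch (v, lt, rt) ->
      lo < v && v < hi && bst_between lo v lt && bst_between v hi rt

(* Entry point, using int sentinels as the initial bounds. *)
let is_bst (t : int tree) : bool = bst_between min_int max_int t

let () =
  assert (is_bst (Branch (2, Branch (1, Leaf, Leaf), Branch (3, Leaf, Leaf))));
  assert (not (is_bst (Branch (2, Branch (3, Leaf, Leaf), Leaf))))
```

This does one pass over the tree with no intermediate list, unlike the in-order-traversal approach.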
<ousado> invariant: you mean WRT ocaml?
<invariant> ousado, but you cannot implement parallel database structures via message passing.
<invariant> ousado, or at least not commercial versions.
<ousado> you can implement everything via message passing
<bsrk> ggole (and everyone else): thanks!
<invariant> ousado, but not with performance that people want.
<ousado> invariant: what makes you think one could not?
<invariant> ousado, because otherwise algorithm papers on datastructures would use those techniques, and they don't.
<ousado> sure they do
UncleVasya has joined #ocaml
<ousado> concurrent GC stuff, for instance
<ousado> like hazard pointers
<invariant> ousado, I was talking about database structures.
<invariant> ousado, if message passing was so great for that application, why isn't PostgreSQL using it?
<ousado> errm.. why is a quite traditionally designed database server written in C not using safe feature X?
chrisdotcode has quit [Remote host closed the connection]
<ousado> because people who write C are convinced they can deal with all eventualities
<orbitz> i think they were convinced they needed a really simple language
<invariant> ousado, we are not talking about human ability here.
<invariant> ousado, just raw objective performance.
<orbitz> have you all read the MSR paper on reference uniqueness?
<invariant> orbitz, no, what did they do?
<invariant> Deciding reference uniqueness is hard for general programs.
<invariant> So, they must either simulate to disprove and prove for simple cases.
<invariant> If they did anything else, I would probably want to know about it.
<ousado> either way, that something isn't commonly done in a certain way doesn't imply that the common way to go about something is more performant
<ggole> It does if the people implementing that stuff are red eyed performance freaks
<ousado> if that was different, apache 1.3 would beat nginx in terms of performance
tholu has quit [Quit: Verlassend]
<invariant> ousado, also, why are there such parallel trees written in C and C++, but zero in OCaml?
<orbitz> Because Ocaml has a stricter memory model than C or C++
<invariant> orbitz, how is that a reason?
<ousado> errm, because of a design decision the ocaml implementors made
<ousado> and I'd say that "zero" is a brave guess
<orbitz> invariant: having two interpreters running in the same memory space violates that
<orbitz> if I understand what you're talking about at least
<ousado> since there are libraries that deal with shared memories
<invariant> ousado, I cannot prove a negative.
<ousado> *memory
<invariant> Yes, I know how certain people say that it is theoretically possible to obtain high-performance via such libraries.
<invariant> Except, why would those people then not first demonstrate that this is actually the case?
<invariant> Why must I accept on faith that this is true?
<orbitz> what are you guys talking about?
<invariant> orbitz, ocamlnet, I presume.
<ousado> but I'd also argue that the way to go about doing something like that, using ocaml, is not necessarily relying on the ocaml runtime alone
<orbitz> invariant: is the high level discussion still baout ocaml not having threads that execute in parallel?
<ousado> but rather writing a DSL producing code targeted at solving the problem in a lower-level language X
<invariant> orbitz, yes, that would be the top of the call stack
<orbitz> Xavier has said the reason it doesn't is because nobody has figured out a way to do it sanely
<invariant> orbitz, what's wrong with the way Haskell does it?
<orbitz> invariant: how haskell does it cannot be done in ocaml
<ggole> Yeah. No effect system, no transactional memory.
Arsenik has quit [Ping timeout: 246 seconds]
<ggole> It could add threads and locks and a memory consistency model, like Java.
<ggole> But that's completely unsafe.
<orbitz> Java is the exact example Xavier gave of what he does not want :)
<ggole> I get the feeling that Xavier would be very happy to see SMP roll over and die.
<orbitz> :)
<ggole> Not gonna happen though.
<orbitz> Does anyone here understand .Net's memory model in relationtion to java when it comes to concurrent access?
<ggole> I seem to recall it has barriers, which you are obliged to use to prevent data races.
bsrk has left #ocaml []
<ousado> ..and in terms of scalability, SMP stuff is just an intermediate thing anyway, what you really need are algorithms that scale to multiple machines equally well.
<ggole> That's not remotely true if you are writing compute-heavy software to execute on desktop, console, or mobile devices.
<ben_zen> searching for Java, C# memory model and concurrency results in this: http://www.itu.dk/courses/SPLG/E2012/splg2012-ps-1.pdf (haven't read it yet.)
<ousado> ggole: I'm not saying one shouldn't take advantage of multiple threads being able to pass around data without having to copy it
<ggole> ousado: right... unfortunately performance concerns mean that happens a great deal, with all the pain that comes with it.
<ggole> At least, for some classes of software.
Arsenik has joined #ocaml
<ben_zen> interesting. Java can rely on x86's atomic operations for multithreaded situations.
<ggole> I guess what you are saying is that very large compute tasks need to think about more than threads.
<ousado> ggole: .. I just question the idea that using a number of locks proportional to the size of the datastructure always has superior performance characteristics over message passing between threads
<ggole> Well, locks can be very cheap if they are uncontended.
<ggole> So it all depends what you are doing.
<ousado> some instructions simply are expensive
<ggole> Synchronisation can certainly be expensive
<ousado> from my personal experiments, very few lock/wait-free channels between threads that exchange instructions on how to deal with data is the more reliable and performant model. but I don't have enough data on sufficiently heterogeneous workloads to make any general statement about that.
<ousado> but since it's trivially deterministic, I'll prefer it any day until I hit a case where it doesn't work out
<ousado> and in that case there's nothing that'll stop me from employing a more risky (as in less tractable) algorithm for a specific task
ocp has quit [Quit: Leaving.]
groovy2shoes has quit [Quit: groovy2shoes]
<invariant> It still seems OCaml's parallelism is half-baked :)
<ggole> It would be nice to have a functional language with threads and mutation that could only share immutable bits
<ggole> Without going the Haskell route
<ousado> ATS meets those requirements
<ggole> I've never been able to comprehend ATS though :(
<pippijn> ggole: Clean
<ggole> It's an amazing effort though
<ousado> well, the upcoming ATS2 (regarding the "share immutable bits" part)
<ggole> Hmm, never looked at Clean
<pippijn> share immutable bits is inherent in Clean's uniqueness typing
<pippijn> mutation is only possible on guaranteed unique pointers
<invariant> ousado, that has been upcoming for how long now?
<ousado> for about a year
<rks_> ggole: have you looked at Mezzo ?
<ousado> and most of it works
<ggole> rks_: never heard of it
<rks_> wait a sec then
<ggole> Google beat you to it :)
* ggole reads
<rks_> damn!
<rks_> :p
<pippijn> ggole: impressive google skills
<ben_zen> so they want to both reject patterns and accept others. Fascinating.
<adrien> s/go/oog/
* adrien -> []
<ggole> "mezzo functional" popped it right up
<pippijn> "mezzo language" also did that
<rks_> ben_zen: you probably want to look at the second article :)
<ggole> "In order to make this possible, Mezzo replaces the traditional concept of "type" with that of "permission"." hmm
<ggole> This looks intruiging
<pippijn> how do you say "abuse of language features" in a nice way?
<rks_> (there is a paragraph about concurrency in the second article ben_zen)
<ben_zen> ah-ha
ontologiae has joined #ocaml
<ousado> pippijn: do uniqueness types in clean allow sharing of immutable views of the data?
<pippijn> ousado: once you have more than 1 viewer of data, it is immutable
<ousado> ah yes, that's what ATS2 adds to linear types
<ousado> nice.. it's a pity the clean guys like billy boy so much
<rks_> ousado: where do you get information on ATS2, is http://www.ats-lang.org/ it?
<pippijn> who's billy boy?
<ousado> rks_: mostly the mailing list(s)
<ousado> pippijn: the man at the gates
<pippijn> as in MS?
<ousado> pippijn: I mean their unfortunate preference for the windows platform, yes
<pippijn> ah, yes
<pippijn> that's true
<pippijn> but it works on linux, too
<pippijn> "with limited io"
<pippijn> whatever that means (I didn't try it much)
<ousado> yes, just not the object IO stuff
tani is now known as tane
<ousado> which is most interesting
<ggole> "With limited IO"? Do they make use of completion ports or some other windows stuff?
<pippijn> do you have any idea why?
<ousado> nope that's their name for the GUI
<ousado> .. stuff
<pippijn> ah
<ousado> and their IDE etc.
<ggole> Blargh
<pippijn> oh well
<pippijn> ocaml has nothing (by itself) for gui
<pippijn> people can make libraries for that if they want
<pippijn> ok, ocaml has the graphics module
<ousado> I'd say js_of_ocaml + node-webkit ftw
<pippijn> but that's not so interesting
<ousado> rks_: but otherwise, that's the ATS website, yes
<rks_> ok :)
RagingDave has joined #ocaml
ttamttam has joined #ocaml
ontologiae has quit [Ping timeout: 256 seconds]
bsrk has joined #ocaml
<bsrk> Hi,
<bsrk> I am learning about Continuation Passing Style.
<bsrk> I was trying to make a function in CPS style.
<bsrk> type 'a tree = Leaf | Branch of 'a * 'a tree * 'a tree;;
<bsrk> (* non cps *)
<bsrk> let rec construct (start:int) : int -> int tree = function
<bsrk> | 0 -> Leaf
<bsrk> | 1 -> Branch (start,Leaf,Leaf)
<bsrk> | n -> let hn = n/2 in
<bsrk> Branch (start+hn, construct start hn, construct (start+hn+1) (n-hn-1));;
<bsrk> (* cps *)
<bsrk> let rec construct_cps (start:int) : int -> (int tree -> 'b) -> 'b = function
<bsrk> | 0 -> fun f -> f Leaf
<bsrk> | n -> fun f -> let hn = n/2 in
<bsrk> construct_cps start hn (fun lt ->
<bsrk> construct_cps (start+hn+1) (n-hn-1) (fun rt ->
<bsrk> f (Branch (start+hn, lt, rt))));;
<bsrk> is the function I wrote in cps?
<bsrk> Somehow, it takes more time than the original function!
<gasche> bsrk: that's to be expected, CPS doesn't make a function any quicker
<bsrk> But the non-CPS version is also not tail recursive!
<gasche> non-tail-recursive doesn't mean slow
<gasche> it only means it consumes stack memory, which is more limited than heap memory, hence the dreaded stack overflow
<gasche> bsrk: I've written about CPS-style and defunctionalization there: http://stackoverflow.com/questions/14781875/choosing-between-continuation-passing-style-and-memoization
<bsrk> gasche: will read it. :-)
<bsrk> But is it not also the case that tail-recursive functions are faster than their non-tail-recursive counterparts?
<bsrk> Why is this?
<gasche> it is not the case
<gasche> sometimes there are two algorithms to implement a given specification
<gasche> one of them happens to be tail-rec
<gasche> in that case, this one will generally be faster
<gasche> (because tail calls are marginally faster than the usual calls)
<gasche> (but the difference is not big)
<gasche> you can also turn *any* function into a tail-rec function by using continuation-passing style (CPS)
<gasche> but if you just turn your code in CPS it won't give you a different, more efficient algorithm
<gasche> it will be the exact same one, only allocating on the heap instead of the stack
<gasche> which is marginally slower (but avoids stack overflows)
<gasche> sometimes you can use these CPS functions to do other interesting stuff, such as turning your function into an incremental one
<bsrk> Thanks gasche!
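gasche's point that CPS turns every call into a tail call can be sketched with the usual fib (a standard textbook sketch, not necessarily the exact code from his Stack Overflow answer):

```ocaml
(* Plain fib: fib (n-1) is not a tail call, because the + still has to
   run after it returns, so each call consumes a stack frame. *)
let rec fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

(* CPS version: both recursive calls are tail calls; the pending + moves
   into heap-allocated continuations instead of stack frames. *)
let rec fib_cps n k =
  if n < 2 then k n
  else fib_cps (n - 1) (fun a -> fib_cps (n - 2) (fun b -> k (a + b)))

let () =
  assert (fib 10 = 55);
  assert (fib_cps 10 (fun x -> x) = 55)
```

It is the same algorithm doing the same work, which is why the CPS version is no faster; it only trades stack frames for closures on the heap.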
<ggole> Tail calls may be noticeably faster if call/jump machinery is a large percentage of the work done by the function
<ggole> Usually this isn't the case
<gasche> invariant: OCaml allows you to use multi-process concurrency that shares immutable memory
<gasche> see rwmjones's Ancient module to share immutable memory between processes
<gasche> with Gerd Stolpmann's OcamlNet multicore libraries you can even share mutable memory of some ground types (no fancy garbage-collected stuff, though)
<gasche> (there is work ongoing on multi-runtime multi-threaded OCaml programs, but that's no news, people have been trying to get that to work for a long time)
<bernardofpc> ggole> Usually this isn't the case -> except for the old famed micro-benchmarks ;-)
<pippijn> gasche: nicely written
<gasche> ggole: that's what I call "marginally"
<gasche> pippijn: that's what I do when I'm not slacking off on IRC
<gasche> write stuff
<gasche> thanks :)
<ggole> gasche: shared mutable memory between caml processes? Is it a wrapper around shared memory allocated with mmap or something?
<bernardofpc> gasche: I suggest you say that the call fib (n-1) is also not a tail call in the fib example, because of the +
ontologiae has joined #ocaml
<bernardofpc> (and also I don't understand why fib (n-2) has become a tail call, it is in the argument of something in the body)
<ggole> It takes another argument and becomes a tail call in the cps transformed version.
<bernardofpc> Fib(n-2) is a subproblem both of Fib(n) and Fib(n-1) -> shouldn't it be Fib(n-1) that's last?
hcarty has quit [Ping timeout: 252 seconds]
hcarty has joined #ocaml
<bernardofpc> ggole: I don't see it as a "tail call", because it constructs the closure (so it is not executed, but saved on the heap or wherever) and the only point where it is called is at termination: 0 | 1 | k 1
<bernardofpc> but maybe this is a goto
<ggole> Function calls don't require closure construction: function values do
<ggole> And the call itself is in tail position
eni has quit [Quit: Leaving]
<bernardofpc> ggole: there are two functions, one fib_cps (n-1), the other is in the closure fib_cps (n-2)
<bernardofpc> the fib_cps (n-1) is in tail position, so it is a goto
Trollkastel has quit [Read error: Connection reset by peer]
<bernardofpc> the question is "how does OCaml build the closure (fun a -> fib_cps (n-2) ... )"
malo has joined #ocaml
<ggole> Well, really both are in tail position: the first within the body of fib_cps, and the second within the body of the function literal.
<ggole> Closure construction doesn't affect either.
<bernardofpc> my guess is that it builds another closure, (fun b -> k (a + b)), saves it somehow and somewhere, then builds the "fib_cps (n-2) ... "
<ggole> You can think of closure construction as happening at the "evaluation" of the function literal
<ggole> (of course the compiler is free to rearrange things)
<bernardofpc> (of course)
<ggole> So why do you believe this has any bearing on whether there is a tail call?
<bernardofpc> Because I *love* tree-tail-recursion such as fibonacci ;-)
<ggole> Hah :)
<bernardofpc> and also I like to understand these ideas
<ggole> Recursive fibonacci is such a cliche.
<ggole> Matrix exponentiation please!
<bernardofpc> but for the moment I am still pretty much on the pragmatic side
<bernardofpc> fib is just a pretense, you know. there are much better methods for it, but the general algorithm idea is of tree traversal
<ggole> Sure.
<bernardofpc> end pragmatic: I like very much the fact that gcc will allow me to write many recursive functions and do the mangling itself to accumulate and tail-rec
<bernardofpc> so that the code is much more readable
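For the record, ggole's matrix-exponentiation alternative is a short exercise: the matrix [[1;1];[1;0]] raised to the n-th power holds F(n) off-diagonal, giving O(log n) multiplications. A sketch, with matrices encoded as 4-tuples:

```ocaml
(* A 2x2 matrix (a, b, c, d) stands for [|a b; c d|]. *)
let mat_mul (a, b, c, d) (e, f, g, h) =
  (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

(* Fast exponentiation by repeated squaring. *)
let rec mat_pow m n =
  if n = 0 then (1, 0, 0, 1)                  (* identity matrix *)
  else
    let half = mat_pow m (n / 2) in
    let sq = mat_mul half half in
    if n mod 2 = 0 then sq else mat_mul sq m

(* Convention here: F(0) = 0, F(1) = 1, so fib 10 is 55. *)
let fib n =
  let (_, f, _, _) = mat_pow (1, 1, 1, 0) n in
  f
```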
<bsrk> Another question, can I use pattern matching the way I did in cps?
<bsrk> let rec construct_cps (start:int) : int -> (int tree -> 'b) -> 'b = function
<bsrk> | 0 -> fun f -> f Leaf
<bsrk> | n -> fun f -> let hn = n/2 in
<bsrk> construct_cps start hn (fun lt ->
<bsrk> construct_cps (start+hn+1) (n-hn-1) (fun rt ->
<bsrk> f (Branch (start+hn, lt, rt))));;
<bsrk> bernardofpc: gcc has tail rec?
<ggole> gcc will transform even some non-tail calls into loops.
<companion_square> if you enable some optimizations, yes
bsrk has quit [Read error: Connection reset by peer]
<ggole> Other optimising compilers will not, so you are tying yourself to gcc a bit if you rely on it.
bsrk has joined #ocaml
<companion_square> bernardofpc: do you know the "same fringe problem"? it's a nice tree traversal problem
<bernardofpc> companion_square: determine if two trees have the same fringe ?
<companion_square> yes
<bernardofpc> (are there only values at the leaves ? )
<companion_square> yes
<companion_square> well, your choice actually
<bernardofpc> right
<ggole> That's the usual motivation for coroutines
<companion_square> exactly
<ggole> The answer if you don't have them is to reinvent iterators
<companion_square> but I believe you can do it in CPS style
<companion_square> (encoding coroutines in CPS, basically)
* ggole recalls doing the exercise while learning ocaml
<bernardofpc> what's the problem in doing a "stupid" traversal and comparing the resulting lists ?
<ggole> You end up with a stack threaded through the closures created by CPS style, I think
<bernardofpc> (of course you have to build it lazily)
<companion_square> bernardofpc: if the tree is big
<companion_square> hmm, building the list lazily is not a trivial problem
<companion_square> not in strict languages
<ggole> Of course you can't traverse in parallel, because the trees may be of a different shape
<bernardofpc> I don't know
bsrk has left #ocaml []
<companion_square> ggole: you're quite right, it's really a problem that requires iterators (or more powerful abstractions) to be done right
<bernardofpc> ggole: right, but my algorithm is : "find next fringe in both ; compare ? loop : return false"
<bernardofpc> so basically this is all about building the fringe lazily
<companion_square> bernardofpc: but to "loop" you need to remember where you were in the tree
<ggole> Ah, I still have my solution http://pastebin.com/YZZ8SEVh
<ggole> Pretty straightforward: just encodes stacks as lists
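A minimal sketch of that stacks-as-lists idea (not the pastebin code itself): each traversal keeps an explicit stack of pending subtrees, and `next` pops it down to the next leaf, so both trees are walked lazily in lockstep.

```ocaml
(* Values at the leaves only, as discussed; type and names are illustrative. *)
type 'a tree = Leaf of 'a | Branch of 'a tree * 'a tree

(* Advance a stack of pending subtrees to its next leaf, if any. *)
let rec next = function
  | [] -> None
  | Leaf x :: rest -> Some (x, rest)
  | Branch (l, r) :: rest -> next (l :: r :: rest)

(* Compare the two fringes incrementally, stopping at the first mismatch. *)
let same_fringe t1 t2 =
  let rec loop s1 s2 =
    match next s1, next s2 with
    | None, None -> true
    | Some (x, s1'), Some (y, s2') -> x = y && loop s1' s2'
    | _ -> false
  in
  loop [t1] [t2]
```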
<bernardofpc> ggole> Other optimising compilers will not, so you are tying yourself to gcc a bit if you rely on it. -> sure, but the point is that I'd prefer the compilers to improve in that way so that my code may be clearer to read
contempt has quit [Ping timeout: 256 seconds]
<companion_square> some code may break if the optimization is not performed, though
<bernardofpc> companion_square: sure, it would be better to *enforce* this in the language level
<bernardofpc> but in the meantime, better to do something than just sit around with clunky, unusable tail calls
<ggole> Relying on compilers to optimise based on observing algebraic identities is a bit suspect imho
<ggole> Because there are an unbounded number of such optimisations
<ggole> It's not like tail calls, which you can always be certain of
<bernardofpc> right, but that's all the inversion of science
<bernardofpc> you're pushing the hardness on people !
mye has joined #ocaml
<bernardofpc> science is precisely when knowledge releases you from doing boring things like inverting matrices or integrals
<bernardofpc> (or assembling in x86)
<ggole> If you rely on the compiler, you push the burden of cleaning up the mess if it fails onto people
<ggole> This is not just idle bitching: people waste their time rewriting things that are not portable all the time
<bernardofpc> I know
<ggole> I agree that the more things that may be relied upon, the better
<ggole> But tail calls in C is at this time not one of those things
<ggole> Shame.
<bernardofpc> but MOST transformations that I see in fun prog courses are trivial for gcc to do
<bernardofpc> (and do it correctly, of course)
<bernardofpc> (including factorials, fibonacci, tree traversal)
<ggole> Does icc do them? msvc?
<ggole> tcc?
<bernardofpc> llvm ?
<bernardofpc> I'd say, if they don't do, they are pushing a real burden into the programs and programmers
<ggole> Strange thing for a C compiler to do. ;)
<bernardofpc> (cf. the last micro-benchmark, where -O2 unfolded something no one would have done by hand, but gives almost 50% improvement)
<companion_square> http://ocaml.xelpaste.net/7354 CPS iterators
contempt has joined #ocaml
ttamttam has left #ocaml []
<companion_square> I don't believe gcc could rewrite the naive "put leaves in lists and compare lists" into this
<ggole> C compilers will almost never eliminate allocations.
<ggole> Sigh, don't get me started on allocation patterns in C
<ggole> I want to believe that compilers can do a good job on the low level memory smashing stuff that people do to make things fast
<ggole> But I see little evidence of it so far
<bernardofpc> companion_square: Monday I'll be at the uni, I'll tell what GreatMaster says ;-)
<companion_square> who? :)
mcclurmc has joined #ocaml
<bernardofpc> the same guy that told me about science and art
groovy2shoes has joined #ocaml
<bernardofpc> (science is deterministic, art is not)
<bernardofpc> (well, the ASM code of the CPS is indeed just jumps)
<companion_square> that's the idea here
<companion_square> the point is exploring the trees lazily, as you said
<bernardofpc> (I'm still looking at the fib_cps above)
<bernardofpc> (I want to make a fringe C solution and see how it compares ;-))
<companion_square> in idiomatic C? :p
<companion_square> I think you can do something with a stack
<ggole> In C you might maintain parent pointers, and iterate the tree directly.
<ggole> That's what the usual tree libraries that offer iterators usually do.
<ggole> Three pointers per node kind of sucks though.
<companion_square> hmm
<companion_square> let's try with the exact same kind of tree
<ggole> And you are sidestepping the spirit of the problem anyway.
* ggole wonders whether threaded binary trees would allow the stack to be encoded in place
<ggole> I bet Knuth has this as an exercise.
<companion_square> hmm, aren't binary trees already threaded trees?
<ggole> No
<companion_square> or am I confused?
<ggole> Not in the sense I meant, anyway
<companion_square> oh, I see
<ggole> Threading is using the pointers that would be null to store information that helps you traverse the tree.
<gasche> bernardofpc: I haven't had time to read the whole thread
<companion_square> rhaa, this is so terrible to write
<gasche> but do you have a place where the kind of extended tail-rec recursion you're thinking about (is it tree-recursion or something else with associative contexts?) is described properly and precisely?
<gasche> I haven't been able to find any really good information on what gcc actually does
<ggole> I seem to recall a page describing it
<gasche> get me a precise description, and then we can talk
<ggole> I'll see if I can find it
<gasche> (not necessarily exactly gcc, but whatever you are asking for)
<gasche> invariant: I've been trying to read about past history, the talk on memory models
Trollkastel has joined #ocaml
<gasche> when you say "what's wrong with the way Haskell does it?", I would like to ask back: "which one are you talking about" and "are there actually proofs that this way is worth the required engineering effort"?
<gasche> because STM is nice, but it's not necessarily what people use when they want high performances either
<companion_square> remind me never to write iterators in C
<darkf> who needs iterators when you have closures
<gasche> (see e.g. Sulzmann et al. 2008 "Comparing the performances of Concurrent Linked-List implementations in Haskell"; in the end, the only approach that was actually competitive was ugly low-level compare-and-swap shit)
<gasche> I haven't yet seen one approach that convinced me that (1) it is safe to program in (2) it will consistently beat a good single-threaded implementation running on a good single-threaded runtime
<gasche> (but then I'm far from a concurrency expert; I've had the luck, or the blindness, to mostly look at the sequential world)
<bernardofpc> companion_square: I don't see why you need iterators
<ben_zen> gasche: concurrency is best used for problems that aren't iterative in nature
<gasche> I'm not sure what your point is
<ben_zen> gasche: raytracing, for instance, is an area where a concurrent model is pretty much always better than an iterative model.
<companion_square> bernardofpc: or something equivalent
<bernardofpc> gasche: it is purely from a compiler technology point of view
<companion_square> how would you solve it?
<gasche> ben_zen: sure, it's also an area where OCaml does just fine
<bernardofpc> companion_square: my bet is a recursive function samefringe(stacka, stackb)
<gasche> just use Parmap for your raytracing tasks
<ggole> ray tracing isn't a good example for concurrency, since it is so embarrassingly data parallel
<ggole> Pretty much anything you do will work
<bernardofpc> and a modifying function nextfringe(stack)
<companion_square> yep
<companion_square> but the stack is tedious to write -_-
<bernardofpc> oh
<ben_zen> ggole: yeah
<bernardofpc> sure, having a gc helps in not writing the stack
<bernardofpc> -not
<companion_square> and algebraic types
<companion_square> with the GC I don't use a stack, but closures
<ben_zen> a lot of the stuff with concurrency is about code that will at times be parallel and at times require other input
<bernardofpc> honestly, here I don't see the benefit that much
<ggole> You don't need algebraic types for a stack?
<companion_square> bernardofpc: I have several kinds of items in the stack
<ben_zen> or will require different sections completed at different times.
<ggole> Just some silly error prone realloc stuff
<companion_square> well, maybe I don't need to
<ggole> Oh, I see
<companion_square> like "explore left subnode" and "explore right subnode"
<companion_square> but I'm probably doing it wrong
ontologiae has quit [Ping timeout: 256 seconds]
<gasche> ben_zen: that kind of fork/gather patterns is reasonably handled by what we have
<ggole> iirc the trick is to only push right children on the stack
<gasche> I'm not saying everything is just fine; sure, we could use something a bit more fine-grained than multi-processing
<gasche> but it happens that the kind of use cases where multi-processing really cannot work
<gasche> (what people are complaining about, about concurrency in OCaml)
<gasche> also are the kind of use cases where the "nice approaches" to concurrency tend to fall short
<gasche> and you need that ugly atomic shared memory stuff to get good performance
<gasche> which means full-blown concurrent separation logic just to prove the absence of data races
<ggole> If it was easy it would be solved by now.
<gasche> yeah
<gasche> and don't get us started on weak memory models
<ggole> People have been doing it for decades, with no silver bullet in sight.
<ben_zen> right
<gasche> I've met people who have been busy for the last few months
groovy2shoes has quit [Quit: groovy2shoes]
<ben_zen> however, it looks like we've hit the edge of what we can improve in single cores
<gasche> writing a *correct* single-consumer, single-producer fifo with weak memory models assumptions
<gasche> I find it a bit unrealistic to assume that people are able to program with these things
<companion_square> share-nothing models may be more tractable
<ggole> The OS people do it. But I think they put in a lot of effort.
<gasche> hm
<ggole> And they are the right sort of people, which you can't always rely on.
<gasche> and they screw up quite often as well
<ggole> Yeah
<ggole> Very hard to know when you've made a tiny mistake.
<gasche> even the processor designers screw up
<ggole> But what's the alternative? All the nice models are far more expensive.
<ggole> Message passing is beautiful, but you can't just copy everything all the time.
<gasche> hm
<ggole> And you have to design around it. Try to just drop message passing into a large already written application. No way.
<ousado> with threads you don't have to copy anything but a pointer
<gasche> ousado: but you need a solid ownership tracking system
<gasche> as Rust or Mezzo are trying to get
<ousado> yes, linear types
<ousado> .. or ATS
<gasche> indeed
<gasche> (but iirc. ATS, version 1 at least, wasn't concerned with multi-threading)
<ggole> Rust is interesting. It would be a healthy thing for it to work out.
<gasche> indeed
<ousado> gasche: only the GC has issues WRT multi-threading
<ousado> ATS2 uses boehm now
<gasche> are there design docs out there for ATS2?
<gasche> last time I checked, Hongwei Xi was a bit shy to give details
<ousado> unfortunately it's all in HX's brain so far
<ousado> and a few mails on the ML
<ousado> (and in the code)
<ggole> What did it use before? Another conservative collector?
<gasche> as for "the alternative", I think I would be happy with a solid message-passing system with ownership handled through typing, for the 99% of code to write that can afford that
<pippijn> ousado: why!
<pippijn> :(
<pippijn> I thought they had their own GC
<ousado> yes, similar to boehm, but a thing that knows about the types
<ggole> :(
<gasche> and specialized languages or unsafe modes for the 0.05% of "concurrency experts" writing crazy-ass concurrent copy-on-write linked lists
<ggole> If you want C-like types, it's conservative or stack/object maps
<ggole> And maps are very complicated
<ousado> well, because most use(r)s of ATS don't use the GC at all
<gasche> (with as much datarace-freedom proof obligations as we can get)
<ousado> so no ones really interested
<ggole> gasche: yeah, safe-by-default with an escape hatch seems like a good way to go
<ggole> As long as people take their responsibility to not just dive for the escape at the first sign of trouble seriously
<ousado> pippijn: I think it's a pity, too, but this is only until someone cares enough to step up and actually write a bug-free concurrent GC in/for ATS :)
<ggole> (Then again, C is all escape hatch and people selected that pretty hard)
<ousado> pippijn: modulo grammar :)
<pippijn> ousado: I didn't have problems parsing that
awm22 has joined #ocaml
<ousado> yeah, but something somewhere was wrong :)
<pippijn> I wrote a non-concurrent precise GC for cyclone
<gasche> ggole: I'm not sure there actually are a lot of people writing correct concurrent algorithms in C right now
<pippijn> it was slow
<pippijn> because cyclone goes to C
<pippijn> and "stack maps" are either non-portable or slow for C
<pippijn> I chose slow
<gasche> (I'm talking about the bottom layer of concurrent data-structures that need the compare-and-swap subtleties, not using them)
<gasche> in the Java world at least, my understanding was that there are a few sanctified concurrent datastructs in the library
<ggole> gasche: the os people, gamedev people and database people all use that stuff
<gasche> and people just use that
<ggole> And I imagine profitably in terms of performance
<gasche> yeah, but do they actually design it, or reuse existing implementations?
<ggole> So the question becomes "correct"
<gasche> yeah or write incorrect stuff
<ggole> They actually design it
<gasche> there was a race-related bug in PostGreSQL not long ago
<ggole> Look at linux, they have their own barriers, waitlocks and spinlocks
<gasche> yeah
<gasche> so the question is: how many people in total maintain this stuff?
<ggole> Few compared to the users of the resulting interfaces, I imagine
<gasche> in the Linux case, I heard that the people deciding about barriers for each respective architecture are usually also the designers of the processors in question
<ggole> Maybe for ARM
<gasche> (eg. the authority on x86 barriers was an Intel employee, and he had access to information that the other devs had not)
<ggole> x86 is well documented
<ousado> isn't there a static analysis tool for the concurrency stuff in the linux kernel?
<ggole> There's a deadlock detector
<gasche> I'm wondering at quantitative numbers
<ggole> I believe it is a dynamic tool that works by recording some information about execution paths
<gasche> would that be 200 persons? 1000?
<ggole> Hmm, I have no idea
<ggole> If you have to have an Intel guy write your code, that's not a good sign
<ggole> But I suspect that is a matter of scraping performance
<ggole> I often wonder what this will all look like in 50 years
<ggole> A lot of the stuff people were doing 50 years ago is just dumb and obsolete
<ggole> Presumably our work will suffer the same fate
<companion_square> 50 years ago was really the beginning
<companion_square> but much work from the 70s ~ 80s is still quite relevant (at least on the theoretical side)
<ggole> Yeah. Would have been an exciting field then.
<ggole> Graphics guys complain about Blinn inventing everything in the 70s.
<ousado> hehe
<ggole> The joke goes something like "there are a dozen timeless, great graphics people and James Blinn is six of them".
<companion_square> :D
<ggole> Must have been a hell of a thing to see a bitmap display for the first time.
<gasche> I can't find back a source on the intel guy thing
<gasche> (either it comes from lwn.net coverage about a year ago, or corridor talk)
ontologiae has joined #ocaml
hyperboreean has quit [Ping timeout: 264 seconds]
sysopfb has joined #ocaml
Kakadu has quit [Ping timeout: 256 seconds]
mye has quit [Quit: mye]
trep has quit [Ping timeout: 240 seconds]
trep has joined #ocaml
eikke has joined #ocaml
sysopfb has quit [Ping timeout: 248 seconds]
eikke has quit [Ping timeout: 264 seconds]
mcclurmc has quit [Ping timeout: 272 seconds]
Kakadu has joined #ocaml
darkf has quit [Quit: Leaving]
ggole has quit []
walter|rtn has quit [Quit: This computer has gone to sleep]
malo has quit [Quit: Leaving]
Arsenik has quit [Remote host closed the connection]
gautamc has joined #ocaml
eikke has joined #ocaml
ttamttam has joined #ocaml
walter has joined #ocaml
companion_square is now known as companion_cube
ttamttam has left #ocaml []
mcclurmc has joined #ocaml
madroach has quit [Ping timeout: 248 seconds]
mcclurmc has quit [Ping timeout: 246 seconds]
madroach has joined #ocaml
Anarchos has quit [Ping timeout: 255 seconds]
emmanuelux has joined #ocaml
mcclurmc has joined #ocaml
chambart has joined #ocaml
weie has quit [Quit: Leaving...]
milosn_ has joined #ocaml
milosn has quit [Ping timeout: 256 seconds]
Anarchos has joined #ocaml
<areece> can I do something equivalent to `module Printer = if !flag_outfile then OutfilePrinter else StdoutPrinter`
<areece> have if then else for a module
Tobu has quit [Ping timeout: 246 seconds]
Tobu has joined #ocaml
<wmeyer> areece: yes, it's possible with packaged modules
anderse has quit [Quit: anderse]
eni has joined #ocaml
<wmeyer> areece: the syntax would be: "let module Printer = (val (if !flag_outfile then (module OutfilePrinter : PrinterS) else (module StdoutPrinter : PrinterS)) : PrinterS) in ... "
<areece> yeah, I saw a bit of that
<areece> I decided I'd just do it another way :(
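For reference, a self-contained sketch of the first-class-module approach wmeyer describes (OCaml >= 3.12; the PRINTER signature and both implementations are made up for illustration):

```ocaml
module type PRINTER = sig
  val print : string -> unit
end

module StdoutPrinter : PRINTER = struct
  let print = print_string
end

module OutfilePrinter : PRINTER = struct
  (* illustrative stand-in: a real version would write to a file *)
  let print = prerr_string
end

let flag_outfile = ref false

(* Pack with (module M); the return annotation fixes the package type. *)
let printer () : (module PRINTER) =
  if !flag_outfile then (module OutfilePrinter) else (module StdoutPrinter)

let () =
  (* Unpack with (val e) and use the chosen module like any other. *)
  let module P = (val printer ()) in
  P.print "hello\n"
```

An ordinary `if` picks the module at runtime; only the pack/unpack syntax differs from a normal value.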
contempt has quit [Ping timeout: 255 seconds]
astertronistic has quit [Quit: Leaving]
contempt has joined #ocaml
Snark has quit [Quit: leaving]
hyperboreean has joined #ocaml
mcclurmc has quit [Ping timeout: 276 seconds]
mcclurmc has joined #ocaml
eikke has quit [Ping timeout: 256 seconds]
Trollkastel has quit [Quit: Brain.sys has encountered a problem and needs to close. We are sorry for the inconvenience.]
chambart has quit [Ping timeout: 246 seconds]
UncleVasya has quit [Ping timeout: 264 seconds]
Kakadu has quit []
mcclurmc has quit [Ping timeout: 252 seconds]
jbrown has quit [Ping timeout: 264 seconds]
eni has quit [Ping timeout: 252 seconds]
SuperNoeMan has joined #ocaml
<SuperNoeMan> does ocaml still lack threading capability?
<orbitz> It has had threading for a long time
<orbitz> it lacks the ability to run multiple threads in parallel
<pippijn> ocaml has threading, but no parallel execution of ocaml code
<SuperNoeMan> why can't it run things in parallel?
<pippijn> multiple ocaml* threads
<SuperNoeMan> and why have threads if you can't execute them simultaneously?
<pippijn> because the runtime is not thread-safe
<pippijn> because you can execute C code concurrently with ocaml code
Yoric has quit [Ping timeout: 240 seconds]
<orbitz> useful for handling blocking syscalls
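A small sketch of that use case (link the threads library, e.g. `ocamlfind ocamlopt -package threads.posix -linkpkg`): only one OCaml thread runs at a time, but the runtime lock is released around blocking calls, so the main thread keeps going while another thread waits.

```ocaml
let () =
  let t = Thread.create (fun () ->
      (* stand-in for a blocking syscall such as a read *)
      Thread.delay 0.1;
      print_endline "worker finished waiting") ()
  in
  print_endline "main thread keeps running";
  Thread.join t
```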
<SuperNoeMan> ok, well, what's the industry effort status toward making the runtime thread safe look like?
<orbitz> zero
<SuperNoeMan> is it something that's coming, or something that isn't wanted
<SuperNoeMan> wow. why zero?
<pippijn> wanted, but not coming
<orbitz> there is some effort to allow running two interpreters in one process and use message passing between them
tane has quit [Quit: Verlassend]
<orbitz> SuperNoeMan: because memory models for threads suck and nobody knows how to properly solve it yet
<orbitz> without being very inefficient
<SuperNoeMan> interesting
<SuperNoeMan> so, I'm guessing that parallelization for ocaml programmers is better achieved through some other roundabout method
<SuperNoeMan> what is it?
<orbitz> running multiple processes
<orbitz> of course: do you actually want parallelism or just concurrency?
<SuperNoeMan> right, well that could be about as efficient with modern operating systems
<SuperNoeMan> I mean concurrency
<Anarchos> orbitz i thought it was just due to the complexity of debugging a parallel GC ?
<SuperNoeMan> I want two things happening at the same time in order to take advantage of the hardware
<orbitz> SuperNoeMan: what kind of things?
<SuperNoeMan> Anarchos: wouldn't "memory models for threads suck" also touch that issue?
<orbitz> Anarchos: There are multiple reasons, the memory model is the last thing i have heard from Xavier
<SuperNoeMan> any kind of things
<SuperNoeMan> very general, broad
<orbitz> SuperNoeMan: please be specific, certain things are quite different
<orbitz> do you want to calculate fibonnaci numbers?
<orbitz> or read data from a socekt?
<SuperNoeMan> I want to be able to do the same kinds of things in ocaml that I can in other languages concurrently
<orbitz> like what?
<orbitz> threading in Ocaml has roughly the same limitations as in Python
<SuperNoeMan> why not the example of breaking up a segment of data that requires an expensive computation
<SuperNoeMan> and taking advantage of the hardware by doing it concurrently
<orbitz> SuperNoeMan: If you want to do computationally expensive things at the same time, you probably want proceses, if you want to do things that mostly wait at the same time, then there are multiple concurrency models out there
<SuperNoeMan> processes
<orbitz> k
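The multiple-processes route orbitz suggests can be sketched with fork and pipes (requires the unix library; Marshal is untyped, so the caller's use fixes the result type; error handling and process pooling are omitted):

```ocaml
(* One child process per element: each child computes f x and sends the
   result back through a pipe. Real code would batch work into a few
   long-lived workers instead of forking per element. *)
let parallel_map f xs =
  let spawn x =
    let rd, wr = Unix.pipe () in
    match Unix.fork () with
    | 0 ->
        (* child: compute, marshal the result to the parent, exit *)
        Unix.close rd;
        let oc = Unix.out_channel_of_descr wr in
        Marshal.to_channel oc (f x) [];
        close_out oc;
        exit 0
    | pid ->
        (* parent: keep only the read end *)
        Unix.close wr;
        (pid, rd)
  in
  let children = List.map spawn xs in
  List.map
    (fun (pid, rd) ->
       let ic = Unix.in_channel_of_descr rd in
       let r = Marshal.from_channel ic in
       close_in ic;
       ignore (Unix.waitpid [] pid);
       r)
    children
```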
<Anarchos> i remember the PhD thesis of Damien Doligez about a concurrent GC :) and caml light was concurrent at the end.
<SuperNoeMan> is my conjecture that multiple processes can be approx as efficient as threading on modern OS's accurate?
<orbitz> SuperNoeMan: probably not
<Anarchos> orbitz if Xavier said it, he should be right :)
<companion_cube> the things that multiprocess can't do is fine-grained parallelism
<companion_cube> like parallel map or such things
<SuperNoeMan> ah
<SuperNoeMan> ok, well one more thing
<SuperNoeMan> if languages like java have a garbage collector, and yet also allow concurrency, then why can't ocaml since "memory models for threads suck"?
<orbitz> because the java memory model sucks
<SuperNoeMan> ?
<SuperNoeMan> elaborate? :)
<orbitz> it is trivial to write programs in java that make no sense
<orbitz> with threads
<companion_cube> SuperNoeMan: java's GC is very poor for the allocation profile of typical ocaml programs
<companion_cube> ie, lots and lots of small, short lived allocations
<orbitz> i dont' think that is his question though
<adrien> hasn't it improved in this regard recently?
<SuperNoeMan> yeah that would make sense companion_cube, ocaml being fully compiled I would imagine that java would be a different world entirely being the object user and memory hog that it is
<bernardofpc> orbitz> it is trivial to write programs in java that make no sense -> probably true for any language out there ;-)
ulfdoz has quit [Ping timeout: 245 seconds]
<SuperNoeMan> yeah i thought that when I saw it and didn't know how immediately to wrangle it best
<orbitz> bernardofpc: by "no sense" i mean the language specification does not specify what will happen
<bernardofpc> oh
<SuperNoeMan> well, haskell and erlang have gc's...
<orbitz> and?
<SuperNoeMan> why doesn't ocaml follow in those steps?
<SuperNoeMan> are they good implementations
<orbitz> what steps?
<bernardofpc> because they have other memory model
<bernardofpc> (whatever that actually means)
<SuperNoeMan> also, orbitz "language specification does not specify what will happen" excellent answer XD
<orbitz> erlang is actors, where each process is memory isolated, which is roughly what i've heard ocaml will do, but heavier
<bernardofpc> SuperNoeMan: undefined behaviour is not so good
<orbitz> and haskell has very strict control over effects, which ocaml does not
<SuperNoeMan> bernardofpc: I know that
<SuperNoeMan> bernardofpc: orbitz's understanding is what is good
<SuperNoeMan> orbitz: does ocaml not have monads or something?
<orbitz> you can implement monads in ocaml, that doesn't mean it has control over effects
<SuperNoeMan> orbitz: by other steps I mean by following suit with haskell or erlang...
<bernardofpc> did C memory model change to allow for //ism ?
<SuperNoeMan> /ism?
<bernardofpc> parallelism
<orbitz> ocaml's type system is very different from Haskell's, and as I said the plans I have heard are roughly Erlang with extremely heavy processes relative to Erlang processes
<orbitz> bernardofpc: C11 added support for atomics and whatnot, but a vast majority of things are still undefined
<SuperNoeMan> ah ok ok
<SuperNoeMan> sorry
<SuperNoeMan> yeah C++ is a piece of shit language
<orbitz> I am talking about C not C++
<SuperNoeMan> I can't understand why it could have possibly propagated so much
<SuperNoeMan> so is C
<orbitz> C is a beautiful language, although I'm not a huge fan of C11
<SuperNoeMan> if people understood the evils of its unsafety
<SuperNoeMan> ew. How can you say that?
<orbitz> C90 does exactly what you ask it to do, no more no less
<SuperNoeMan> just because its popular doesn't mean that its good!
<orbitz> there is beauty in that
<bernardofpc> there's beauty in simplicity
<SuperNoeMan> are many implementors really up to the task of knowing the ins and outs of the system so damn well that it won't blow your legs off like it practically tries to?
<orbitz> implementors or users?
<SuperNoeMan> honestly, the language's type system is *almost more obnoxious than useful since it's weakly typed anyway
<SuperNoeMan> users of the language*
<bernardofpc> SuperNoeMan: and in fact, a funny joke, is that one guy predicted around 74 that in 10 years a whole family of languages would have emerged and would "de facto" dominate programming
<orbitz> SuperNoeMan: I don't write C often, but for the things I do it is a joy to use, one simply has to be aware of what those situations are
<bernardofpc> for example, writing a // mandelbrot set calculator :D
<SuperNoeMan> "one simply has to be aware of what those situations are"
<SuperNoeMan> *simnply
<SuperNoeMan> simply*
<orbitz> it's not that hard if you have read the standard
<bernardofpc> SuperNoeMan: C has far fewer awful situations than C++
<SuperNoeMan> most programmers aren't astute enough to recognize its pitfalls
<orbitz> that isn't C's fault
<orbitz> for example, C++ is difficult to write even for a good C++ programmer IMO
<bernardofpc> I'd say that even more programmers are also incapable of understanding OCaml
<orbitz> C is so dead simple it is not as bad
<orbitz> but alas, debating my appreciation of C with you is not a meaningful way to spend a night. seeya!
<SuperNoeMan> ok, sorry
<SuperNoeMan> yeah, I really came in to learn about ocaml concurrency, I appreciate the help, sorry it got out of hand
RagingDave has quit [Quit: Ex-Chat]
smerz_ has quit [Ping timeout: 258 seconds]
RagingDave has joined #ocaml
contempt has quit [Ping timeout: 264 seconds]
contempt has joined #ocaml
Anarchos has quit [Quit: Vision[0.9.7-H-090423]: i've been blurred!]
pango has quit [Ping timeout: 255 seconds]
pango has joined #ocaml
ollehar has quit [Ping timeout: 240 seconds]
z_- has joined #ocaml
milosn has joined #ocaml
<z_-> I am trying to use concat to compare 2 lists, but I get the error: "This expression has type string but an expression was expected of type 'a list"
yacks has quit [Ping timeout: 256 seconds]
milosn_ has quit [Read error: Operation timed out]
frogfoodeater has joined #ocaml
<thizanne> z_-: how are you trying ?
<z_-> thizanne: Really Hard.
<thizanne> ok then maybe give it a tip and he'll be happy and make it work
<thizanne> or, you can also paste your code and we'll make him cooperate
<z_-> thizanne: That's what I did, and it now works ^^. i had 1 ; instead of 2 at some place.
<thizanne> z_-: if you use an editor able to automatically indent your code (or ocp-indent), you should see such mistakes easily
frogfoodeater has quit [Ping timeout: 264 seconds]
ontologiae has quit [Ping timeout: 256 seconds]
frogfoodeater has joined #ocaml
q66 has quit [Remote host closed the connection]
frogfoodeater has quit [Ping timeout: 246 seconds]