flux changed the topic of #ocaml to: Discussions about the OCaml programming language | http://caml.inria.fr/ | OCaml 4.00.1 http://bit.ly/UHeZyT | http://www.ocaml.org | Public logs at http://tunes.org/~nef/logs/ocaml/
UncleVasya has quit [Ping timeout: 248 seconds]
gautamc has quit [Ping timeout: 240 seconds]
gautamc has joined #ocaml
f[x] has joined #ocaml
JcGood has joined #ocaml
oriba has quit [Quit: oriba]
f[x] has quit [Ping timeout: 240 seconds]
venk` has joined #ocaml
dsheets has quit [Read error: Operation timed out]
tchell_ has quit [Ping timeout: 256 seconds]
tchell has joined #ocaml
madroach has quit [Ping timeout: 248 seconds]
madroach has joined #ocaml
f[x] has joined #ocaml
tchell has quit [Ping timeout: 248 seconds]
gautamc has quit [Read error: Connection reset by peer]
gautamc has joined #ocaml
JcGoood has joined #ocaml
pootler__ has quit [Ping timeout: 264 seconds]
pootler_ has quit [Ping timeout: 268 seconds]
emmanuelux has quit [Remote host closed the connection]
JcGood has quit [Ping timeout: 246 seconds]
tchell has joined #ocaml
es917 has joined #ocaml
manud has joined #ocaml
tchell has quit [Ping timeout: 268 seconds]
tchell has joined #ocaml
<pippijn> sexplib.syntax needs type-conv
<pippijn> but there is no type-conv, only type_conv
manud has quit [Read error: Operation timed out]
<f[x]> quite the contrary
<pippijn> do you have type-conv?
talzeus has joined #ocaml
<f[x]> $ ocamlfind query -format %v type-conv
<f[x]> 3.0.4
<f[x]> NB the names of the following entities are distinct and not tied to each other technically : opam package, findlib package, library, module, cma
csakatoku has joined #ocaml
<amiller_> in my type-directed translation i have a lot of duplicated code being generated
<amiller_> for the reason that equal types are assigned distinct ids and not simplified in typedtree
lusory has quit [Ping timeout: 245 seconds]
<amiller_> is there some part of the compiler that reduces equivalent types or something
pootler_ has joined #ocaml
pootler_ has quit [Ping timeout: 255 seconds]
Zeev has joined #ocaml
gautamc has quit [Read error: Connection reset by peer]
gautamc has joined #ocaml
justdit has joined #ocaml
darkf has joined #ocaml
JcGoood has quit [Ping timeout: 264 seconds]
osnr has quit [Quit: Leaving.]
manud has joined #ocaml
pootler_ has joined #ocaml
ttamttam has joined #ocaml
JcGoood has joined #ocaml
pootler_ has quit [Ping timeout: 246 seconds]
ttamttam has quit [Client Quit]
ggole has joined #ocaml
osnr has joined #ocaml
osnr has quit [Changing host]
osnr has joined #ocaml
osnr has quit [Client Quit]
justdit has quit [Ping timeout: 246 seconds]
jrgarrison has joined #ocaml
justdit has joined #ocaml
xenocons has joined #ocaml
<xenocons> hi, are there haskell style 'type constructors' in ocaml?
mhr has joined #ocaml
<mhr> Hey guys, I'm trying to decide whether I should invest serious time in learning Haskell or OCaml. What do you think?
<ggole> "Haskell style" isn't very informative
<mhr> informative... of what?
<xenocons> newtype Fix f = Fx (f (Fix f))
<ggole> type 'f fix = Fix of 'f fix, I think would be close
<ggole> OCaml doesn't have the newtype efficiency hack though
<ggole> (Shame.)
<mhr> I still don't know what you're talking about.
ben_zen has joined #ocaml
pootler_ has joined #ocaml
<ggole> Oh wait, that's not the same: f takes an argument there
ttamttam has joined #ocaml
ttamttam has quit [Remote host closed the connection]
mhr has quit [Quit: Page closed]
pootler__ has joined #ocaml
justdit has quit [Read error: Connection reset by peer]
osnr has joined #ocaml
osnr has quit [Changing host]
osnr has joined #ocaml
<xenocons> ggole: yeh
<ggole> xenocons: interesting article. After reading it, I'd say the answer is "no".
<xenocons> yeah, ive come to a similar conclusion, thats unfortunate a bit
sclv has joined #ocaml
<ggole> Maybe with modules?
<ggole> My type-fu is too weak to say, to be honest
lqa has joined #ocaml
<xenocons> hmm, could be possible, might not be pretty though
<ggole> Indeed.
manud has quit [Quit: Leaving]
lqa has quit [Quit: Leaving.]
lqa has joined #ocaml
Drup has quit [Quit: Leaving.]
Watcher7 is now known as Watcher7|off
osnr has quit [Quit: Leaving.]
ben_zen has quit [Ping timeout: 240 seconds]
ben_zen_ has joined #ocaml
ben_zen_ is now known as ben_zen
jrgarrison has left #ocaml []
<ggole> xenocons: after fiddling around I got some of the definitions to work and almost make sense
<ggole> But things go south about half way through :)
<xenocons> heh
<xenocons> ill try more over the weekend
<ggole> So far
<ggole> type 'a termF = | Int of int | Add of 'a * 'a | Mul of 'a * 'a
<ggole> type 'f fix = Fix of 'f fix fix
<ggole> let fix (elt : 'a fix fix) : 'a fix = Fix elt
<ggole> That all makes sense
<xenocons> ahhh interesting yeh
<ggole> But then Haskell somehow makes the type constructor argument go away?
<ggole> I don't understand what happens there
<xenocons> ggole: im not confident i do too, i guess ghci needs to be installed and played with
<ggole> It's this bit: type Expr = Fix ExprF
<xenocons> but i couldnt actually work out the type 'a termF = ... syntax, so thats pretty good!
<ggole> ExprF should take an argument, right?
<xenocons> for some reason i was trying type termF : 'a -> termF = ..
<ggole> Yeah, it's all backwards relative to Haskell
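A minimal sketch of the "maybe with modules" route mentioned above: OCaml lacks the higher-kinded type variables behind Haskell's Fix, but a functor over a map-able term functor gets close. All module and function names here are illustrative, not from an existing library.

    module type FUNCTOR = sig
      type 'a t
      val map : ('a -> 'b) -> 'a t -> 'b t
    end

    module Fix (F : FUNCTOR) = struct
      type t = Fix of t F.t
      let unfix (Fix v) = v
      (* catamorphism: fold the fixed point with a one-layer algebra *)
      let rec cata alg (Fix v) = alg (F.map (cata alg) v)
    end

    module TermF = struct
      type 'a t = Int of int | Add of 'a * 'a | Mul of 'a * 'a
      let map f = function
        | Int n -> Int n
        | Add (a, b) -> Add (f a, f b)
        | Mul (a, b) -> Mul (f a, f b)
    end

    module Term = Fix (TermF)

    (* eval : Term.t -> int *)
    let eval =
      Term.cata (function
        | TermF.Int n -> n
        | TermF.Add (a, b) -> a + b
        | TermF.Mul (a, b) -> a * b)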
csakatok_ has joined #ocaml
csakatoku has quit [Ping timeout: 255 seconds]
f[x] has quit [Ping timeout: 252 seconds]
lqa has quit [Quit: Leaving]
Yoric has joined #ocaml
f[x] has joined #ocaml
ben_zen has quit [Ping timeout: 264 seconds]
ttamttam has joined #ocaml
djcoin has joined #ocaml
<gasche> xenocons, ggole: note that with structural types (polymorphic variants in this case) you don't actually need to pollute your recursion with a Fix constructor
<gasche> type 'a term = [ `Int of int | `Add of 'a * 'a | `Mul of 'a * 'a ]
<gasche> type closed_term = closed_term term
Snark has joined #ocaml
<gasche> this is the basis of the 2000 paper "Code reuse through polymorphic variants" by Jacques Garrigue
<gasche> that shows how open recursion on variants can provide a solution to the Expression Problem
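Spelled out slightly, gasche's two lines already support an evaluator without any Fix wrapper; a minimal sketch (the function names are made up for illustration):

    type 'a term = [ `Int of int | `Add of 'a * 'a | `Mul of 'a * 'a ]
    type closed_term = closed_term term

    (* one layer of evaluation, parameterised over the recursive call *)
    let eval_layer (eval : 'a -> int) : 'a term -> int = function
      | `Int n -> n
      | `Add (a, b) -> eval a + eval b
      | `Mul (a, b) -> eval a * eval b

    (* tie the knot for closed terms *)
    let rec eval (t : closed_term) : int = eval_layer eval t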
<ggole> Oh, hmm
<ggole> By the way, I took a stab at the not-subtyping-with-GADTs that I was talking about yesterday
<ggole> But it doesn't work out
<ggole> For reasons I don't yet understand, type variables like [< 'A | 'B | 'C] get constrained to ['A | 'B | 'C], destroying the "subtyping"
talzeus has quit [Remote host closed the connection]
talzeus has joined #ocaml
<gasche> I'm not sure why you need polymorphic variants at all to solve the "only some subset of values are possible" problem with GADTs
<gasche> ggole: could you suggest a concrete example of such a situation?
<ggole> Do you remember the instruction operand example I mentioned last night?
<ggole> As a plain ADT, it would look something like type operand = Reg of register | Mem of memory | Imm of immediate
<ggole> The problem is how to express that an instruction wants a subset of {Reg, Mem, Imm}
<ggole> With GADTs, you can say type 'a operand = Reg : register -> register operand | Mem : memory -> memory operand | Imm : immediate -> immediate operand
<ggole> But this doesn't let you construct, say, nonimmediate operands
<ggole> Thus the desire to use polymorphic variants for "subtyping"
<gasche> well you could easily say
<ggole> The problem I ran into is that if you decorate the GADT with polymorphic variants, you can write such a subset-of-constructors type, but it cannot be used as a subset of a fuller type
<gasche> (register operand, memory operand) either -> ...
<gasche> but I agree that's not very satisfying
<ggole> Yeah, I think polymorphic variants are a better solution
<ggole> (With some careful annotations to catch typos)
<gasche> but it's not sound to say that (operand) is covariant (+'a operand)
<gasche> you could also do the following
<gasche> type _ nonimm = R : register nonimm | M : memory nonimm
<gasche> and then take a ('a nonimm * 'a operand) -> ...
<gasche> of course if you wanted to express "any subset" that way, that would be around 2^n types to define; but in practice you'll only need those that appear as invariants of your problem domain
<ggole> The issue with this is that all operations over operands must be coded anew for each possible subset
<gasche> I don't see why
<gasche> you get a pair of the "is not immediate witness" and the operand
<xenocons> gasche: interesting, i think i see
<gasche> isn't the second element suitable for generic functions on the operand type?
<ggole> Hmm
* xenocons reminds me to install ocaml now that i have linux, don't need to use F# for my ML fix
<xenocons> that was so incredibly painless to apt-get :\
mika1 has joined #ocaml
cago has joined #ocaml
<ggole> gasche: don't you need different constructor names for each subset?
osnr has joined #ocaml
osnr has quit [Changing host]
osnr has joined #ocaml
<ggole> Or does that not matter as you don't match against them? I'm not 100% sure I understand your suggestion
<gasche> ggole: it depends on whether you allow the new type-directed constructor disambiguation
<gasche> you do have to match
<xenocons> haven't seen the [ ] type syntax before
<ggole> Ah, I'm ignorant of such extensions
<gasche> it's not released yet ggole
<gasche> xenocons: do have a look at this "code reuse with polymorphic variants" paper
<xenocons> right, i will
<xenocons> ty
<gasche> ggole: the use case would be
<gasche> let foo : type a . a nonimm * a operand -> bar = function
<gasche> | R, Reg -> ...
<gasche> | M, Mem -> ...
<gasche> ;;
<gasche> the match on the left component is exhaustive, and for each left component there is only one GADT-typeable case for the right component
zpe has joined #ocaml
<gasche> hm (Reg r) and (Memory m) of course, I forgot the parameters
<ggole> I think I see
zpe has quit [Remote host closed the connection]
<gasche> you can see "nonimm" as a type predicate/constraint
<gasche> not unlike some hackish use of Haskell type classes, but for the fact that they're closed so we can reason on exhaustiveness (very important for maintenance in practice)
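A compilable sketch of the witness idea gasche describes, with placeholder definitions for register/memory/immediate (those types are invented for the example):

    type register  = Register of string
    type memory    = Memory of int
    type immediate = Immediate of int

    type _ operand =
      | Reg : register  -> register operand
      | Mem : memory    -> memory operand
      | Imm : immediate -> immediate operand

    (* witness that the index type is a non-immediate operand *)
    type _ nonimm = R : register nonimm | M : memory nonimm

    (* the witness lets the exhaustiveness checker rule out the Imm case *)
    let string_of_nonimm : type a. a nonimm * a operand -> string = function
      | R, Reg (Register name) -> name
      | M, Mem (Memory addr)   -> Printf.sprintf "[%d]" addr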
<xenocons> for the lazy: http://www.math.nagoya-u.ac.jp/~garrigue/papers/variant-reuse.pdf non post script version
osnr has quit [Ping timeout: 240 seconds]
Zeev has quit [Ping timeout: 246 seconds]
eikke has joined #ocaml
osa1 has joined #ocaml
Zeev has joined #ocaml
<ggole> gasche: by the way, this is (more or less) what I was trying to do http://ocaml.nopaste.dk/p49195
<ggole> (It should fail to type check)
ontologiae has joined #ocaml
<companion_cube> a friend of mine who is a fan of gentoo asked me why opam was written, rather than using emerge
<companion_cube> I'm not sure, are there features that are actually really specific to ocaml in opam?
<ousado> which language is emerge written in?
<companion_cube> python, I think
<ousado> sounds like a sufficient answer to me then
<companion_cube> what, you don't trust a program just because it's written in python?
<ousado> I trust a program that's written in python and gets the same amount of developer time less than one in ocaml, yes
<ggole> Isn't emerge gentoo specific?
yacks has quit [Ping timeout: 260 seconds]
venk` has quit [Ping timeout: 252 seconds]
justdit has joined #ocaml
osnr has joined #ocaml
osnr has quit [Changing host]
osnr has joined #ocaml
talzeus has quit [Remote host closed the connection]
talzeus has joined #ocaml
thomasga has joined #ocaml
tane has joined #ocaml
yacks has joined #ocaml
osa1 has quit [Ping timeout: 255 seconds]
raichoo has joined #ocaml
whitequark has joined #ocaml
skchrko has joined #ocaml
<whitequark> hello
<companion_cube> world
<whitequark> how would you write a lexer which keeps source location information for every token, and a parser which aggregates this and stores it in nodes?
<whitequark> I wrote one in Ruby with Ragel, Racc [bison] and a huge amount of handwritten location tracking, but I'm not looking forward to doing that again
<companion_cube> I think ocamllex (and the Lexing module) provide things to know the location of the current token
<companion_cube> then, you can save the location of your AST during parsing
<whitequark> oh, Lexing.lexeme_start/lexeme_end. I see
<companion_cube> check Lexing.new_line, that's what you need to call in order to maintain the lexer's location info
Kakadu has joined #ocaml
<whitequark> oh, it can even return positions instead of character indexes. excellent.
eikke has quit [Ping timeout: 246 seconds]
<companion_cube> whitequark: yep. Just call new_line() when you meet a \n :)
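A minimal lexer.mll sketch of what companion_cube describes; the token type and the rule are made up for illustration:

    (* lexer.mll *)
    {
      type token = INT of int | PLUS | EOF
    }

    rule token = parse
      | [' ' '\t']        { token lexbuf }
      | '\n'              { Lexing.new_line lexbuf; token lexbuf }
      | ['0'-'9']+ as num { INT (int_of_string num) }
      | '+'               { PLUS }
      | eof               { EOF }

    {
      (* the position of the last token read is the pair
         (Lexing.lexeme_start_p lexbuf, Lexing.lexeme_end_p lexbuf) *)
    }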
Yoric has quit [Ping timeout: 252 seconds]
<whitequark> companion_cube: thanks!
<whitequark> how good an idea is it to use a recursive descent parser provided by camlp4? my motivation is that RDPs seem to provide the best kind of error recovery available
<whitequark> e.g. clang uses a handwritten RDP.
<whitequark> however, e.g. http://www.ffconsultancy.com/ocaml/benefits/parsing.html mentions that camlp4's RDPs can be more complex... as this syntax is generally less flexible than handwritten parsing code (I think?), how much of a problem is that in practice?
<companion_cube> whitequark: I don't know, I don't use camlp4
<companion_cube> I'd suggest to use parser generators, such as menhir, unless you have very specific needs
<companion_cube> menhir provides some kind of error recovery too
<whitequark> oh, menhir looks... really awesome.
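For the parser side, menhir exposes $startpos/$endpos in semantic actions, so saving locations into the result is mechanical; a small sketch (grammar and types are invented, and independent of the lexer sketch above):

    (* parser.mly *)
    %token <int> INT
    %token PLUS EOF
    %start <int * Lexing.position * Lexing.position> main
    %%

    main:
      | e = expr EOF { e }

    expr:
      | n = INT
          { (n, $startpos, $endpos) }
      | a = expr PLUS n = INT
          { let (v, _, _) = a in (v + n, $startpos, $endpos) }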
mcclurmc has joined #ocaml
<whitequark> ok, last question (for a while). how would you design a data type for an AST, considering that I need to keep location information for each node (so, it's a tad more complex than just a union type of all node types);
<whitequark> I imagine that location info will appear as a record, possibly as several kinds of records for even a single node. for example:
<whitequark> if a method call can be represented both as "lhs + rhs" and "lhs.+(rhs)", then in the first case I need to keep source ranges for whole 'expression' and 'operator'
<whitequark> while in the second case it will be 'expression', 'dot', 'selector', 'lparen' and 'rparen'.
<whitequark> I don't want to make separate node types for them, as there is no semantic difference.
avsm_cl has joined #ocaml
Yoric has joined #ocaml
eikke has joined #ocaml
<ggole> whitequark: probably the simplest approach is just to stash a Location.t as an argument in each constructor
<whitequark> ggole: thanks. was my thought also. I'll try it and see if it works.
<ggole> So in your AST you'd have something like MethodCall of term * method_name * term list * Location.t
<ggole> And the extent of the location would cover all the parts, whatever those are
<whitequark> ggole: can you elaborate on how you would define Location?
<ggole> Er, it's just an abstract type there
<ggole> So you would stash whatever information you find necessary
<ggole> A simple choice might be a triple of beg * end * filename
<whitequark> ggole: I see what you mean. but I want another level of indirection there, because I want to be able to locate syntactical parts of the node, not just its overall extent
<whitequark> and different nodes have different sets of parts. E.g. a MethodCall has a 'dot' and 'lparen', whereas ClassDefinition ("class Foo; end") will have 'class_kw' and 'end_kw'.
<ggole> Well, that sounds like lower level information than an AST.
<ggole> I guess you mean lexemes
<whitequark> this will be more clear if I explain the purpose
<whitequark> ggole: yes, lexemes.
<ggole> OK. However, earlier you said "I don't want to make separate node types for them, as there is no semantic difference", which says "AST" to me.
<whitequark> have you seen clang's diagnostics? e.g. http://clang.llvm.org/diagnostics.html, second example. so. I want to have an interpreter, for which syntactic differences do not matter; and it is convenient for an interpreter to operate on an AST
<ggole> Yes. The idea is that the lexer will construct extent information for lexemes, and the parser will eat a stream of those and emit an AST which is annotated with extent information (computed from those emitted by the lexer).
<whitequark> however, when it is time to report errors, I want to descend a level lower and relate the abstract properties of error ("first arg", "second arg") to the source.
<whitequark> ggole: exactly.
<whitequark> I've already built one such parser, but it was in Ruby.
<ggole> When you run your type checking, you can map an AST node to its position in the source.
dsheets has joined #ocaml
<ggole> You don't need to track dots and the like (unless you want to, I guess)
<whitequark> so I generally understand how it works, but I'm not sure how to design a data structure within ocaml's type system
<whitequark> ggole: yes, don't technically need that
<whitequark> however if I do track all lexemes, this makes source-to-source conversions very easy
<whitequark> or reformatting
<whitequark> it's generally quite hard to pretty-print code while saving source whitespace, comments, etc.; but if I perform transformations on the source instead (e.g. "add whitespace before = sign"), and verify that ASTs before and after transformations match, it's easy.
ggole has quit [Ping timeout: 240 seconds]
<whitequark> and since I track most lexemes already, I figured that I could as well track every one.
<whitequark> oh, he quit :/
osnr has quit [Quit: Leaving.]
ggole has joined #ocaml
<ggole> whitequark: seems there was a bubble in my internets
<ggole> If you said anything after "you don't need to track dots", I'm afraid I missed it.
<whitequark> ggole: http://tunes.org/~nef/logs/ocaml/13.06.21 seems to have up-to-date log at the bottom
<ggole> Ah, I see
<ggole> So usually you would keep an AST as a big tree modelled with algebraic types
<ggole> A simple example would be type term = Int of int | Bool of bool | Add of term * term | If of term * term * term
<ggole> In an OO language, I think you would write a subclass for each case
<ggole> For each construct in your language, you define one of these: and for each one, you would put a Location.t at the end to tell you where in the source it came from
<whitequark> right
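A sketch of the scheme ggole describes, stashing a Location.t in every constructor; the Location fields are an assumption, not a fixed design:

    module Location = struct
      type t = {
        file      : string;
        start_pos : Lexing.position;
        end_pos   : Lexing.position;
      }
    end

    type term =
      | Int  of int * Location.t
      | Bool of bool * Location.t
      | Add  of term * term * Location.t
      | If   of term * term * term * Location.t

    (* recover the extent of any node, e.g. for clang-style diagnostics *)
    let location_of = function
      | Int (_, loc) | Bool (_, loc)
      | Add (_, _, loc) | If (_, _, _, loc) -> loc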
csakatok_ has quit [Remote host closed the connection]
<ggole> The trick which lets you do clang-style ^^^s is that each bit comes with its own location info
<ggole> So you can highlight, say, the left hand side and the second argument easily.
<whitequark> I know. I'm just not sure how to store them.
csakatoku has joined #ocaml
<ggole> You don't have to decide, you know
<ggole> Just make a dummy type Location.t = unit, and use that for now
<ggole> When you figure out what location info you need, stash it there
<whitequark> mm yes, I guess this will also work.
<whitequark> and, asking questions with code on hand is easier :)
<whitequark> thanks.
<ggole> My pleasure, have fun working it all out.
<whitequark> so far, it seems that writing a language implementation in ocaml is much, much more pleasant than in ruby.
<whitequark> also, llvm bindings come for free, yay
<ggole> I've not played with those yet
<ggole> More fun to write my own backend
<whitequark> oh, I do have my own backend
<whitequark> which includes an SSA-based optimizer which understands semantics of my language
<whitequark> but I use LLVM as a clever codegen.
<ggole> It's a lot of work to do a good job of code generation.
<ggole> I have a fairly primitive graph coloring allocator and some weak optimisations at the moment
<ggole> But it emits shocking code in some cases :)
<whitequark> indeed
<whitequark> I'm more interested in novel techniques on the front/middle end, though.
Drup has joined #ocaml
<ggole> Both are pretty interesting imo
<ggole> Although most of the backend stuff seems pretty figured out at this point
<whitequark> yeah
<whitequark> that, or it is limited by characteristics of the source language.
q66 has joined #ocaml
<ggole> Yeah
<ggole> C seems pretty limiting in that regard
<ggole> Gotta go: back in an hour or so o/
<whitequark> ggole: oh, I was about to tell you about my approach to languages :)
<whitequark> sure
<whitequark> I think it's novel, at least, in the particular form I use it. Never seen anything similar.
<MarcWeber> gasche: Do you think its possible to derive show for http://dpaste.com/1257190/ ?
zzz_ has quit [Remote host closed the connection]
jcao219 has joined #ocaml
zzz_ has joined #ocaml
_andre has joined #ocaml
darkf has quit [Quit: Leaving]
yacks has quit [Quit: Leaving]
yacks has joined #ocaml
<gasche> MarcWeber: I can't really make sense of the first code snippet you posted
<gasche> you don't have deriving error, you have a syntax error
<gasche> so it doesn't mean that deriving failed -- maybe fixing the syntax would make it work
<gasche> (that's what I would expect)
<gasche> independently, you should *not* have to hack deriving sources to implement all the deriving you wish for
<gasche> for functions, you wished to override the behavior that was implemented
osnr has joined #ocaml
osnr has quit [Changing host]
osnr has joined #ocaml
<gasche> but in the general case I'm quite sure deriving's design has covered the situation where, locally, you want to provide a Show instance without using automatic derivation, by writing the printer yourself
<gasche> companion_cube: your friend should have a better knowledge of source package managers
<gasche> emerge is by no means a reference in its beauty of design or implementation
<gasche> BSD systems have used source-based package managers for decades, from which emerge drew its inspiration
<gasche> Gentoo's implementation has some nice sides, and a lot of community rants, foul history and social drama
<gasche> using it as the prototypical source package manager is a bit presumptuous, I think
<gasche> now, on technical grounds: OPAM tries to use proper dependency/metadata handling as proposed by the Mancoosi project
<gasche> I think this alone warrants a novel implementation
<gasche> (of course that would not be necessary, had maintainers of existing package managers agreed to look outside their closed garden and collaborate with mancoosi; but that didn't happen, and the joke is on them)
ttamttam has quit [Quit: ttamttam]
jcao219 has quit [Ping timeout: 248 seconds]
talzeus has quit [Remote host closed the connection]
osnr has quit [Ping timeout: 240 seconds]
f[x] has quit [Ping timeout: 260 seconds]
osa1 has joined #ocaml
chambart has joined #ocaml
ttamttam has joined #ocaml
troydm has quit [Ping timeout: 248 seconds]
csakatok_ has joined #ocaml
ttamttam has quit [Client Quit]
<ggole> whitequark: do tell
csakatoku has quit [Ping timeout: 252 seconds]
cago has quit [Ping timeout: 268 seconds]
<whitequark> ggole: ok. I'll try to be concise. (The complete explanation is rather lengthy.)
<whitequark> First, some background. I want to have a modern, statically typed language with memory safety, higher order functions and type inference, and its type system should be expressive enough to allow the compiler to produce efficient code.
<ggole> OK.
<whitequark> (One of the targets is STM32F103RB: 20 KB of RAM and 128 KB of Flash.)
<whitequark> however, I do not want to give up on metaprogramming; rather, I want to have the best breed of metaprogramming which is possible given the constraints.
<ggole> That's pretty minimal space in which to run a GC
<whitequark> (GC) region-based memory management, plus statically allocated data, plus a GC if your workload doesn't fit into the above
gnuvince has quit [Ping timeout: 264 seconds]
<whitequark> GC is optional and is expected to be used sparingly, even if included.
<ggole> Right
ttamttam has joined #ocaml
cago has joined #ocaml
<whitequark> so. a huge number of tools in C-land perform metaprogramming by syntactic expansion or code generation.
ontologiae has quit [Ping timeout: 260 seconds]
ttamttam has quit [Client Quit]
<whitequark> the C preprocessor (horrible.), lex/yacc, stuff like Qt's metaobject compiler, and so forth.
<ggole> Yeah, it is pretty awful
<whitequark> essentially, it can be said that all these source-based tools are crude 'macros'; they have source, in whatever form, as the input, and emit source too.
csakatok_ has quit [Remote host closed the connection]
<whitequark> they also perform a really strange, crude and limited form of introspection (e.g. C preprocessor can fold certain constant expressions).
<whitequark> I think that in order to perform metaprogramming (as opposed to language extensibility), there is no need to use macro-like constructs; in fact, it is best to avoid them, because metaprogramming is not fundamentally tied to syntax
<ggole> Yes, so they can index types (arrays).
<whitequark> yeah. the compiler also performs constant folding, which is not guaranteed.
<whitequark> one of my 'favorite' examples of why this approach is wrong is as follows:
<whitequark> avr-libc has a function 'delay', which essentially performs busy-looping, and loops for T seconds on a core with frequency F.
<whitequark> so, it is naturally implemented as a C macro, which expands to two nested loops, and a calculation of how many CPU cycles T corresponds to.
<whitequark> as C does not guarantee constant folding, and that code relies on constant folding, it will refuse to work with -O0.
<ggole> Oof
<whitequark> in various funny ways; in particular, a non-folded floating point number will cause your code to be linked with libm.
avsm_cl has quit [Quit: My Mac Mini has gone to sleep. ZZZzzz…]
<whitequark> which will explode your code size and, of course, not work as intended, because it's a software FP implementation.
<whitequark> there is a really large number of such examples, relying on essentially unspecified behavior.
<whitequark> in C++ land, there are some operations which can be used in guaranteed-static computations, but this set is limited
<ggole> Yeah
<whitequark> despite that, the template system is Turing-complete, so you can write your own--it will just be ugly.
<whitequark> (I think static_if was voted down for some reason in C++14. A pity.)
<ggole> They have constexpr, which is essentially adding certain fixed terms to the type language
<ggole> Mostly so they can become template arguments
<whitequark> another thing which C++ really ought to have is introspection for class layout; this way, external code--like GCs--can avoid "guessing" ABI and just use its implementation in the compiler
<ggole> (Fixed terms meaning terms that only evaluate to one value)
<whitequark> instead, they voted to add several additional restrictions on pointer hiding to accommodate conservative [essentially, Boehm] GCs.
<whitequark> ok. I hope I've explained why metaprogramming with macros is wrong :)
<ggole> Well, wait, Lisp has macros too
<ggole> But they don't suffer the same problems
<whitequark> I agree; however, Lisp has a long history of dealing with hygiene
<ggole> True
<whitequark> and there *are* problems with this approach too, they're just much less drastic with Lisp, due to various factors
<companion_cube> gasche: interesting, thanks
<whitequark> ggole: plus, it's much easier if it's statically typed.
<whitequark> er, dynamically
ggole_ has joined #ocaml
<whitequark> ggole: ok. now to my approach :)
<ggole_> What's with my internets today O_o
<gasche> whitequark: did you know there is a port of OCaml on PIC, that can run programs with 4K of RAM?
<whitequark> (We can discuss Lisp's macros later.)
* ggole_ goes to check the magic box
<ggole_> Sec, be right back
<whitequark> gasche: I didn't. However, that is a broad statement.
<whitequark> I know a port of Scheme to PIC with similar requirements; however, it can allocate just 1k cells and is insanely slow.
<ggole_> Hmm, seems ok
<whitequark> my goal is to get rid of C code (in a perfect situation with no legacy code); thus, language semantics must be adjusted accordingly.
<gasche> the PIC OCaml runtime is relatively naive, being written in PIC assembly, so you won't get the allocator performance of the usual OCaml runtime
<ggole_> gasche: thanks, I'll take a look
<jdoles> whitequark: what are you replacing C with?
ggole has quit [Ping timeout: 255 seconds]
<whitequark> jdoles: Foundry [my language]
<whitequark> plus I'm explaining how it works right now :)
<whitequark> gasche: thanks for the link.
<jdoles> whitequark: does it have anything over Rust (simple base line)?
<ggole_> whitequark: agreed that macros are "easier" in dynamic languages
<whitequark> jdoles: yes, much more powerful metaprogramming
<whitequark> jdoles: and an stdlib which is specifically developed for embedded applications.
<ggole_> Certainly defmacro is more convenient than camlp4
<jdoles> whitequark: and you also have a type-system which is at least as expressive?
<ggole_> (Of course, partly this is a matter of language integration)
<jdoles> whitequark: and you have zero cost abstractions?
<whitequark> gasche: I think that OCaml-the-language can perform very well in limited environments. However, its library will need to be adjusted accordingly... memory management and recursive types, mainly, are a problem.
<gasche> yeah
<jdoles> (The only reason Rust even remotely has a chance is that, like C++, it offers zero cost abstractions when designed well. )
<whitequark> jdoles: yes (I do not yet have linear types, but they're on the roadmap); and yes.
<gasche> hm
<companion_cube> whitequark: there is a lua implementation for pic, too
<whitequark> companion_cube: I bet it's interpreted
<gasche> your blog post has a list of requirements that would be reasonable for a LuaJIT target
<whitequark> gasche: JIT.
<gasche> except maybe memory allocation due to JIT
<gasche> well
<companion_cube> whitequark: I don't know
<jdoles> whitequark: "on the roadmap" sounds more like you released it too soon.
<gasche> ok you want to have read-only code regions?
<whitequark> gasche: you can't really do JIT on an embedded device.
<whitequark> not enough RAM, not enough resources, worst of all... not realtime
<whitequark> jdoles: I didn't release it yet.
<gasche> so Rust?
<ggole_> Doesn't seem stable enough at the moment
<whitequark> jdoles: I wrote an initial prototype, verified my main idea, and am now writing a production-quality variant.
<ggole_> Although any other new language would face the same problem
<jdoles> whitequark: with what purpose?
<gasche> note that I'm not trying to imply that creating new languages is a bad idea
<gasche> (by proposing existing alternatives)
<jdoles> whitequark: do you want to build open-source fame, sell it to enterprises, ..?
<whitequark> jdoles: it's a commercial project; I don't think that anything in embedded can live just on the community.
<gasche> I think there is a middle-ground between "just on the community" and "with no community"
<whitequark> I'm building a language which is suitable for practical applications... there are quite a few projects which are essentially just 'dumb interpreters' for some familiar language out there. Not my goal.
<whitequark> gasche: right. I didn't say it isn't open-source. You can combine the two.
<jdoles> I don't see how a language can be open-source and commercial at the same time, myself.
<jdoles> People have tried that and they all are not successful.
<gasche> like F#?
<whitequark> jdoles: there's more to it than just language.
ttamttam has joined #ocaml
osnr has joined #ocaml
<jdoles> gasche: do you really think that if an independent developer like whitequark had created F# a few years ago, anyone would have cared?
<whitequark> will it run on $your_favorite_chip? will it have $tooling? etc.
<jdoles> gasche: only reason people care is because Microsoft said so.
<gasche> that's probably right
<ggole_> It's not impossible. Commercial interests pay Mike Pall to work on lua, for instance, even though they won't own the result
ttamttam has quit [Client Quit]
<gasche> the Scala startup may be worth mentioning as well
<gasche> (in fact the OCaml development is also funded through the OCaml Consortium, that is commercial interest)
<Drup> gasche: like Opa ? ;)
<jdoles> gasche: OCaml is funded through the French government mostly.
<gasche> jdoles: it is unclear to what extent this is true
<jdoles> gasche: just track all the tax dollars.
<gasche> well
<gasche> I don't think it was true this year, for example
<gasche> it used to be true, but that's not clearly the case anymore
<jdoles> gasche: there are 3 successful OCaml companies now, which is more than Haskell has :P
cago has quit [Ping timeout: 246 seconds]
<whitequark> don't forget Java
<ggole_> ocamlpro isn't government funded afaik
<jdoles> gasche: perhaps it's changing now for OCaml, but the tool support for OCaml still isn't very good.
<gasche> it still doesn't mean OCaml is funded by the government "mostly"
<gasche> I'm just saying that if you look at how the people maintaining the compiler get their funding
ontologiae has joined #ocaml
<jdoles> OCaml had very basic run-time problems until as recently as last year.
osnr has quit [Ping timeout: 255 seconds]
<gasche> you don't find an obvious majority of government-funded activity
csakatoku has joined #ocaml
<gasche> (of course you can debate whether Damien Doligez and Xavier Leroy are government-funded; in practice their development activity is reduced enough to be considered funded by the OCaml consortium)
<jdoles> I think there is room for a different ecosystem, but there is no room for one-man shows like what whitequark is going to do.
<gasche> well
<whitequark> gasche: (PIC OCaml) it's bytecode based; so for example for tight loops, you still have to drop down a level, to assembly, I guess.
csakatoku has quit [Remote host closed the connection]
<gasche> whitequark: right
<ggole_> Like Python or Ruby?
<gasche> jdoles: I don't try to tell people to stop doing what they're interested in anymore
<gasche> at best, they have a reasonable success
<ggole_> (Actually I doubt such success stories would occur so easily today. But the point stands.)
<gasche> at worst, they did interesting stuff and learned a lot
<jdoles> If you are into writing compilers, you can take an existing language and write a better compiler for example.
malo has joined #ocaml
<whitequark> gasche: and I wonder how well it handles interrupts.
<whitequark> jdoles: it started that way.
<jdoles> That way, you can say -- like Intel does -- that companies can save energy.
<whitequark> as a Ruby compiler, that is.
<whitequark> however, it quickly became obvious that this approach doesn't make a lot of sense.
<jdoles> whitequark: so, that was a failure/success?
<jdoles> whitequark: I am sure you learned something from the experience and it's better than doing nothing.
<ggole_> Writing a better compiler is no recipe for success
<whitequark> jdoles: I went through several iterations before arriving at the model I have now.
<ggole_> Look at all the failed attempts for Python
<whitequark> now I understand what semantics I want, how to implement them, and why they are a good fit for the problem domain.
<jdoles> ggole_: but those "better" compilers never have been better.
<ggole_> Yes, but I think not for lack of effort
<jdoles> ggole_: I tried several when they were already shouting how great they were and found nothing.
<whitequark> I'm working on this for about 1.5 years already... I have indeed learned a lot.
<whitequark> And the things I've learned lead me to the conclusion that my ideas make sense and, with good enough execution, may have some success.
<ggole_> PyPy seems pretty complicated and doesn't seem to get great performance
ontologiae_ has joined #ocaml
<gasche> whitequark: I do find this reasonable
<jdoles> If you ever tell to other people how much better your compiler is and then they try it and it fails, you have lost all credibility.
<whitequark> ggole_: PyPy seems like a much better fit as a dynlang VM than, for example, JVM.
<whitequark> I can elaborate if you want
<ggole_> But maybe that's because they don't have deeply experienced people like Pall or Bak driving things
<whitequark> I know some of the details of how JRuby and Topaz [Ruby on PyPy] work, and what is problematic in those cases.
<jdoles> All a JIT needs to do is compile to native code at some point.
<ggole_> whitequark: what do you think of their projection stuff? Is it a sane way to architect a jit?
<jdoles> Introducing yet another layer (JVM) into it can only make things slower.
<whitequark> ggole_: I think it's a very interesting way to make a JIT
<whitequark> however that's not the most important part in comparison with the JVM, or other existing VMs.
<ggole_> Oh?
ontologiae has quit [Ping timeout: 260 seconds]
<whitequark> one of the worst problems for JRuby is object representation, which is forced on it by the JVM, and in particular its very rigid object system.
<whitequark> (method calls aren't a big problem since invokedynamic; indy basically reduces overhead on monomorphic/bimorphic call sites to zero.)
<ggole_> Right
<whitequark> so, for example, in Ruby everything is an object
<ggole_> You really want hidden classes or similar machinery
<ggole_> object maps or whatever Self called them
<whitequark> an object has: a class pointer, an ivar table, a flags field... plus JVM-internal stuff.
<whitequark> now take a ruby Integer. it must be an Object, because otherwise the JVM won't be able to employ general method dispatch on it
<whitequark> but now every integer has a size of what, 64 bytes (I think), and has to be allocated on the heap.
<whitequark> since it is allocated on the heap (no user-defined value types in the JVM), you get:
<ggole_> Yeah, no fixnums
<whitequark> a) memory access penalty
<whitequark> b) GC penalty
jcao219 has joined #ocaml
<whitequark> c) most importantly, no aliasing guarantees (that JVM can see).
<whitequark> oh, and of course JVM cannot optimize integers by placing them to registers
<ggole_> It only wants to do that for primitives?
<whitequark> this applies to all small objects as well, i.e. most of Ruby objects ever created. (One Rails page load creates 500k of them or so.)
<whitequark> ggole_: exactly.
<ggole_> (and references: but not the contents of references)
<ggole_> Right
<ggole_> Yeah, that would suck pretty hard
<whitequark> ok, next part
<whitequark> environments.
<whitequark> in JVM, you cannot modify a value in an outer scope, so JRuby naturally maintains its own environment stack based on JVM objects
<whitequark> so, it's allocated on heap, there are no aliasing guarantees, and it is invisible to JVM's optimizer.
<whitequark> due to various... peculiarities in Ruby, JRuby cannot perform environment elision itself for *any* scope, which also sucks pretty hard.
<whitequark> (But this could be fixed with several small changes to Ruby, which will also make it 150% more sane.)
<ggole_> Sounds pretty grim
<whitequark> it is.
<whitequark> now, PyPy lets you control the layout of both objects and environments however you want.
ttamttam has joined #ocaml
<ggole_> So that's why the project to run ruby on pypy is interesting?
<ggole_> I see
<whitequark> even better, it will perform optimizations on your environments, but if for some reason you need to get an elided value back, it will transparently invoke the computation
<whitequark> so you also get free deopts.
<whitequark> it's as powerful as V8, but is not tied to JS.
<whitequark> so, PyPy is really interesting, as a research project and platform for future VMs.
<whitequark> however, I don't think it will be used in production in foreseeable future.
<whitequark> it took JRuby about six years to get to this point, with hard work of very skilled people... and almost none of that is related to VM neatness.
<whitequark> mostly, it is reverse-engineering and reimplementing often insane Ruby MRI APIs.
<ggole_> The C stuff?
<ggole_> That seems like an awful problem to have
<whitequark> another project to produce a meta-circular implementation, Rubinius, has existed for about as long
<whitequark> but it still does not run the majority of production code.
<whitequark> ggole_: it is.
darkf has joined #ocaml
ttamttam has quit [Client Quit]
<whitequark> Ruby doesn't have a specification (which works for a desktop/server impl), and the default implementation is not friendly.
<ggole_> It seems that writing an interpreter in C and implementing all the "slow" stuff in C is a bad idea for performance in the long run
<whitequark> oh, you won't believe
<whitequark> JRuby is *faster* than the C impl
<gasche> the main problem is not having proper language design
<ggole_> I have no problem believing that
<whitequark> despite the C impl being, well, in C, having fixnums and flonums and other immediates, being optimized by the C compiler, containing a bytecode interpreter with inline caches and generally being highly optimized
<whitequark> much more than e.g. CPython.
<whitequark> (ugh. CPython is even worse than Ruby MRI. That makes me sad.)
<whitequark> gasche: you are completely right.
<ggole_> It's not entirely that simple: JS is fairly fast now, but you couldn't really accuse it of being too well designed
<whitequark> ggole_: oh no, there is a problem with JS being fast
<ggole_> On the other hand, there is lua
<gasche> LuaJIT became faster than most JS engines with a fraction of the effort
<whitequark> that problem is that you cannot rely on JS being fast.
<gasche> because Lua had a sane design process to begin with
<whitequark> I do not think that anyone except JS engine devs has a working mental model of how to write fast JS.
<gasche> (and because Mike Pall is a great compiler writer, of course)
<ggole_> But they've done much better than the Python or Ruby guys
<whitequark> ggole_: it's also much, much smaller than Python/Ruby.
<ggole_> (Possibly because of large amounts of invested effort though)
<ggole_> Oh, sure
<gasche> yeah I think the effort is key
<gasche> and maybe also the experience
<whitequark> and no backwards compatibility.
<whitequark> Ruby's got a decade of code to support.
<gasche> that interacts with stuff outside the language with C bridges
<gasche> like Python, that is a major problem for alternate implementations
<whitequark> gasche: oh, this is mostly solved in Ruby now.
<gasche> JS was extremely limited and well-sandboxed in comparison
<gasche> whitequark: interesting, can you elaborate?
<ggole_> See, lua has very strong C support. But it is not jammed into the language as a vast standard library.
<whitequark> gasche: mainly, FFI.
<whitequark> and libraries which provide indirection and let you choose between implementations... MRI C bridge, FFI bridge, native Java one
<whitequark> it used to be a big nuisance several years ago, but not today.
<whitequark> there is also interesting ongoing work on enhancing performance of both FFI bridges and C bridges for alternative implementations, but I do not know its details well enough.
<ggole_> Do JITs tend to know enough about C calling conventions to emit direct calls?
jcao219 has quit [Ping timeout: 246 seconds]
<ggole_> Or do they go through wrappers?
<whitequark> ggole_: yes, the new Java JIT can do that, AFAIK.
<whitequark> well, there's a set of preconditions, but you can write code which does it properly.
<ggole_> Right
<whitequark> *Hotspot JIT.
<ggole_> Seems you could do things like construct C objects directly if you had all the right facilities
<ggole_> It's just code, after all
<whitequark> not straightforwardly in Java, at least.
<ousado> you'd need to know how the compiler does the layout
<whitequark> it does have unsafe memory operations, but I bet that comes with a bag of problems on its own.
<whitequark> ousado: it's documented in the ABI.
<ggole_> C is simple enough that it is plausible
<whitequark> unlike C++, where compiler has a lot of leeway, it's simple for C.
<ggole_> There are some tricky things like struct return that you have to be careful with
<whitequark> there still is a number of extensions and badly behaving features, but generally it tends to work rather than not
<whitequark> I'd say that a bigger problem is interacting with code which wants to do its own memory management
<whitequark> or even just captures pointers
<whitequark> because: 1. ownership 2. Hotspot's got compacting GC.
<whitequark> I don't think anyone solved this nicely.
<ggole_> You can pin objects iirc: but I believe that is expensive
<whitequark> hm, never heard about that in Hotspot
<whitequark> but I don't really know it that well, so maybe.
<whitequark> JRuby used to have a MRI C bridge emulator; it issued handles and used an additional level of indirection
<whitequark> to map Java<>pseudo-C structs
<whitequark> needless to say, a lot of macro tricks were involved in order to allow "access" to "members" of "structures".
<whitequark> it got bad enough that JRuby disabled the bridge by default. oh also, it used JNI, which does use wrappers for JIT calls (AFAIK).
<whitequark> which means it was also slow as a dog.
<ggole_> Ugh
<whitequark> a lot of things around ruby are "ugh." :(
<ggole_> Interop is a tricky problem in general
<whitequark> yes; but here it is made worse by general lack of design, and the fact that ruby core team is all-Japanese.
<whitequark> and, while all of them are excellent C hackers, they don't really know compiler design or interface design that well. especially low-level parts.
<whitequark> MRI's GC is not even valid C.
<whitequark> um... it relies on a certain construct which, in practice, on most archs, forces the compiler to spill lvars to stack
<whitequark> which allows its conservative (wrt/ stack; it's precise on heap) GC to collect roots.
<ggole_> It's odd that so many poorly implemented dynamic languages appeared and gained success after so much earlier work had been done on good runtimes
<whitequark> ggole_: I think I have an explanation for that.
<ggole_> If you go and read things like the Orbit papers, there was a lot of strong work there
<ggole_> All ignored :/
<whitequark> ggole_: a good demonstration of that would be Ruby's parser.
<whitequark> which I was unlucky enough to attempt to reimplement sanely. I will probably have nightmares for the rest of my life.
<ggole_> I've heard stories about parse.y :)
<whitequark> oh, I probably have the most amazing story about parse.y.
<whitequark> in order to implement a particular feature of lexer, I've studied its source for quite a bit of time, but no luck.
<whitequark> so, I went on the net and lurked; and I found a chapter of a Japanese book which explained how the lexer worked.
<ggole_> Wow. It's huge.
<whitequark> naturally, I had to hire a translator: http://whitequark.org/blog/2013/04/01/ruby-hacking-guide-ch-11-finite-state-lexer/
<whitequark> it's a giant article which goes into great detail and actually explains the feature.
<whitequark> however... it was written in 2002
<whitequark> and the author reverse engineered parse.y himself in order to write it.
<whitequark> there's a particular paragraph at the end where he wonders what a certain state of the lexer does, and he thought about it for a week
<whitequark> only to come to the conclusion that it does absolutely nothing and is just an artifact, a piece of old code.
<ggole_> Incredible.
eikke has quit [Ping timeout: 267 seconds]
<whitequark> which can be safely removed.
<whitequark> now, fast forward to 2013... it is STILL THERE, verbatim
<whitequark> generally, the lexer is an insane FSM contraption, with more than 80 goto's and abuse of LALR(1) lookahead parser actions to mutate the lexer's state
<whitequark> *lexer/parser
<ggole_> Oh yeah. You wouldn't edit something like this if you could help it.
<whitequark> Ruby's grammar is also context-sensitive
<whitequark> like, truly context-sensitive; it cannot be lexed without parsing.
<whitequark> foo / # 2 /
<ggole_> It amazes me that people continue to make that mistake
<whitequark> it's a division if foo is a variable, and a method call with a regexp as an argument if it is not a variable.
gnuvince has joined #ocaml
justdit has quit [Ping timeout: 240 seconds]
darkf has quit [Quit: Leaving]
<whitequark> there's also a quirk where the parsing depends not only on unbounded preceding context, but also on a single-line lookahead
<whitequark> eg
<whitequark> foo
<whitequark> .bar
<whitequark> is a method call.
<whitequark> there are also quirks where symbols like % < ( [ { are all overloaded 2 or 3 times in various contexts in the grammar
<whitequark> and you can use *any* sequence or character in order to delimit strings or heredocs
<whitequark> even, for example, literal \0 in the source
<whitequark> %\0 foo\0 is a valid string
<whitequark> or %%%
<whitequark> or a backslash, which in this case loses its escaping function: %\\
<ggole_> Like sed
<whitequark> kinda
<ggole_> But... in a general purpose language O_o
<whitequark> there are also flip-flops like sed in it :)
<whitequark> 5.times { |i| p i if i==3..i==5 }
<whitequark> it hides a finite automaton with 3 or 4 states, with the state being stored in a hidden variable.
<whitequark> there are also three syntactically distinct ways to make a closure
<MarcWeber> gasche: http://dpaste.com/1259751/ Now it should make more sense.
<ggole_> Guess I'm not gonna run out and learn Ruby.
<whitequark> foo { |var| }, foo do |var| end, ->(var) { }
<MarcWeber> But you're right, maybe I should just ask on the github page how to do it ..
<MarcWeber> ggole_: Why not? Ruby gets a lot of small tasks done nicely
<whitequark> ggole_: I just started.
<whitequark> oh wait, it's four ways to make a closure
<whitequark> because also ->(var) do end; and you can omit () in the -> var do end, which leads to extra awkwardness in the lexer.
<whitequark> {} and do..end also have different precedence; this is quite important, because ruby lets you omit parens practically anywhere
<whitequark> oh, and do can also begin the loop body; while true do end. so, do is overloaded three times.
<whitequark> now imagine this: "while foo do end"
<whitequark> or "while foo -> do end"
<whitequark> in order to handle this, lexer has two "stack states", which are also mutated in the parser actions, and sometimes lexer actions.
<whitequark> the conflict is resolved in... hm
<whitequark> I have honestly no fucking clue how it is resolved. It is resolved somehow, I guess.
<whitequark> Despite writing a parser.
<whitequark> *ahem*. In order to support all this weirdness, there are a lot of essentially duplicated LALR(1) rules
<MarcWeber> I never looked at the parser ..
<whitequark> and it is not like you can safely transform them, because, remember, there are parser actions which depend on whether the lookahead was performed at that step or not.
<whitequark> did I mention that Ruby has 137 distinct lexemes?
jcao219 has joined #ocaml
<ggole_> I am beginning to regret bringing the subject up. o_O
<whitequark> in particular, FOUR variants of "do", because three are too simple, and another one was left, where the ambiguity is resolved at parser level.
<MarcWeber> But back to ocaml: I try to debug an issue within haxe (which is written in ocaml). the haxe compiler behaves differently on some small input difference, and I want to understand why. So my attempt is to output intermediate results to understand when the difference happens.
<MarcWeber> whitequark: Create "why ruby sucks" page :)
<whitequark> and: tLPAREN2 tRPAREN tLPAREN_ARG tLBRACK tLBRACK2 tRBRACK tLBRACE tLBRACE_ARG.
<jdoles> MarcWeber: +1
<MarcWeber> But any language sucks in many ways .. luckily you don't have to cope with all details always.
<whitequark> ggole_: there's for example also Proc#binding, which lets you pry open the environment of any closure at any point, and inhibits most interesting optimizations.
<MarcWeber> But I was very surprised about the design of rubys rake make system. They introduced the '-m' switch to make all dependencies "parallelizable" magically - asking the end user to find out whether this is ok or not ..
<whitequark> but I did not yet explain the conclusion
<whitequark> which is kind of surprising.
<MarcWeber> So move on - or paste the url to "why-ruby-is-(great/sucks)" page :)
<whitequark> users absolutely love this insane rube goldberg machine, because it is so polished by generations of japanese programmers that you can barely notice any of it while *using* the language, as opposed to implementing it.
<whitequark> of course, it leaks. lack of good tools. bad syntax error reporting. etc. but this actually looks relatively minor.
<MarcWeber> Anyway - trying to dump intermediate states with 'show' kinda failed - trying Std.dump makes the resulting executable segfault. So I tried ocamlc (instead of ocamlopt) - get an executable file which shows the help output, but behaves totally differently ..
* whitequark sighs
<whitequark> ggole_: shall I continue with the semantics of Foundry? :)
<ggole_> I'm about to be afk for a few hours: it'll have to be later
<whitequark> sure.
* whitequark looks at the wallpaper, burned through by the intense hatred
<MarcWeber> whitequark: It would be more interesting to read about how to write a 'python/ruby/..' like language which is equally polished (user view), but easier to implement :)
<whitequark> I think an explanation of why I switched to ocaml is now entirely superfluous.
<ggole_> Haha
<whitequark> MarcWeber: oh, that's easy
<ousado> :)
<whitequark> you pour thousands of hours of work in it.
<ggole_> OK, I'm out: see you all later (few hours?)
<ggole_> Thanks for the amusing rant.
<ggole_> o/
<whitequark> :D
ggole_ has quit []
avsm_cl has joined #ocaml
mika1 has quit [Quit: Leaving.]
smondet has joined #ocaml
jcao219 has quit [Ping timeout: 240 seconds]
pootler_ has quit [Ping timeout: 248 seconds]
pootler__ has quit [Read error: Operation timed out]
travisbrady has joined #ocaml
mika1 has joined #ocaml
<whitequark> jdoles: to summarize "why"... right now, embedded is dominated by C/C++, and neither is a particularly good language for the task. (UB; poor builtins/intrinsics; poor type system; not even optional safety.)
<whitequark> so, I very strongly believe that there is room for better languages in the field.
<Drup> whitequark: I didn't really follow the conversation so sorry if it's a stupid question but, isn't ATS exactly that ?
<whitequark> oh, ATS is an interesting approach.
<jdoles> whitequark: for really embedded systems, there exist verification systems too.
<whitequark> Drup: but it requires really radical changes in the mindset.
osa1 has quit [Ping timeout: 260 seconds]
<whitequark> jdoles: I'm well aware of it.
<jdoles> whitequark: the kind of tools where you can prove by the push of a button in the cloud that it all works like specified.
<Drup> whitequark: probably, I don't know enough on low level programming languages to tell.
<whitequark> Drup: you likely will have a very hard time trying to help a C programmer form a mental model of ATS good enough to write actual software. That is my opinion.
<whitequark> jdoles: this is an interesting claim. source?
<Drup> whitequark: and you think it's better for Rust (or Rust-like languages) ? I'm not really sure about that
<whitequark> Drup: I do think it is better for Rust. Rust is quite close to modern C++ in a lot of places.
<jdoles> whitequark: it only "works" according to first-order logic with modal operators.
<jdoles> whitequark: i.e. basically the set of formulas which an automated theorem proving system can efficiently work with.
<jdoles> whitequark: lots of non-trivial properties don't work.
<whitequark> Drup: for example, shared_ptr/unique_ptr; C++11 lambdas; etc.
<MarcWeber> How would you go about finding out why ocamlc- and ocamlopt-compiled executables behave differently?
<whitequark> jdoles: (it) the verification system?
<jdoles> whitequark: but... since most of these embedded systems don't care about that, that's OK.
<jdoles> whitequark: basically.
<whitequark> yes, I can believe that you can build such a system.
<whitequark> hell, you can do much better--CompCert exists
<jdoles> whitequark: sure, but industry isn't that far yet.
<jdoles> whitequark: and industry wants cheap effective solutions.
<jdoles> whitequark: if their employees need to think, it's too expensive.
<jdoles> whitequark: or well, that appears to be the trend.
<Drup> it's not exactly the same
<whitequark> oh, it has always been a trend.
<whitequark> Drup: right. but Rust does not represent a fundamental change when compared with C++.
<whitequark> in this context, that is.
<whitequark> jdoles: talking about efficiency
<Drup> whitequark: no I mean, compcert is not a verification tool, it's a *sane* compiler but it doesn't verify that your program conforms to a given specification.
<whitequark> Drup: oh sure. but compcert itself is verified.
<whitequark> and it's a nontrivial piece of code.
ben_zen has joined #ocaml
osa1 has joined #ocaml
<amiller_> Error: The files bintree.cmi and bintree.cmi make inconsistent assumptions
<amiller_> unusual, right?
ollehar has joined #ocaml
ben_zen has quit [Ping timeout: 241 seconds]
djcoin has quit [Quit: WeeChat 0.4.0]
<amiller_> when i run ocamlobjinfo i get
<amiller_> Interfaces imported:
<amiller_> 6afce261465fc1561e6f1a630f88dc80Test
<amiller_> 11b4fcf2c4db919d20b8d85b3fcf289aTest
<amiller_> 36b5bc8227dc9914c6d9fd9bdcfadb45Pervasive
<amiller_> this is after compiling with the compile-libs-hack code
<amiller_> why two interfaces?
<avsm_cl> gasche: does the ocaml test suite need the compiler to be installed?
<avsm_cl> gasche: getting this old failure which normally means the wrong runtime .so is getting picked up: ... testing 'bigarrays.ml': ocamlcFatal error: unknown C primitive `caml_ba_dim_1'
<amiller_> okay i fixed it i think, the problem is that the camlspotter code runs Typemod.type_implementation twice
<amiller_> each time, it calls 'Env.save_signature' in the case that no explicit mli file is provided
<amiller_> a cheap trick is to override Clflags.savefiles
<amiller_> er write_files
csakatoku has joined #ocaml
malo has quit [Ping timeout: 246 seconds]
osa1 has quit [Ping timeout: 248 seconds]
<gasche> avsm_cl: it doesn't need to be installed anymore
<gasche> that's one of the nice things Damien did on the testsuite recently
<avsm_cl> gasche: hrm. this is on macppc, so i'll see if the test fails on x86 too
<gasche> Damien installed some continuous integration service (provided by INRIA), and we have hourly testsuite reports for various windows and linux builds, but no mac machine yet
<gasche> (the fault lies with Apple for making it so hard to get a macOS virtual image)
<avsm_cl> gasche: yeah, i saw the jenkins setup -- i'm just resurrecting some of my older ppc machines and seeing which ones are alive
<avsm_cl> i have a openbsd/macppc now, and a linux macppc64 here
<avsm_cl> should be able to hook them into running regularly along with the arm and sparcs soon
malo has joined #ocaml
Yoric has quit [Quit: Instantbird 1.5a1pre -- http://www.instantbird.com]
<gasche> does anyone know what's the proper way to credit a patch's contributor when committing to a SVN repository?
Yoric has joined #ocaml
ollehar has quit [Ping timeout: 264 seconds]
avsm_cl has quit [Quit: My Mac Mini has gone to sleep. ZZZzzz…]
raichoo has quit [Quit: leaving]
gnuvince has quit [Ping timeout: 260 seconds]
avsm_cl has joined #ocaml
mika1 has quit [Quit: Leaving.]
f[x] has joined #ocaml
gautamc has quit [Read error: Connection reset by peer]
skchrko has quit [Quit: Leaving]
gautamc has joined #ocaml
chambart has quit [Ping timeout: 264 seconds]
raichoo has joined #ocaml
<avsm_cl> thomasga: ha, now it fails if you don't specify enough tags
<avsm_cl> will think about what the subset relation is a bit more
<avsm_cl> i think the current behaviour's correct
troydm has joined #ocaml
pootler_ has joined #ocaml
pootler__ has joined #ocaml
<gasche> avsm_cl: thanks for the clean bug report; I'll let Damien handle this, so it may not get fixed for a few days
<avsm_cl> it's not a biggie, i was just making sure that the failure on macppc wasn't real
<avsm_cl> onto sparc now! the machine sounds like a plane
justdit has joined #ocaml
<avsm_cl> the ocamlot opam installations on these should be interesting… after a month of waiting for the build :-)
<gasche> is it so slow to compile?
<avsm_cl> 500MHz...
<gasche> well caml light used to compile on much less :]
<avsm_cl> i've stuck in a memory NFS swap, should speed it up
<avsm_cl> ocamlfind+oasis+ocamlbuild per flag. people got lazy :P
<gasche> I was surprised to find out that some code patterns consumed much more memory at compile-time than we expected
<gasche> avsm_cl: if you had a way to add "peak memory use" tracking to ocamlot compile jobs, that could be interesting
<Armael> ocaml on sparc ? neat
<gasche> one obvious thing is that the workaround we used for Camlp4 in trunk could easily be defeated by new optimizations that are too clever for their own good, and right now there is no monitoring in place to make sure memory consumption doesn't regress
<avsm_cl> good idea. should be easy to record with an asynchronous script. right now we've got just enough ocamlot up and running to do basic tests, but are scrapping the lwt code and moving to Async/Jenga for the builds
<Armael> i have access to an old sparc64
<avsm_cl> sparc64 is bytecode only, sadly
<Armael> it currently runs netbsd
<gasche> (i've been simply using /usr/bin/time -v and, on my machine, it reports peak memory usage)
<Armael> yeah, anyway, i don't have much use of it
<Armael> it's quite slow, like 266MHz
<gasche> avsm_cl: at which level is jenga involved?
<avsm_cl> coordinating the build tree (which is quite complicated: hundreds of thousands of tasks)
osnr has joined #ocaml
osnr has quit [Changing host]
osnr has joined #ocaml
<avsm_cl> compiler variants, different CPU/OS combinations, priorities, multiple versions, constraint exploration
<avsm_cl> turned out to be quite the interesting project
<gasche> ok
<gasche> so it's more orchestration than build management per se
<avsm_cl> it's both, since we want to archive all the results
<avsm_cl> one interesting thing that dsheets and i are considering is to map the SVN tree as a sparse filesystem of commit ids, and "fill in the blanks" with spare resources
<avsm_cl> so we could have an OPAM remote with every commit as a subdir, and have prebuilt ocaml binaries for every revision available for that OS
<avsm_cl> it would make bisecting considerably easier
justdit has quit [Quit: Lost terminal]
<gasche> indeed
<gasche> but when would you perform bisecting?
osnr has quit [Ping timeout: 240 seconds]
<gasche> if continuous integration catches regressions as early as possible, bisecting seems restricted to more manual interventions
<avsm_cl> gasche: initially just manual, eventually automated exploration in ocamlot
es917 has quit [Quit: es917]
tane has quit [Quit: Verlassend]
Yoric has quit [Ping timeout: 276 seconds]
Drup has quit [Ping timeout: 260 seconds]
JcGoood has quit [Quit: Quitte]
thomasga has quit [Quit: Leaving.]
Kakadu has quit [Quit: Konversation terminated!]
travisbrady has quit [Quit: travisbrady]
csakatoku has quit [Remote host closed the connection]
ggole has joined #ocaml
ollehar has joined #ocaml
ggole has quit [Ping timeout: 276 seconds]
gnuvince has joined #ocaml
ollehar has quit [Read error: Connection reset by peer]
travisbrady has joined #ocaml
jbrown has quit [Remote host closed the connection]
ggole has joined #ocaml
csakatoku has joined #ocaml
ontologiae_ has quit [Ping timeout: 268 seconds]
emmanuelux has joined #ocaml
osa1 has joined #ocaml
Yoric has joined #ocaml
skchrko has joined #ocaml
dsheets has quit [Ping timeout: 264 seconds]
eni has joined #ocaml
csakatoku has quit [Ping timeout: 240 seconds]
eni has quit [Read error: No route to host]
eni has joined #ocaml
mcclurmc has quit [Read error: Operation timed out]
osa1 has quit [Read error: Operation timed out]
osa1 has joined #ocaml
tane has joined #ocaml
gautamc has quit [Ping timeout: 264 seconds]
ulfdoz has joined #ocaml
osnr has joined #ocaml
osnr has quit [Changing host]
osnr has joined #ocaml
f[x] has quit [Ping timeout: 264 seconds]
csakatoku has joined #ocaml
osa1_ has joined #ocaml
csakatoku has quit [Ping timeout: 240 seconds]
osa1 has quit [Ping timeout: 276 seconds]
Yoric has quit [Quit: Instantbird 1.5a1pre -- http://www.instantbird.com]
ttamttam has joined #ocaml
ttamttam has left #ocaml []
ggole has quit []
q66 has quit [Ping timeout: 240 seconds]
ollehar has joined #ocaml
Kakadu has joined #ocaml
Fnar has joined #ocaml
q66 has joined #ocaml
Watcher7|off is now known as Watcher7
jbrown has joined #ocaml
Zeev has quit [Ping timeout: 240 seconds]
<jdoles> Can anyone look at this opam output? http://paste.debian.net/hidden/fb2a589d/
<troydm> yeah i can see text
<troydm> it's 41 lines long
<troydm> just kiddin
<troydm> check that you have libglut.so.3 in your /usr/lib/ dir
<smondet> jdoles: just installed lablgl with opam, had no error
<troydm> if it's named some other name just make a symbolic link
<smondet> (with freeglut3-dev ubuntu package)
<smondet> what are you using? (OS, gl, glu, glut)
<troydm> i think his libglut.so.3 library is not in LD_LIBRARY_PATH
<troydm> it might be in some obscure location like /usr/lib/gl
<troydm> or something like that
<jdoles> smondet: Ubuntu 12.04 LTS; the rest shouldn't really be relevant.
<troydm> do cd /usr && find **/libglut.so.3
<troydm> jdoles: you must have freeglut3 package installed
<troydm> sudo apt-get install freeglut3
<jdoles> troydm: /usr/lib/x86_64-linux-gnu/libglut.so.3
<troydm> ohh it's 64-bit and that's why it's not loading automatically
<jdoles> troydm: how can I show the path where ld searches?
<jdoles> troydm: (which has more elements than just those on LD_LIBRARY_PATH)
<troydm> do export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/:$LD_LIBRARY_PATH && opam install lablgl
<jdoles> troydm: I'd like a solution that works for everything.
<jdoles> troydm: and doesn't break anything else.
<troydm> just add the /usr/lib/x86_64-linux-gnu/ directory to your LD_LIBRARY_PATH
<troydm> there's no other way around it
<troydm> you should probably even add it to your .bashrc
ollehar has quit [Ping timeout: 240 seconds]
<troydm> however note you are installing 64-bit lablgl
<troydm> since your glut is 64-bit
<jdoles> troydm: a file below /etc/ld.so.conf.d contains /usr/lib/x86_64-linux-gnu/
ulfdoz has quit [Read error: Operation timed out]
<jdoles> troydm: why isn't that used then?
csakatoku has joined #ocaml
<troydm> jdoles: echo $LD_LIBRARY_PATH and make sure you'll see it there
<jdoles> troydm: ldconfig -v | grep libglut.so.3 shows that it can be found.
<jdoles> troydm: clearly, something is not working the way it should.
<troydm> jdoles: do a echo $LD_LIBRARY_PATH | grep x86_64
<troydm> do you see that directory there?
csakatoku has quit [Ping timeout: 256 seconds]
<jdoles> troydm: I didn't set it, by design.
<troydm> hmm, opam probably messes something up; you should ask the opam guys, open a bug ticket
<jdoles> Aren't they supposed to fix things instead of break things that already work?
<troydm> well, when i had a problem with opam i opened a ticket and they fixed the bug in a few days
<troydm> very quickly
<smondet> jdoles: just thought of a difference: which ocaml compiler are you using? (I am in a switch fully managed by opam)
<troydm> 4.00.1 i think
<jdoles> smondet: me too.
tane has quit [Quit: Verlassend]
Watcher7 is now known as Watcher7|off
Watcher7|off is now known as Watcher7
speredenn has joined #ocaml
f[x] has joined #ocaml
ulfdoz has joined #ocaml
metasyntax has quit [Quit: Leaving]
_andre has quit [Quit: leaving]
pkrnj has joined #ocaml
Drup has joined #ocaml
csakatoku has joined #ocaml
tane has joined #ocaml
csakatoku has quit [Ping timeout: 264 seconds]
pkrnj has quit [Ping timeout: 264 seconds]
wmeyer has joined #ocaml
<wmeyer> Drup: Hashtbl supports it, but don't worry: there is always a combined solution built from a list and a set. If performance is not critical, you can use that for now.
<wmeyer> I miss some functions in stdlib.
<wmeyer> but yeah, I'd go for a MultiMap if that were available ...
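[A minimal OCaml sketch of the kind of multimap wmeyer describes above; this is my own illustration, not code from the channel. Hashtbl already allows several bindings per key via Hashtbl.add/Hashtbl.find_all, and a persistent variant can be built by combining Map with lists. The module name StringMultiMap and its functions are made up for the example.]

    (* Persistent multimap sketch: a Map from keys to the list of their values. *)
    module StringMultiMap = struct
      module M = Map.Make (String)
      type 'a t = 'a list M.t
      let empty = M.empty
      (* add keeps all previous bindings for the key *)
      let add k v m =
        let existing = try M.find k m with Not_found -> [] in
        M.add k (v :: existing) m
      (* find_all returns [] instead of raising when the key is absent *)
      let find_all k m = try M.find k m with Not_found -> []
    end

    (* Hashtbl behaves as a multimap out of the box: *)
    let tbl = Hashtbl.create 16
    let () = Hashtbl.add tbl "x" 1; Hashtbl.add tbl "x" 2
    let both = Hashtbl.find_all tbl "x"   (* [2; 1], most recent binding first *)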
<wmeyer> gasche: I saw your commits, thank you. This time I was extremely unproductive :-) but I promise to do some work after getting a rest from the day job this weekend.
osnr has quit [Quit: Leaving.]
gautamc has joined #ocaml
<wmeyer> hi, anybody? :-)
<wmeyer> terribly boring Friday
ulfdoz has quit [Ping timeout: 264 seconds]
osa1_ has quit [Quit: Konversation terminated!]
ulfdoz has joined #ocaml
* wmeyer thinks Chopin played by Rubinstein is a great companion for boring Fridays
<Kakadu> wmeyer: hey
<wmeyer> Kakadu: hello
<def-lkb_> hi wmeyer
<pippijn> hi wmeyer
<wmeyer> hi def-lkb_
<wmeyer> hi pippijn
<wmeyer> I haven't been around for ages it seems like
mattrepl has quit [Quit: mattrepl]
<wmeyer> on this channel
<Kakadu> wmeyer: I have tried to google your monad extension with ppx but without success
<wmeyer> try this: http://www.google.com/search?q=omonad does it work on your machine?
<pippijn> wmeyer: what does your monad extension do better than the camlp4 monad extensions?
<wmeyer> Kakadu: it's sad, that you can't find it
<wmeyer> pippijn: only one thing: it does not require camlp4 (which replaces the whole parsing frontend)
<pippijn> ok
<wmeyer> so you say just : -ppx ppx_omonad
<wmeyer> and that's pretty much it
<pippijn> ok
<Kakadu> wmeyer: For some reason I had only tried pa_monad TT
<wmeyer> Kakadu: I have to position that webpage
<pippijn> I don't have many problems with camlp4
<wmeyer> pippijn: me neither
<pippijn> and debian is an idiot for having 3.12
<pippijn> so
<pippijn> no 4.00.1 for me
<wmeyer> but it's a very hardcore piece of code
<wmeyer> opam
<pippijn> yeah
mcclurmc has joined #ocaml
<pippijn> I'm not into opam, yet
<wmeyer> pippijn: too bad for you, i am very happy with opam
<pippijn> yeah, maybe I will get there someday
<pippijn> I like my packages in the system
<wmeyer> you can always use /usr/local as the root for opam
<wmeyer> i see no problem with that
<pippijn> maybe when there is a dh_opam tool that makes debian packages out of opam ones
Snark has quit [Quit: leaving]
<pippijn> then I will use it
<wmeyer> you say --root [your fav location]
ulfdoz_ has joined #ocaml
ulfdoz has quit [Read error: Connection reset by peer]
<wmeyer> pippijn: Why not wrap apt-get around opam?
ulfdoz_ is now known as ulfdoz
<wmeyer> or something similar
<gasche> there is an oasis-to-rpm converter in the works
<pippijn> wmeyer: that's what I meant by dh_opam
<wmeyer> so we now generate opam from oasis, then we generate a debian package out of opam; a bit crazy
<wmeyer> pippijn: ah I see, it would be nice
<wmeyer> but actually I don't rely too much on debian or ubuntu packages
<wmeyer> in the end i often compile stuff and put it under the user directory. However I tend to use a lot of -dev packages.
<pippijn> yeah, I don't
<wmeyer> I trust opam, and there was never a problem
<pippijn> good
<wmeyer> pippijn: how is your life? I see you disappeared from the horizon
<pippijn> it's ok
<pippijn> I'm thinking about strings right now
<pippijn> and numbers
<pippijn> in fact, I would like to ask a question
<pippijn> maybe you know the answer, or have some input
<wmeyer> yep, please go ahead
<pippijn> ok, so
<pippijn> given a number n, I want to know in how many ways you can get that number by adding and multiplying other, smaller numbers
<pippijn> integers
<pippijn> for example
<pippijn> 4 = 1*4
<pippijn> 4 = 2*2
<pippijn> 4 = 2*1 + 1*2
<pippijn> 4 = 1*1 + 1*3
<pippijn> 4 = 1*2 + 2*1
<wmeyer> oh, it looks like an undecidable problem; did you read about Diophantine equations?
<pippijn> what
<pippijn> why is that undecidable?
<pippijn> it's decidable for any n
<pippijn> for the number 4, there are 5 ways
<wmeyer> ok my bad, it looks complex but decidable
<pippijn> actually
<wmeyer> maybe i actually misinterpreted it
<pippijn> I'm not interested in all combinations
<wmeyer> ;)
<pippijn> I'm interested in the set of numbers involved in such combinations
<pippijn> sets*
<wmeyer> just one, the most efficient one?
<pippijn> no
<wmeyer> ah
<pippijn> all the sets of numbers
<pippijn> for 4, you get 5 sets: { {1}, {2}, {1, 2}, {1, 3}, {4} }
<pippijn> for 5, you get 6 sets, for 6 you get 10
Kakadu has quit []
<pippijn> each set can be used in an integer equation with a*e1 + b*e2 + c*e3 + ... = n
<pippijn> where e's are elements in the set and a, b, c are variables
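[A brute-force OCaml sketch of what pippijn is counting, added as an illustration and not taken from the channel. It assumes the question is "how many distinct sets of parts appear among all integer partitions of n", which matches the counts quoted above for 4, 5 and 6. The names count_part_sets and IntSet are mine.]

    module IntSet = Set.Make (struct type t = int let compare = compare end)

    let count_part_sets n =
      let seen = Hashtbl.create 16 in
      (* remaining: what is left to decompose; max_part: upper bound that keeps
         parts non-increasing, so each partition is generated exactly once *)
      let rec go remaining max_part parts =
        if remaining = 0 then begin
          let set = List.fold_left (fun s p -> IntSet.add p s) IntSet.empty parts in
          Hashtbl.replace seen (IntSet.elements set) ()
        end else
          for p = min remaining max_part downto 1 do
            go (remaining - p) p (p :: parts)
          done
      in
      go n n [];
      Hashtbl.length seen

    (* count_part_sets 4 = 5, count_part_sets 5 = 6, count_part_sets 6 = 10,
       matching the values quoted above. *)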
osnr has joined #ocaml
osnr has quit [Changing host]
osnr has joined #ocaml
<mfp> seems reducible to some (generalized) knapsack(-like?) problem
<pippijn> is it really that hard?
<pippijn> I thought it would just be some combinatorial formula involving factorials or something
<pippijn> mfp: I just want the number
<pippijn> the number of sets
<pippijn> not the sets themselves
<pippijn> I can put it differently
<pippijn> given a string containing only 'a', like aaaa
<pippijn> in how many ways can you split the string so that a regex e1*e2*e3*... yields the string
<pippijn> for example, (aa)* matches aaaa
<pippijn> (a)*(aa)* also matches aaaa
<wmeyer> pippijn: don't have the answer, can't even name the algorithm that would solve it, but I think it could first be tried with prolog
<pippijn> I can solve it
<wmeyer> then you have an optimal implementation of brute force
<pippijn> but I want to know the formula
<pippijn> I want to know what comes for 24 without trying
<def-lkb_> http://oeis.org/search?q=1,2,3,5,6,10,12,18,22,30,37,51&sort=&language=english&go=Search
<pippijn> def-lkb_: haha :D
<pippijn> that's awesome
<def-lkb_> awesome website indeed ;)
<wmeyer> lol
<pippijn> that sucks, though
<wmeyer> def-lkb_: I didn't know it
<pippijn> I thought it would be simple :\
ollehar has joined #ocaml
<pippijn> the mathematica code is cheating
<pippijn> Partitions[n]
<pippijn> "Cardinality of set of sets of parts of all partitions of n."
<pippijn> yeah
<pippijn> haha
xavierm02 has joined #ocaml
<xavierm02> hey
<pippijn> def-lkb_: it's insane that this website has exactly the function I was looking for
<jdoles> pippijn: you can get 4.01 working on Debian Squeeze if you want.
<pippijn> jdoles: from experimental?
<jdoles> pippijn: no.
<def-lkb_> pippijn: I am sad that it seems there is no simple function to compute the cardinality of the sets :/
<jdoles> pippijn: build it yourself.
<pippijn> def-lkb_: yeah :(
<pippijn> I really thought there would be
<xavierm02> About this ( http://caml.inria.fr/pub/docs/u3-ocaml/ocaml-mixins.html ), in "class virtual ['a] group", they use "method virtual zero: 'a". Why do they use a method and not a field?
<orbitz> Set doesn't have one? i thought it had a cardinal function
<pippijn> I wanted to include it in my paper
<pippijn> I think I'll go ask my math professor about that on monday
<jdoles> OCaml has virtual classes?
<pippijn> jdoles: virtual = abstract
<jdoles> pippijn: I see it now.
<def-lkb_> orbitz: I think pippijn would prefer avoiding computing the set itself :)
<jdoles> pippijn: why do they not name things the way the rest of the world names things?
<pippijn> well, I am computing the set
<jdoles> With their ordinateurs...
<pippijn> but I'm writing a paper about it
<def-lkb_> oh ok.
<pippijn> and I would like to say something about the cardinality
<pippijn> comparing it to 2^n
<mfp> pippijn: would an asymptotic approximation be OK? that seems doable
<pippijn> which is the number of combinations of those partitions
<pippijn> mfp: no
<xavierm02> Why don't they use "val virtual zero: 'a"?
<pippijn> because my program computes the sets
<pippijn> and I want to write something about the result
<def-lkb_> xavierm02: instance variables are private to the class (structural object types only expose methods, so everything exported is a method), and methods can have side-effects.
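[A condensed OCaml sketch of the point def-lkb_ is making; it is my own illustration in the spirit of the ['a] group example from the page xavierm02 linked, and the actual code there may differ. zero and plus are virtual methods so they appear in the object's structural type and can be supplied by subclasses; a val field would stay hidden from the object type.]

    class virtual ['a] group = object (self)
      method virtual zero : 'a
      method virtual plus : 'a -> 'a -> 'a
      (* a concrete method defined in terms of the virtual ones *)
      method double x = self#plus x x
    end

    class int_group = object
      inherit [int] group
      method zero = 0
      method plus = ( + )
    end

    let four = (new int_group)#double 2   (* = 4 *)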
<pippijn> I'm already doing the hardest work computing the actual sets in the program, so there is no point in having an approximation
foo303 has quit [Read error: Operation timed out]
* wmeyer too tired for this brain consuming discussion :-)
<mfp> the point of having an asymptotic approximation in closed form is being able to get the cardinality of those sets for large N; if you don't care about n -> inf, it's fine :)
<wmeyer> but the website is awesome
<pippijn> nothing useful on google: https://paste.xinu.at/0yxaP/
<pippijn> yeah, that website is awesome
<pippijn> mfp: yeah, I don't care about it
<pippijn> but
<pippijn> it would be interesting, still
<pippijn> officially I don't care, but unofficially I do :)
<mfp> :)
<pippijn> do you have an idea on how to get there?
foo303 has joined #ocaml
<mfp> I seem to remember something from TAOCP or concrete mathematics
<pippijn> I can't get over the awesomeness that is an "integer sequence search engine" :D
<mfp> about non-standard positional number systems
<mfp> where you do not have a unique representation
<xavierm02> def-lkb_ : thank you :)
xavierm02 has quit [Remote host closed the connection]
eni has quit [Ping timeout: 246 seconds]
<mfp> my best guess is: either the exercises of TAOCP vol 2 chapter 4 (positional number systems) or concrete mathematics ch 6 (special numbers), both of them far too dense for me on Sat 00:00 AM :-|
<ollehar> what's taocp?
<ollehar> nm, google
<orbitz> knuth
travisbrady has quit [Quit: travisbrady]
smondet has quit [Quit: Weeekend !]
speredenn has quit [Remote host closed the connection]
csakatoku has joined #ocaml
tww has joined #ocaml
<pippijn> I wonder if it can be proven that there is no formula for this sequence
csakatoku has quit [Ping timeout: 276 seconds]
<pippijn> I can't find any other resource on this particular sequence
<pippijn> def-lkb_: you know what's fun?
<pippijn> def-lkb_: I compute these sets with my regex engine
<def-lkb_> pippijn: :D… Are you going to attempt to prove that?
<pippijn> no
<pippijn> I have other things to do
<pippijn> more urgent things
<pippijn> to do with regexes
tww has quit [Quit: tww]
csakatoku has joined #ocaml
cdidd has quit [Read error: Connection reset by peer]
troydm has quit [Ping timeout: 248 seconds]
troydm has joined #ocaml
f[x] has quit [Ping timeout: 264 seconds]
csakatoku has quit [Ping timeout: 255 seconds]
tane has quit [Quit: Verlassend]
ulfdoz has quit [Ping timeout: 276 seconds]
chrisdotcode has joined #ocaml
ollehar has quit [Read error: Connection reset by peer]