The hint file [1] of my 2012 IOCCC entry provides a nice introduction to the binary lambda calculus that this awesome piece of work runs on.
All the lambda terms on that page were manually optimized for minimum BLC size. This C compiler term, on the other hand, was produced by several layers of translation, making it rather large. I suspect that a handwritten version could be written in a thousand pages of lambdas...
Btw, while it makes a nice stress test for LaTeX, I find it a bit odd that "pages of PDF" is the preferred way to express the size of the lambda binary :-)
Looking forward to seeing the binary lambda calculus back-end added to ELVM.
I actually mentioned your hint file in details.md. Quite a roundabout way to decode its secrets!
I too suspect that writing in lambda's native functional style could save a lot of space. Compiling lisp.c from the ELVM repository generates code much longer than LambdaLisp [1], which I believe empirically supports that point.
As for the pages of PDF: in mathematical terms, since any variable typesets with weight 1 regardless of its index, I believe the page count is close to an encoding that collapses all De Bruijn indices to 1; in other words, one that only weighs the complexity of abstraction depths and applications. Since that erases information about which variable is referenced, I would guess it's not a universal method for weighing lambda term sizes.
For LambdaVM programs in particular, however, since neither the memory initialization clause nor the instruction clause ever increases the maximum De Bruijn index, I believe both the BLC size and the "lambda page size" grow approximately linearly with the number of instructions, so I thought the page count would serve as a metric that is off by only a roughly constant factor.
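To make the comparison concrete, here's a small sketch (my own illustration, not anything from the actual toolchain) of the two metrics, using the standard BLC costs: a variable at 1-based De Bruijn index i encodes as 1^i 0 (i+1 bits), while abstractions and applications each add 2 bits. The page metric then behaves roughly like the variant that charges every variable the same constant cost:

```python
# Terms as nested tuples: ('var', i) with 1-based De Bruijn index,
# ('lam', body), ('app', f, a).

def blc_size(t):
    """Bits in the BLC encoding: variable i -> i+1 bits (1^i 0),
    abstraction -> 2 + body, application -> 2 + f + a."""
    if t[0] == 'var':
        return t[1] + 1
    if t[0] == 'lam':
        return 2 + blc_size(t[1])
    return 2 + blc_size(t[1]) + blc_size(t[2])

def flat_size(t):
    """Same costs, but every variable is collapsed to index 1 (2 bits),
    mimicking a metric that ignores which variable is referenced."""
    if t[0] == 'var':
        return 2
    if t[0] == 'lam':
        return 2 + flat_size(t[1])
    return 2 + flat_size(t[1]) + flat_size(t[2])

# Example: K = λx.λy.x encodes as 00 00 110, i.e. 7 bits.
K = ('lam', ('lam', ('var', 2)))
```

When no reduction step increases the maximum index, the two sizes differ by at most a bounded per-variable amount, which is why a linear page count tracks the linear bit count.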
As for the ELVM lambda calculus back-end, I'll be sending the pull request very soon!
Not to mention printing resources. I already had to pause to refill the black toner cartridge twice and reloaded the paper tray countless times, and the end is nowhere near.
I was deceived by a glimmer of false hope: a page full of ). Alas, I then glanced at the page number---far short of the 18000+ goal.
Not necessarily; this actually sounds like a great bootstrapping tool. All you need to write on a new platform is a BLC interpreter, which should be easier to implement. Using the C->BLC translation, you might be able to build something like TinyCC and work up from there.
I would think that the amount of RAM used by the lambda-monster (100+ GB to compile single-page C files) would make a practical application like that rather impractical, for the time being.
No; the term has no normal form. It contains many applications of the fixed-point combinator Y.
As a simpler example, here's a lambda term for reversing input:
Yes, let bindings would get translated like that, but some straightforward optimizations would be applied, such as size-reducing beta-reductions: stripping out unused let bindings, inlining the single-use ones, and also inlining the multi-use ones whose definition is smaller than the unary-encoded De Bruijn index that references it.
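That last size test can be sketched roughly as follows (my own formulation, assuming the standard BLC cost where a variable at De Bruijn index i occupies i+1 bits; a real pass would also have to account for the index shifts caused by removing the binder):

```python
def var_cost(i):
    # A De Bruijn variable at index i encodes in BLC as 1^i 0: i + 1 bits.
    return i + 1

def should_inline(def_size, uses, index):
    """Decide whether to inline a let-bound definition of def_size bits,
    referenced `uses` times at (roughly) De Bruijn index `index`."""
    if uses == 0:
        return True   # dead binding: strip it out entirely
    if uses == 1:
        return True   # single use: inlining removes the binding overhead
    # Multi-use: inline when each copy is smaller than the unary-encoded
    # variable reference it replaces, so every use site shrinks.
    return def_size < var_cost(index)
```

Since a let desugars to an application of an abstraction, dropping the binder also saves the fixed 4 bits of application-plus-abstraction overhead on top of the per-use comparison.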
[1] https://www.ioccc.org/2012/tromp/hint.html