Solving the mystery behind Abstract Algorithm’s magical optimizations

Yesterday, I reported the bizarre observation that certain functions can behave as if they had negative complexity. If you haven’t read that article yet, it isn’t necessary, but you should, as it may blow your mind. In short, the λ-term f(bits) = copy(comp(inc, n, bits)), when given to an optimal λ-calculus evaluator, is asymptotically faster than g(bits) = comp(inc, n, bits); i.e., copy (an O(1) operation for a value of fixed size) behaves as if it had O(1/n) complexity, causing the program to run faster by doing more things (!?).
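To make the shape of those two terms concrete, here is a minimal Haskell sketch. The names comp, inc, and copy follow the article, but these particular definitions are my own illustrative assumptions; the article works with Church-encoded bit-strings fed to an optimal evaluator, not Haskell, so running this under GHC will not reproduce the speedup.

```haskell
-- comp f n x applies f to x, n times: f (f (... (f x)))
comp :: (a -> a) -> Int -> a -> a
comp _ 0 x = x
comp f n x = f (comp f (n - 1) x)

-- inc: the O(1) step being iterated (Integer increment as a
-- stand-in for the article's bit-string increment)
inc :: Integer -> Integer
inc = (+ 1)

-- copy: duplicate the result; O(1) for a value of fixed size
copy :: a -> (a, a)
copy x = (x, x)

-- g(bits) = comp(inc, n, bits)
g :: Int -> Integer -> Integer
g n bits = comp inc n bits

-- f(bits) = copy(comp(inc, n, bits)): same work, plus a copy
f :: Int -> Integer -> (Integer, Integer)
f n bits = copy (comp inc n bits)

-- Under ordinary evaluation, f can only be slower than g. The
-- article's claim is that under optimal λ-calculus evaluation,
-- the sharing introduced by copy makes f asymptotically faster.
main :: IO ()
main = do
  print (g 8 0)
  print (f 8 0)
```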

That’s not the only bizarre complexity result I ran into while experimenting with the Abstract Algorithm. Years ago, I noticed that it is capable of evaluating programs that wouldn’t fit in a computer the size of the universe. Recently, I posted how it can apply an O(1) operation N times in O(log(N)) time (!?). Despite the insightful answers provided, it was hard to make sense of such unintuitive behaviors.
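One way to build intuition for the O(log(N)) claim (my own framing, not the article’s derivation; the helper applyPow2 is hypothetical) is repeated self-composition: an N = 2^k-fold application can be expressed with only k composition nodes. In Haskell this only makes the term small to build; actually running it still performs N steps. The article’s claim is that an optimal evaluator shares each doubled function instead of expanding it, keeping the whole reduction at O(log(N)).

```haskell
-- applyPow2 k f: f composed with itself 2^k times, built in k steps
-- by doubling, i.e. g . g where g is the half-size composition.
applyPow2 :: Int -> (a -> a) -> (a -> a)
applyPow2 0 f = f
applyPow2 k f = let g = applyPow2 (k - 1) f in g . g

-- In GHC this prints 1024 after 2^10 increments; under optimal
-- evaluation the shared structure is what admits the log-time bound.
main :: IO ()
main = print (applyPow2 10 (+ 1) (0 :: Int))
```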

After a lot of research and some recent insights, it finally hit me. It turns out the explanation for all of this is pretty simple, and, in this article, I’ll explain what is going on carefully (not jokingly this time). If you’re in a hurry, feel free to skip to the tl;dr at the end.
