No-op compiler benchmarking

Submitted by Style Pass, 2025-01-08 22:00:17

On the subject of trying to make a language/compiler that is “very fast to build” (for whatever definition of that makes sense to you)…

It seems like you pretty much have to start with the lexer/parser (at least in the spirit of all good, unnecessary projects!). I extended the dumb script I wrote last time; it's here: https://github.com/sgraham/dumbbench.

It generates a not-at-all-realistic file of similar contents in various flavours: C, Python, Lua, JavaScript, and Luv (my toy language). Ideally I'd test just the lex/parse step, but most compilers aren't going to expose that, so I used whatever flags seemed like the earliest "early out" I could find, just to get an order of magnitude for each.
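For flavour, here's a minimal sketch of what such a generator might look like. The emitted code shapes and function names below are my own guesses, not what dumbbench actually produces:

```python
# Sketch of a synthetic-source generator: emits N trivial functions per
# language. The templates are illustrative guesses, not dumbbench's output.
TEMPLATES = {
    "c":   "int fn_{i}(int x) {{ return x + {i}; }}\n",
    "py":  "def fn_{i}(x):\n    return x + {i}\n",
    "lua": "function fn_{i}(x) return x + {i} end\n",
    "js":  "function fn_{i}(x) {{ return x + {i}; }}\n",
}

def generate(lang, count, path):
    template = TEMPLATES[lang]
    with open(path, "w") as f:
        for i in range(count):
            f.write(template.format(i=i))

# Small demo; the real benchmark files are millions of lines.
generate("py", 1000, "bench.py")
```

Scaling `count` up until the file hits ~10M lines gives roughly the sizes described below.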

All the files are unrealistically large at 10,000,000 lines (according to wc -l) and vary in size from 193M to 237M depending on the language's syntax. The Lua one had to be split up into multiple subfiles (all dofile()d by the main file) as the implementations have small fixed-size limits on what Lua calls a chunk, but I think it's still a fair test: all the code still has to be processed, and all the symbols still end up in the same global namespace, as in the other languages.
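The splitting step could be sketched like this. The lines-per-chunk value and file names here are arbitrary choices of mine, not the actual limits Lua implementations impose:

```python
# Sketch: split generated Lua code into fixed-size chunks, each in its
# own file, plus a main file that dofile()s them all. 50,000 lines per
# chunk is an arbitrary illustrative limit, not Lua's real chunk limit.
def split_lua(lines, lines_per_chunk=50_000, stem="bench"):
    chunk_names = []
    for n, start in enumerate(range(0, len(lines), lines_per_chunk)):
        name = f"{stem}_{n}.lua"
        with open(name, "w") as f:
            f.writelines(lines[start:start + lines_per_chunk])
        chunk_names.append(name)
    # The main file just loads every chunk; globals defined in the
    # chunks all land in the same global namespace.
    with open(f"{stem}.lua", "w") as f:
        for name in chunk_names:
            f.write(f'dofile("{name}")\n')
    return chunk_names

# Demo with tiny numbers: 120 lines in chunks of 50 -> 3 subfiles.
names = split_lua([f"x_{i} = {i}\n" for i in range(120)], lines_per_chunk=50)
```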

As you can of course tell from the repository name, this is not a good benchmark! But the timings surprised me all the same. All timings were run on my (midrange?) 12-core/24-thread Ryzen 9 3900X desktop on Windows 11. I believe all of these tools run this test single-threaded (or at least I didn't see any significant > 1 core load). Other than the previously mentioned Lua split-up, there's no #include-like behaviour in the test file, so the Windows filesystem shouldn't be hampering performance too much.
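A timing harness for this kind of run could look like the sketch below. The command line is a placeholder; you'd substitute each tool's binary plus whatever syntax-check-only or "earliest-out" flag it offers:

```python
# Sketch of a timing harness: run a command a few times and keep the
# best wall-clock time. The demo command (a no-op Python subprocess)
# stands in for a real compiler invocation on the big test file.
import subprocess
import sys
import time

def best_time(cmd, runs=3):
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        best = min(best, time.perf_counter() - start)
    return best

elapsed = best_time([sys.executable, "-c", "pass"])
print(f"{elapsed * 1000:.1f} ms")
```

Taking the best of several runs mostly factors out OS cache warm-up, which matters when the input file is a couple of hundred megabytes.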
