Microbenchmarks are experiments

Submitted by
Style Pass
2024-11-28 14:00:04

More languages, more insights! A few interesting takeaways:

* Java and Kotlin are quick! Possible explanation: Google is heavily invested in performance here.
* JS is really fast as far as interpreted / JIT languages go.
* Python is quite slow without things like PyPy.

pic.twitter.com/GIshus2UXO

This one has cool animations of balls bouncing around. It claims more insights than the other microbenchmark that was all the rage last week! Yay!

Sigh. 2025 is just around the corner and people still seem to think that the goal of a benchmark is to produce a number which by itself reveals some hidden truth about the world. You write a loop in Dart, you write a loop in C. You run it three times (statistics!). You compare the running times and, lo and behold, these two numbers reveal all you need to know about Dart and C performance. The rest of the day can be spent browsing numerological magazines for the meaning of your date of birth…
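For the sake of concreteness, the methodology being mocked looks roughly like this sketch (the workload and iteration counts are illustrative, not taken from either benchmark):

```python
# A sketch of the naive "benchmark": time a loop, run it three times,
# take the smallest number, and declare it the truth about the language.
import time

def busy_loop(n):
    # Illustrative workload: sum integers in a plain loop.
    total = 0
    for i in range(n):
        total += i
    return total

runs = []
for _ in range(3):  # "statistics!"
    start = time.perf_counter()
    busy_loop(1_000_000)
    runs.append(time.perf_counter() - start)

# A single number with no model of warm-up, variance, or what the
# loop even measures (allocation? JIT? the timer itself?).
print(min(runs))
```

Nothing in this script says anything about why the numbers come out the way they do, which is exactly the point.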

Benchmarks are not numerology. Their results are not a divine revelation. Benchmarks are experiments. Their results are meaningless without interpretation and validation. Our understanding of performance is driven by the same cyclic process that drives science in general: you formulate a hypothesis, you devise the experiment, you analyze the results, you adjust the hypothesis, and you repeat. "Zen and the Art of Motorcycle Maintenance" has a pretty good description of the scientific method; I think every programmer should read it and would likely benefit from doing so.
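A minimal sketch of treating a benchmark as an experiment rather than an oracle: collect enough samples to estimate variance, then let the analysis step decide whether the numbers support any conclusion at all. The workload and sample count here are illustrative assumptions.

```python
# Hypothesis -> experiment -> analysis: report spread, not just a number.
import statistics
import time

def measure(fn, arg, samples=30):
    """Run fn(arg) many times and return (mean, stdev) of wall time."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        fn(arg)
        times.append(time.perf_counter() - start)
    return statistics.mean(times), statistics.stdev(times)

def workload(n):
    # Illustrative workload; a real experiment would pick one that
    # actually isolates the effect the hypothesis is about.
    return sum(i * i for i in range(n))

mean, stdev = measure(workload, 100_000)
# The numbers are evidence, not a verdict: the analysis step asks
# whether the spread is small enough for a comparison to mean anything,
# and whether the result matches or contradicts the hypothesis.
print(f"mean={mean:.6f}s stdev={stdev:.6f}s")
```

If the standard deviation is comparable to the difference you are trying to detect, the honest conclusion is "the experiment was inconclusive; adjust and repeat," not a ranking of languages.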
