Brendan Gregg's Blog

Benchmarks are often used for product evaluations, and they are often so inaccurate one may as well flip a coin. But it's even worse than that.

Most of the benchmarks I've debugged were false or misleading for one reason or another. A long time ago I was explaining this to a salesperson, whose prospects used benchmarks to evaluate his product, and he asked whether that meant he'd at least win about half of them.

He was delighted. His logic was that if the benchmarks were usually wrong – producing effectively random numbers for his product and his competitor's – then roughly half the time they should be wrong in his favor. If he won half the benchmarks, he'd still see great growth.

I was annoyed: his product did perform well, so he should have been winning more than 90% of the time, not 50%. Reality was even worse: he didn't win 90%, 50%, or even 25%. It didn't make sense until I debugged some cases.

When buying a product based on performance, customers often want to be really sure it delivers. That can mean running not one benchmark but several, and wanting the product to win them all.
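If each benchmark result is effectively a coin flip, the odds of winning several of them in a row shrink quickly, which is one way to read the percentages above. A minimal sketch of that arithmetic in Python (the function and the multi-benchmark scenario are my own illustration, not a calculation from the post):

    # Assume each benchmark result is effectively a coin flip:
    # a 50% chance of winning any single one.
    def chance_of_winning_all(n_benchmarks, p_win=0.5):
        """Probability of winning every one of n independent, random benchmarks."""
        return p_win ** n_benchmarks

    for n in (1, 2, 3, 4):
        print(f"{n} benchmark(s): {chance_of_winning_all(n):.1%}")
    # 1 benchmark(s): 50.0%
    # 2 benchmark(s): 25.0%
    # 3 benchmark(s): 12.5%
    # 4 benchmark(s): 6.2%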
