How you average numbers matters

Every now and then, I end up having to explain to skeptical people why it matters how their programs treat the numbers they ingest. With IEEE 754 and doubles, people seem to think that one can just willy-nilly add a bunch of numbers, average them, and get reliably accurate results.
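To see what that assumption glosses over, here is a tiny C++ sketch (not from the original article) of the classic warm-up: 0.1 has no exact binary representation, so even a plain running sum of it drifts away from the value we would compute on paper.

    #include <cstdio>

    int main() {
        // 0.1 cannot be represented exactly in binary floating point, so a
        // plain running sum picks up a small error on every addition.
        double sum = 0.0;
        for (int i = 0; i < 1000; ++i) {
            sum += 0.1;
        }
        // On a typical IEEE 754 system this prints roughly 99.9999999999986,
        // not the 100 exact arithmetic would give.
        std::printf("%.13f\n", sum);
        return 0;
    }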

In fact, things are better now than they were even 20 years ago. But given enough numbers and enough operations, it is still possible to run into interesting inaccuracies. This is a fact of life rather than an earth-shattering observation, but it sometimes takes a bit of effort to explain to someone who is hearing about this stuff for the first time and is absolutely sure of their programming prowess.

To illustrate this, I am going to start with a slightly contrived example so that we can easily, and independently, calculate the expected magnitudes and assess the accuracy of the computer's calculations.

That is, we have a vector of one hundred thousand numbers. Odd-indexed elements are rather large, and even-indexed ones are relatively small. Clearly, this is a contrived example, but it makes the expected result easy to verify arithmetically, as in the sketch below.
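The exact values are not spelled out at this point, so the following sketch uses hypothetical magnitudes of my own choosing (1e13 for the odd-indexed entries, 1.0 for the even-indexed ones) purely to keep the hand arithmetic easy; the effect it illustrates is the same either way: a naive running sum lets the large entries swamp the small ones.

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        // Hypothetical stand-ins for the "rather large" and "relatively small"
        // values; the article's actual numbers may differ.
        const double kLarge = 1.0e13;
        const double kSmall = 1.0;
        const std::size_t n = 100000;

        std::vector<double> v(n);
        for (std::size_t i = 0; i < n; ++i) {
            v[i] = (i % 2 == 1) ? kLarge : kSmall;  // odd-indexed: large
        }

        // Naive left-to-right accumulation. As the running sum grows, the
        // small entries eventually fall below half an ulp of the sum and are
        // rounded away entirely; with these magnitudes that happens within the
        // first couple of thousand elements, so almost all of the small
        // entries' contribution is lost.
        double naive_sum = 0.0;
        for (double x : v) {
            naive_sum += x;
        }
        const double naive_avg = naive_sum / static_cast<double>(n);

        // Expected average, computed by hand from the construction above:
        // (n/2 * kLarge + n/2 * kSmall) / n = kLarge/2 + kSmall/2.
        const double expected_avg = kLarge / 2.0 + kSmall / 2.0;

        std::printf("naive average    : %.3f\n", naive_avg);
        std::printf("expected average : %.3f\n", expected_avg);
        std::printf("difference       : %.3f\n", expected_avg - naive_avg);
        return 0;
    }

With these magnitudes, the even-indexed entries should contribute 0.5 to the average, but most of that is rounded away in the running sum. Techniques such as Kahan (compensated) summation, or summing the small and large entries separately, recover the lost contribution.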
