Software engineering keeps getting more abstract, but one thing is unchanging: the importance of floating-point arithmetic. Every programmer is bound to work with numbers (they call them computers for a reason), so it’s genuinely useful to understand how machines do math, whether your code runs a to-do app, a stock exchange, or a fridge. How exactly are numbers stored? What’s the significance of the special values? And why is 0.1 + 0.2 not equal to 0.3? Let’s explore all of this!
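As a quick taste of that last question, here’s a short Python snippet (Python floats are IEEE 754 double-precision values, so any language using the same format behaves identically):

```python
# 0.1, 0.2, and 0.3 have no exact binary representation,
# so the sum picks up a tiny rounding error.
total = 0.1 + 0.2

print(total == 0.3)  # False
print(total)         # 0.30000000000000004
```

We’ll see exactly where that trailing 4 comes from once we’ve looked at how the bits are laid out.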
Let’s start with one key assumption: in all the world, on every continent, there’s one and only one way of doing floating-point arithmetic.
You’d be correct to think that more than one format must have been invented. Plenty were – in the early days of computing, practically every system with floating-point capabilities had its own. Later on, vendor-specific formats emerged: IBM went with hexadecimal floating-point in its mainframes, Microsoft created the Microsoft Binary Format for its BASIC products, and DEC cooked up yet another format for its VAX architecture.
This changed when Intel decided in the late 70s to design the floating-point chip to rule them all – which required a format to rule them all as well, the best one possible. That effort culminated in the Intel 8087 coprocessor of 1980, but even before its release, other companies in the space caught wind of the work and set up a common effort at the Institute of Electrical and Electronics Engineers (IEEE) to standardize floating-point arithmetic – the IEEE 754 working group. Two competing drafts emerged as front-runners: the Intel 8087 spec and the DEC VAX one. After further arguments and error analysis, Intel’s draft won out in 1981, rapidly got adopted by everyone, and the rest is history (though it took the committee another four years of bickering to actually publish that draft, of course).