Vladimir Voevodsky was a leading proponent of the formalisation of mathematics. Until his death in 2017, he lectured frequently on the risks of errors in complicated technical arguments and the necessity of using computers to verify the work of mathematicians. The opinions of a Fields Medallist naturally carried weight. From the other side, the verification world, a major impetus for the formalisation of mathematics was the floating point division bug in the Pentium, back in 1994. (See my prior post on the ALEXANDRIA project.) However, Trybulec started his Mizar project in 1973, de Bruijn had created the first version of AUTOMATH by 1968, and Wang had built an automatic theorem prover as early as 1958, in the hope of eventually formalising mathematics. People have been dreaming about this for a long time. What is the point?
The Cambridge hardware verification group had a pragmatic focus. Until the Pentium division bug, if I recall correctly, our systems didn’t even know about negative numbers. Subsequent formalisations of the real numbers, special functions such as $\exp x$, and probability theory were aligned with specific verification objectives. My colleagues expressed opinions that seem quite striking now: