
A DSL for Implementing Math Functions


by Pavel Panchekha with Ian Briggs and Yash Lad on Jun 3, 2024 | Tags: accuracy, approximation, awards, correctness, optimization, type systems

Software for physics, geometry, finance, and probability often calls library functions like sin, cos, exp, and log. These library functions should be fast and accurate for science, engineering, and finance to work best.

Unsurprisingly, lots of algorithms have been invented over the years for implementing these math functions, and new algorithms are still being discovered. For example, the RLIBM project—previously featured on this blog—has developed new tools for deriving approximating polynomials for single-precision functions.
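To make "approximating polynomial" concrete, here is a minimal sketch in C, not RLIBM's method or output: a short odd polynomial evaluated in Horner form and printed next to the system libm's `sinf`. The coefficients below are plain Taylor-series terms used only for illustration; tools like RLIBM derive coefficients that minimize error over the actual single-precision inputs in the target interval.

```c
#include <math.h>
#include <stdio.h>

/* Toy single-precision sine approximation on a small interval.
   Taylor coefficients stand in for the carefully derived ones a tool
   like RLIBM would produce. */
static float sinf_poly(float x) {
    float x2 = x * x;
    /* Horner evaluation of x - x^3/6 + x^5/120 - x^7/5040 */
    return x * (1.0f + x2 * (-1.0f/6.0f + x2 * (1.0f/120.0f + x2 * (-1.0f/5040.0f))));
}

int main(void) {
    for (float x = -0.5f; x <= 0.5f; x += 0.125f)
        printf("x = % .3f   poly = % .9f   libm = % .9f\n",
               x, sinf_poly(x), sinf(x));
    return 0;
}
```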

The performance benefits of platform-, language-, and application-specific implementations are large. In some previous work, we saw speedups as large as 5× on microbenchmarks, and 10% on full applications. For this reason, hardware vendors (like NVIDIA and Intel), compiler projects (like GCC and LLVM), and applications (like CERN’s Geant4) all use custom math libraries specialized for their hardware, language, and application domain.

But writing a platform-, language-, or application-specific math library is error-prone: opaque constants, low-level bit-twiddling, and tricky algebraic transformations abound. Get any wrong, and the resulting function silently returns the wrong result—no crashes or exceptions help narrow down the issue. Even just spot-checking outputs requires numerical sophistication and access to a known-good library.
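As a rough illustration of what even "spot-checking" involves, here is a hedged sketch of a test harness that compares a deliberately crude candidate against the system libm (treated as the known-good reference) and reports the worst error in ULPs. The names `my_sinf`, `float_to_ordered`, and `ulp_distance` are invented for this example and are not part of any library mentioned above.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Map a float's bit pattern onto a monotone integer line so that
   adjacent floats differ by exactly one; the gap between two mapped
   values is then their distance in ULPs. */
static int32_t float_to_ordered(float x) {
    int32_t i;
    memcpy(&i, &x, sizeof i);
    return i < 0 ? INT32_MIN - i : i;
}

static int64_t ulp_distance(float a, float b) {
    return llabs((int64_t)float_to_ordered(a) - (int64_t)float_to_ordered(b));
}

/* A deliberately crude candidate implementation to exercise the harness. */
static float my_sinf(float x) {
    return x - x * x * x / 6.0f;
}

int main(void) {
    srand(42);
    int64_t worst = 0;
    float worst_x = 0.0f;
    for (int i = 0; i < 1000000; i++) {
        /* Random inputs in [-1, 1]; a real test would cover the whole domain. */
        float x = -1.0f + 2.0f * (float)rand() / (float)RAND_MAX;
        /* Reference: double-precision libm sin, rounded to float. */
        float ref = (float)sin((double)x);
        int64_t d = ulp_distance(my_sinf(x), ref);
        if (d > worst) { worst = d; worst_x = x; }
    }
    printf("worst error: %lld ULPs at x = %.9g\n", (long long)worst, (double)worst_x);
    return 0;
}
```

Measuring error in ULPs rather than absolute difference is the usual convention here, since it tracks how many representable floats lie between the candidate's answer and the reference's.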
