I’ve been writing a linker, called Wild (see previous posts). Today, I’m going to talk about my approach to testing the linker. I think this is an interesting case study in its own right, but there are also aspects of the approach that can likely be applied to other projects.
These priorities are sometimes in conflict with each other. For example, merging several tests into a single test might make the test suite as a whole faster, but might also make diagnosing failures harder. Whether I choose to split or merge integration tests depends on circumstances. Sometimes splitting is the right approach, especially if there’s common work done by each separate test that can be cached, thus regaining the speed. Often, however, I prefer to merge. I’m more often running tests that pass than diagnosing tests that fail, so I’d rather have the speed. Also, with extra tooling, diagnosing what’s wrong can often be made easier, even in a large integration test that does many things.
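To illustrate the caching idea, here’s a minimal sketch (not Wild’s actual code; the fixture and its contents are hypothetical) of how shared setup can be computed once and reused by several tests, using `std::sync::OnceLock` from the Rust standard library:

```rust
use std::sync::OnceLock;

// Hypothetical shared fixture: in a real linker test suite this might be
// a compiled object file that several tests then link in different ways.
fn shared_fixture() -> &'static String {
    static FIXTURE: OnceLock<String> = OnceLock::new();
    FIXTURE.get_or_init(|| {
        // Expensive setup runs at most once, no matter how many tests
        // call shared_fixture().
        String::from("object-file-contents")
    })
}

fn main() {
    // Two "tests" that reuse the cached setup rather than redoing it.
    let a = shared_fixture();
    let b = shared_fixture();
    // Both calls hand back the same cached instance.
    assert!(std::ptr::eq(a, b));
}
```

With a pattern like this, splitting one big test into several smaller ones doesn’t have to multiply the setup cost, since the expensive part is paid once per process.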
Unit tests can be very fast. However, when you refactor your code, if you change an interface that is unit tested, then the test needs updating or even rewriting. Unit tests can also very easily miss bugs when the interfaces themselves don’t change, but assumptions about which part of the code is responsible for what do.
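As a hypothetical illustration (this helper isn’t from Wild, it’s invented for the example), a unit test pinned to an internal signature has to change whenever that signature does:

```rust
// Hypothetical internal helper: round an offset up to an alignment
// (alignment must be a power of two).
fn align_up(offset: u64, alignment: u64) -> u64 {
    debug_assert!(alignment.is_power_of_two());
    (offset + alignment - 1) & !(alignment - 1)
}

fn main() {
    // These assertions test the interface directly. If a refactor changes
    // the signature (say, to take a section descriptor instead of two
    // integers), every one of them must be rewritten, even though the
    // program's observable behaviour is unchanged.
    assert_eq!(align_up(0, 8), 0);
    assert_eq!(align_up(1, 8), 8);
    assert_eq!(align_up(17, 16), 32);
}
```

An integration test that links real inputs and checks the output wouldn’t notice such a refactor at all, which is part of the trade-off being described.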