I’m a big advocate of Empirical Software Engineering. I wrote a talk on it. I wrote a 6000-word post covering one controversy. I spend a lot of time reading papers and talking to software researchers. ESE matters a lot to me.
I’m also a big advocate of formal methods. I wrote a book on it, I’m helping run a conference on it, and I teach it professionally for a living. There’s almost no empirical evidence that FM helps us deliver software more cheaply, because it’s such a niche field and nobody’s really studied it. But we can study a simpler claim: does catching software defects earlier in the project life cycle reduce the cost of fixing them? Someone asked me just that.
I’ve been dreading this. As much as I value empirical evidence, software research is also a train wreck where both trains were carrying napalm and tires.
If you google “cost of a software bug” you will get tons of articles claiming that “bugs found in requirements are 100x cheaper than bugs found in implementation.” They all use this chart from the “IBM Systems Sciences Institute”: