Say you run a new team. You have carte blanche to implement any policies you want to make the people more productive and the code less buggy. What do you do?
Careers have been built selling the answer. Take up pair programming! Switch to Haskell! Use UML for everything! These techniques get their own books and conferences. But are they worth the effort? How long till they take effect? Do they even work at all?
These questions are important, and they're hardly unique to software engineering. How do we tell whether something will solve our problems? We could talk to experts, but experts disagree. We could rely on our own experiences, but those are limited. (Nobody has tried and compared everything.) We could survey people, but “popular” is not the same as “correct.” (Almost half of Americans consider astrology a science.) So how do any of us really know what we know?
Hopefully, as a field matures, scientific study and empirical research eventually replace folklore. But software engineering is still young (compared to, say, mechanical engineering), and few of the technical solutions we’ve studied improve software quality in any measurable way. Static typing? One study, presented at FSE 2014, found no evidence that static typing is helpful, or that it’s harmful. Code standards and linters? Another paper, shared at ICSM 2008, found that these can actually make things worse. Code review? Okay, that one, according to a 2016 article published in Empirical Software Engineering, does work. But we can’t stake our team’s success on just “more code reviews.”