In the last few years, a lot has been said in favor of DORA metrics for measuring the success of developer enablement within your organization: how well your platform engineering, operations, and developer experience efforts are making it easier for developers to deliver features and maintain services. These five metrics (up from four in the original 2020 report) are:

- Deployment frequency
- Lead time for changes
- Change failure rate
- Time to restore service
- Reliability
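The first four of these, deployment frequency, lead time for changes, change failure rate, and time to restore service, are straightforward to compute from a deployment log. A minimal sketch, using entirely hypothetical data and field layout:

```python
from datetime import datetime

# Hypothetical deployment records:
# (deployed_at, lead_time_hours, failed, hours_to_restore)
deploys = [
    (datetime(2024, 1, 1), 20.0, False, 0.0),
    (datetime(2024, 1, 3), 16.0, True, 2.0),
    (datetime(2024, 1, 5), 30.0, False, 0.0),
    (datetime(2024, 1, 8), 12.0, False, 0.0),
]

# Observed window in days (avoid dividing by zero for a single-day window)
period_days = (deploys[-1][0] - deploys[0][0]).days or 1

# Deployment frequency: deploys per day over the observed window
deployment_frequency = len(deploys) / period_days

# Lead time for changes: mean hours from commit to production
lead_time = sum(d[1] for d in deploys) / len(deploys)

# Change failure rate: share of deploys that caused a production failure
change_failure_rate = sum(1 for d in deploys if d[2]) / len(deploys)

# Time to restore service: mean hours to recover, over failed deploys only
failures = [d for d in deploys if d[2]]
time_to_restore = sum(d[3] for d in failures) / len(failures)

print(deployment_frequency, lead_time, change_failure_rate, time_to_restore)
```

The fifth metric, reliability, is a service-level measure (how well you meet your availability and performance targets) rather than a per-deploy statistic, so it is not sketched here.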
I agree that measuring these is vital. But it must be said that these metrics were always intended as an indicator of how well your team was delivering software, not as a high-stakes measure used, for example, to hire and fire team leads. Yet while that intent has always been clear, the original metrics report asked leaders to determine whether teams were “elite performers” and strongly implied that better teams would always have better DORA metrics.
That tension, between treating DORA metrics as an interesting statistic that can show progress and treating them as a critical statistic that marks a team's success or failure, has polarized opinion on them. The reality is that DORA metrics are a strong indicator of the health of developer experience, but like any observed statistic, they can be misused and misinterpreted.