In articles and discussions about monorepos, there's one frequently alleged key benefit: atomic commits across the whole tree let you change both a library's implementation and its clients in a single commit. Many authors even go as far as to claim that this is the only benefit of monorepos.
But that makes no sense! It's not how you'd actually make backwards-incompatible changes, such as interface refactorings, in a large monorepo. Instead, the process would be highly incremental, more like the following:

1. Push one commit to change the library so that it supports both the old and the new behavior, with different interfaces.
2. Once you're sure the commit from stage 1 won't be reverted, push N commits to switch each of the N clients to the new interface.
3. Once you're sure the commits from stage 2 won't be reverted, push one commit to remove the old implementation and interface from the library.
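Stage 1 can be sketched in code. This is a minimal illustration, assuming a hypothetical library with an old `fetch_user` interface being replaced by a new `get_user` one; the names and the deprecation mechanism are mine, not from any particular codebase:

```python
import warnings

# Stage 1 sketch: the new interface is added alongside the old one,
# so every existing client keeps working unchanged.

def get_user(user_id):
    """New interface: returns a dict, easier to extend later."""
    # Placeholder lookup; a real library would hit storage here.
    return {"id": user_id, "name": f"user-{user_id}", "profile": {}}

def fetch_user(user_id, include_profile=True):
    """Old interface: returns a (name, profile) tuple. Kept as a thin
    wrapper over the new interface until all clients have migrated."""
    warnings.warn(
        "fetch_user is deprecated; use get_user instead",
        DeprecationWarning,
        stacklevel=2,
    )
    user = get_user(user_id)
    return (user["name"], user["profile"] if include_profile else None)
```

Having the old interface delegate to the new one means there is a single implementation to maintain during the migration window, and the deprecation warning points each remaining client at its stage-2 change.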
There's a bunch of reasons why this is a nicer sequencing than a single atomic commit, but they're mostly variations on one theme: mitigating risk. If something breaks, you want as few things as possible to break at once, and the rollback to a known-good state to be simple. Here's how the risks are mitigated at each stage of the process:

- There is nothing risky at all about the first commit. It just adds new code that nobody uses yet.
- The commits changing the clients can land gradually, starting with the projects the library owners themselves work on, the ones most likely to surface bugs, or the clients most forgiving of errors. Depending on the risk profile of the change, you might even use these commits as a form of staged rollout, waiting to see whether the earlier clients report any problems in production before sending the next batch of commits for code review.
- The final commit to remove the old implementation can only break a minimal number of clients: the ones that started using the library through the old interface in the window between the removal commit being reviewed and being pushed. The ideal environment would have tooling in place to prevent that kind of backsliding from happening in the first place (e.g. lint warnings on new uses of deprecated interfaces).
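One cheap way to get that backsliding protection, assuming the library already emits `DeprecationWarning` for its old interface (a hypothetical `old_api` here), is to escalate those warnings to errors in CI. A minimal sketch:

```python
import warnings

def old_api():
    """Stand-in for a deprecated library entry point."""
    warnings.warn("old_api is deprecated", DeprecationWarning, stacklevel=2)
    return 42

# In the CI/test configuration: turn DeprecationWarning into an error,
# so any lingering or newly added call to the old interface fails the
# build instead of slipping through silently.
warnings.simplefilter("error", DeprecationWarning)

try:
    old_api()  # a client's remaining call to the old interface
    deprecated_use_detected = False
except DeprecationWarning:
    deprecated_use_detected = True
```

This catches dynamic uses that actually run under test; a lint rule or presubmit check on the source would additionally flag uses that tests never exercise.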