Systems fail because engineers protect the wrong things, or protect the right things in wrong ways. - Ross Anderson, Security Engineering
Recently, yours truly ran into the use of analytical wargaming as a methodology for addressing policy problems at the intersection of AI risks, cyber security, and international politics. While the critics of wargaming do raise some valid points, it is an exceptionally useful tool for tackling a very specific kind of problem - one where you need not to predict actions but rather to anticipate the outcomes and consequences of actions. Because rationality doesn't cope well with incomplete information, many predictive research methods don't work well for these kinds of problems either. As forecasters say, quantification is useful, but wisdom is more useful.
Anyway, systems engineering suggests it is best practice to first build the components most likely to fail or cause failure, so let us begin with the caveats generally associated with wargaming: