RISK & CYBERSECURITY


Every major technological change is heralded with claims of significant, even apocalyptic, risks. These almost never turn out to be immediately correct. What often proves riskier are the second-order effects that result from what is done with the new technology.

No matter what, we do have to care about AI risks. Many past technological warnings of disaster were avoided precisely because we did care. But the bigger risks come with what comes after what comes next. This is inherently unpredictable, but that doesn't mean we can't try to foresee it, or at least look for warning signs. To paraphrase the thesis of Collingridge's The Social Control of Technology: when a technology is in its infancy and can be controlled, we don't understand its consequences; by the time we do, it is so widespread and entrenched that it is difficult to control.

Clearly, this is all worth paying attention to: not so that we become overly anxious about AI, but so that we can manage the risk, reap the massive rewards in safe and responsible ways, and be ready to mitigate the inevitably surprising second-order risks appropriately.
