Microsoft brainiacs who probed the security of more than 100 of the software giant's own generative AI products came away with a sobering message: The models amplify existing security risks and create new ones.
The 26 authors offered the observation that "the work of securing AI systems will never be complete" in a pre-print paper titled "Lessons from red-teaming 100 generative AI products."
That's the final lesson of eight offered in the paper, though it's not entirely apocalyptic. The authors, Azure CTO Mark Russinovich among them, argue that with further work the cost of attacking AI systems can be raised, as has already happened for other IT security risks through defense-in-depth tactics and security-by-design principles. In that respect, perhaps none of this is too surprising: is any non-trivial computer system ever completely secure? Some say yes, some say no.
Getting back on track: The Microsofties suggest there's plenty of work to do. The first lesson noted in the paper is to "understand what the system can do and where it is applied."