Personal news: I have received an Emergent Ventures grant to work full time on this substack for the next few months. We hope to contribute to a movement for growth, progress and innovation in Europe. If you’d like to get in touch please message pieter [dot] garicano [at] gmail [dot] com. You can follow me on twitter here.
An AI bank teller needs two humans to monitor it. A model safely released months ago is a systemic risk. A start-up trying to build an AI tutor must produce impact assessments, certificates and risk management systems, carry out lifelong monitoring, undergo auditing and more. Governing all of this will be at least 50 different authorities. Welcome to the EU AI Act.
Originally, the AI Act was supposed to work by regulating outcomes rather than capabilities. It places AI models into risk categories based on their uses — unacceptable, high, limited and minimal risk — and imposes regulations on each of those categories.
Unacceptable-risk systems are prohibited in all cases — this includes systems that do social scoring, emotion recognition in the workplace and real-time biometric identification in public — while limited- and minimal-risk AI, such as data categorisation tools and basic chatbots, faces relatively light regulation. 1