Meta Platforms is spurning the EU’s voluntary artificial intelligence safety pledge, a stopgap measure intended to bridge the period before the EU’s AI Act rules take full force in 2027.
Meta’s position contrasts with that of Microsoft and Alphabet’s Google, which both confirmed through spokespeople that they will sign the pledge. Meta stands out from its Big Tech peers because its AI model, Llama, has open-source features designed to let users repurpose it with relatively little control from the developer. That could make it more difficult to comply with requirements to map the risks posed by the tools.
France’s open-source AI startup Mistral, which reached a €5.8bn valuation in June, will also not sign the pledge, according to a spokesperson.
The EU is seeking to set standards for regulating the fast-developing field of AI without stifling innovation in the emerging technology or ceding the industry to US firms. The non-binding pledge asks developers to comply with the key obligations of the AI Act before they become law.
Companies that sign will commit to a list of practices mirroring the AI Act’s principles, including charting whether their AI tools are likely to be deployed in “high-risk” settings such as education, employment or policing.