Are open-source AI models worth the risk?

Submitted by Style Pass
2024-11-01 13:00:04

There’s no question artificial intelligence technologies are becoming increasingly powerful; it’s clear to anyone who spends time on the internet. Given that, how freely available should this technology be?

Open-source advocates say more sharing boosts healthy competition and distributes power while furthering scientific research and collaboration. But open models also pose the risk of aiding nefarious uses of the tech, from non-consensual intimate images (NCII) to election interference.

A new Science paper from researchers at Stanford’s Institute for Human-Centered AI (HAI) aims to take a clear-eyed look at just how much marginal risk open models pose relative to their closed counterparts, as well as their benefits and the policy considerations they raise.

We spoke with Rishi Bommasani, society lead at the HAI’s Center for Research on Foundation Models and co-author of the paper, about where AI is actually proving most dangerous, why openness is important, and how regulators are thinking about the open-closed divide.

This conversation, the first of two parts, has been edited for length and clarity, and contains references to materials related to child abuse.
