OpenAI’s plan to become a for-profit company could encourage the artificial intelligence startup to cut corners on safety, a whistleblower has said.
William Saunders, a former research engineer at OpenAI, told the Guardian he was concerned by reports that the ChatGPT developer was preparing to change its corporate structure and would no longer be controlled by its non-profit board.
Saunders, who raised his concerns in testimony to the US Senate this month, said he was also troubled by reports that OpenAI’s chief executive, Sam Altman, could hold a stake in the restructured business.
“I’m most concerned about what this means for governance of safety decisions at OpenAI,” he said. “If the non-profit board is no longer in control of these decisions and Sam Altman holds a significant equity stake, this creates more incentive to race and cut corners.”
OpenAI was founded as a non-profit entity, and its charter commits the startup to building artificial general intelligence (AGI) – which it describes as “systems that are generally smarter than humans” – that benefits “all of humanity”. However, the potential power of an AGI system has alarmed experts and practitioners, including Saunders, amid fears that the competitive race to build such technology could lead to safety concerns being overridden.