OpenAI’s former superalignment leader blasts company: ‘safety culture and processes have taken a backseat’

submitted by
Style Pass
2024-05-18 23:30:03

Join us in returning to NYC on June 5th to collaborate with executive leaders in exploring comprehensive methods for auditing AI models regarding bias, performance, and ethical compliance across diverse organizations. Find out how you can attend here.

Earlier this week, the two co-leaders of OpenAI’s superalignment team — Ilya Sutskever, the company’s former chief scientist, and Jan Leike, a researcher — announced within hours of each other that they were resigning from the company.

This was notable not only because of their seniority at OpenAI (Sutskever was a co-founder), but because of what they were working on: superalignment refers to the development of systems and processes for controlling superintelligent AI models — ones that exceed human intelligence.

Following the departures of its two co-leads, OpenAI’s superalignment team has reportedly been disbanded, according to a new article from Wired (where my wife works as editor-in-chief).

Today, Leike took to his personal account on X to post a lengthy thread of messages excoriating OpenAI and its leadership for neglecting “safety” in favor of “shiny products.”
