"Breaking: OpenAI Dissolves High-Profile Safety Team After Chief Scientist Sutskever’s Exit – Shocking News!"
Stay informed on the latest developments at OpenAI! The dissolution of the high-profile safety team following Chief Scientist Sutskever’s exit has stirred controversy in the tech community. Join us for full coverage of this shocking news, offering insights into the reasons behind the decision and its potential implications.
OpenAI Disbands High-Profile Safety Team After Chief Scientist Sutskever's Departure
OpenAI is now integrating its superalignment group more deeply into its broader research efforts to meet its safety objectives.
OpenAI has disbanded a team focused on ensuring the safety of future advanced AI systems after the departure of the group's two leaders, including co-founder and chief scientist Ilya Sutskever.
Instead of keeping the superalignment team as a separate entity, OpenAI is now merging the group more deeply into its research projects to meet its safety goals, the company told Bloomberg News. The team was established less than a year ago under the leadership of Sutskever and Jan Leike, another veteran of OpenAI.
The decision to reorganize the team follows a series of recent departures from OpenAI, raising fresh questions about how the company balances speed and safety in developing its AI products. Sutskever, a widely respected researcher, announced on Tuesday that he was leaving OpenAI; he had previously clashed with CEO Sam Altman over the pace of AI development.
Leike announced his resignation with a brief post on social media shortly after Sutskever's departure. "I resigned," he wrote. For Leike, Sutskever's exit was the final straw after disagreements with the company, according to a person familiar with the situation who requested anonymity to discuss private conversations.
In a statement on Friday, Leike said the superalignment team had been struggling for resources. "Over the past few months, my team has been sailing against the wind," Leike wrote on X. "Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done."
Hours later, Altman replied to Leike's post. "He's right, we have a lot more to do," Altman wrote on X. "We are committed to doing it."
Other members of the superalignment team have also left OpenAI in recent months. Leopold Aschenbrenner and Pavel Izmailov were let go by the company. The Information earlier reported their departures. Izmailov had been moved off the team before his exit, according to a person familiar with the matter. Aschenbrenner and Izmailov did not respond to requests for comment.
John Schulman, a co-founder focused on large language models, will now be the scientific lead for OpenAI's alignment work, the company said. In a separate blog post, OpenAI announced that Research Director Jakub Pachocki will take over Sutskever's role as chief scientist.
"I am very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone," Altman said in a statement on Tuesday about Pachocki's appointment. AGI, or artificial general intelligence, refers to AI that can perform as well or better than humans on most tasks. AGI doesn't yet exist, but creating it is part of the company's mission.
OpenAI also has employees working on AI safety across various teams, along with individual teams dedicated to safety. One such team, the preparedness team, was launched last October and focuses on analyzing and preventing potential "catastrophic risks" of AI systems.
The superalignment team's mandate was to head off the longest-term threats. OpenAI announced the team's formation last July, saying it would focus on controlling and ensuring the safety of future AI software smarter than humans, a key technological goal for the company. At the time, OpenAI said it would dedicate 20% of its computing power to the team's work.
In November, Sutskever was among several OpenAI board members who moved to fire Altman, sparking a whirlwind five days at the company. OpenAI President Greg Brockman resigned in protest, investors revolted, and nearly all of the company's roughly 770 employees signed a letter threatening to quit unless Altman was reinstated. In a surprising turn, Sutskever also signed the letter and expressed regret for his role in Altman's ouster. Altman was soon reinstated.
In the months after Altman's ouster and return, Sutskever largely disappeared from public view, prompting questions about his role at the company. He also stopped working from OpenAI's San Francisco office, according to a person familiar with the matter.
In his statement, Leike said he left after a series of disagreements over the company's "core priorities," which he felt did not place enough emphasis on preparing safety measures for AI that could become more capable than humans.
In a post earlier this week announcing his departure, Sutskever expressed confidence that OpenAI would develop AGI "that is both safe and beneficial" under its current leadership, including Altman.