Industry Voices Concern Over OpenAI’s Move to Disband Long-Term Risk Team
OpenAI has disbanded its Superalignment team, which focused on the long-term risks of AI, less than a year after forming it. The move follows the recent departures of team leaders Ilya Sutskever and Jan Leike, who said the company’s safety culture had taken a backseat to product development. According to a report on cnbc.com, the Superalignment team aimed to achieve breakthroughs in steering and controlling advanced AI systems, with OpenAI initially committing 20% of its computing power to the effort over four years.
The dissolution of the team was confirmed by a source, who said that some members are being reassigned within the company. Leike, in a detailed post on social media, cited disagreements with OpenAI leadership over the company’s core priorities, emphasizing the need for a stronger focus on safety and societal impact. He noted that his team had been struggling for resources and faced growing obstacles to its crucial research, which ultimately brought things to a breaking point.
The departures of Sutskever and Leike, both prominent figures in AI research, underscore a broader conflict within OpenAI over the balance between innovation and safety. The leadership crisis late last year, which involved the temporary ousting of CEO Sam Altman, highlighted internal disagreements about the direction of the company. While some leaders, like Sutskever, focused on ensuring AI safety, others prioritized rapid technological advancement.
Despite these internal struggles, OpenAI continues to push forward with new developments, recently launching an updated AI model and a desktop version of ChatGPT. The new model, GPT-4o, promises faster performance and improved capabilities in text, video, and audio, reflecting the company’s ongoing commitment to expanding the use of its AI technologies. However, the disbanding of the Superalignment team raises questions about how OpenAI will address the long-term risks associated with increasingly powerful AI systems.
Read more at cnbc.com