OpenAI announced that its new model aims to achieve 'next-level capabilities' on the path toward artificial general intelligence (AGI), and is expected to power chatbots, digital assistants, search engines, and image generators. (Source: Image by RR)

Future AI Development Focuses on Comprehensive Safety and Security

OpenAI recently announced that it has started training a new flagship AI model to succeed GPT-4, which currently powers ChatGPT. The San Francisco-based company expects the new model to bring enhanced capabilities as it works toward artificial general intelligence (AGI), a system able to perform tasks as well as humans can. Additionally, OpenAI is establishing a new Safety and Security Committee to address the risks associated with its advanced AI technologies.

OpenAI is striving to advance AI technology faster than its competitors while addressing concerns about the potential dangers of AI, such as the spread of disinformation, job displacement, and existential threats to humanity. As reported by nytimes.com, the company released GPT-4 in March 2023, and it now powers various applications such as chatbots, search engines, and image generators. Recently, an updated version, GPT-4o, introduced capabilities like image generation and voice interaction, though it has faced controversy over the alleged unauthorized use of actress Scarlett Johansson's voice.

Training AI models involves analyzing vast amounts of digital data, a process that can take months or even years, followed by additional testing and fine-tuning. Consequently, OpenAI's new model may not be available for another nine months to a year or more. The newly formed Safety and Security Committee, which includes CEO Sam Altman and other board members, will develop policies and processes to ensure the technology's safety, with plans to implement them by late summer or fall.

The recent departure of co-founder Ilya Sutskever, who led OpenAI's Superalignment team focused on AI safety, has raised concerns about the company's commitment to addressing AI risks. His resignation, along with that of team co-lead Jan Leike, leaves the team's future uncertain. OpenAI's long-term safety research is now folded into its broader safety efforts, led by co-founder John Schulman. The new Safety and Security Committee will oversee Schulman's work and guide the company in managing technological risks.

Read more at nytimes.com