
New Agreements with U.S. AI Safety Institute to Shape the Future of Responsible AI

OpenAI and Anthropic, two leading AI startups, have signed groundbreaking agreements with the U.S. government to collaborate on the research, testing, and evaluation of their AI models, according to an announcement by the U.S. Artificial Intelligence Safety Institute. As reported at theguardian.com, the agreements come at a critical time: both companies face increasing regulatory scrutiny over the safe and ethical use of AI technologies, particularly as California legislators prepare to vote on new regulations governing AI development and deployment.

Under the agreements, the U.S. AI Safety Institute will gain access to major new AI models from both OpenAI and Anthropic before and after their public release. This access is intended to enable thorough testing and evaluation of the models' capabilities and associated risks, helping ensure they meet safety and ethical standards before wide deployment. Jack Clark, Co-Founder and Head of Policy at Anthropic, emphasized the importance of the partnership in ensuring AI's positive impact, noting that rigorous testing is crucial to the technology's safe and trustworthy use.

The agreements also mark a significant step in defining U.S. leadership in the responsible development of AI. Jason Kwon, Chief Strategy Officer at OpenAI, expressed hope that this collaboration with the U.S. AI Safety Institute will set a framework for global AI development. The institute’s role in providing feedback on potential safety improvements further underscores the importance of this partnership in shaping the future of AI.

The U.S. AI Safety Institute, which is part of the National Institute of Standards and Technology (NIST) and was established under an executive order by President Joe Biden, will also work closely with the U.K. AI Safety Institute. This international collaboration aims to enhance the safety and reliability of AI technologies, reflecting a broader commitment to managing the risks associated with artificial intelligence on a global scale.

Read more at reuters.com