Committee to Oversee Critical Safety Measures in AI Model Development and Deployment

OpenAI has established the Safety and Security Committee (SSC) as an independent oversight body for the critical safety and security measures involved in the development and deployment of its AI models. Chaired by Zico Kolter, with prominent members including Adam D’Angelo and General Paul Nakasone, the SSC will review and provide recommendations on the safety assessments of major model releases, ensuring that any safety concerns are addressed before models are deployed. According to a post on openai.com, the committee will have the authority to delay model releases if necessary and will receive ongoing safety evaluations from OpenAI’s teams.

The SSC has outlined five key areas of focus: establishing independent governance, enhancing security measures, increasing transparency, collaborating with external organizations, and unifying safety frameworks. In terms of security, OpenAI aims to deepen its cybersecurity operations by expanding internal information segmentation and increasing staffing for around-the-clock monitoring. The committee is also exploring industry-wide collaborations, such as the creation of an Information Sharing and Analysis Center (ISAC) to enhance collective resilience against cyber threats.

OpenAI is committed to transparency and plans to share more details about its safety processes. This includes the ongoing publication of system cards for models like GPT-4o and o1-preview, which provide insights into safety evaluations and risk mitigation efforts. OpenAI also aims to collaborate with external organizations, including government agencies and third-party safety groups, to advance safety standards and explore independent testing of its systems. One key partnership is with Los Alamos National Laboratory to study the safe application of AI in bioscientific research.

Lastly, OpenAI is building an integrated safety framework for model development and monitoring to unify its safety practices across teams. As models become more advanced, this framework will evolve to handle increased risks, ensuring that all model launches meet clearly defined safety and security criteria. OpenAI is also restructuring its research, safety, and policy teams to strengthen collaboration, keeping safety considerations central to every stage of model development and deployment.

read more at openai.com