Anthropic and OpenAI Team Up With U.S. AI Safety Institute to Secure Future of AI

The collaboration between the U.S. Artificial Intelligence Safety Institute and leading AI companies Anthropic and OpenAI is a promising development for AI safety research, reflecting a growing recognition that safety should be a top priority in the development of advanced AI systems.

The U.S. Artificial Intelligence Safety Institute, part of the U.S. Department of Commerce's National Institute of Standards and Technology (NIST), has announced partnerships with two prominent AI companies, Anthropic and OpenAI. Formalized through Memorandums of Understanding (MOUs), the agreements give the institute access to each company's latest AI models both before and after their public release, enabling comprehensive evaluations of the models' safety.

Under these agreements, the institute will pursue joint research with Anthropic and OpenAI aimed at assessing AI capabilities and identifying potential safety risks, work that is expected to strengthen methods for mitigating the risks of advanced AI systems. Elizabeth Kelly, director of the U.S. AI Safety Institute, underscored that safety is essential to technological progress, voiced enthusiasm for the upcoming technical collaborations, and called the agreements an important milestone in the institute's ongoing effort to ensure responsible AI development.

Beyond these evaluations, the U.S. AI Safety Institute plans to offer both Anthropic and OpenAI feedback on potential safety improvements to their models. It will carry out this work in close cooperation with the U.K. AI Safety Institute, as part of a broader international effort to foster safe and trustworthy advances in AI technology.

The institute's work builds on NIST's long history of advancing measurement science, technology, and standards. These collaborative evaluations will support NIST's broader AI efforts under the Biden-Harris administration's Executive Order on AI, which aims to enable the development of safe, secure, and trustworthy AI systems, and they extend the voluntary commitments that leading AI developers have made to the administration on responsible development.

According to Reuters, California legislators passed a contentious AI safety bill on Wednesday. The bill now goes to Governor Gavin Newsom, who has until September 30 to sign or veto it. It would require safety testing and other safeguards for AI models that exceed specified cost or computing-power thresholds, requirements that some tech companies argue could slow innovation.

