Top AI Companies Launch New Industry Group to Promote Safe AI Development
July 31, 2023
The world’s top artificial intelligence companies are launching a new industry body to work with policymakers and researchers on ways to regulate the development of AI. Google, Microsoft, OpenAI and Anthropic announced the Frontier Model Forum on July 26, 2023. The companies said the forum’s mission would be to develop best practices for AI safety, promote research into AI risks, and share information publicly with governments and civil society.

News of the forum comes after the four AI firms, along with Amazon, Meta and Inflection, pledged to the Biden administration that they would subject their AI systems to third-party testing and clearly label AI-generated content before releasing those systems to the public. The announcement also comes a day after AI experts warned lawmakers of potentially serious, even “catastrophic,” societal risks stemming from unrestrained AI development, particularly in cybersecurity, nuclear technology, chemistry and biology.
Wednesday’s announcement reflects how AI developers are coalescing around voluntary guardrails for the technology ahead of an expected push this fall by U.S. and European Union lawmakers to craft binding legislation. Voluntary commitments, however, are not enforceable, which is why it is vital that Congress, together with the White House, promptly craft legislation addressing the wide range of risks posed by generative AI. Much of the drive rests with Senate Majority Leader Chuck Schumer, who has prioritized getting members up to speed on the basics of the industry over the summer.