Leading Tech Companies Collaborate to Ensure Safe and Responsible Development of Frontier AI Models


Anthropic, Google, Microsoft, and OpenAI are joining forces to launch the Frontier Model Forum, a new industry body dedicated to ensuring the safe and responsible development of frontier AI models. The Forum aims to draw on the expertise of its member companies to benefit the entire AI ecosystem, including by advancing technical evaluations and benchmarks and by establishing industry best practices and standards.

What are Frontier Models?

The Forum defines frontier models as large-scale machine-learning models that exceed the capabilities of today's most advanced existing models and can perform a wide variety of tasks.

Membership Criteria

Membership is open to organizations that:

  • Develop or use frontier AI models
  • Have a commitment to the safe and responsible development and deployment of AI
  • Are willing to collaborate and share knowledge within the industry

Interested organizations that meet these criteria are encouraged to join the effort to ensure the safe and responsible development of frontier AI models.

Addressing AI Safety and Responsibility

The importance of AI safety and responsibility is widely recognized by governments and industry leaders around the world. The US and UK governments, the European Union, the OECD, and the G7 have already made significant contributions to these efforts. However, further work is needed to establish safety standards and evaluations for frontier AI models. The Frontier Model Forum will serve as a platform for cross-organizational discussions and actions on AI safety and responsibility.

Key Focus Areas

The Forum will prioritize three key areas in the coming year to support the safe and responsible development of frontier AI models:

  1. Identifying best practices: The Forum will promote knowledge sharing and best practices among industry, governments, civil society, and academia. Safety standards and practices will be a primary focus to mitigate a wide range of potential risks.
  2. Advancing AI safety research: The Forum will support the AI safety ecosystem by identifying key research questions and coordinating research efforts to address them. Areas of focus include adversarial robustness, interpretability, scalable oversight, independent research access, and anomaly detection. A public library of technical evaluations and benchmarks for frontier AI models will also be developed and shared.
  3. Facilitating information sharing: Trusted and secure mechanisms will be established to share information among companies, governments, and relevant stakeholders regarding AI safety and risks. The Forum will adhere to responsible disclosure practices derived from areas such as cybersecurity.

Industry Leaders’ Perspectives

Leaders from the founding companies of the Frontier Model Forum expressed their commitment to responsible AI development:

“We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. We’re all going to need to work together to make sure AI benefits everyone.” – Kent Walker, President, Global Affairs, Google & Alphabet

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.” – Brad Smith, Vice Chair & President, Microsoft

“It is vital that AI companies—especially those working on the most powerful models—align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.” – Anna Makanju, Vice President of Global Affairs, OpenAI

“The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety. We are excited to collaborate with industry, civil society, government, and academia to promote safe and responsible development of the technology.” – Dario Amodei, CEO, Anthropic

The Frontier Model Forum plans to establish an Advisory Board to guide its strategy and priorities, with representation from diverse backgrounds and perspectives. Institutional arrangements, including a charter, governance, and funding, will also be established. Consultations with civil society and governments are planned to ensure meaningful collaboration.


The Forum aims to support existing government and multilateral initiatives, such as the G7 Hiroshima process, the OECD’s work on AI risks, standards, and social impact, and the US-EU Trade and Technology Council. Collaboration with other industry, civil society, and research efforts, including the Partnership on AI and MLCommons, will also be explored.

About Author

Teacher, programmer, AI advocate, fan of One Piece, and someone who pretends to know how to cook. Michael graduated in Computer Science, and in 2019 and 2020 he worked on several projects coordinated by the municipal education department focused on introducing public-school students to the world of programming and robotics. Today he is a writer at Wicked Sciences, but he says his heart will always belong to Python.