Looking ahead to the AI Seoul Summit
How the summits in Seoul, France and beyond can galvanize international cooperation on frontier AI safety
Last year, the UK government hosted the first major global summit on frontier AI safety at Bletchley Park. It focused the world's attention on rapid progress at the frontier of AI development and delivered concrete international action to respond to potential future risks, including the Bletchley Declaration; new AI Safety Institutes; and the International Scientific Report on Advanced AI Safety.
Six months on from Bletchley, the international community has an opportunity to build on that momentum and galvanize further global cooperation at this week's AI Seoul Summit. Below, we share some thoughts on how the summit, and future ones, can drive progress towards a common, global approach to frontier AI safety.
AI capabilities have continued to advance at a rapid pace
Since Bletchley, there has been strong innovation and progress across the entire field, including from Google DeepMind. AI continues to drive breakthroughs in critical scientific domains, with our new AlphaFold 3 model predicting the structure and interactions of life's molecules with unprecedented accuracy. This work will help transform our understanding of the biological world and accelerate drug discovery. At the same time, our Gemini family of models has already made products used by billions of people around the world more useful and accessible. We have also been working to improve how our models perceive, reason and interact, and recently shared our progress in building the future of AI assistants with Project Astra.
This progress in AI capabilities promises to improve many people's lives, but it also raises novel questions that need to be tackled collaboratively in a number of key safety domains. Google DeepMind works to identify and address these challenges through pioneering safety research. In the past few months alone, we have shared our evolving approach to developing a holistic set of safety and responsibility evaluations for our advanced models, including early research evaluating critical capabilities such as deception, cyber-security, self-proliferation, and self-reasoning. We also released an in-depth exploration into aligning future advanced AI assistants with human values and interests. Beyond LLMs, we recently shared our approach to biosecurity for AlphaFold 3.
This work is driven by our conviction that we need to innovate on safety and governance as fast as we innovate on capabilities, and that both must be done in tandem, continuously informing and strengthening each other.
Building international consensus on frontier AI risks
Maximizing the benefits of advanced AI systems requires building international consensus on critical frontier safety issues, including anticipating and preparing for novel risks beyond those posed by present-day models. However, given the high degree of uncertainty about these potential future risks, there is clear demand from policymakers for an independent, scientifically grounded perspective.
That's why the launch of the new interim International Scientific Report on the Safety of Advanced AI is an important component of the AI Seoul Summit, and we look forward to submitting evidence from our research later this year. Over time, this type of effort could become a central input to the summit process and, if successful, we believe it should be given a more permanent status, loosely modeled on the function of the Intergovernmental Panel on Climate Change. This would be a vital contribution to the evidence base that policymakers around the world need to inform international action.
We believe these summits can provide a regular forum dedicated to building international consensus and a common, coordinated approach. Keeping a unique focus on frontier safety will also ensure these convenings are complementary to, rather than duplicative of, other international governance efforts.
Establishing best practices in evaluations and a coherent governance framework
Evaluations are a critical component needed to inform AI governance decisions. They enable us to measure the capabilities, behavior and impact of an AI system, and are an important input for risk assessments and designing appropriate mitigations. However, the science of frontier AI safety evaluations is still early in its development.
This is why the Frontier Model Forum (FMF), which Google launched with other leading AI labs, is engaging with AI Safety Institutes in the US and UK, as well as other stakeholders, on best practices for evaluating frontier models. The AI summits could help scale this work internationally and help avoid a patchwork of national testing and governance regimes that are duplicative or in conflict with one another. It's important to avoid fragmentation that could inadvertently harm safety or innovation.
The US and UK AI Safety Institutes have already agreed to build a common approach to safety testing, an important first step toward greater coordination. We believe there is an opportunity over time to build on this towards a common global approach. An initial priority for the Seoul Summit could be agreeing a roadmap for a wide range of actors to collaborate on developing and standardizing frontier AI evaluation benchmarks and approaches.
It will also be important to develop shared frameworks for risk management. To contribute to these discussions, we recently introduced the first version of our Frontier Safety Framework, a set of protocols for proactively identifying future AI capabilities that could cause severe harm, and for putting in place mechanisms to detect and mitigate them. We expect the Framework to evolve significantly as we learn from its implementation, deepen our understanding of AI risks and evaluations, and collaborate with industry, academia and government. Over time, we hope that sharing our approaches will facilitate work with others to agree on standards and best practices for evaluating the safety of future generations of AI models.
Towards a global approach to frontier AI safety
Many of the potential risks that could arise from progress at the frontier of AI are global in nature. As we head into the AI Seoul Summit, and look ahead to future summits in France and beyond, we're excited for the opportunity to advance global cooperation on frontier AI safety. We hope these summits will provide a dedicated forum for progress towards a common global approach. Getting this right is a critical step towards unlocking the tremendous benefits of AI for society.


