How our principles helped define AlphaFold’s release

Reflections and lessons on sharing one of our biggest breakthroughs with the world
Putting our mission of solving intelligence to advance science and benefit humanity into practice comes with crucial responsibilities. To help create a positive impact for society, we must proactively evaluate the ethical implications of our research and its applications. We also know that every new technology has the potential for harm, and we take long- and short-term risks seriously. We've built our foundations on pioneering responsibly from the outset – with a particular focus on responsible governance, research, and impact.

This starts with developing clear principles that help realise the benefits of artificial intelligence (AI) while mitigating its risks and potential negative outcomes. Pioneering responsibly is a collective effort, which is why we've contributed to many AI community standards, such as those developed by Google, the Partnership on AI, and the OECD (Organisation for Economic Co-operation and Development).

Our Operating Principles have come to define both our commitment to prioritising widespread benefit and the areas of research and applications we refuse to pursue. These principles have been at the heart of our decision making since DeepMind was founded, and they continue to be refined as the AI landscape changes and grows. They are designed for our role as a research-driven science company, and they are consistent with Google's AI Principles.
From principles to practice
Written principles are only part of the puzzle – how they're put into practice is key. For complex research being done at the frontiers of AI, this brings significant challenges: How can researchers predict potential benefits and harms that may occur in the distant future? How can we develop better ethical foresight from a wide range of perspectives? And what does it take to explore hard questions alongside scientific progress in real time to prevent negative consequences?

We've spent many years developing our skills and processes for responsible governance, research, and impact across DeepMind, from creating internal toolkits and publishing papers on sociotechnical issues to supporting efforts to increase deliberation and foresight across the AI field. To help empower DeepMind teams to pioneer responsibly and safeguard against harm, our interdisciplinary Institutional Review Committee (IRC) meets every two weeks to carefully evaluate projects, papers, and collaborations.

Pioneering responsibly is a collective muscle, and every project is an opportunity to strengthen our joint skills and understanding. We've carefully designed our review process to include rotating experts from a wide range of disciplines, with machine learning researchers, ethicists, and safety experts sitting alongside engineers, security experts, policy professionals, and more. These diverse voices regularly identify ways to expand the benefits of our technologies, suggest areas of research and applications to change or slow, and highlight projects where further external consultation is needed.

While we've made plenty of progress, many aspects of this work lie in uncharted territory. We won't get it right every time, and we're committed to continual learning and iteration. We hope that sharing our current process will be useful to others working on responsible AI, and that it encourages feedback as we continue to learn. That's why we've detailed reflections and lessons from one of our most complex and rewarding projects: AlphaFold. Our AlphaFold AI system solved the 50-year-old grand challenge of protein structure prediction – and we've been thrilled to see scientists using it to accelerate progress in fields such as sustainability, food security, drug discovery, and fundamental human biology since releasing it to the wider community last year.
Focusing on protein structure prediction
Our team of machine learning researchers, biologists, and engineers had long seen the protein folding problem as a remarkable and unique opportunity for AI learning systems to create significant impact. In this domain, there are standard measures of success or failure, and a clear boundary on what the AI system needs to do to help scientists in their work: predict the three-dimensional structure of a protein. And, as with many biological systems, protein folding is too complex for anyone to write the rules for how it works by hand. But an AI system might be able to learn those rules for itself.

Another important factor was the biennial assessment known as CASP (the Critical Assessment of protein Structure Prediction), founded by Professor John Moult and Professor Krzysztof Fidelis. With each gathering, CASP provides an exceptionally robust assessment of progress, requiring participants to predict structures that have only recently been discovered through experiments. The results are a great incentive for ambitious research and scientific excellence.
Understanding practical opportunities and risks
In preparing for the CASP assessment in 2020, we realised that AlphaFold showed great potential for solving the challenge at hand. We spent considerable time and effort analysing the practical implications, asking: How could AlphaFold accelerate biological research and applications? What might be the unintended consequences? And how could we share our progress in a responsible way?

This presented a wide range of opportunities and risks to consider, many of which were in areas where we didn't necessarily have strong expertise. So we sought external input from over 30 leaders across biology research, biosecurity, bioethics, human rights, and more, with a focus on diversity of expertise and background.

Several consistent themes emerged throughout these discussions:
- Balancing widespread benefit with the risk of harm. We started with a cautious mindset about the risk of accidental or deliberate harm, including how AlphaFold might interact with both future advances and existing technologies. Through our discussions with external experts, it became clearer that AlphaFold wouldn't make it meaningfully easier to cause harm with proteins, given the many practical barriers to doing so – but that future advances would need to be evaluated carefully. Many experts argued strongly that AlphaFold, being relevant to so many areas of scientific research, would deliver the greatest benefit through free and widespread access.
- Calibrated confidence measures are essential for responsible use. Experimental biologists explained how important it would be to understand and share well-calibrated confidence metrics for each part of AlphaFold's predictions. By signalling which of AlphaFold's predictions are likely to be accurate, users can estimate when they can trust a prediction and use it in their work – and when they should turn to alternative approaches in their research. We had initially considered omitting predictions for which AlphaFold had low confidence or high predictive uncertainty, but the external experts we consulted made the case for why it was especially important to retain these predictions in our release, and advised us on the most useful and transparent ways to present this information (see the sketch after this list for one way these per-residue confidence scores can be read programmatically).
- Equitable benefit could mean extra support for underfunded fields. We had many discussions about how to avoid inadvertently increasing disparities within the scientific community. For example, so-called neglected tropical diseases, which disproportionately affect the world's poorest regions, often receive less research funding than they should. We were strongly encouraged to prioritise hands-on support for, and to proactively seek out partnerships with, groups working in these areas.
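To make the confidence point concrete: the model files we release store AlphaFold's per-residue confidence score (pLDDT, on a 0–100 scale) in the B-factor column of the PDB file. Below is a minimal, illustrative Python sketch of reading those scores and flagging low-confidence regions; it assumes Biopython is installed, and the filename used is a hypothetical local download.

```python
# Illustrative sketch: reading AlphaFold's per-residue confidence (pLDDT).
# Relies on the convention that pLDDT (0-100) is stored in the B-factor
# column of AlphaFold PDB files; the filename below is hypothetical.
from Bio.PDB import PDBParser  # pip install biopython

def plddt_per_residue(pdb_path):
    """Return {residue_number: pLDDT} for the first chain of a model."""
    structure = PDBParser(QUIET=True).get_structure("model", pdb_path)
    chain = next(structure.get_chains())  # AlphaFold models have one chain
    scores = {}
    for residue in chain:
        # Every atom in a residue carries the same pLDDT value,
        # so reading it from the first atom is sufficient.
        first_atom = next(iter(residue))
        scores[residue.get_id()[1]] = first_atom.get_bfactor()
    return scores

scores = plddt_per_residue("AF-P69905-F1-model_v4.pdb")  # hypothetical file
very_low = [n for n, s in scores.items() if s < 50]
print(f"{len(very_low)} of {len(scores)} residues are below pLDDT 50")
```

The threshold shown follows the bands used by the database (for example, pLDDT below 50 is treated as very low confidence), which is the kind of signal users can apply to decide when a region of a prediction should be cross-checked with alternative methods.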
Establishing our release approach
Based on the input above, the IRC endorsed a set of AlphaFold releases to address multiple needs, including:
- Peer-reviewed publications and open source code, including two papers in Nature, accompanied by the open source code, enabling researchers to more easily implement and improve on AlphaFold. Soon after, we added a Google Colab allowing anyone to input a protein sequence and receive a predicted structure, as an alternative to running the open source code themselves.
- A major release of protein structure predictions in partnership with EMBL-EBI (EMBL's European Bioinformatics Institute), the established community leader. As a public institution, EMBL-EBI enables anyone to look up a protein's structure prediction as easily as running a Google search. The initial release included predicted shapes for every protein in the human body, and our latest update included predicted structures for nearly all catalogued proteins known to science. This totals over 200 million structures, all freely available on EMBL-EBI's website with open access licences, accompanied by support resources such as webinars on interpreting these structures (see the download sketch after this list for an example of fetching one programmatically).
- Building 3D visualisations into the database that prominently label high-confidence and low-confidence regions of a prediction and, more generally, aiming to be as clear as possible about AlphaFold's strengths and limitations in our documentation. We also designed the database to be as accessible as possible, for example, considering the needs of people with colour vision deficiency.
- Forming deeper partnerships with research groups working on underfunded areas, such as neglected diseases and topics critical to global health. These include DNDi (the Drugs for Neglected Diseases initiative), which is advancing research into Chagas disease and leishmaniasis, and the Centre for Enzyme Innovation, which is developing plastic-eating enzymes to help reduce plastic waste in the environment. Our growing public engagement teams continue to work on these partnerships to support more collaborations in the future.
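As a concrete illustration of the free, programmatic access mentioned above, the sketch below downloads a single predicted structure from the AlphaFold Protein Structure Database using only the Python standard library. It assumes the file-naming scheme used by the database's download links at the time of writing (the UniProt accession, fragment number, and model version in the URL may change over time).

```python
# Illustrative sketch: downloading one predicted structure from the
# AlphaFold Protein Structure Database. The URL scheme is an assumption
# based on the database's public download links and may change.
import urllib.request

uniprot_id = "P69905"  # example: human haemoglobin subunit alpha
filename = f"AF-{uniprot_id}-F1-model_v4.pdb"
url = f"https://alphafold.ebi.ac.uk/files/{filename}"

with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode()

with open(filename, "w") as f:
    f.write(pdb_text)

print(f"Saved {filename} ({len(pdb_text.splitlines())} PDB lines)")
```

A file fetched this way can then be inspected locally, for instance with the pLDDT-reading sketch shown earlier.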
How we're building on this work
Since our initial release, hundreds of thousands of people from over 190 countries have visited the AlphaFold database and used AlphaFold's open source code. We've been honoured to hear of ways in which AlphaFold's predictions are accelerating important scientific efforts, and we're working to tell some of these stories through our Unfolded project. To date, we're not aware of any misuse or harm related to AlphaFold, though we continue to pay close attention to this.

While AlphaFold was more complex than most DeepMind research projects, we're taking elements of what we've learned and incorporating them into other releases.

We're building on this work by:
- Increasing the range of input from external experts at each stage of the process, and exploring mechanisms for participatory ethics at greater scale.
- Widening our understanding of AI for biology in general, beyond any individual project or breakthrough, to develop a stronger view of the opportunities and risks over time.
- Finding ways to expand our partnerships with groups in fields underserved by current structures.
Just like our research, this is a process of continual learning. The development of AI for widespread benefit is a community effort that extends far beyond DeepMind.

We're making every effort to be mindful of how much hard work there is still to do in partnership with others – and how we pioneer responsibly going forward.
14 September 2022