
Competitive programming with AlphaCode

Research

Published

8 December 2022

Authors

AlphaCode Team

Note: This blog was first published on 2 February 2022. Following the paper's publication in Science on 8 December 2022, we've made minor updates to the text to reflect this.

Solving novel problems and setting a new milestone in competitive programming

Creating solutions to unforeseen problems is second nature in human intelligence – a result of critical thinking informed by experience. The machine learning community has made tremendous progress in generating and understanding textual data, but advances in problem solving remain limited to relatively simple maths and programming problems, or to retrieving and copying existing solutions.

As part of DeepMind's mission to solve intelligence, we created a system called AlphaCode that writes computer programs at a competitive level. AlphaCode achieved an estimated rank within the top 54% of participants in programming competitions by solving new problems that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding.

Published on the cover of Science, our AlphaCode paper details how AlphaCode uses transformer-based language models to generate code at an unprecedented scale, and then smartly filters to a small set of promising programs.

We validated our performance using competitions hosted on Codeforces, a popular platform that runs regular contests attracting tens of thousands of participants from around the world who come to test their coding skills. We selected for evaluation 10 recent contests, each newer than our training data. AlphaCode placed at about the level of the median competitor, marking the first time an AI code generation system has reached a competitive level of performance in programming competitions.

To help others build on our results, we released our dataset of competitive programming problems and solutions on GitHub, including extensive tests to ensure that programs passing these tests are correct – a critical feature current datasets lack. We hope this benchmark will lead to further innovations in problem solving and code generation.
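To make the role of those tests concrete, here is a minimal sketch of how a candidate program could be judged against a problem's input/output test cases. The harness below is purely illustrative and is not the released dataset's actual tooling; the toy problem and helper name are hypothetical.

```python
import subprocess

def passes_all_tests(solution_src: str, tests: list[tuple[str, str]]) -> bool:
    """Run a candidate Python solution on every test input and
    compare its stdout with the expected output."""
    for test_input, expected in tests:
        try:
            result = subprocess.run(
                ["python3", "-c", solution_src],
                input=test_input,
                capture_output=True,
                text=True,
                timeout=10,  # competition problems enforce strict time limits
            )
        except subprocess.TimeoutExpired:
            return False  # time limit exceeded counts as a failure
        if result.returncode != 0:
            return False  # runtime error
        if result.stdout.strip() != expected.strip():
            return False  # wrong answer on this test case
    return True

# Hypothetical toy problem: read two integers, print their sum.
tests = [("1 2\n", "3\n"), ("10 -4\n", "6\n")]
candidate = "a, b = map(int, input().split()); print(a + b)"
print(passes_all_tests(candidate, tests))  # -> True
```

A judge like this only certifies correctness to the extent the tests are comprehensive, which is why the dataset's extensive hidden tests matter.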

This problem is from Codeforces, and the solution was generated by AlphaCode.

Competitive programming is a popular and challenging activity. Hundreds of thousands of programmers participate in coding competitions to gain experience and showcase their skills in fun and collaborative ways. During competitions, participants receive a series of long problem descriptions and a few hours to write programs to solve them.

Typical problems include finding ways to place roads and buildings within certain constraints, or creating strategies to win custom board games. Participants are then ranked mainly based on how many problems they solve. Companies use these competitions as recruiting tools, and similar types of problems are common in hiring processes for software engineers.

I can safely say the results of AlphaCode exceeded my expectations. I was sceptical because even in simple competitive problems it is often required not only to implement the algorithm, but also (and this is the hardest part) to invent it. AlphaCode managed to perform at the level of a promising new competitor. I can't wait to see what lies ahead!

Mike Mirzayanov, founder of Codeforces

The problem-solving abilities required to excel at these competitions are beyond the capabilities of existing AI systems. However, by combining advances in large-scale transformer models (which have recently shown promising abilities to generate code) with large-scale sampling and filtering, we've made significant progress in the number of problems we can solve. We pre-train our model on selected public GitHub code and fine-tune it on our relatively small competitive programming dataset.
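As a rough illustration of what sampling from such a model looks like in practice, the sketch below draws many independent high-temperature samples from a generic causal language model via the Hugging Face transformers API. The stand-in model ("gpt2"), the prompt, and the decoding settings are assumptions for the example only and do not reflect AlphaCode's actual architecture, training, or scale.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in model for illustration; AlphaCode's own models are far larger.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "# Problem statement goes here.\n# Write a solution in Python:\n"
inputs = tokenizer(prompt, return_tensors="pt")

# Draw many diverse candidates per problem via high-temperature sampling;
# AlphaCode samples orders of magnitude more than shown here.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.9,
    max_new_tokens=128,
    num_return_sequences=16,
    pad_token_id=tokenizer.eos_token_id,
)
candidates = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
```

Diversity across samples is the point: with enough distinct attempts, at least a few may implement a correct algorithm, which the filtering stage described next can then pick out.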

At evaluation time, we create a massive number of C++ and Python programs for each problem, orders of magnitude more than previous work. We then filter, cluster, and rerank these solutions down to a small set of 10 candidate programs that we submit for external assessment. This automated system replaces competitors' trial-and-error process of debugging, compiling, passing tests, and eventually submitting.
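A minimal sketch of that filter-then-cluster selection, under stated assumptions, might look like the following. The helpers run_program, example_tests, and extra_inputs are hypothetical stand-ins for executing a sampled program, the problem's public example tests, and additionally generated inputs; the real pipeline is considerably more involved.

```python
from collections import defaultdict

def select_candidates(samples, example_tests, extra_inputs, run_program, k=10):
    """Filter sampled programs on the example tests, then cluster
    behaviourally equivalent survivors and take one per cluster."""
    # 1. Filtering: keep only programs that pass the public example tests.
    survivors = [
        s for s in samples
        if all(run_program(s, inp) == out for inp, out in example_tests)
    ]

    # 2. Clustering: programs that produce identical outputs on the extra
    #    inputs are grouped as likely semantically equivalent.
    clusters = defaultdict(list)
    for s in survivors:
        signature = tuple(run_program(s, inp) for inp in extra_inputs)
        clusters[signature].append(s)

    # 3. Selection: take one representative from each of the k largest
    #    clusters, matching the 10-candidate budget described above.
    ranked = sorted(clusters.values(), key=len, reverse=True)
    return [cluster[0] for cluster in ranked[:k]]
```

Picking from the largest clusters reflects the intuition that many independently sampled programs agreeing on unseen inputs is weak evidence they are correct.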

With Codeforces' permission, we evaluated AlphaCode by simulating participation in 10 recent contests. The impressive work of the competitive programming community has created a domain where problems cannot be solved through shortcuts such as duplicating solutions seen before or trying out every potentially related algorithm. Instead, our model must create novel and interesting solutions.

Overall, AlphaCode placed at approximately the level of the median competitor. Although far from winning competitions, this result represents a substantial leap in AI problem-solving capabilities, and we hope our results will inspire the competitive programming community.

Solving competitive programming problems is a really hard thing to do, requiring both good coding skills and problem-solving creativity in humans. I was very impressed that AlphaCode could make progress in this area, and excited to see how the model uses its statement understanding to produce code and guide its random exploration to create solutions.

Petr Mitrichev, Software Engineer and Competitive Programmer

For AI to help humanity, our systems need to be able to develop problem-solving capabilities. AlphaCode ranked within the top 54% in real-world programming competitions, an advancement that demonstrates the potential of deep learning models for tasks requiring critical thinking. These models elegantly leverage modern machine learning to express solutions to problems as code, circling back to the symbolic reasoning roots of AI from decades ago. And this is only the beginning.

Our exploration into code generation leaves vast room for improvement and hints at even more exciting ideas that could help programmers improve their productivity and open up the field to people who do not currently write code. We will continue this exploration, and hope that further research will result in tools to enhance programming and bring us closer to a problem-solving AI.

See AlphaCode's solutions and explore the model at alphacode.deepmind.com
