Google’s new AI agent rewrites code to automate vulnerability fixes

Google DeepMind has unveiled a new AI agent designed to find security vulnerabilities in software code and fix them autonomously. The system, called CodeMender, has already contributed 72 security fixes to open source projects over the past six months.

Finding and patching vulnerabilities is a difficult, time-consuming process, even with the help of traditional automated techniques such as fuzzing. Google DeepMind's own AI-based projects, including Big Sleep and OSS-Fuzz, have proven effective at discovering zero-day vulnerabilities in well-tested code. However, this success creates a new bottleneck: as AI accelerates the discovery of flaws, the burden on human developers to repair them intensifies.

CodeMender is designed to address this imbalance. It operates as an autonomous AI agent that takes a comprehensive approach to code security. Its capabilities are both reactive, allowing it to patch newly discovered vulnerabilities immediately, and proactive, allowing it to rewrite existing code to eliminate entire classes of security flaws before they can be exploited. This frees human developers and project maintainers to devote more time to building features and improving software functionality.

The system leverages the advanced reasoning capabilities of Google's Gemini Deep Think models. This foundation allows the agent to debug and solve complex security problems with a high degree of autonomy. To achieve this, the system is equipped with a suite of tools that let it analyse code and reason about it before implementing any changes. CodeMender also includes a validation process to ensure that any modifications are correct and do not introduce new problems, known as regressions.

Although large language models are advancing quickly, mistakes in code security can have expensive consequences. That is why CodeMender's automated validation framework is essential. It systematically verifies that any proposed change addresses the root cause of the problem, is functionally correct, does not break existing tests, and adheres to the project's coding style. Only high-quality patches that meet these strict standards are surfaced for human review.

To enhance the agent's effectiveness, DeepMind has developed new techniques for it. CodeMender uses advanced program analysis, drawing on a toolkit that includes static and dynamic analysis, differential testing, fuzzing, and SMT solvers. These tools let the agent systematically examine code patterns, control flow, and data flow to identify the root causes of security and architectural weaknesses. A rough illustration of one of those techniques follows below.
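The sketch below shows only the core idea behind differential testing: run the original and patched versions of a routine on many generated inputs and flag any divergence in behaviour. The function names and input generator are hypothetical stand-ins, not CodeMender's actual tooling.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins for a routine before and after a patch. */
static int original_parse(const unsigned char *buf, size_t len) {
    return len > 0 ? buf[0] % 7 : -1;
}

static int patched_parse(const unsigned char *buf, size_t len) {
    if (buf == NULL || len == 0) return -1;   /* added guard */
    return buf[0] % 7;
}

int main(void) {
    unsigned char buf[64];
    srand(42);

    /* Feed both versions the same randomly generated inputs and
       report any behavioural divergence between them. */
    for (int i = 0; i < 100000; i++) {
        size_t len = (size_t)(rand() % (int)sizeof(buf));
        for (size_t j = 0; j < len; j++)
            buf[j] = (unsigned char)(rand() & 0xFF);

        int a = original_parse(buf, len);
        int b = patched_parse(buf, len);
        if (a != b) {
            printf("divergence at iteration %d (len=%zu): %d vs %d\n",
                   i, len, a, b);
            return 1;
        }
    }
    puts("no divergence observed");
    return 0;
}
```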

The system also uses a multi-agent architecture, in which specialised agents are deployed to tackle specific aspects of the problem. For example, a critique tool based on a large language model inspects the differences between the original and modified code. This allows the core agent to verify that proposed changes do not introduce unintended side effects, and to self-correct when necessary.

In one practical example, CodeMender addressed a vulnerability whose crash report pointed to a heap buffer overflow. Although the final fix required changing only a few lines of code, the root cause was not immediately obvious. Using its debugging and code-analysis tools, the agent determined that the real problem was incorrect stack management of XML elements during parsing, located elsewhere in the codebase. In another case, the agent devised a non-trivial patch for a complex object-lifetime issue, modifying a custom C code generation system within the target project.
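To make that class of bug concrete, here is a deliberately simplified, hypothetical C sketch of how mismanaging an element stack during XML-style parsing can overflow a heap buffer, and what a fix at the root cause (rather than at the crash site) looks like. It is not the code CodeMender actually patched.

```c
#include <stdio.h>
#include <stdlib.h>

#define MAX_DEPTH 16

/* Hypothetical parser state: a fixed-capacity stack of currently open
 * elements, backed by a heap allocation of MAX_DEPTH slots. */
struct parser {
    const char **open_elements;
    size_t depth;
};

/* Buggy push: no depth check, so deeply nested input writes past the end
 * of the heap buffer -- and the crash may surface far from this line. */
static void push_element_buggy(struct parser *p, const char *name) {
    p->open_elements[p->depth++] = name;
}

/* Fixed push: refuses to grow past capacity, addressing the root cause
 * (bad stack management) rather than the symptom (the overflow). */
static int push_element_fixed(struct parser *p, const char *name) {
    if (p->depth >= MAX_DEPTH)
        return -1;                      /* reject over-deep nesting */
    p->open_elements[p->depth++] = name;
    return 0;
}

int main(void) {
    struct parser p = { calloc(MAX_DEPTH, sizeof(const char *)), 0 };

    (void)push_element_buggy;           /* shown only for contrast */

    /* Nesting 20 levels deep: the fixed version stops at MAX_DEPTH. */
    for (int i = 0; i < 20; i++)
        if (push_element_fixed(&p, "element") != 0)
            printf("rejected push at depth %zu\n", p.depth);

    free(p.open_elements);
    return 0;
}
```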

Beyond simply responding to existing bugs, CodeMender is designed to proactively harden software against future threats. The team deployed the agent to apply -fbounds-safety annotations to parts of libwebp, the widely used image compression library. These annotations direct the compiler to add bounds checks to the code, which can prevent an attacker from exploiting a buffer overflow to execute arbitrary code.
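For a rough idea of what such annotations look like, the sketch below uses Clang's experimental -fbounds-safety extension, where a __counted_by attribute ties a pointer to its length field so the compiler can insert run-time bounds checks. The struct and field names are invented for illustration and are not taken from libwebp.

```c
/* Minimal sketch of bounds-safety annotations under Clang's experimental
 * -fbounds-safety extension (it will not compile without that flag).
 * The struct and field names below are hypothetical, not libwebp's. */
#include <stddef.h>
#include <stdint.h>

struct pixel_row {
    size_t width;
    /* __counted_by tells the compiler that `pixels` points to exactly
     * `width` elements, so out-of-range accesses trap at run time
     * instead of silently overflowing the buffer. */
    uint8_t *__counted_by(width) pixels;
};

/* Each indexed access through the annotated pointer gets a
 * compiler-inserted bounds check; writing past `width` aborts the
 * program rather than handing an attacker a memory-corruption primitive. */
void fill_row(struct pixel_row *row, uint8_t value) {
    for (size_t i = 0; i < row->width; i++)
        row->pixels[i] = value;
}
```

Adopting annotations of this kind is largely a matter of threading length information through existing structs and function signatures: a repetitive, cross-cutting edit that is well suited to automation by an agent.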

This work is particularly relevant given that a heap buffer overflow in libwebp, tracked as CVE-2023-4863, was used by a threat actor in a zero-click iOS exploit several years ago. DeepMind notes that with these annotations in place, this specific vulnerability, along with most other buffer overflows in the annotated sections, would have been rendered unexploitable.

Proactively fixing code requires the AI agent to make sophisticated decisions. When applying the annotations, it can automatically resolve new compilation errors and test failures that arise from its own changes. If the validation tools detect that a modification has broken functionality, the agent self-corrects based on the feedback and tries a different solution.

Despite these promising early results, Google DeepMind is taking a careful and deliberate approach to deployment, with a strong focus on reliability. For now, every patch created by CodeMender is reviewed by human researchers before it is submitted to an open source project. The team is gradually increasing the rate of submissions to ensure high quality and is systematically incorporating feedback from the open source community.

Looking ahead, the researchers plan to reach out to maintainers of critical open source projects with CodeMender-generated patches. By iterating on community feedback, they ultimately hope to release CodeMender as a publicly available tool for all software developers.

DeepMind also intends to publish technical papers and reports in the coming months to share its techniques and results. The work represents the first steps in exploring the potential of AI agents to proactively fix code and improve software security for everyone.

See also: CAMIA privacy attack reveals what AI models memorise

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, including the Cyber Security Expo. Click here for more information.

AI News is supported by TechForge Media. Explore other upcoming events and webinars here.
