
Evaluating potential cybersecurity threats of advanced AI

Artificial intelligence (AI) has long been a cornerstone of cybersecurity. From malware detection to network traffic analysis, machine learning models and other AI applications have been used in cybersecurity for decades. As we approach artificial general intelligence (AGI), AI's ability to automate defenses and fix vulnerabilities will only become more powerful.

But to harness such benefits, we must also understand and mitigate the risks of increasingly advanced AI being misused to enable or enhance cyberattacks. Our new framework for evaluating the emerging offensive cyber capabilities of AI helps us do exactly that. It is the most comprehensive evaluation of its kind to date: it covers every phase of the cyberattack chain, addresses a wide range of threat types, and is grounded in real-world data.

Our framework enables cybersecurity experts to identify which defenses are necessary, and how to prioritize them, before malicious actors can exploit AI to carry out sophisticated cyberattacks.

Building a comprehensive benchmark

Our updated Frontier Safety Framework recognizes that advanced AI models could automate and accelerate cyberattacks, potentially lowering costs for attackers. This, in turn, raises the risk of attacks being carried out at greater scale.

To stay ahead of the emerging threat of AI-assisted cyberattacks, we adapted tried-and-tested cybersecurity evaluation frameworks, such as MITRE ATT&CK. These frameworks enabled us to evaluate threats across the end-to-end cyberattack chain, from reconnaissance to actions on objectives, and across a range of possible attack scenarios. However, these established frameworks were not designed to account for attackers using AI to breach a system. Our approach closes this gap by proactively identifying where AI could make attacks faster, cheaper, or easier, for example by enabling fully automated cyberattacks.

We analyzed over 12,000 real-world attempts to use AI in cyberattacks across 20 countries, drawing on data from Google's Threat Intelligence Group. This helped us identify common patterns in how these attacks unfold. From these, we curated a list of seven archetypal attack categories, including phishing, malware, and denial-of-service attacks, and identified critical bottleneck stages along the cyberattack chain where AI could significantly disrupt the traditional costs of an attack. By focusing evaluations on these bottlenecks, defenders can prioritize their security resources more effectively.
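The bottleneck idea above can be illustrated with a small sketch: if defenders estimate the effort each attack-chain stage costs an attacker with and without AI assistance, the stages with the largest cost reduction are the bottlenecks worth prioritizing. The stage names and cost figures below are illustrative assumptions, not data from our analysis.

```python
# Hypothetical per-stage attacker effort (arbitrary units), without and
# with AI assistance. All numbers are made up for illustration.
STAGE_COSTS = {
    "reconnaissance":        {"baseline": 40, "ai_assisted": 10},
    "weaponization":         {"baseline": 60, "ai_assisted": 30},
    "delivery":              {"baseline": 20, "ai_assisted": 15},
    "exploitation":          {"baseline": 80, "ai_assisted": 50},
    "evasion":               {"baseline": 70, "ai_assisted": 25},
    "persistence":           {"baseline": 50, "ai_assisted": 20},
    "actions_on_objectives": {"baseline": 30, "ai_assisted": 25},
}

def rank_bottlenecks(stage_costs):
    """Rank stages by how much AI assistance reduces attacker cost.

    Stages with the largest reduction are the "bottlenecks" where focused
    evaluation and mitigation give defenders the most value.
    """
    reductions = {
        stage: costs["baseline"] - costs["ai_assisted"]
        for stage, costs in stage_costs.items()
    }
    return sorted(reductions.items(), key=lambda item: item[1], reverse=True)

for stage, saved in rank_bottlenecks(STAGE_COSTS):
    print(f"{stage}: cost reduced by {saved}")
```

With these toy numbers, evasion shows the biggest cost drop, so it would be ranked first for defensive attention.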

Finally, we created an offensive cyber capability benchmark to comprehensively assess the cybersecurity strengths and weaknesses of frontier AI models. Our benchmark consists of 50 challenges that cover the entire attack chain, including areas like intelligence gathering, vulnerability exploitation, and malware development. Our goal is to give defenders the ability to develop targeted mitigations and simulate AI-powered attacks as part of red-teaming exercises.
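One way to organize such a challenge suite is to tag each challenge with the attack-chain phase it probes, so coverage across the chain can be checked mechanically. The sketch below is a hypothetical structure with made-up sample entries; the actual 50 challenges are not reproduced here.

```python
# Hypothetical sketch of a benchmark challenge registry, tagged by
# attack-chain phase. Fields and entries are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Challenge:
    name: str
    phase: str          # attack-chain phase the challenge probes
    description: str

CHALLENGES = [
    Challenge("subdomain_enum", "intelligence_gathering",
              "Enumerate subdomains of a mock target from public records."),
    Challenge("sqli_login_bypass", "vulnerability_exploitation",
              "Bypass authentication on a deliberately vulnerable web app."),
    Challenge("obfuscated_dropper", "malware_development",
              "Produce a benign stand-in payload that evades a toy scanner."),
]

def coverage_by_phase(challenges):
    """Count challenges per phase, to verify the suite spans the chain."""
    return Counter(c.phase for c in challenges)

print(coverage_by_phase(CHALLENGES))
```

A coverage check like this makes gaps visible: a phase with zero challenges is a blind spot in the evaluation.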

Insights from early evaluations

Our initial evaluations using this benchmark suggest that current AI models are unlikely to enable breakthrough capabilities for threat actors. However, as frontier AI becomes more advanced, the types of cyberattacks that are possible will evolve, requiring ongoing improvements in defense strategies.

We also found that existing AI cybersecurity evaluations often overlook major aspects of cyberattacks, such as evasion, where attackers hide their presence, and persistence, where they maintain long-term access to a compromised system. Yet such areas are precisely where AI-assisted methods could prove particularly effective. Our framework shines a light on this issue by examining how AI may lower the barriers to success in these parts of an attack.

Empowering the cybersecurity community

As AI systems continue to scale, their ability to automate and enhance cybersecurity has the potential to transform how defenders anticipate and respond to threats.

Our cybersecurity evaluation framework is designed to support that shift by offering a clear view of how AI may be misused, and where current cyber protections may fall short. By highlighting these emerging risks, the framework and benchmark will help cybersecurity teams strengthen their defenses and stay ahead of fast-evolving threats.

2025-04-02 13:30:00
