The State of AI Security in 2025: Key Insights from the Cisco Report

Since more companies adopt artificial intelligence, understanding their security risks has become more important than ever. Artificial intelligence reshapes industries and workflow, but also provides new security challenges that institutions must address. Protecting artificial intelligence systems is necessary to maintain confidence, protect privacy and ensure smooth commercial operations. This article summarizes the main visions of the “Security Agency from Artificial Intelligence in 2025” in Cisco. It offers a general overview of the location of artificial intelligence today and what companies should take into account in the future.
An increasing security threat to Amnesty International
If we know in 2024 anything, this is that the adoption of artificial intelligence moves faster than that many organizations can secure them. The CICO report states that about 72 % of institutions now use artificial intelligence in their business functions, however only 13 % feel that they are completely ready to increase their potential safely. This gap between adoption and preparation is largely driven by security concerns, which still is the main obstacle to AI’s use of the wider institution. What makes this situation more important is that artificial intelligence offers new types of threats that traditional cyber security methods are not fully equipped. Unlike traditional cybersecurity, which often protects fixed systems, artificial intelligence brings dynamic and adaptive threats that are difficult to predict. The report highlights many emerging threats that organizations should be aware of the following:
- Infrastructure attacks: Amnesty International’s infrastructure has become a major target for the attackers. A prominent example of a compromise of the NVIDIA container tools group, which allowed the attackers to reach file systems, operate harmful software instructions, and escalating privileges. Likewise, Ray, an Open source of the GPU, is at risk in one of the first artificial intelligence frameworks in the real world. These cases show how the weaknesses of the Amnesty International infrastructure can affect many users and systems.
- The risk of supply chain: The weaknesses of the artificial intelligence supply chain offer another great concern. About 60 % of organizations depend on open source AI components or ecosystems. This creates risks because the attackers can bargain with these tools widely used. The report mentions a technique called “Sleepy Pickle”, which allows opponents to tamper with artificial intelligence models even after distribution. This makes the detection very difficult.
- Prosecution attacks: New attack techniques are developing quickly. Methods such as rapid injection, penetration of protection, and the extraction of training data for attackers allow for safety controls and access to sensitive information in training data collections.
Attack tankers targeting artificial intelligence systems
The report highlights the appearance of attack carriers used by harmful actors to exploit weaknesses in artificial intelligence systems. These attacks can occur in various stages of the life of artificial intelligence cycle of data collection and typical training to publishing and inference. Often the goal is to make artificial intelligence act in unintended ways, leakage of private data, or implement harmful procedures.
In recent years, these attacks have become more advanced and more difficult to discover. The report sheds light on several types of attacks:
- Fracture of protection: This technique involves formulating reproductive claims that exceed the safety measures. Despite the improvements in artificial intelligence defenses, CISCO research has even showed that simple prison operations are still effective against advanced models such as Deepseek R1.
- Indirect immediate injection: Unlike direct attacks, the attack carrier includes the processing of input data or context that the artificial intelligence model indirectly uses. Attackers may provide source materials such as PDFs of malicious PDF files or web pages, causing artificial intelligence unintended or harmful outputs. These attacks are particularly dangerous because they do not require direct access to the artificial intelligence system, allowing attackers to exceed many traditional defenses.
- Extracting training and poisoning data: CISCO researchers have shown that Chatbots can be deceived in detecting parts of their training data. This raises serious concerns about the privacy of data, intellectual property, and compliance. The attackers can also poison training data by injecting malicious inputs. It is worrying that only 0.01 % of large data groups such as Laion-400M or Coyo-700M can affect the behavior of the model, and this can be done with a small budget (about $ 60 USD), which makes these attacks available to many bad actors.
The report highlights serious concerns about the current situation of these attacks, as researchers achieved a 100 % success rate against advanced models such as Deepseek R1 and Llama 2. This reveals security weaknesses and potential risks associated with their use. In addition, the report determines the emergence of new threats such as the sound fractures specially designed to target multimedia AI models.
The results of artificial intelligence security research from Cisco
The CISCO research team has evaluated various aspects of artificial intelligence safety and revealed many major results:
- Prison algorithms: The researchers have shown that even artificial intelligence models can be automatically deceived. Using a method called the attack tree with pruning (TAP), researchers have exceeded protection on GPT-4 and Llama 2.
- Risks in good refinement: Many companies are models to establish the importance of specific areas. However, the researchers found that precise control can weaken internal safety scales. The seized versions were three times more likely to break the protection, and more likely to produce harmful content than the original models.
- Training data extraction: CISCO researchers used a simple decomposition method to cheatbots to reproduce fragments of a news article that enables them to rebuild material sources. This raises risks to exposing sensitive or property data.
- Data poisoning: Data poisoning: The Cisco team explains how easy and inexpensive to poisoning web data sets widely. For about $ 60, researchers were able to poison 0.01 % of data groups such as Laion-400M or Coyo-700M. Moreover, they highlight that this level of poisoning is sufficient to cause noticeable changes in the behavior of the model.
The role of artificial intelligence in electronic crimes
Artificial intelligence is not just a goal – it has also become a tool for the e -criminal. The report indicates that automation and social engineering driven by artificial intelligence made the attacks more effective and difficult. From delicious fraud to sound cloning, artificial intelligence helps criminals create convincing and personal attacks. The report also determines the rise of malicious artificial intelligence tools such as “DarkGPT”, specifically designed to help Internet crimes by generating pure emails or exploiting security gaps. What makes these tools in particular is access to them. Even low -skilled criminals can now create very dedicated attacks that run away from traditional defenses.
Best practices to secure artificial intelligence
Given the volatile nature of artificial intelligence security, CISCO recommends several practical steps for institutions:
- Risk management via AI life cycle: It is important to determine and reduce risks at each stage of the life intelligence cycle of data sources and typical training to publishing and monitoring. This also includes securing third -party components, applying powerful handrails, and controlling access points tightly.
- Using the applicable cybersecurity practices: While artificial intelligence is unique, the best traditional cyber security practices are still necessary. Techniques such as access to access, permission management and data loss can play a vital role.
- Focus on weak areas: Organizations should focus on areas that are likely to target, such as supply chains and artificial intelligence applications for an external side. By understanding the whereabouts of weaknesses, companies can implement more targeted defenses.
- Employment and training of employees: With the spread of artificial intelligence tools, it is important to train users to use responsible artificial intelligence and risk awareness. The enlightened workforce helps reduce exposure to cross data and abuse them.
We look forward
Artificial intelligence will continue to grow, however, security risks will develop. Governments and organizations around the world recognize these challenges and begin to build policies and regulations to direct the safety of artificial intelligence. As the CICO report also highlights, the balance between the integrity of artificial intelligence and progress determines the next era for the development and spread of artificial intelligence. Organizations that give priority to security in addition to innovation will be more equipped to deal with challenges and seize emerging opportunities.
Don’t miss more hot News like this! Click here to discover the latest in AI news!
2025-05-16 18:34:00