AI-powered cyberattacks are skyrocketing: faster, more autonomous, and more accessible than ever, they give attackers the upper hand. To keep pace, businesses must adopt defenses that are just as advanced.
The landscape of AI-related threats is evolving at a breakneck pace. With Anthropic's recent discovery of the first AI-powered cyberespionage campaign, we now have concrete evidence of what many security practitioners feared. Last year, I hypothesized that LLMs gave companies under attack an asymmetric advantage, but that balance of power has shifted. The emergence of agent-based AI and the build-out of offensive infrastructure have enabled malicious actors to operationalize large-scale chains of agent-based tools. For the moment, the advantage is theirs.
The impact is already visible in France. According to the French Ministry of the Interior, 348,000 cyber incidents were recorded in 2024, a 74% increase over the previous five years. Attacks across France are multiplying, and they are becoming more targeted and harder to detect.
This should come as no surprise. Recent initiatives have shown that skilled offensive operators who train agents specifically for threat hunting can outperform individual researchers. The same capabilities are now available to malicious actors, giving them a roadmap for using AI to execute multi-stage attacks autonomously, free of the constraints imposed by human intervention.
Left unchecked, these chains of agent-based attacks will pose serious challenges to security teams. Defenders, however, can leverage the same convergence of technical capabilities and processes to strengthen their defenses.
More vulnerabilities are being exploited
AI agents have significantly reduced the time between discovering a vulnerability and exploiting it.
Recently, Google announced that its Big Sleep project had identified numerous zero-day vulnerabilities in open-source projects. A collaboration between DeepMind and Project Zero, Big Sleep included a set of multi-phased agents designed to identify software vulnerabilities and develop functional exploits.
While Big Sleep allowed security personnel to prevent these exploits from materializing, there is no doubt that malicious actors are using the same techniques to compromise their targets. For French organizations subject to the requirements of the NIS2 directive, the speed at which vulnerabilities now turn into exploits increases the stakes in terms of compliance deadlines and incident reporting obligations.
Attackers are sequencing agents
It is no longer just theory: attackers are decomposing attacks into distinct agent-based workloads and using chains of agents to execute each phase autonomously.
Anthropic's report on the espionage campaign revealed that Chinese threat actors used AI agents to independently carry out 80 to 90% of the attack activity; human intervention was required at fewer than seven critical decision points. By executing thousands of requests per second, the AI agents dramatically reduced the time and human resources an attack requires.
Additionally, in its 2025 Threat Intelligence Report, Anthropic notes that AI lets less-skilled malicious actors learn and execute more advanced tactics, techniques, and procedures. Cybercriminals with minimal technical expertise, for example, used Claude to develop and sell multiple ransomware variants for 345 to 1,029 euros on online forums, relying solely on AI to implement encryption algorithms and evasion techniques.
Thanks to AI, it has never been cheaper for attackers to weaponize exploits. Agent-based AI grants these attacks greater autonomy, and their numbers continue to rise. This trend is expected to drive an increase in attacks targeting companies' most valuable data.
To counter this threat, targeted companies must respond with equally advanced defense strategies.
Fighting agents with agents
As attackers develop chains of agent-based tools, internal red teams and defense teams must also increase their use of agent-based AI. They need AI agents that leverage internal system resources to gain context, then decompose defensive tasks into workloads to expedite the identification and correction of vulnerabilities.
For effective execution, these agents need a deep understanding of the software environment. Infrastructure that supplies deployed agents with the right data and context is essential; knowledge graphs that map relationships across source code are one example.
When they have access to knowledge graphs, agents can combine knowledge about the company with historical data on vulnerabilities and known security anti-patterns to help teams prioritize threats based on real attack patterns rather than theoretical risks.
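As a minimal sketch of this prioritization idea, the snippet below combines a hypothetical dependency graph (standing in for a knowledge graph over source code) with hypothetical historical exploit rates to rank findings by real-world risk and reach. All component names and scores are illustrative assumptions, not data from any real system.

```python
# Sketch: prioritizing vulnerability findings with a code knowledge graph.
# All component names, scores, and weights below are hypothetical.

# Hypothetical knowledge graph: which components depend on which.
dependents = {
    "auth-lib": ["payments-api", "admin-portal"],
    "log-parser": ["internal-batch"],
    "payments-api": [],
    "admin-portal": [],
    "internal-batch": [],
}

# Hypothetical historical data: how often similar flaws were exploited in the wild.
exploit_history = {"auth-lib": 0.9, "log-parser": 0.2}

def blast_radius(component, graph):
    """Count every component transitively reachable from `component`."""
    seen, stack = set(), [component]
    while stack:
        node = stack.pop()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return len(seen)

def priority(component):
    # Weight real attack likelihood by reach across the codebase,
    # rather than ranking on theoretical severity alone.
    return exploit_history.get(component, 0.1) * (1 + blast_radius(component, dependents))

findings = ["auth-lib", "log-parser"]
for f in sorted(findings, key=priority, reverse=True):
    print(f, round(priority(f), 2))
```

Here the library-level flaw with a history of exploitation and many downstream consumers outranks the rarely exploited, isolated one, which is the kind of attack-pattern-driven ordering the text describes.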
In addition to prevention, agent-based defenses also build resilience. Companies can decompose their runbooks into detection and correction activities; agents then handle everything from identification to investigation, remediation, and post-mortem analysis, reducing downtime and limiting damage.
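One way to picture that runbook decomposition is as a pipeline of discrete agent tasks, each enriching shared incident context before handing off to the next. The stage names and handlers below are hypothetical placeholders; a real deployment would wire each stage to an actual agent or tool.

```python
# Sketch: decomposing an incident-response runbook into chained agent tasks.
# Stage logic is illustrative only; each function stands in for an agent.

def detect(incident):
    incident["detected"] = True          # e.g. an alert-triage agent fires
    return incident

def investigate(incident):
    incident["root_cause"] = "hypothetical: exposed credential"
    return incident

def remediate(incident):
    incident["remediated"] = True        # e.g. rotate the credential
    return incident

def post_mortem(incident):
    incident["report"] = f"Incident {incident['id']}: {incident['root_cause']}"
    return incident

RUNBOOK = [detect, investigate, remediate, post_mortem]

def run(incident):
    # Each stage passes its enriched context downstream, mirroring how
    # chained agents move from identification through post-mortem.
    for stage in RUNBOOK:
        incident = stage(incident)
    return incident

result = run({"id": "INC-001"})
print(result["report"])
```

The design choice worth noting is that the runbook itself is just an ordered list, so stages can be added, reordered, or swapped for human review steps without touching the pipeline logic.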
These use cases illustrate the steps companies can take to strengthen their agent-based defenses. The convergence of technical capabilities and processes offers new tools to combat malicious actors and develop defensive operations. Attackers are already taking advantage of these tools; now it is time for companies to follow suit.