Attack Window Compression: How AI is Reducing Time-to-Impact in Modern Cyberattacks
NOTE: The views expressed here are my own and are in no way intended to reflect those of my employer or any other organization.
TL;DR: As adversaries begin to automate more of their operations with AI, defenders will need to leverage the same technology in new and creative ways. We must be more proactive in finding and fixing vulnerabilities while also enabling faster response times. And we must be mindful of our own use of AI, ensuring that we are not creating new problems for ourselves.
In September 2025, Anthropic disrupted what they documented as the first large-scale cyberattack executed with minimal human intervention. Chinese state-sponsored actors used Claude Code to autonomously conduct reconnaissance, exploit vulnerabilities, harvest credentials, and exfiltrate data across approximately 30 targets, with AI handling 80-90% of tactical operations. The human operator’s role was limited to a handful of critical decision points: “Yes, continue.” “Don’t continue.” “Thank you for this information.” It’s the cybersecurity equivalent of being a manager who only shows up for the important meetings.
I believe this incident is significant, but not for the reasons you may have read in the news. It didn’t introduce new attack techniques, and it didn’t showcase any advanced tradecraft by the threat actors, but it demonstrates something potentially more consequential for defenders: AI is compressing the time available to detect, respond to, and mitigate cyberattacks.
Autonomous operations at increased velocity
The Anthropic case study reveals timelines that should rightfully concern every CISO. The attackers used Claude Code to perform nearly autonomous reconnaissance across multiple targets simultaneously, maintaining separate operational contexts for each campaign. Within 1-4 hours, the AI independently generated attack payloads tailored to discovered vulnerabilities, validated exploitability, and documented findings; these tasks required only 2-10 minutes of human review and approval.
The operational tempo was significant: The AI systematically queried databases, extracted data, and categorized findings by intelligence value over 2-6 hours, with human oversight consuming merely 5-20 minutes. It independently identified high-privilege accounts, created persistent backdoor user accounts, and generated comprehensive attack documentation enabling seamless handoff between operators.
It’s important to understand what this represents, and what it doesn’t
This isn’t evidence that automated attacks can outsmart traditional defenses or that LLMs have surpassed human intelligence in offensive operations. Humans remain significantly more sophisticated than LLMs when it comes to creative problem-solving, understanding complex business logic, and adapting to novel defensive measures. What this case demonstrates is that threat actors can delegate substantial amounts of grunt work to LLMs and potentially achieve greater operational efficiency. The tedious, repetitive tasks that consume operator time (scanning networks, testing credentials, parsing logs) can now be offloaded to AI, freeing human adversaries to focus on higher-level strategic decisions and complex problem-solving.
Offensive AI may sound like an advanced adversary capability, but the reality is that it’s not introducing any fundamentally new attack vectors. A lot of this automation has already been possible for years with traditional scripting and tools. What AI has really changed is the velocity of attacks, and that velocity has an impact on every aspect of defense.
There’s a critical trade-off at play
Delegating offensive tasks to AI is likely a step backwards in sophistication for threat actors with respect to stealth and the ability to remain undetected. AI-driven attacks generate significant noise, systematic enumeration patterns, and repetitive behaviors that are far easier to detect than the careful, deliberate actions of a skilled human operator who understands how to blend into normal traffic. A sophisticated adversary moving slowly and deliberately might evade detection for months; an AI agent completing the same objectives in hours will likely leave telltale signatures everywhere.
But here’s the concerning reality: higher velocity attacks at the expense of stealth and sophistication may actually yield better results for adversaries. If you can compromise 10 targets in the time it would have taken to carefully infiltrate 1, the math works in your favor even if your failure rate increases. The decrease in breakout time means defenders have less time to respond, and the sheer volume of simultaneous operations can overwhelm security teams. It’s the difference between a master jewel thief carefully planning a single heist and a crew hitting every jewelry store in the city on the same night. Some will get caught, but enough will succeed to make it worthwhile.
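For defenders, that noise is itself a signal. Purely as an illustrative sketch (the event fields, window size, and threshold below are assumptions, not drawn from any particular SIEM or product), machine-speed enumeration can be surfaced by flagging sources that touch far more distinct hosts in a short window than any human operator plausibly would:

```python
# Minimal sketch: flag sources whose fan-out across distinct hosts within a
# short window exceeds human-scale behavior. Event fields, the window, and
# the threshold are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
MAX_DISTINCT_HOSTS = 50  # assumed threshold; tune per environment


def flag_enumeration(events):
    """events: iterable of dicts with 'ts' (datetime), 'src', and 'dst' keys."""
    by_src = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        by_src[e["src"]].append(e)

    alerts = []
    for src, evts in by_src.items():
        start = 0
        for i in range(len(evts)):
            # slide the window start forward until the span fits within WINDOW
            while evts[i]["ts"] - evts[start]["ts"] > WINDOW:
                start += 1
            distinct = {x["dst"] for x in evts[start:i + 1]}
            if len(distinct) > MAX_DISTINCT_HOSTS:
                alerts.append((src, evts[i]["ts"], len(distinct)))
                break  # one alert per source is enough for this sketch
    return alerts


if __name__ == "__main__":
    t0 = datetime.now()
    demo = [{"ts": t0 + timedelta(seconds=i), "src": "10.0.0.5",
             "dst": f"10.0.1.{i}"} for i in range(120)]
    print(flag_enumeration(demo))
```

A human operator pacing their activity would stay under a threshold like this for weeks; an agent racing through a subnet in minutes will not.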
The vanishing window: quantifying time compression
Incident response trends show that breakout time and time to impact were already shrinking before adversaries widely adopted AI. This trend is likely to continue as adversary adoption of AI increases.
- CrowdStrike’s 2024 Global Threat Report documents that breakout time, the interval from initial compromise to lateral movement, has decreased to an average of 62 minutes, down from 84 minutes in 2022. The fastest recorded breakout was 2 minutes and 7 seconds. That’s barely enough time to grab coffee and check your email, much less mount an effective response.
- Mandiant’s M-Trends reports show median dwell time dropping from 24 days in 2020 to 11 days in 2024, a 54% reduction in five years. For ransomware specifically, the timeline has compressed more dramatically: from approximately 30 days in 2020 to a median of just 5-6 days in 2024, an 80% reduction.
Now let’s consider AI acceleration. Unit 42 simulated an AI-powered ransomware attack achieving full compromise from initial access to exfiltration/impact in 25 minutes, a 100x increase in speed over current human-operated averages. Cloud-based attacks have compressed from 40+ minutes in early 2024 to 10 minutes or less by 2025, a 75% reduction in barely twelve months. In the Anthropic case study, the attackers achieved reconnaissance, vulnerability identification, exploitation, credential harvesting, and data exfiltration in hours to days, operations that would traditionally span weeks.
The pattern is consistent: attack windows are compressing by 50-75% with each technological shift, and AI represents a significant shift.
Impact across the attack lifecycle
For security professionals familiar with the MITRE ATT&CK framework, it’s useful to map AI’s impact across enterprise tactics and techniques.
- Reconnaissance potentially compresses from hours to minutes as AI analyzes large datasets to identify targets.
- Resource Development shrinks from hours to minutes, with Israeli researchers demonstrating AI generating exploits for 14 vulnerabilities in as little as 15 minutes at approximately $1 per exploit.
- Initial Access benefits from AI-crafted phishing achieving higher success rates, enabling faster compromise.
Even more concerning is what happens post-compromise.
- Execution and Persistence that once required hours of trial and error can now occur in minutes through agentic decision-making (a simple hunting sketch for this kind of persistence follows this list).
- Credential Access and Lateral Movement become continuous, automated processes rather than deliberate human-directed actions. The Anthropic case showed AI independently determining which credentials provided access to which services, testing systematically across internal APIs, database systems, container registries, and logging infrastructure.
- Impact, the final phase, transforms from days of deliberate data staging to hours of automated identification, categorization, and exfiltration. In ransomware operations currently averaging 5-6 days from initial access to deployment, industry projections suggest AI automation will compress this to 2-3 days by 2026.
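As an example of what hunting for that compressed Persistence phase might look like, here is a minimal sketch (the event fields, group names, and time window are illustrative assumptions) that flags accounts created and then elevated to a privileged group within a short window, the kind of backdoor account creation described in the Anthropic case study:

```python
# Minimal sketch: flag accounts that are created and then added to a
# privileged group within a short window, a simple hunt for automated
# backdoor-account persistence. Event fields, group names, and the window
# are illustrative assumptions.
from datetime import datetime, timedelta

ELEVATION_WINDOW = timedelta(minutes=30)        # assumed threshold
PRIVILEGED_GROUPS = {"Domain Admins", "sudo"}   # assumed group names


def find_suspicious_accounts(events):
    """events: dicts with 'ts' (datetime), 'action', 'account', 'group' keys."""
    created = {}  # account -> creation timestamp
    hits = []
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["action"] == "account_created":
            created[e["account"]] = e["ts"]
        elif (e["action"] == "group_add"
              and e.get("group") in PRIVILEGED_GROUPS
              and e["account"] in created
              and e["ts"] - created[e["account"]] <= ELEVATION_WINDOW):
            hits.append((e["account"], e["group"], e["ts"]))
    return hits


if __name__ == "__main__":
    t0 = datetime.now()
    demo = [
        {"ts": t0, "action": "account_created", "account": "svc-backup2", "group": None},
        {"ts": t0 + timedelta(minutes=3), "action": "group_add",
         "account": "svc-backup2", "group": "Domain Admins"},
    ]
    print(find_suspicious_accounts(demo))
```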
The defender’s traditional advantage, time to detect and respond, is diminishing.
The defender’s dilemma: operating at machine speed
Here’s the reality for defenders: if attacks unfold in minutes to hours, the current human-paced response time becomes inadequate.
This speed differential creates a significant challenge. Traditional security operations (triaging alerts, investigating anomalies, coordinating incident response) operate on human timescales measured in hours and days. When human attackers can already achieve lateral movement in 62 minutes on average, or potentially complete full compromise in 25 minutes with AI augmentation, defenders are fighting an uphill battle.
The security industry’s response to AI-powered attacks has been predictable: a flood of AI-powered security products promising to solve all of our problems. The marketplace is saturated with solutions claiming to automate threat detection, accelerate response times, and eliminate the need for large security teams. Most of these claims are, to put it charitably, aspirational at best. It’s reminiscent of the “blockchain will solve everything” phase we lived through a few years ago, except now we’re hearing more about neural networks and less about distributed ledgers. The reality is that many organizations are still struggling with basic security hygiene, and adding AI to the stack doesn’t magically fix fundamental process and architecture problems.
Full automation of security response is not the answer either. While AI can certainly assist with correlation and analysis, removing humans from critical decision-making loops introduces its own risks. Automated responses to false positives can take down production systems, block legitimate users, or trigger cascading failures that impact the business far more than the original threat. Anyone who’s seen an overaggressive DLP system block an executive’s email to the board, or a SOAR playbook accidentally isolate a critical database server, understands why human judgment remains essential.
The challenge isn’t whether to use AI in security operations. It’s how to integrate these capabilities in ways that actually solve problems rather than create new ones, and how to maintain appropriate human oversight when the stakes are measured in minutes rather than days.
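One pattern for maintaining that oversight without giving up machine-speed triage is to let automation execute only reversible, low-blast-radius actions and queue anything destructive for explicit analyst approval. The sketch below is a simplified, hypothetical illustration; the action names and risk tiers are assumptions rather than any particular SOAR product’s API:

```python
# Minimal sketch of an approval gate for automated response: reversible,
# low-blast-radius actions run immediately; destructive ones wait for a
# human decision. Action names and risk tiers are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, List

AUTO_APPROVED = {"enrich_ioc", "snapshot_host", "quarantine_file"}
NEEDS_HUMAN = {"isolate_host", "disable_account", "block_subnet"}


@dataclass
class ResponsePlan:
    actions: List[str]
    deferred: List[str] = field(default_factory=list)

    def execute(self, run: Callable[[str], None], ask_human: Callable[[str], bool]):
        for action in self.actions:
            if action in AUTO_APPROVED:
                run(action)
            elif action in NEEDS_HUMAN and ask_human(action):
                run(action)
            else:
                # Rejected or unrecognized actions are parked for human review.
                self.deferred.append(action)


if __name__ == "__main__":
    plan = ResponsePlan(["enrich_ioc", "isolate_host"])
    plan.execute(
        run=lambda a: print(f"executing: {a}"),
        ask_human=lambda a: False,  # stand-in for an analyst saying "no"
    )
    print("deferred for review:", plan.deferred)
```

The point of the design is that the expensive mistakes (isolating the wrong host, disabling the wrong account) always pass through a human, while the cheap, reversible steps happen at machine speed.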
The critical role of proactive pen testing and red teaming
The compression of attack windows makes proactive vulnerability discovery not just best practice but operational necessity. If adversaries can move from initial access to impact in hours, discovering and remediating vulnerabilities after they’re actively exploited is too late.
This is where experienced red team operators and penetration testers become invaluable. While AI can augment certain aspects of security testing (automating repetitive scans, analyzing large codebases for common vulnerability patterns, or generating variations of known exploits), it cannot replace the critical thinking, creativity, and contextual understanding that skilled security researchers bring to vulnerability discovery.
Complex vulnerabilities often require understanding business logic flaws, chaining multiple lower-severity issues into critical exploits, or identifying architectural weaknesses that automated tools miss entirely. An AI might flag a deserialization vulnerability in an internal API, but an experienced red teamer understands how to chain that with an authentication bypass to achieve remote code execution on production systems.
The human element in red teaming is irreplaceable for several reasons:
- Understanding attacker psychology and methodology: Experienced red teamers think like adversaries because they’ve studied real attack campaigns, understand threat actor motivations, and can anticipate non-obvious attack paths.
- Adapting to unique environments: Every organization has unique configurations, custom applications, and specific business processes. Skilled penetration testers excel at understanding these contexts and identifying vulnerabilities specific to the environment.
- Creative problem-solving: The most critical vulnerabilities often emerge from creative thinking: combining multiple seemingly minor issues, exploiting unexpected interactions between systems, or identifying vulnerabilities in the seams between different technologies. This requires human creativity and deep technical understanding.
- Validating and prioritizing findings: AI-powered scanners generate findings, but experienced security professionals understand which vulnerabilities pose actual risk in a given context, which findings are false positives, and how to prioritize remediation efforts based on real-world exploitability and business impact.
The goal is shifting from periodic security assessments to continuous, human-led testing augmented by AI tools. This includes integrating security expertise throughout the development lifecycle, maintaining ongoing purple team programs that evolve with the threat landscape, and ensuring that human pen testers and red teamers are discovering and validating critical vulnerabilities before adversaries can find and exploit them.
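To make that augmentation concrete, one small example of AI-assisted tooling supporting (rather than replacing) human testers is deduplicating raw findings so people validate one representative per vulnerability class and component instead of every individual instance. This is only a sketch; the field names are assumptions:

```python
# Minimal sketch: cluster raw scanner findings by vulnerability class and
# component so human testers validate one representative per cluster instead
# of every individual instance. Field names are illustrative assumptions.
from collections import defaultdict


def cluster_findings(findings):
    clusters = defaultdict(list)
    for f in findings:
        clusters[(f["vuln_class"], f["component"])].append(f)
    # Keep one representative per cluster plus the instance count.
    return {key: (items[0], len(items)) for key, items in clusters.items()}


if __name__ == "__main__":
    findings = [
        {"id": "F-1", "vuln_class": "sql_injection", "component": "billing-api"},
        {"id": "F-2", "vuln_class": "sql_injection", "component": "billing-api"},
        {"id": "F-3", "vuln_class": "xss", "component": "customer-portal"},
    ]
    for (vuln, comp), (rep, count) in cluster_findings(findings).items():
        print(f"{vuln} in {comp}: validate {rep['id']} ({count} instances)")
```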
As AI accelerates adversary capabilities, the organizations that maintain strong offensive security programs, staffed by experienced researchers who can think creatively, understand complex systems, and validate real-world attack paths, will be the ones that stay ahead of threats.
Secure-by-design AI: learning from Anthropic’s transparency
Anthropic’s response to the GTG-1002 campaign deserves recognition, even if it lacked some important details. Within 10 days of detecting suspicious activity, they banned accounts, notified affected entities, coordinated with authorities, and conducted a comprehensive investigation. More importantly, they released both a public blog post and a detailed 13-page technical report. Kudos to Anthropic for their transparency and their efforts to mitigate the impact of the attack; this should be the standard for all AI providers. I would, however, propose one suggestion: that in the future they include explicit details on IOCs for impacted customers to help other organizations defend themselves.
This transparency is essential! The dual-use nature of AI systems means that capabilities enabling AI to assist cybersecurity professionals also enable its misuse by sophisticated adversaries. The attackers bypassed safety guardrails through task fragmentation (breaking attacks into small, seemingly innocent tasks) and role-play deception (convincing Claude it was conducting legitimate defensive testing).
These techniques aren’t exotic or theoretical. They’re straightforward social engineering applied to AI systems, and they worked. The attackers didn’t need to exploit zero-day vulnerabilities in Claude’s architecture or discover novel prompt injection methods. They simply broke their objectives into small enough pieces that each individual task looked legitimate. It’s the AI equivalent of the old “salami slicing” fraud technique, but for hacking instead of embezzlement.
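On the provider side, one possible countermeasure (purely an illustrative sketch and an assumption on my part, not a description of how Anthropic’s safeguards actually work) is to evaluate abuse at the level of an account’s cumulative activity rather than individual prompts, so that task categories which are benign in isolation but compose into an intrusion-like workflow get flagged for human review:

```python
# Minimal sketch: evaluate an account's cumulative task categories rather
# than individual requests, so tasks that look benign in isolation but
# compose into an intrusion-like workflow get flagged for human review.
# The categories, the workflow set, and the threshold are all assumptions;
# classifying requests into categories is out of scope here.
INTRUSION_WORKFLOW = {"network_scanning", "exploit_development",
                      "credential_testing", "data_exfiltration"}
FLAG_THRESHOLD = 3  # assumed: flag once most of the workflow is covered


def review_account(task_categories):
    """task_categories: set of categories seen for one account over a window."""
    overlap = task_categories & INTRUSION_WORKFLOW
    return len(overlap) >= FLAG_THRESHOLD, overlap


if __name__ == "__main__":
    account_activity = {"code_review", "network_scanning",
                        "credential_testing", "data_exfiltration"}
    flagged, reasons = review_account(account_activity)
    print("flag for human review:", flagged, sorted(reasons))
```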
Transparency about these abuse cases allows other AI providers to learn from real-world incidents rather than speculating about theoretical risks. It also lets security teams better understand what adversary use of AI actually looks like in practice, which is often quite different from what we imagine in conference presentations.
The question is whether this level of transparency will become standard practice in the AI industry, or whether this represents an exception. When AI systems are abused for malicious purposes, disclosure shouldn’t be discretionary. The community needs to know what techniques are being used, which safeguards failed, and what indicators might help detect similar activity. Without this information sharing, every AI provider is essentially learning the same lessons independently, which benefits no one except the adversaries.
Defensive AI must evolve as fast as offensive AI
The Anthropic case illustrates a critical point about the offense-defense balance in the AI era: the same technologies that enable autonomous attacks are essential for defense. Anthropic noted that their Threat Intelligence team used Claude extensively to analyze the enormous amounts of data generated during their investigation. AI systems that can autonomously scan codebases, identify anomalies, and correlate threat intelligence aren’t optional; they’re necessary for operating at the speeds adversaries are now achieving.
The challenge isn’t just deploying AI tools; it’s addressing the systemic issues that create vulnerabilities in the first place. Defensive AI should focus not just on reactive detection but on proactive security: continuous automated testing, intelligent patch prioritization, and predictive analysis of which assets are most likely to be targeted based on attacker behavior patterns.
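As one small example of what proactive, AI-assisted prioritization could look like, here is a minimal sketch of risk-based patch ranking; the fields, weights, and sample data are illustrative assumptions rather than a real scoring model:

```python
# Minimal sketch of risk-based patch prioritization: rank open vulnerabilities
# by exploitation likelihood and asset exposure rather than CVSS alone.
# Field names, weights, and the sample data are illustrative assumptions.

def patch_priority(vuln):
    score = vuln["cvss"]                 # base severity, 0-10
    if vuln.get("exploited_in_wild"):    # known active exploitation
        score += 4
    if vuln.get("internet_facing"):      # asset reachable from outside
        score += 2
    if vuln.get("asset_tier") == "crown_jewel":
        score += 2
    return score


def prioritize(vulns):
    return sorted(vulns, key=patch_priority, reverse=True)


if __name__ == "__main__":
    backlog = [
        {"id": "vuln-001", "cvss": 7.5, "exploited_in_wild": True,
         "internet_facing": True, "asset_tier": "crown_jewel"},
        {"id": "vuln-002", "cvss": 9.1, "exploited_in_wild": False,
         "internet_facing": False, "asset_tier": "standard"},
    ]
    for v in prioritize(backlog):
        print(v["id"], patch_priority(v))
```

The design choice worth noting is that a lower-CVSS vulnerability with a public exploit on an exposed, critical asset outranks a higher-CVSS one buried deep inside the network, which is closer to how adversaries actually choose targets.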
The reality is that many organizations are still struggling with basic security fundamentals. Adding AI to a security program that can’t consistently patch known vulnerabilities or enforce basic access controls is like installing a sophisticated alarm system on a building with no doors. The technology might be impressive, but it’s not addressing the actual problem. You don’t need machine learning to tell you that admin/admin is a bad password, or that exposing your database to the internet without authentication is inadvisable.
The path forward: faster response and systemic quality
As security professionals, we’ve long understood that attackers need only one successful path while defenders must protect every possible entry point. AI doesn’t change this fundamental asymmetry, but it could compress the timeframes for response significantly.
According to Mandiant’s M-Trends 2024 data, the median time to detect an intrusion (dwell time) is currently 11 days, while industry research shows ransomware operators achieving median deployment times of approximately 6 days from initial access. In practice, that leaves organizations only a handful of days, at best, to detect and contain an incident before ransomware deployment. As AI adoption by adversaries increases, projections suggest these timelines will compress further, with time-to-impact potentially shrinking to 1-2 days by 2026. At that point, organizations will have hours, not days, to respond effectively.
Meeting this challenge requires three parallel efforts:
- First, defensive AI adoption, not as a future vision but as an immediate operational requirement. This does not mean that we need to go out and buy every AI-powered security product on the market, but we do need to think creatively about how to use AI to help us detect, respond to, and mitigate threats.
- Second, proactive vulnerability discovery and remediation cycles that leverage modern AI capabilities. This means maintaining robust red team and penetration testing programs, backed by skilled security researchers who can identify complex vulnerabilities through both traditional and AI-powered means. Human expertise in security research is more valuable than ever as attack surfaces grow more complex and adversaries become more sophisticated. Organizations need experienced professionals who can think creatively about security, validate real-world attack paths, and prioritize vulnerabilities based on actual business risk.
- Third, systemic improvement in software and infrastructure quality through secure-by-design principles, reducing the attack surface that adversaries can target in the first place.
The advantage in this landscape won’t go to those with the most sophisticated individual tools, but to those who can operate cohesively at speed: detecting, analyzing, and responding to threats in minutes rather than days; finding and fixing vulnerabilities before adversaries exploit them; and building systems that are secure by design rather than hardened by reaction.
The window is closing. The question is whether defenders will adapt quickly enough to meet adversaries operating at machine speed.