AI Cyber Risk and the Rise of the Algorithmic Attacker
December 17, 2025
In an article by Lloyd J. Wilson of Shumaker, a September 2025 disclosure from Anthropic’s Threat Intelligence team is presented as a turning point in AI cyber risk. Anthropic reported detecting and disrupting a campaign it assessed, with high confidence, to be Chinese state-sponsored cyber-espionage, in which an AI system performed the majority of the intrusion activity. According to the report, roughly 30 organizations worldwide were targeted, with only limited success; the significance, however, lies in the operational model: AI systems executing attacks at a velocity and scale previously constrained by human capacity.
Wilson explains that Anthropic assessed the AI as performing 80 to 90 percent of tactical tasks, including reconnaissance, exploit development, and lateral movement, with humans intervening only at strategic approval points. The attackers allegedly bypassed safeguards by role-playing as legitimate security professionals and used a custom orchestration framework built on the Model Context Protocol to connect the AI to external tools. By breaking complex attacks into discrete tasks, the AI executed chained operations autonomously, a capability Wilson frames as legally disruptive rather than merely technical.
For risk management professionals, Wilson argues, this reframes attribution, causation, and duty of care. Traditional frameworks assume human intent, but agentic AI blurs lines of responsibility among deployers, vendors, and users. He notes that regulators and plaintiffs are likely to scrutinize whether AI risks were foreseeable and whether controls, warnings, and governance aligned with recognized standards were in place.
Wilson situates these issues within an evolving regulatory landscape, including the EU AI Act, US FTC enforcement, GDPR breach obligations, and voluntary frameworks such as the NIST AI Risk Management Framework. He concludes that organizations must treat AI cyber risk as an enterprise governance issue, revisiting contracts, insurance, and oversight models. The core takeaway for compliance teams is clear: when the “actor” is an algorithm, expectations around controls, documentation, and accountability rise accordingly.
Stay on top of the latest news, solutions and best practices by reading Daily Updates from Today's General Counsel.