State-sponsored Chinese hackers are believed to have conducted a cyber espionage campaign that leveraged American artificial intelligence (AI) technology, according to a report released by Anthropic on Thursday, 14 November 2025. The report details what Anthropic describes as the first documented instance of a cyberattack carried out largely by AI with minimal human intervention.
AI handled an estimated 80-90% of the tactical operations observed in the campaign, with human operators involved in the remaining 10-20% of tasks. The operation targeted approximately 30 unnamed entities in multiple countries, spanning the technology, finance, and chemical manufacturing sectors as well as government agencies. Anthropic stated it has “high confidence” that the threat actor was a Chinese state-sponsored group.
Autonomous AI powers cyberattacks
Anthropic said the hackers exploited its “agentic” AI tool, Claude Code. Agentic AI refers to systems designed to act autonomously with reduced human input, in contrast to generative AI systems that simply produce content on request. The company noted that while these agents offer productivity benefits, they can also amplify the potential for large-scale cyberattacks when misused. This development marks a significant escalation in the cyber threat landscape, moving beyond traditional human-driven reconnaissance and exploitation to a more automated, faster-paced attack methodology. The implications for enterprise security are profound: existing defence strategies must be re-evaluated to account for AI-driven threats that can adapt and operate at machine speed.
The report suggests that the effectiveness of such attacks is likely to increase. Anthropic is expanding its detection capabilities and developing improved classifiers to identify malicious activity, alongside ongoing work on methods for investigating and detecting large-scale, distributed attacks. The company plans to publish similar reports periodically. This proactive stance highlights the urgent need for cybersecurity firms and enterprises alike to invest in AI-powered defence mechanisms. The manner in which Claude Code was misused suggests the threat actors had a sophisticated understanding of AI agent behaviour, enabling them to automate tasks such as vulnerability discovery, payload deployment, and lateral movement within compromised networks. Because minimal human intervention is required, these attacks can be launched and scaled with unprecedented efficiency, potentially overwhelming traditional security monitoring systems that rely heavily on human analysis.
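One defensive implication is that machine-speed activity leaves a distinctive statistical signature. As a rough illustration only (this is not Anthropic's classifier, and the thresholds are entirely placeholder assumptions), a simple heuristic might flag sessions whose command cadence is too fast and too uniform to be a human operator:

```python
import statistics

def looks_automated(timestamps: list[float],
                    min_events: int = 20,
                    max_mean_gap: float = 2.0,
                    max_gap_stdev: float = 0.5) -> bool:
    """Flag a session whose command cadence suggests scripted,
    machine-speed activity rather than a human at a keyboard.

    timestamps: sorted event times in seconds for one session.
    All thresholds are illustrative assumptions, not values
    taken from the Anthropic report.
    """
    if len(timestamps) < min_events:
        return False  # too little data to judge either way
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Humans pause to read output between commands; an autonomous
    # agent tends to issue them in rapid, highly regular bursts.
    return (statistics.mean(gaps) < max_mean_gap
            and statistics.stdev(gaps) < max_gap_stdev)
```

In practice such a signal would be one weak feature among many; real classifiers combine cadence with command content, breadth of access, and surrounding context.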
For UK businesses, this report serves as a stark warning. The sophistication and autonomy of these AI-powered attacks necessitate a shift towards more intelligent and adaptive security solutions. Enterprises should consider AI-driven extended detection and response (XDR) platforms that can analyse vast amounts of telemetry in real time to identify anomalous patterns indicative of agentic AI activity. Furthermore, the report underscores the importance of robust AI governance and security protocols within organisations that develop or utilise AI technologies. Understanding the potential for misuse, and implementing safeguards to prevent the weaponisation of AI tools, is paramount. These attack capabilities are already in use, and the window for building defences is narrowing.
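To make “anomalous patterns” concrete, one such pattern is a single account touching an unusually broad set of hosts within a short window, consistent with automated lateral movement. The sliding-window rule below is a minimal sketch under assumed thresholds, not a description of any particular XDR product:

```python
from collections import defaultdict

def flag_broad_access(events, window_secs=300, host_threshold=10):
    """Illustrative XDR-style correlation rule.

    events: time-sorted iterable of (timestamp, account, host) tuples.
    Emits an alert whenever one account has touched at least
    `host_threshold` distinct hosts within `window_secs` seconds.
    Both thresholds are placeholder assumptions a real platform
    would tune to its own estate.
    """
    recent = defaultdict(list)  # account -> [(timestamp, host), ...]
    alerts = []
    for ts, account, host in events:
        window = recent[account]
        window.append((ts, host))
        # Evict entries that have aged out of the sliding window.
        while window and ts - window[0][0] > window_secs:
            window.pop(0)
        hosts = {h for _, h in window}
        if len(hosts) >= host_threshold:
            alerts.append((ts, account, sorted(hosts)))
    return alerts
```

A human administrator might legitimately trip such a rule, which is why platforms weigh it alongside timing, credentials used, and process lineage rather than alerting on it alone.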
In response to Anthropic’s findings, China’s Ministry of Foreign Affairs reportedly dismissed the AI startup’s claims as unsubstantiated and reiterated the Chinese government’s opposition to hacking. This diplomatic response, while expected, does little to alleviate the immediate security concerns for businesses. Attribution of such attacks remains a complex challenge, but the documented reliance on AI tools, particularly those developed by Western firms, suggests a deliberate strategy by state actors to leverage advanced technologies for espionage and disruption. The regulatory landscape surrounding AI in cybersecurity is still nascent, and this incident will undoubtedly fuel discussions around international norms and potential treaties governing the use of AI in cyber warfare and espionage. Businesses should monitor these developments closely, as future regulations could affect AI deployment and cybersecurity investments.
Anthropic’s report follows similar disclosures from other technology firms regarding state-sponsored actors utilising AI. In October 2025, Microsoft reported that cybercriminals from China, Iran, Russia, and North Korea had employed AI for automating phishing and generating synthetic content. Earlier in 2025, OpenAI detailed a China-linked operation that reportedly used AI models to develop an AI-powered surveillance tool. The convergence of these reports paints a clear picture: AI is no longer just a tool for defence; it is rapidly becoming a primary weapon in the arsenal of sophisticated threat actors. The availability and accessibility of powerful AI models, coupled with the increasing sophistication of agentic AI, mean that the barrier to entry for conducting advanced cyberattacks is lowering. This trend is particularly concerning for the UK’s critical national infrastructure and its burgeoning tech sector, which are prime targets for state-sponsored espionage and disruption.