The conventional wisdom in DeFi security is that smart contract audits, bug bounties, and multisig wallets provide adequate protection for on-chain assets. That assumption died on 7 April 2026, when Anthropic revealed that its unreleased AI model, Claude Mythos Preview, had autonomously discovered thousands of zero-day vulnerabilities in every major operating system, web browser, and cryptographic library underpinning the internet — including the infrastructure DeFi protocols depend on to secure over $200 billion in total value locked. The implications for brokers, exchanges, institutional allocators, and protocol operators are immediate and far-reaching: the security model the industry has relied on for a decade is no longer fit for purpose.
Key Facts
- Mythos Preview achieved a 72.4% exploit success rate on discovered vulnerabilities, compared to near-zero for previous AI models — Anthropic, Project Glasswing, April 2026
- The model found a 27-year-old vulnerability in OpenBSD used in critical financial infrastructure — Anthropic, April 2026
- Crypto losses hit $3.4 billion in 2025 and $168.6 million in Q1 2026 alone — Cointelegraph, April 2026
- DeFi lending TVL has surged past $55 billion, with Aave alone nearing $50 billion — The Block, 2026
- Anthropic committed $100 million in credits and $4 million in donations to secure open-source software through Project Glasswing — Anthropic, April 2026
- Most 2025 crypto losses came from operational failures — stolen keys and social engineering — not on-chain code exploits — CoinDesk, January 2026
- Project Glasswing partners include AWS, Apple, Google, Microsoft, JPMorganChase, and NVIDIA among 12 launch organisations — Anthropic, April 2026
What Mythos Actually Found — And Why DeFi Should Pay Attention
Anthropic’s Mythos Preview is not a theoretical exercise. On the CyberGym vulnerability reproduction benchmark, it scored 83.1% accuracy compared to Claude Opus 4.6’s 66.6%. More critically, when tasked with developing working exploits from discovered vulnerabilities, Mythos achieved a 72.4% success rate — a leap from the near-zero rate managed by its predecessor. As Anthropic stated in its Project Glasswing announcement: “AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.”
The technical demonstrations are sobering. Mythos autonomously chained four vulnerabilities in a web browser, deploying a complex JIT heap spray technique to escape both renderer and OS sandboxes. It exploited subtle race conditions in the Linux kernel, bypassed KASLR protections, and achieved privilege escalation without human guidance. It constructed a 20-gadget ROP chain split across multiple packets to achieve remote root access on FreeBSD via an NFS vulnerability.
For DeFi, the critical finding is what Mythos discovered in the cryptographic layer. The model identified weaknesses in TLS, AES-GCM, and SSH — the exact protocols that MPC and multisig wallet implementations rely on for key management, transaction signing, and node communication. It also located a 16-year-old vulnerability in FFmpeg that automated scanning tools had missed across five million test runs, demonstrating that traditional security tooling has systematic blind spots that AI can now exploit.
Having tracked DeFi infrastructure security since the early days of Compound and MakerDAO, I can say the gap between what the industry assumes is secure and what an adversarial AI can now penetrate has never been wider.
The Scale of What Is at Stake — Market Data and Attack Surface
The financial exposure is staggering. DeFi lending alone has surpassed $55 billion in TVL, with Aave’s parabolic growth toward $50 billion signalling unprecedented institutional capital concentration in smart contract-based protocols, according to The Block. The Ethereum Foundation recently completed staking 70,000 ETH — roughly $143 million — shifting from selling ETH to earning yield on-chain. These are not retail experiments. They are institutional-grade capital deployments that assume the underlying security stack is sound.
Yet the evidence suggests otherwise. Cryptocurrency theft hit $3.4 billion in 2025, with the Bybit exchange hack alone accounting for $1.4 billion — 44% of annual losses, per Cointelegraph. In Q1 2026, hackers stole $168.6 million from 34 DeFi protocols. Critically, most major 2025 losses stemmed from operational failures — compromised private keys and social engineering — rather than on-chain code exploits, as CoinDesk reported.
Quick Take: The industry has been optimising for the wrong threat model. While auditors scrutinise Solidity logic, the real attack surface — key management infrastructure, node communication layers, and the cryptographic libraries everything depends on — has been largely taken on faith. AI-powered vulnerability discovery changes that calculus overnight.
| Security Layer | Pre-Mythos Assumption | Post-Mythos Reality |
|---|---|---|
| Smart contract code | Audited = secure | Audits cover known patterns; AI finds novel vectors |
| Cryptographic libraries | Battle-tested over decades | 27-year-old zero-days discovered in OpenBSD |
| Key management (MPC/multisig) | Distributed trust eliminates single points of failure | Underlying transport (TLS, SSH) now has known weaknesses |
| Node infrastructure | OS-level security is someone else’s problem | Linux kernel privilege escalation chained autonomously |
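The table’s last three rows all point at the same operational blind spot: most protocol teams cannot say, on demand, which OS kernel, TLS library, and runtime their nodes actually depend on. A minimal sketch of a node-level inventory is below, using only the Python standard library; the `crypto_stack_inventory` function name is illustrative, and a production inventory would also query the package manager and pinned container images rather than only what the runtime exposes.

```python
import json
import platform
import ssl
import sys

def crypto_stack_inventory() -> dict:
    """Collect versions of the OS and cryptographic layers this node depends on.

    Covers only what the Python stdlib exposes; a real inventory would also
    enumerate system packages (e.g. the installed OpenSSH build) and any
    vendored cryptographic libraries.
    """
    return {
        "os_kernel": f"{platform.system()} {platform.release()}",
        "tls_library": ssl.OPENSSL_VERSION,      # TLS library linked into this runtime
        "tls_1_3_available": ssl.HAS_TLSv1_3,    # whether TLS 1.3 is compiled in
        "python_runtime": sys.version.split()[0],
    }

if __name__ == "__main__":
    print(json.dumps(crypto_stack_inventory(), indent=2))
```

Emitting the inventory as JSON makes it trivial to diff across a node fleet, which is the point: a vulnerability in the transport layer only becomes actionable once you know which nodes are running the affected build.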
The TradFi Parallel No One Is Drawing
Here is the cross-industry insight that competing coverage has missed entirely: DeFi is repeating the exact mistake that traditional finance made with algorithmic trading in the 2000s — and the correction will follow the same painful arc.
When electronic trading went mainstream, banks invested heavily in execution speed and alpha generation while treating infrastructure security as a cost centre. It took the 2010 Flash Crash, Knight Capital’s $440 million loss in 45 minutes from a software deployment error, and a string of exchange outages to force the industry into accepting that operational resilience was not optional — it was existential. Regulators responded with frameworks like the EU’s Digital Operational Resilience Act (DORA), which mandates ICT risk management, incident reporting, and third-party dependency testing for financial entities.
DeFi is now at its Knight Capital moment. The industry has poured billions into yield optimisation, governance token economics, and cross-chain bridging, while the cryptographic substrate has been assumed to be inviolable. Mythos demonstrated that it is not. The difference is that in TradFi, Knight Capital’s failure affected one firm’s balance sheet. In DeFi, a compromised cryptographic library could cascade across every protocol simultaneously — a correlated risk event with no circuit breaker.
The lesson from TradFi’s painful education is that security standardisation follows catastrophic loss, not precaution. The question for DeFi operators is whether they will learn from the parallel or wait for their own Knight Capital moment — one that, given the composability of DeFi, could be orders of magnitude worse.
Regulatory Pressure Meets the AI Security Gap
The timing could not be more consequential. In the United States, the CLARITY Act is advancing through Congress, attempting to define which digital assets fall under SEC versus CFTC jurisdiction and imposing new operational requirements on DeFi platforms. In the UK, the DeFi Education Fund has urged the Financial Conduct Authority to adopt a narrow definition of “control,” arguing that regulatory obligations should hinge on whether an entity has unilateral authority over user funds — not merely whether it developed a protocol.
Meanwhile, Hyperliquid launched a $29 million policy centre in Washington to shape U.S. DeFi regulation, and over 10 top crypto executives joined a U.S. Senate roundtable on DeFi rules and market reform. The regulatory machinery is in motion.
But here is the tension: none of the current regulatory proposals account for the AI-accelerated threat landscape that Mythos has exposed. The CLARITY Act focuses on asset classification, disclosure, and market structure. Europe’s MiCA regulation addresses consumer protection and stablecoin reserves. Neither framework mandates the kind of continuous, AI-informed security testing that Mythos’s capabilities now demand. Regulators are building a framework for yesterday’s threats while the attack surface has fundamentally shifted.
As Anthropic noted in its Glasswing announcement, the inclusion of JPMorganChase among its 12 launch partners signals that institutional finance already recognises this gap. The question is whether DeFi-native organisations will reach the same conclusion before a catastrophic exploit forces their hand.
What Happens Next — Three Predictions
The emergence of AI-powered vulnerability discovery at this scale will reshape DeFi security in three concrete ways over the next 12 to 18 months.
First, AI-driven continuous auditing will become table stakes for institutional DeFi participation. The current model — a point-in-time audit before deployment, followed by a bug bounty — is built for a world where vulnerabilities are discovered slowly. When AI can find and weaponise zero-days at scale, protocols will need continuous, AI-informed security monitoring as a baseline requirement. Expect insurance protocols and institutional custodians to mandate this before deploying capital, much as TradFi mandates penetration testing for regulated entities.
Second, DeFi’s security spend will undergo a structural rebalancing. The industry currently spends disproportionately on Solidity-level audits while assuming the underlying stack — operating systems, cryptographic libraries, transport protocols — is secure. Mythos has invalidated that assumption. Security budgets will need to expand beyond smart contract logic to encompass the full infrastructure stack, including node security, transaction validation layers, and cryptographic dependency management. Protocols that fail to adapt will find themselves uninsurable and uninvestable.
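One concrete form cryptographic dependency management can take is hash pinning: recording a SHA-256 digest for each security-critical shared library and alerting when the on-disk artifact drifts from the pin. The sketch below is a minimal illustration of that idea, not any particular tool’s implementation; the manifest format and function names are assumptions for this example.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large shared libraries aren't read into memory at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest: dict) -> list:
    """Return paths whose on-disk hash no longer matches the pinned digest.

    `manifest` maps a file path (str) to its expected SHA-256 hex digest.
    A missing file counts as drift: silent removal is as suspicious as tampering.
    """
    drifted = []
    for path_str, pinned in manifest.items():
        path = Path(path_str)
        if not path.exists() or sha256_of(path) != pinned:
            drifted.append(path_str)
    return drifted
```

The design choice worth noting is that a missing file is treated the same as a modified one: from an incident-response standpoint, a cryptographic library that silently disappeared from a node deserves the same alert as one that was replaced.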
Third, the regulatory response will accelerate. The Glasswing disclosure gives regulators empirical evidence that self-regulation is insufficient. Expect the next wave of DeFi-focused legislation — whether through amendments to the CLARITY Act, MiCA’s forthcoming technical standards, or standalone cybersecurity mandates — to include requirements for AI-informed security testing, incident response protocols, and cryptographic dependency disclosure. The convergence of AI and blockchain is no longer just an investment thesis; it is becoming a regulatory imperative.
Quick Take: The protocols that survive will be those that treat AI-discovered vulnerabilities as an operational reality — not a theoretical risk. Project Glasswing’s $100 million commitment suggests Anthropic believes this is not a distant threat but an immediate one. DeFi operators should take that signal seriously.
Frequently Asked Questions
What is Anthropic’s Mythos Preview and what did it find?
Mythos Preview is an unreleased AI model from Anthropic that autonomously discovered thousands of high-severity zero-day vulnerabilities across every major operating system, web browser, and cryptographic library. It achieved a 72.4% exploit success rate, far surpassing previous AI models, and found flaws in protocols like TLS and AES-GCM that DeFi infrastructure depends on.
How does AI vulnerability discovery affect DeFi security?
DeFi protocols rely on the same cryptographic libraries and transport protocols that Mythos found vulnerabilities in. This means the security assumptions underpinning wallet infrastructure, node communication, and transaction signing may be compromised — a systemic risk to the over $200 billion locked in DeFi protocols.
What is Project Glasswing?
Project Glasswing is Anthropic’s initiative to responsibly deploy Mythos Preview for defensive security. It includes 12 launch partners — AWS, Apple, Google, Microsoft, JPMorganChase, and others — with $100 million in usage credits and $4 million in donations to open-source security organisations to help patch discovered vulnerabilities before they can be exploited.
Are smart contract audits still effective in the age of AI?
Smart contract audits remain valuable for Solidity-level logic but are insufficient on their own. AI can discover novel attack vectors that pattern-based auditing tools miss — Mythos found a 16-year-old FFmpeg vulnerability that automated tools missed across five million test runs. A hybrid approach combining AI-powered continuous testing with human expert review is fast becoming the expected baseline.
What should DeFi protocol operators do now?
Operators should expand their security scope beyond smart contract code to include the full infrastructure stack — cryptographic dependencies, node security, and transport protocols. Implementing continuous AI-informed security monitoring, diversifying cryptographic dependencies, and establishing incident response protocols for zero-day disclosure are immediate priorities.
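The monitoring step above reduces, in its simplest form, to comparing an installed-version inventory against an advisory feed. The sketch below shows that comparison; the advisory record format, the `ADV-*` identifiers, and the package names are hypothetical, and the naive dotted-number comparison stands in for a proper version parser.

```python
def affected(installed: dict, advisories: list) -> list:
    """Return findings for installed packages whose version is below an advisory's fix.

    `installed` maps package name -> version string (e.g. "3.0.11").
    Each advisory is a dict with "id", "package", and "fixed_in" keys
    (an assumed format for this sketch, not a real feed schema).
    """
    def parse(version: str) -> tuple:
        # Naive numeric comparison; real tooling should use a proper version parser.
        return tuple(int(part) for part in version.split("."))

    findings = []
    for adv in advisories:
        pkg = adv["package"]
        if pkg in installed and parse(installed[pkg]) < parse(adv["fixed_in"]):
            findings.append(
                f'{pkg} {installed[pkg]} is below fix {adv["fixed_in"]} ({adv["id"]})'
            )
    return findings

# Illustrative usage with made-up versions and advisory IDs:
installed = {"openssl": "3.0.11", "openssh": "9.6"}
advisories = [
    {"id": "ADV-0001", "package": "openssl", "fixed_in": "3.0.13"},
    {"id": "ADV-0002", "package": "openssh", "fixed_in": "9.5"},
]
print(affected(installed, advisories))  # flags openssl only; openssh is already past its fix
```

Run on a schedule against every node’s inventory, a check like this turns a zero-day disclosure from a scramble into a lookup: which nodes are below the fixed version, and in what order should they be patched.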
Will regulators mandate AI security testing for DeFi?
Current legislation like the CLARITY Act and MiCA does not explicitly require AI-informed security testing. However, the Glasswing disclosure provides regulators with evidence to justify such requirements. Industry observers expect the next wave of DeFi regulation to include mandatory cybersecurity standards, similar to how TradFi adopted DORA requirements for operational resilience.