Anthropic: AI agent discovers $4.6 million vulnerability in real-world contracts


TL;DR

Anthropic's AI models tested on real-world contracts discovered $4.6 million in vulnerabilities and new zero-day flaws, highlighting AI's role in security auditing.

Tags

Anthropic, AI security, vulnerability detection, smart contracts, Claude Opus

According to Foresight News, Anthropic released a report stating that its researchers tested the Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 models on SCONE-bench, a benchmark Anthropic built from 405 real-world contracts attacked between 2020 and 2025. On contracts attacked after the models' knowledge cutoff date (March 2025), the three models collectively discovered exploitable vulnerabilities worth approximately $4.6 million. Furthermore, in simulation tests on 2,849 recently deployed contracts with no known vulnerabilities, Sonnet 4.5 and GPT-5 each discovered two new zero-day vulnerabilities, with a potential total loss of $3,694; the GPT-5 API usage cost $3,476.
