Can AI Agents Boost Ethereum Security? OpenAI and Paradigm Created a Testing Ground

TL;DR

OpenAI and Paradigm launched EVMbench, a benchmark tool to test AI agents' ability to detect, patch, and exploit Ethereum smart contract vulnerabilities. It uses real-world code from audits and aims to improve security as AI becomes more involved in blockchain.

Key Takeaways

  • EVMbench evaluates AI agents across three modes: detect vulnerabilities, patch them without breaking functionality, and exploit them in sandboxed environments.
  • The tool draws on 120 curated vulnerabilities from 40 audits, including scenarios from Tempo's security auditing process, to ground testing in real-world code.
  • In exploit mode, GPT-5.3-Codex scored 72.2%, outperforming GPT-5's 31.9%, but performance was weaker in detect and patch tasks.
  • Researchers caution that EVMbench doesn't fully capture real-world security complexity but stress the importance of testing AI in economically relevant environments.
  • The launch highlights ongoing tensions over the pace of AI development, with Vitalik Buterin advocating for 'soft pause' capabilities in AI systems.

Tags

Artificial Intelligence, Ethereum, Smart Contracts, OpenAI, Paradigm, AI, EVM, AI Agents, Ethereum Smart Contracts
OpenAI. Image: Shutterstock/Decrypt

ChatGPT maker OpenAI and crypto-focused investment firm Paradigm have introduced EVMbench, a tool to help improve Ethereum Virtual Machine smart contract security.

EVMbench is designed to evaluate AI agents’ ability to detect, patch, and exploit high-severity vulnerabilities in Ethereum Virtual Machine (EVM) smart contracts.

Smart contracts are the heart of the Ethereum network, holding the code that powers everything from decentralized finance protocols to token launches. The weekly number of smart contracts deployed on Ethereum reached an all-time high of 1.7 million in November 2025, with 669,500 deployed last week alone, according to Token Terminal.

EVMbench draws on 120 curated vulnerabilities from 40 audits, most sourced from open audit competitions such as Code4rena, according to an OpenAI blog post. It also includes scenarios from the security auditing process for Tempo, Stripe's purpose-built layer-1 blockchain focused on high-throughput, low-cost stablecoin payments.

Payments giant Stripe launched the public testnet for Tempo in December, saying at the time that it was being built with input from Visa, Shopify, and OpenAI, among others.

The goal is to ground testing in economically meaningful, real-world code—particularly as AI-driven stablecoin payments expand, the firm added.

Introducing EVMbench—a new benchmark that measures how well AI agents can detect, exploit, and patch high-severity smart contract vulnerabilities. https://t.co/op5zufgAGH

— OpenAI (@OpenAI) February 18, 2026

EVMbench is meant to evaluate AI models across three modes: detect, patch, and exploit. In “detect,” agents audit repositories and are scored on their recall of ground-truth vulnerabilities. In “patch,” agents must eliminate vulnerabilities without breaking intended functionality. Finally, in the “exploit” phase, agents attempt end-to-end fund-draining attacks in a sandboxed blockchain environment, with grading performed via deterministic transaction replay.
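OpenAI's post describes the scoring but not the grading code itself. As a rough illustration only, here is a minimal Python sketch of the two scoring ideas described above; the `Finding`, `detect_recall`, and `exploit_succeeded` names are hypothetical, not EVMbench's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    """A reported vulnerability, keyed by contract file and bug identifier."""
    contract: str  # e.g. "Vault.sol"
    bug_id: str    # identifier of a ground-truth vulnerability

def detect_recall(reported: set[Finding], ground_truth: set[Finding]) -> float:
    """Detect mode: fraction of the known vulnerabilities the agent reported."""
    if not ground_truth:
        return 0.0
    return len(reported & ground_truth) / len(ground_truth)

def exploit_succeeded(attacker_before: int, attacker_after: int,
                      target_before: int, target_after: int) -> bool:
    """Exploit mode: after replaying the agent's transactions in a sandboxed
    chain, count the attack only if funds actually moved from the target
    contract to the attacker."""
    return attacker_after > attacker_before and target_after < target_before

# Example: an agent that reports two of three known bugs scores ~0.67 recall.
truth = {Finding("Vault.sol", "reentrancy-01"),
         Finding("Vault.sol", "access-control-02"),
         Finding("Pool.sol", "rounding-03")}
found = {Finding("Vault.sol", "reentrancy-01"),
         Finding("Pool.sol", "rounding-03")}
print(detect_recall(found, truth))  # 0.666...
```

Replaying the same transactions against a fresh sandbox state before taking balance readings is presumably what makes the exploit grading deterministic: the outcome depends only on the submitted transactions, not on live network conditions.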

In exploit mode, GPT-5.3-Codex running via OpenAI's Codex CLI achieved a score of 72.2%, compared to 31.9% for GPT-5, which was released six months earlier. Performance was weaker in the detect and patch tasks, where agents sometimes failed to audit exhaustively or struggled to preserve full contract functionality.

The ChatGPT maker's researchers cautioned that EVMbench does not fully capture real-world security complexity. Still, they added that measuring AI performance in economically relevant environments is critical as models become powerful tools for both attackers and defenders.

Sam Altman's OpenAI and Ethereum co-founder Vitalik Buterin have previously been at odds over the pace of AI development.

In January 2025, Altman said that his firm was "confident we know how to build AGI as we have traditionally understood it." Buterin, by contrast, has argued that AI systems should include a "soft pause" capability that could temporarily restrict industrial-scale AI operations if warning signs emerge.
