Is Artificial General Intelligence Already Here? One AI Founder Thinks So


TL;DR

Eliza Labs founder Shaw Walters argues current AI models already constitute Artificial General Intelligence (AGI), though it differs from human intelligence. He warns AGI systems will be fallible like humans, making perfect security impossible as AI agents gain more control in crypto and consumer platforms.

Key Takeaways

  • Eliza Labs founder Shaw Walters believes current leading AI models already meet the definition of AGI, calling it an 'inflection point'
  • Walters rejects the idea of a single dominant AGI system, stating 'life loves variants' and there won't be an 'AI God'
  • AI agents have evolved from experimental chatbots to persistent systems embedded in crypto and consumer platforms, with examples like OpenClaw and Agentic Wallets
  • As AI advances toward AGI, Walters warns it behaves more like fallible humans than predictable machines, making foolproof security safeguards impossible
  • The development of reliable structured outputs with GPT-4 was a key breakthrough enabling practical AI agents that can perform actions

Tags

Artificial Intelligence, AI, Technology, Artificial General Intelligence, AGI, AI Agents, Eliza Labs, Shaw Walters, Blockchain AI
AI mockup. Image: Decrypt

Artificial general intelligence may have already arrived.

That’s according to Eliza Labs’ founder Shaw Walters, who spoke with Decrypt last week during ETHDenver. Walters said current leading models already meet his definition of artificial general intelligence, better known as AGI.

“I think that we're at the inflection point where we have AGI,” he said. “I completely believe that this is general intelligence. It's nothing like us. It learns in a completely different way, but it is intelligent nonetheless, and it is very general.”

Walters founded Eliza Labs, originally launched in 2024 as ai16z, which created the open-source ElizaOS, one of the first frameworks for building autonomous AI agents on blockchains.

First coined in 1997 and later popularized by researchers, including SingularityNET founder Ben Goertzel, Artificial General Intelligence refers to a theoretical form of AI designed to match or exceed human cognitive abilities across a broad spectrum of tasks. 

While prominent AI developers, including OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei, warn that AGI could arrive within the next decade, Walters rejected the idea that it will emerge as a single dominant system.

“I just do not see it as the AI God,” he said. “There's never going to be one, because life loves variants.”

Walters said he first began working on AI agents during the GPT-3 era, when structured outputs were unreliable.

“It felt like most of the work I was doing was putting training wheels on a baby,” he said. “Just keeping it on, getting it to respond with the structure that I need to parse out what the action was. It was an enormous problem.”

Progress came with the launch of GPT-4 in 2023, which Walters said enabled more reliable responses.

“It was incredibly good at giving me a structured response, and now I could actually do action calling,” he said. “That was where we went from barely working at all to being able to make an agent that does things, but it was still very limited.”
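The difference Walters describes can be illustrated with a minimal sketch. This is not ElizaOS code; the `dispatch` function and `ACTIONS` registry below are hypothetical, showing only why a strictly structured (here, JSON) model reply can be routed to an action while free-form text cannot:

```python
import json

# Hypothetical action registry: each entry maps an action name the model
# may emit to a function the agent can actually execute.
ACTIONS = {
    "transfer": lambda to, amount: f"sent {amount} to {to}",
}

def dispatch(model_reply: str) -> str:
    """Parse a structured model reply and route it to a registered action."""
    payload = json.loads(model_reply)    # fails loudly on free-form text
    action = ACTIONS[payload["action"]]  # unknown actions raise KeyError
    return action(**payload["arguments"])

# A well-structured reply parses cleanly into an action call:
print(dispatch('{"action": "transfer", "arguments": {"to": "alice", "amount": 5}}'))
# A GPT-3-era free-form reply like "Sure! I will send 5 to alice"
# would raise a JSON decoding error instead of dispatching anything.
```

The fragility Walters calls "putting training wheels on a baby" lives in that first parsing step: until models reliably produced the expected structure, every reply risked failing before any action could run.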

AI agents have moved from experimental chatbots to persistent systems embedded across crypto and consumer platforms. 

In February, OpenClaw surged to roughly 147,000 GitHub stars and spawned projects including the AI “social media” platform Moltbook, while Coinbase launched “Agentic Wallets” on Base and Fetch.ai said its agents can complete purchases using Visa infrastructure.

However, as agents gained root access and wallet control, Walters said the initial excitement gave way to deep security concerns.

As developers at ETHDenver promoted the benefits of AI agents in crypto, Walters warned that as AI advances toward AGI, it behaves less like a predictable machine and more like a fallible human, making foolproof safeguards impossible to engineer.

“At the end of the day, you're dealing with something that's more like a human and less like a calculator,” he said. “It's gonna do stupid things sometimes, and there’s just no way to build a super secure system that's going to keep them from doing something dumb.”
