Built for ETHGlobal Open Agents 2026

Your agents fail because they
forget everything.
We fix it.

Zero Agent is a memory-driven TypeScript framework for self-evolving AI agents. It remembers every task, reuses what worked, improves what failed, and adapts its strategy — automatically.

One framework.
Zero forgetting.

Agents that remember every task.

Experience Memory stores structured records of every run — strategy used, tool name, quality score (0–100, computed deterministically without API calls), reflection data, and memory notes. Uses DJB2 hash-based IDs for deduplication. Your agent never starts from zero.
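The DJB2-based IDs can be pictured with a minimal sketch. This is illustrative only — the function names (`djb2`, `experienceId`) and the `task::strategy` key format are assumptions, not the framework's actual API:

```typescript
// Illustrative DJB2 hash: hash = hash * 33 + charCode, kept unsigned 32-bit.
function djb2(input: string): number {
  let hash = 5381;
  for (let i = 0; i < input.length; i++) {
    hash = ((hash << 5) + hash + input.charCodeAt(i)) >>> 0;
  }
  return hash;
}

// Two records for the same task + strategy hash to the same ID,
// which is what makes deduplication possible.
function experienceId(task: string, strategy: string): string {
  return djb2(`${task}::${strategy}`).toString(16);
}
```

Because the hash is deterministic, re-running an identical task does not create a second record — the ID collides on purpose.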

Experience Memory
  • Fetch ETH price: 92/100
  • Summarize research: 88/100
  • Parse JSON output: 75/100
  • Scrape webpage
  • Generate report: 95/100

Strategy Selection (Adaptive)
  • Reuse Tool: 92%
  • Generate New: 68%
  • Improve Existing: 34%
  • Delegate (AXL): 15%
  • Reject Task: 5%

Picks the right approach before acting.

Strategy Adapter analyzes past experiences and available tools using term-extraction similarity search (filtering out 26 stop words). Confidence ranges from 0.6 (nothing relevant found) to 0.9 (proven high-quality reuse). Five strategies: reuse, generate, improve, delegate via AXL, or reject.
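The decision rules described above and in the module list can be sketched as a small function. Only the 0.6 and 0.9 confidences and the ≥80 / 50% thresholds come from the docs — the intermediate confidences (0.7, 0.75) and all names here are made up for illustration:

```typescript
type Strategy = "reuse" | "generate" | "improve" | "delegate" | "reject";

interface BestMatch {
  quality: number;     // 0-100 quality score of the matched past run
  failed: boolean;     // did that run fail?
  successRate: number; // matched tool's overall success rate, 0-1
}

// Illustrative rules only; the real adapter also runs term-extraction
// similarity search before arriving at a best match.
function selectStrategy(best: BestMatch | null): { strategy: Strategy; confidence: number } {
  if (!best) return { strategy: "generate", confidence: 0.6 };                  // nothing found
  if (best.successRate < 0.5) return { strategy: "generate", confidence: 0.7 }; // weak tool: regenerate
  if (best.failed) return { strategy: "improve", confidence: 0.75 };            // same tool failed before
  if (best.quality >= 80) return { strategy: "reuse", confidence: 0.9 };        // proven reuse
  return { strategy: "generate", confidence: 0.7 };
}
```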

Generates tools when none exist.

EvolutionEngine creates tools via LLM inference through 0G Compute (decentralized) or OpenAI fallback. Each generated tool goes through sandbox testing with a 70% pass threshold, up to 3 retry attempts with iterative feedback. Temperature locked at 0.2 for deterministic output.

tool-generator.ts

Evolution Engine

Generate

LLM generates a self-contained async execute() function. Tries 0G Compute first, falls back to OpenAI gpt-4o-mini. Temperature: 0.2.

Sandbox & Evaluate

Runs in isolated-vm (16MB limit). Evaluated against LLM-generated test cases. Pass threshold: 70%.

Max Attempts
  • 3 retries
  • Iterative feedback
Timeout
  • 120s hard limit
  • 30s per test case
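The generate/evaluate loop above can be sketched as follows. The `generate` and `evaluate` callbacks stand in for the LLM call and the sandbox run, which are not shown here; only the 3-attempt cap, the 70% pass threshold, and the iterative feedback come from the docs:

```typescript
// Hypothetical retry loop around tool generation (names are illustrative).
async function generateWithRetries(
  generate: (feedback?: string) => Promise<string>,              // LLM call (temperature 0.2 in the real engine)
  evaluate: (code: string) => Promise<{ passRate: number; feedback: string }>, // sandboxed test cases
  maxAttempts = 3,
  threshold = 0.7,
): Promise<string | null> {
  let feedback: string | undefined;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const code = await generate(feedback);
    const result = await evaluate(code);
    if (result.passRate >= threshold) return code; // passed the 70% gate
    feedback = result.feedback;                    // feed failures into the next attempt
  }
  return null; // all attempts exhausted
}
```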
Tool Improver
Semver Patch

Auto-improves failed tools.

ToolImprover takes a failed tool, sends its code + error reason to the LLM, gets back an improved version with proper semver patch bumping (1.0.0 → 1.0.1), runs it through sandbox evaluation, and only persists if it passes quality gates. Conservative by design.
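The patch bump itself is mechanical. A minimal sketch, assuming plain `x.y.z` version strings (the helper name is hypothetical):

```typescript
// Bump the semver patch component: 1.0.0 -> 1.0.1, 2.3.9 -> 2.3.10.
// Assumes a plain "major.minor.patch" string with no pre-release tags.
function bumpPatch(version: string): string {
  const [major, minor, patch] = version.split(".").map(Number);
  return `${major}.${minor}.${patch + 1}`;
}
```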

research-agent.eth
ENS Identity Record
Verified

On-Agent Identity

Public ENS text records on Sepolia store agent description, capabilities, tool registry pointer (0G root hash), AXL peer ID, and project URL.

Records (Sepolia)
  • description & capabilities
  • zeroagent.toolRegistry (0G)
  • zeroagent.axlPeerId (P2P)
Ethereum / Sepolia · 0G Storage

ENS-native agent identity on Sepolia.

Every agent gets on-chain identity through 5 ENS text records under the zeroagent.* namespace — description, capabilities (JSON array), tool registry pointer (0G root hash), AXL peer ID, and project URL. Auto-detects wallet ENS name via reverse lookup.
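The five records can be pictured as a key-value map. Only `zeroagent.toolRegistry` and `zeroagent.axlPeerId` appear verbatim in this page; the other three key names are assumptions, and every value below is a placeholder:

```typescript
// Assumed shape of the zeroagent.* ENS text records (placeholder values).
const ensTextRecords: Record<string, string> = {
  "zeroagent.description": "Self-evolving research agent",
  "zeroagent.capabilities": JSON.stringify(["search", "summarize", "report"]), // JSON array
  "zeroagent.toolRegistry": "<0G root hash>",   // pointer into 0G Storage
  "zeroagent.axlPeerId": "<AXL peer ID>",       // P2P delegation address
  "zeroagent.url": "https://example.org/agent", // project URL
};
```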

Framework Modules

13 composable modules that work together to create self-evolving agents. MIT licensed, fully offline-capable, and built for production.

Experience Memory

Stores structured ExperienceRecords with DJB2 hash-based IDs. Each record includes task, strategy, tool used, quality score (0–100 deterministic), reflection data, and memory note. Supports local JSON or optional 0G Storage persistence. Similarity search uses term extraction with 26 stop words.

Task History · Quality Scoring · Pattern Recognition · 0G Persistence

Strategy Adapter

Selects from 5 strategies using term-extraction similarity search. Confidence ranges 0.6–0.9. Reuse is preferred when past quality score ≥80 with matching tool. Improvement triggers when same tool previously failed. Tools below 50% success rate auto-trigger regeneration.

5 Strategies · Confidence Scoring · Term Extraction · Zero Waste

Evolution Engine

Generates tools via LLM through 0G Compute (decentralized) or OpenAI gpt-4o-mini fallback. Runs up to 3 attempts with iterative feedback on failure. Each tool must pass sandbox evaluation at 70% threshold. Temperature locked at 0.2 for deterministic output.

0G Compute · OpenAI Fallback · Sandbox Testing · Auto-Evaluation

Tool Improver

Takes failed tools and generates improved versions via LLM with proper semver patch bumping (1.0.0 → 1.0.1). Sends original code + error reason as context. Improved tools must pass sandbox eval before persisting. Conservative — only keeps proven improvements.

Semver Patching · LLM Improvement · Quality Gates · Conservative Updates

Reflection Engine

Deterministic post-task analysis with zero API calls. Base score: 80 (+10 if tool used, +5 if under 5s). Detects failures by checking output for 'error' key. Flags improvementNeeded when score <70 or task failed. Recommends next strategy automatically.

No API Needed · Structured Learning · Strategy Hints · Score 0–100
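The scoring rule above is simple enough to sketch directly. The base score, bonuses, `error`-key check, and the score < 70 flag come from the docs; the function name and input shape are assumptions:

```typescript
// Deterministic reflection sketch: base 80, +10 tool bonus, +5 speed bonus.
// No LLM or API call involved — it's pure arithmetic on run metadata.
function reflect(run: { usedTool: boolean; durationMs: number; output: Record<string, unknown> }) {
  const failed = "error" in run.output;   // failure = output carries an 'error' key
  let score = 80;                         // deterministic base score
  if (run.usedTool) score += 10;          // tool bonus
  if (run.durationMs < 5000) score += 5;  // finished under 5 seconds
  return { score, failed, improvementNeeded: score < 70 || failed };
}
```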

Identity & Communications

ENS identity manager writes 5 text records on Sepolia (description, capabilities, toolRegistry, axlPeerId, url). AXL client enables P2P task delegation between agents with message deduplication (1,000 entry FIFO) and request-response correlation via polling.

ENS Sepolia · Gensyn AXL P2P · Agent Discovery · On-Chain Profile

The Self-Evolving Loop

From task receipt to stored experience — every run makes your agent smarter. 13 composable modules, 0 external API dependencies for reflection.

SelfEvolvingAgent

1. Receive Task

A TaskRequest comes in through handleTask() or run(). The agent detects trivial chat (greetings, capability questions) and responds directly. Otherwise it begins by searching ToolRegistry and ExperienceMemory for relevant context via term-extraction similarity search.

StrategyAdapter

2. Select Strategy

Analyzes past experiences (quality ≥80 = reuse, same tool failed = improve) and available tools against the task query. Returns one of 5 strategies with confidence 0.6–0.9. Tools below 50% success rate auto-trigger regeneration instead of reuse.

EvolutionEngine

3. Generate or Reuse

If no tool matches or strategy says generate: EvolutionEngine calls ToolGenerator which tries 0G Compute broker first (decentralized LLM), then falls back to OpenAI gpt-4o-mini. Each attempt gets sandboxed, evaluated at 70% threshold, up to 3 retries with iterative feedback.

ToolSandbox + ToolImprover

4. Execute & Improve

Runs the tool in isolated-vm sandbox (16MB memory limit, fetch bridge, dangerous globals nullified). If execution fails AND tool exists: ToolImprover generates semver-bumped improved variant → evaluate → persist → re-execute. All concurrent writes are serialized via promise-chain locks.
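A promise-chain lock of the kind mentioned above can be sketched in a few lines. This is the general pattern, not the framework's actual class: each call chains onto the tail of the previous one, so concurrent writes run strictly in arrival order:

```typescript
// Promise-chain lock: serializes async tasks in the order they arrive.
class WriteLock {
  private tail: Promise<unknown> = Promise.resolve();

  run<T>(task: () => Promise<T>): Promise<T> {
    const next = this.tail.then(task, task); // run whether the prior write succeeded or failed
    this.tail = next.catch(() => undefined); // swallow errors so the chain never stalls
    return next;                             // caller still sees this task's own result/error
  }
}
```

Without the lock, a fast write could land before a slow one that was issued first; with it, ordering matches issue order.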

ReflectionEngine + ExperienceMemory

5. Reflect & Store

Deterministic reflection produces quality score (base 80 +10 tool bonus +5 speed bonus), what worked/failed, improvementNeeded flag, memory note, and recommended next strategy — all without external API calls. Result truncated to 500 chars and stored as an ExperienceRecord with DJB2 hash-based ID.

Frequently asked questions

Everything you need to know about the framework and how it works.

How is Zero Agent different from other agent frameworks?

Most agent frameworks ship with a fixed set of tools — if the tool doesn't fit, the agent fails or hallucinates. Zero Agent separates tool memory from experience memory. Your agents remember what strategy worked (confidence 0.6–0.9), which tools failed, and what to do differently. They generate new tools when needed, auto-improve broken ones, and adapt their approach across 5 strategies — all without human intervention.

Stop building agents that forget.

Zero Agent gives your AI agents memory, strategy, and self-improvement — all in 13 composable TypeScript modules. MIT licensed, works offline, production-ready on day one.