Your agents fail because they
forget everything. Fix it.
Zero Agent is a memory-driven TypeScript framework for self-evolving AI agents. It remembers every task, reuses what worked, improves what failed, and adapts its strategy — automatically.
One framework.
Zero forgetting.
Agents that remember every task.
Experience Memory stores structured records of every run — strategy used, tool name, quality score (0–100, computed deterministically without API calls), reflection data, and memory notes. Uses DJB2 hash-based IDs for deduplication. Your agent never starts from zero.
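The record shape and ID scheme can be sketched like this — the function and field names here are illustrative, not the framework's actual API, but the DJB2 hash is the standard algorithm named above:

```typescript
// DJB2 hash: hash = hash * 33 + charCode, kept in 32-bit unsigned range.
function djb2(input: string): string {
  let hash = 5381;
  for (let i = 0; i < input.length; i++) {
    hash = ((hash << 5) + hash + input.charCodeAt(i)) >>> 0;
  }
  return hash.toString(16);
}

interface ExperienceRecord {
  id: string;           // DJB2 hash, used for deduplication
  task: string;
  strategy: "reuse" | "generate" | "improve" | "delegate" | "reject";
  toolName?: string;
  qualityScore: number; // 0–100, computed deterministically
  memoryNote: string;
}

// Hypothetical constructor: hashing task + strategy gives a stable,
// deduplicatable ID without any API call.
function makeRecord(task: string, strategy: ExperienceRecord["strategy"]): ExperienceRecord {
  return { id: djb2(task + "|" + strategy), task, strategy, qualityScore: 0, memoryNote: "" };
}
```

Identical runs hash to the same ID, so storing twice is a no-op.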
Picks the right approach before acting.
Strategy Adapter analyzes past experiences and available tools using term-extraction similarity search with a 26-word stop list. Confidence ranges from 0.6 (nothing found) to 0.9 (proven high-quality reuse). Five strategies: reuse, generate, improve, delegate via AXL, or reject.
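Term-extraction similarity reduces to simple set overlap. A minimal sketch — the stop-word list here is a small sample, not the framework's full 26-word list:

```typescript
// Sample stop words; the real adapter filters 26 of them.
const STOP_WORDS = new Set(["the", "a", "an", "is", "to", "of", "and", "for", "in", "on"]);

// Split on non-word characters, drop single letters and stop words.
function extractTerms(text: string): Set<string> {
  return new Set(
    text.toLowerCase().split(/\W+/).filter((w) => w.length > 1 && !STOP_WORDS.has(w))
  );
}

// Overlap score: fraction of query terms found in the candidate.
function similarity(query: string, candidate: string): number {
  const q = extractTerms(query);
  const c = extractTerms(candidate);
  if (q.size === 0) return 0;
  let hits = 0;
  for (const t of q) if (c.has(t)) hits++;
  return hits / q.size;
}
```

No embeddings, no API calls: matching stays deterministic and offline-capable.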
Generates tools when none exist.
EvolutionEngine creates tools via LLM inference through 0G Compute (decentralized) or OpenAI fallback. Each generated tool goes through sandbox testing with a 70% pass threshold, up to 3 retry attempts with iterative feedback. Temperature locked at 0.2 for deterministic output.
Evolution Engine
Active Generation
LLM generates a self-contained async execute() function. Tries 0G Compute first, falls back to OpenAI gpt-4o-mini. Temperature: 0.2.
Runs in isolated-vm (16MB limit). Evaluated against LLM-generated test cases. Pass threshold: 70%.
- 3 retries
- Iterative feedback
- 120s hard limit
- 30s per test case
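The generate → test → retry loop above can be sketched as follows; `generate` and `evaluate` are stand-ins for the real LLM and sandbox calls, and the feedback format is an assumption:

```typescript
type Generator = (task: string, feedback: string[]) => Promise<string>;
type Evaluator = (code: string) => Promise<number>; // pass rate, 0..1

// Up to 3 attempts; each failure feeds a note back into the next prompt.
async function generateWithRetries(
  task: string,
  generate: Generator,
  evaluate: Evaluator,
  maxAttempts = 3,
  passThreshold = 0.7 // the 70% sandbox pass threshold
): Promise<string | null> {
  const feedback: string[] = [];
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const code = await generate(task, feedback);
    const passRate = await evaluate(code);
    if (passRate >= passThreshold) return code;
    feedback.push(`attempt ${attempt} passed ${Math.round(passRate * 100)}% of tests`);
  }
  return null; // all attempts below threshold
}
```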
Auto-improves failed tools.
ToolImprover takes a failed tool, sends its code + error reason to the LLM, gets back an improved version with proper semver patch bumping (1.0.0 → 1.0.1), runs it through sandbox evaluation, and only persists if it passes quality gates. Conservative by design.
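The patch bump itself is simple semver arithmetic — a minimal sketch, assuming plain `major.minor.patch` version strings:

```typescript
// 1.0.0 → 1.0.1: increment only the patch component.
function bumpPatch(version: string): string {
  const [major, minor, patch] = version.split(".").map(Number);
  return `${major}.${minor}.${patch + 1}`;
}
```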
On-Chain Identity
Public ENS text records on Sepolia store agent description, capabilities, tool registry pointer (0G root hash), AXL peer ID, and project URL.
ENS-native agent identity on Sepolia.
Every agent gets on-chain identity through 5 ENS text records under the zeroagent.* namespace — description, capabilities (JSON array), tool registry pointer (0G root hash), AXL peer ID, and project URL. Auto-detects wallet ENS name via reverse lookup.
Framework Modules
13 composable modules that work together to create self-evolving agents. MIT licensed, fully offline-capable, and built for production.
Experience Memory
Stores structured ExperienceRecords with DJB2 hash-based IDs. Each record includes task, strategy, tool used, quality score (0–100 deterministic), reflection data, and memory note. Supports local JSON or optional 0G Storage persistence. Similarity search uses term extraction with 26 stop words.
Strategy Adapter
Selects from 5 strategies using term-extraction similarity search. Confidence ranges 0.6–0.9. Reuse is preferred when past quality score ≥80 with matching tool. Improvement triggers when same tool previously failed. Tools below 50% success rate auto-trigger regeneration.
Evolution Engine
Generates tools via LLM through 0G Compute (decentralized) or OpenAI gpt-4o-mini fallback. Runs up to 3 attempts with iterative feedback on failure. Each tool must pass sandbox evaluation at 70% threshold. Temperature locked at 0.2 for deterministic output.
Tool Improver
Takes failed tools and generates improved versions via LLM with proper semver patch bumping (1.0.0 → 1.0.1). Sends original code + error reason as context. Improved tools must pass sandbox eval before persisting. Conservative — only keeps proven improvements.
Reflection Engine
Deterministic post-task analysis with zero API calls. Base score: 80 (+10 if tool used, +5 if under 5s). Detects failures by checking output for 'error' key. Flags improvementNeeded when score <70 or task failed. Recommends next strategy automatically.
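The scoring rules above fit in a few lines. A sketch with illustrative field names — the rule that failed runs score 0 is an assumption, since the source only specifies the base score and bonuses:

```typescript
interface ReflectionInput {
  output: Record<string, unknown>;
  usedTool: boolean;
  durationMs: number;
}

interface Reflection {
  qualityScore: number;
  failed: boolean;
  improvementNeeded: boolean;
}

function reflect({ output, usedTool, durationMs }: ReflectionInput): Reflection {
  const failed = "error" in output;             // failure detected via an 'error' key
  let score = failed ? 0 : 80;                  // base score 80 (assumed 0 on failure)
  if (!failed && usedTool) score += 10;         // +10 if a tool was used
  if (!failed && durationMs < 5000) score += 5; // +5 if under 5s
  return { qualityScore: score, failed, improvementNeeded: score < 70 || failed };
}
```

Everything here is pure computation, which is why reflection needs zero API calls.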
Identity & Communications
ENS identity manager writes 5 text records on Sepolia (description, capabilities, toolRegistry, axlPeerId, url). AXL client enables P2P task delegation between agents with message deduplication (1,000 entry FIFO) and request-response correlation via polling.
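A 1,000-entry FIFO dedup set can be sketched like this; the class name and method are placeholders, not the AXL client's real API:

```typescript
class FifoDedup {
  private seen = new Set<string>();
  private order: string[] = [];
  constructor(private capacity = 1000) {}

  // Returns true if the message ID is new; evicts the oldest entry when full.
  accept(id: string): boolean {
    if (this.seen.has(id)) return false;
    if (this.order.length >= this.capacity) {
      const oldest = this.order.shift()!;
      this.seen.delete(oldest);
    }
    this.seen.add(id);
    this.order.push(id);
    return true;
  }
}
```

Bounded memory with O(1) duplicate checks — old IDs age out instead of accumulating forever.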
The Self-Evolving Loop
From task receipt to stored experience — every run makes your agent smarter. 13 composable modules, 0 external API dependencies for reflection.
1. Receive Task
A TaskRequest comes in through handleTask() or run(). The agent detects trivial chat (greetings, capability questions) and responds directly. Otherwise it begins by searching ToolRegistry and ExperienceMemory for relevant context via term-extraction similarity search.
2. Select Strategy
Analyzes past experiences (quality ≥80 = reuse, same tool failed = improve) and available tools against the task query. Returns one of 5 strategies with confidence 0.6–0.9. Tools below 50% success rate auto-trigger regeneration instead of reuse.
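The decision rules reduce to a small function. The thresholds come from the text; the shape of the matched experience and the mid-range confidence values are assumptions:

```typescript
interface PastExperience {
  toolName: string;
  qualityScore: number;
  failed: boolean;
  toolSuccessRate: number; // 0..1 across all runs of this tool
}

type Strategy = "reuse" | "improve" | "generate";

function selectStrategy(match: PastExperience | null): { strategy: Strategy; confidence: number } {
  if (!match) return { strategy: "generate", confidence: 0.6 };                       // nothing found
  if (match.toolSuccessRate < 0.5) return { strategy: "generate", confidence: 0.7 };  // regenerate weak tools
  if (match.failed) return { strategy: "improve", confidence: 0.7 };                  // same tool failed before
  if (match.qualityScore >= 80) return { strategy: "reuse", confidence: 0.9 };        // proven high-quality reuse
  return { strategy: "generate", confidence: 0.6 };
}
```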
3. Generate or Reuse
If no tool matches or strategy says generate: EvolutionEngine calls ToolGenerator which tries 0G Compute broker first (decentralized LLM), then falls back to OpenAI gpt-4o-mini. Each attempt gets sandboxed, evaluated at 70% threshold, up to 3 retries with iterative feedback.
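The broker fallback is a plain try/catch chain. A sketch with placeholder call shapes — the real client signatures may differ:

```typescript
type LlmCall = (prompt: string) => Promise<string>;

// Try the decentralized 0G Compute broker first; on any error,
// fall back to the OpenAI gpt-4o-mini path.
async function generateWithFallback(prompt: string, zeroG: LlmCall, openai: LlmCall): Promise<string> {
  try {
    return await zeroG(prompt);
  } catch {
    return await openai(prompt);
  }
}
```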
4. Execute & Improve
Runs the tool in isolated-vm sandbox (16MB memory limit, fetch bridge, dangerous globals nullified). If execution fails AND tool exists: ToolImprover generates semver-bumped improved variant → evaluate → persist → re-execute. All concurrent writes are serialized via promise-chain locks.
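A promise-chain lock serializes writes without threads or mutex libraries. A minimal sketch of the pattern (the class name is illustrative):

```typescript
class WriteLock {
  private tail: Promise<unknown> = Promise.resolve();

  // Queues fn behind all previously queued work; each caller awaits its own result.
  run<T>(fn: () => Promise<T>): Promise<T> {
    const next = this.tail.then(fn, fn);      // run after the previous task, success or failure
    this.tail = next.catch(() => undefined);  // keep the chain alive after errors
    return next;
  }
}
```

Each write chains onto the previous one's promise, so concurrent callers execute strictly in arrival order.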
5. Reflect & Store
Deterministic reflection produces quality score (base 80 +10 tool bonus +5 speed bonus), what worked/failed, improvementNeeded flag, memory note, and recommended next strategy — all without external API calls. Result truncated to 500 chars and stored as an ExperienceRecord with DJB2 hash-based ID.
Frequently asked questions
Everything you need to know about the framework and how it works.