itertron

Use any model. Run offline. Keep your data.

$ curl -fsSL https://itertron.com/install.sh | bash

Paste into your terminal to install, or build from source.

Why Itertron

No vendor lock-in
Switch between Anthropic, OpenAI, Gemini, OpenRouter, or local models with one env var.
Native GPU inference
Built-in mlx-lm for Apple Silicon, llama-server with CUDA for Linux and Windows.
Persistent memory
Learns across sessions. Auto-extracts context and consolidates knowledge overnight.
Multi-agent
Spawn background agents with git worktree isolation for parallel work.
30+ tools
File ops, bash, glob, grep, web fetch, Jupyter notebooks, MCP servers, and more.
Fully local
No telemetry, no cloud dependency. Your code stays on your machine.
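
The one-env-var provider switch described above might look like the sketch below. The variable names (`ITERTRON_PROVIDER`, `OPENROUTER_API_KEY`) are illustrative assumptions, not confirmed by itertron's documentation — check the project docs for the actual names:

```shell
# Hypothetical sketch: variable names are assumptions, not taken from the docs.
export ITERTRON_PROVIDER=openrouter   # e.g. anthropic, openai, gemini, openrouter, local
export OPENROUTER_API_KEY=...         # bring your own key for the chosen provider
itertron                              # launches against the selected backend
```

Running fully offline would presumably mean pointing the provider at `local`, in which case no API key is needed.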

How it compares

               Itertron                Claude Code      Codex CLI     Aider         OpenCode
Multi-vendor   5 providers             Anthropic only   OpenAI only   Via litellm   Via AI SDK
Local GPU      mlx-lm + llama-server   No               No            Via Ollama    Via Ollama
Memory         SQLite + auto-extract   File-based       No            No            No
Skills         Yes                     Closed-source    No            No            No

Supported providers

Anthropic
OpenAI
Gemini
OpenRouter
Local

Get started

Install in one command. Bring your own API key, or run fully offline.

View on GitHub