Nyzhi is a terminal-first AI coding agent built in Rust that reads your code, writes changes, runs tests, and verifies results — all from the command line.
## TL;DR

Install with one command, set an API key, and run `nyz run "Add unit tests for parse_config"`. Nyzhi connects to 17+ LLM providers, ships with 50+ tools, and works in both interactive chat and one-shot modes.
## What Nyzhi Is
Nyzhi is an AI coding agent that lives in your terminal. You describe what you want in plain language; it figures out how to do it. It reads files, edits code, runs shell commands, executes tests, and checks that everything works — with structured evidence you can trust.
Key capabilities:
- Read — Files, diffs, semantic search, LSP diagnostics, directory trees
- Write — Create and edit files, apply patches, refactor across modules
- Run — Shell commands, tests, builds, linters
- Verify — Build logs, test output, lint results as structured evidence
It works in two modes: interactive (a rich TUI where you chat in real time) and non-interactive (`nyz run "<task>"` for one-shot runs from scripts or CI).
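Both modes use the same binary; a quick sketch (the task strings are illustrative):

```shell
# Interactive: open the TUI and chat in real time
nyz

# Non-interactive: run a single task and exit, e.g. from a script or CI job
nyz run "Fix the failing test in tests/parser.rs"
```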
## LLM Providers
Nyzhi is provider-agnostic. Connect any of 17+ LLM providers — open-source first, plus every major proprietary API:
| Open-source & open-weight | Proprietary |
|---|---|
| Kimi (Moonshot), MiniMax, GLM (Z.ai), DeepSeek | OpenAI, Anthropic, Gemini |
| Groq, Together AI, Ollama (local) | OpenRouter (aggregator) |
Ollama runs models entirely on your machine — no API keys, no network calls, full privacy. See Providers for the full list and configuration.
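Provider choice typically lives in the TOML config described under Configuration; a hypothetical sketch (the key names are illustrative, not the documented schema):

```toml
# Hypothetical config sketch; see Providers for the documented keys
[provider]
name = "ollama"           # runs locally, no API key required
model = "qwen2.5-coder"   # any model you have pulled locally

# Hosted alternative (API key read from the environment):
# [provider]
# name = "deepseek"
# model = "deepseek-chat"
```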
## 50+ Built-in Tools
Nyzhi ships with a rich tool catalog:
- Files — read, write, edit, patch, glob, grep, move, copy, delete
- Git — status, diff, log, commit, checkout
- Shell — run commands with streaming output
- Code analysis — semantic search, LSP integration, directory trees
- Browser — fetch URLs, automate pages
- Teams — spawn coordinated sub-agents with task boards and mailbox messaging
Extend further with MCP (Model Context Protocol) servers — plug in filesystem tools, remote APIs, or custom integrations without changing Nyzhi itself.
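As a sketch, an MCP server entry in config might look like this (the TOML shape is hypothetical; the filesystem server package is a real MCP reference server):

```toml
# Hypothetical entry; check Nyzhi's docs for the actual MCP schema
[mcp.servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
```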
## Autonomous Execution
Autopilot runs tasks end-to-end in five phases:
1. Expansion — Turn your idea into a full specification
2. Planning — Break work into ordered steps
3. Execution — Create and modify files, run commands
4. QA — Run verification checks, fix issues
5. Validation — Confirm requirements are met
State persists between phases. If you interrupt, you can resume from where you left off.
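In the TUI, Autopilot is driven by the `/autopilot` slash command; a sketch (the task is illustrative, and the exact interrupt/resume behavior is a guess from the phase persistence described above):

```text
/autopilot Migrate the config loader from JSON to TOML
# Interrupted mid-run? State is persisted per phase, so a later
# /resume can pick up from the last completed phase.
```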
## Multi-Agent Teams
Spawn coordinated sub-agents that work together:
```
/team 3 Build a REST API with auth, database, and tests
```
A lead agent breaks the task into sub-tasks, assigns them via a shared task board, and coordinates through mailbox messaging. See Teams for details.
## Sessions and Memory
- Sessions — Every conversation auto-saves as JSON. Resume, search, and export. See Sessions.
- Memory — Project-scoped and user-scoped knowledge persists across sessions. The agent learns conventions, decisions, and patterns. See Memory.
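Because sessions are plain JSON files, you can inspect or search them with ordinary tools; a sketch assuming a typical data directory (the path is hypothetical; see Sessions for the real location):

```shell
# Path is hypothetical; check the Sessions docs for the actual directory
grep -l "parse_config" ~/.local/share/nyzhi/sessions/*.json
```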
## Hello World
Get from zero to your first task in under two minutes:
```shell
# 1. Install
curl -fsSL https://get.nyzhi.com | sh

# 2. Set an API key (pick one)
export OPENAI_API_KEY="sk-..."
# export ANTHROPIC_API_KEY="sk-ant-..."
# export DEEPSEEK_API_KEY="sk-..."

# 3. Run your first task
nyz run "Add a hello world function that prints 'Hello, Nyzhi!'"
```
Example output:
```text
> nyz run "Add a hello world function that prints 'Hello, Nyzhi!'"
Reading src/main.rs...
Editing src/main.rs...
Running: cargo run
   Compiling my-project v0.1.0
    Finished dev [unoptimized + debuginfo]
     Running `target/debug/my-project`
Hello, Nyzhi!
```
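The resulting edit in a run like this might look something like the following Rust (illustrative; the exact change depends on your project):

```rust
// Illustrative result of the task above, not Nyzhi's literal output.
fn hello_message() -> &'static str {
    "Hello, Nyzhi!"
}

fn main() {
    // Call site so `cargo run` prints the greeting
    println!("{}", hello_message());
}
```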
## Terminal UI
The interactive TUI (`nyz`) gives you:
- Slash commands — `/help`, `/sessions`, `/resume`, `/autopilot`, `/trust`, and more
- Tab completion — Commands and `@`-mentioned file paths
- Themes — Nyzhi Dark, Tokyo Night, Catppuccin, Dracula, Solarized, Gruvbox
- Accent colors — Copper, blue, emerald, violet, and others
See TUI for the full command reference and shortcuts.
## Why Nyzhi
| Benefit | Description |
|---|---|
| Single binary | One executable. No Node, Python, or Docker. |
| Zero runtime deps | Built entirely in Rust. Install once, run anywhere. |
| Provider-agnostic | Switch providers with a config change. Use routing to auto-select models by task. |
| Privacy-respecting | Run locally with Ollama. API keys stay on your machine. OAuth tokens stored in `auth.json`. |
| Extensible | MCP servers, hooks, custom commands, and skills. |
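The routing mentioned above (auto-selecting models by task) would live in config as well; a purely hypothetical sketch, since the real schema is documented under Configuration:

```toml
# Hypothetical routing table; key names and model IDs are illustrative
[routing]
default = "deepseek/deepseek-chat"
plan    = "anthropic/claude-sonnet"   # larger model for planning
edit    = "ollama/qwen2.5-coder"      # local model for quick edits
```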
## Next Steps
- Getting Started — Install, connect a provider, and run your first task in under two minutes
- Configuration — Global, project, and local TOML config
- Providers — Connect OpenAI, Anthropic, Gemini, DeepSeek, Ollama, and more