A powerful, offline-capable CLI AI Agent that leverages Ollama’s local LLMs to generate—and even execute—Terminal commands for you. Save time, reduce context-switching, and let AI handle the nitty-gritty of terminal work.
- Overview
- Why “AI Commander”?
- Features
- AI Workflow
- Prerequisites
- Installation
- Usage
- Commands Cheat Sheet
- Limitations & Best Practices
- Contributing
- License
## Overview

AI Commander is a terminal-focused CLI tool that:

- Generates shell commands from your natural-language prompts.
- Optionally runs those commands automatically (if you trust the AI).
- Works completely offline using Ollama’s local LLMs: no internet, no shepherding from the cloud.
- Supports multiple models, letting you choose a smaller, faster model for casual tasks or a beefier one for complex scripting.
> “Remembering every single shell flag is so 2020. Let the AI handle that for you.” — Definitely not Darth Vader.
## Why “AI Commander”?

- **Stop Googling syntax.** Tired of hunting for that obscure `ls` flag or `gcc` option? Let AI pick the right command.
- **Offline & free.** Because your data is your business. Ollama’s local models ensure privacy and zero bandwidth costs.
- **Learn by example.** Each generated command is a mini-tutorial. You’ll soon recognize patterns: “So that’s how `du | sort-by size` works!”
## Features

- 🎯 **Natural Language → Shell:** Describe what you need in plain English, and AI Commander outputs the exact command.
- 🤖 **AI Agent Mode:** Automatically execute the generated command, no copy/paste required.
- ⚡ **Model Selection:** Pick from your installed Ollama models (e.g., `gemma3:1b`, `deepseek-r1:1.5b`, `llama3.2:latest`).
- 🔄 **Chaining & Complex Ops:** If a task needs multiple commands (compile & run, unzip & list), AI Commander will chain them: one per line, in strict Nushell syntax.
- 🔍 **Dry Run & Flags:** Preview before executing (because AI occasionally “hallucinates” and types `rm -rf /` by accident... just kidding, it’s sanitised. Mostly.)
- 📦 **Single Binary Option:** Build an executable with PyInstaller for an ultra-streamlined experience.
## AI Workflow

```mermaid
flowchart TD
    A[📝 User Prompt] --> B[🤖 Select Ollama Model]
    B --> C[🔍 Parse Prompt & Compose Final System Prompt]
    C --> D[⚙️ API Call: Ollama LLM Generates Command]
    D --> E[🔄 Format & Validate Command]
    E --> F{❓ Run Mode?}
    F -->|Yes| G[▶️ Execute Command in the Shell]
    F -->|No| H[📋 Output Command Only]
    G --> I[📊 Show Output / Errors]
    H --> I
    I --> J[✅ End]
```
1. **User Prompt.** You type:

   ```
   commander gen "find all .log files and remove them"
   ```

2. **Select Model.** AI Commander picks your configured Ollama model (e.g., `deepseek-r1:1.5b`).

3. **Parse & Compose.** The prompt is sanitized and combined with a system message that enforces “strict Nushell command only” rules.

4. **Generate.** Ollama’s LLM spins out a precise Nushell pipeline:

   ```
   ls *.log | where type == "file" | each { rm $it.name }
   ```

5. **Format & Validate.** The tool checks for absolute paths, extra whitespace, or forbidden characters (spoiler: no backticks or markdown).

6. **Run?**
   - If you invoked `commander run`, it submits the command to your local Nushell shell.
   - If you used `commander gen`, it simply prints the command to STDOUT.

7. **Show & Done.** You either copy & paste the command, or watch it execute live. No middleman.

> **Pro Tip:** Always double-check “destructive” commands (e.g., involving `rm`) before hitting Enter. AI Commander can’t read your mind when you say “wipe everything.”
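The sanitisation in the “Format & Validate” step can be sketched roughly like this. This is an illustrative minimal sketch, not the tool’s actual implementation; the function name `sanitize_command` is hypothetical:

```python
import re

def sanitize_command(raw: str) -> str:
    """Strip markdown fences, backticks, and stray whitespace from LLM output."""
    text = raw.strip()
    # Remove a surrounding ```...``` fence if the model added one anyway
    fence = re.match(r"^```[a-zA-Z]*\n(.*?)\n?```$", text, re.DOTALL)
    if fence:
        text = fence.group(1)
    # Drop leftover backticks and collapse extra whitespace, line by line
    lines = [" ".join(line.replace("`", "").split()) for line in text.splitlines()]
    return "\n".join(line for line in lines if line)
```

The key idea is that the LLM’s reply is treated as untrusted text: only after this cleanup does it count as a runnable command.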
## Prerequisites

Before rolling with AI Commander, ensure your system is up to snuff:

- **Operating System**
  - Windows 10+ (with PowerShell)
  - macOS 10.14+ (Homebrew or a downloadable binary)
  - Linux (Ubuntu 18.04+ / Fedora / Arch)
- **Python:** ≥ 3.11
- **Memory:** minimum 4 GB RAM (8 GB recommended if running larger models)
- **Disk:** ~200 MB for the tool, plus an additional 500 MB–3 GB per Ollama model, depending on size
- **Ollama:**
  - Install from ollama.com (Windows: use the installer; macOS/Linux: use brew or the official binary).
  - Ensure you have at least one model pulled (e.g., `ollama pull llama3.2:latest`).
- **Nushell (optional):**
  - Install via the official Nushell installation instructions.
  - Ensure that `nu` is in your `PATH`.
- **Package Manager:** `pip` (preferred), `poetry`, or `pipx`/`uv`
> “Installation is like cooking—follow the recipe and don’t burn the house down.”
## Installation

1. **Clone the repo**

   ```
   git clone https://github.com/mmycin/commander.git
   cd commander
   ```

2. **Environment setup**

   - Copy `.env.sample` → `.env`
   - Edit `.env` and set:

     ```
     PROCESS=DEVELOPMENT   # For local testing
     # or
     PROCESS=PRODUCTION    # For packaging and global use
     ```

   - Save the file.

3. **Install dependencies**

   ```
   pip install -r requirements.txt
   ```

4. **(Optional) Build the executable**

   Only do this if you hate Python shebangs and want a single binary.

   ```
   pyinstaller --onefile --name commander main.py
   ```

   This produces a single executable in `dist/` (add it to your `PATH` for global use). You can also attach an icon if you’re feeling fancy:

   ```
   pyinstaller --onefile --icon assets/icon.ico --name commander main.py
   ```

> **Warning:** If you accidentally ask AI Commander to `rm -rf .`, please don’t blame us. Double-check!
## Usage

After installation, verify that `ollama` and `nu` are accessible:

```
commander --help
```

You should see:

```
Usage: main.py [OPTIONS] COMMAND [ARGS]...

  Commander — An AI-powered CLI assistant built with Ollama's local LLMs.
  It generates and executes terminal commands for specific tasks using
  natural language input. Let the AI handle your shell work — securely,
  locally, and smartly.

  Created by Mycin.

Options:
  --help  Show this message and exit.

Commands:
  gen    Generate a shell command from a natural-language prompt.
  run    Generate and execute a shell command.
  init   Initialize or switch the Ollama model.
  model  Show current and available Ollama models.
  set    Change the active Ollama model.
```
```
$ commander init
Initializing Commander...

Available Models:
=================
1. gemma3:1b
2. deepseek-r1:1.5b
3. llama3.2:latest

Enter the model you want to use: deepseek
Model set to deepseek-r1:1.5b
```

- **Input:** begin typing any portion of the model name (case-insensitive).
- **Behavior:** the tool picks the first unique match.
```
commander gen "create a directory named hello"
```

- **Output:**

  ```
  mkdir hello
  ```

  No extra fluff, just the raw command. Perfect for copy → paste or scripting.

```
commander run "search for all .log files in logs/ and delete them"
```

- **Generated:**

  ```
  ls logs/*.log | each { rm $it.name }
  ```

- **Executed:** AI Commander sends that pipeline to your Nushell shell. Watch the magic happen.
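Under the hood, handing the generated pipeline to Nushell might amount to something like this. An illustrative sketch only: the helper names are hypothetical, and the only assumed fact is that `nu -c` runs a single command string (like `sh -c`):

```python
import shutil
import subprocess

def build_nu_invocation(command: str) -> list[str]:
    """`nu -c "<command>"` executes one command string, analogous to `sh -c`."""
    return ["nu", "-c", command]

def run_in_nushell(command: str):
    """Run a generated pipeline with Nushell, or fall back to printing it."""
    if shutil.which("nu") is None:
        print(command)  # no Nushell on PATH: behave like `commander gen`
        return None
    return subprocess.run(build_nu_invocation(command),
                          capture_output=True, text=True)
```

The `shutil.which` guard mirrors the tool’s own requirement that `nu` be on your `PATH` before `commander run` can execute anything.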
```
commander model
```

- **Example output:**

  ```
  Current Ollama Model Name: deepseek-r1:1.5b

  Available Models:
  ================
  1. gemma3:1b
  2. deepseek-r1:1.5b
  3. llama3.2:latest
  ```

```
commander set llama
```

- **Output:**

  ```
  Model successfully set to: llama3.2:latest
  ```

Use the first few letters of a model (e.g., `gem`, `deep`, `llama`), as long as it’s unambiguous. If it’s ambiguous, the tool will prompt you again.
## Commands Cheat Sheet

> “For those who like to live on the edge... always double-check what your AI friend just told you to run!”

Here’s a quick glance at what Commander can do with just a few words from you. These prompts turn into real, working shell commands, ready to roll.
| 🛠️ Task | 🗣️ Prompt Example | 🧾 Generated Command(s) |
|---|---|---|
| 📂 List all files (detailed) | list files | `ls -al` |
| 📁 Show only directories | list only directories | `ls -d */` |
| 🏗️ Create nested folders | make dir project/src/utils | `mkdir -p project/src/utils` |
| 🧑‍💻 Compile & run C program | run main.c | `gcc main.c -o main && ./main` |
| 📦 Move file to folder | move data.txt to backup | `mv data.txt backup/` |
| 🧹 Remove all .log files | delete all .log files | `rm *.log` |
| 🔢 Count lines in a file | how many lines in notes.txt | `wc -l notes.txt` |
| 💽 Check disk usage | check disk usage | `df -h` |
| 🔐 Make script executable | make script.sh executable | `chmod +x script.sh` |
| 🔍 Search text in file | find TODO in app.rs | `grep "TODO" app.rs` |
| 📦 Zip a folder | compress logs folder | `zip -r logs.zip logs` |
| 📂 Unzip an archive | extract archive.zip | `unzip archive.zip` |
| 🌀 Initialize Git repo | git init | `git init` |
| 🧬 Clone a GitHub repo | clone repo https://github.com/foo/bar | `git clone https://github.com/foo/bar` |
| 👁️ View JSON config file | show config.json | `cat config.json` |
> 💡 **Pro Tip:** While these examples use basic commands, Commander can unleash Nushell’s real power by chaining filters, pipes, and even loops. Example:
>
> ```
> ls | where extension == "log" | each { rm $it.name }
> ```
## Limitations & Best Practices

- **AI isn’t perfect**
  - Sometimes you’ll ask it to “delete all temporary files,” and it decides on `rm -rf ./`. It’s rare, but always read the generated command.
  - When in doubt, use `commander gen` first, inspect the command, then copy/paste.
- **Permissions & safety**
  - AI Commander does not run with `sudo` privileges; don’t expect it to fix your root-owned files.
  - It tries to avoid dangerous patterns (no `rm -rf /` or `mv / /dev/null`), but still: be cautious.
- **Model knowledge cutoff**
  - Ollama’s local models are “frozen” snapshots. If Nushell introduces new commands after your model’s training date, AI Commander may suggest outdated syntax.
  - To mitigate:
    - Update your model (`ollama pull llama3.2:latest`).
    - Blame your future self.
- **Platform differences**
  - Though Nushell aims for cross-platform consistency, minor variations (e.g., `ls` flags on Windows vs. Linux) may occur.
  - If AI-generated commands fail, tweak them manually.
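A dangerous-pattern guard like the one mentioned under “Permissions & safety” could be sketched as follows. This is an illustrative example only: the pattern list and the function name `looks_dangerous` are hypothetical, not the tool’s real blocklist (which is why you should still read every command):

```python
import re

# A deliberately incomplete blocklist of obviously destructive patterns.
DANGEROUS_PATTERNS = [
    r"\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\s+/\s*$",  # rm -rf / (or -fr)
    r"\bmv\s+/\s+\S+",        # moving the filesystem root somewhere
    r">\s*/dev/sd[a-z]\b",    # redirecting output onto a raw disk
]

def looks_dangerous(command: str) -> bool:
    """Return True if the command matches any known destructive pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)
```

Note that a blocklist can only catch patterns someone thought of in advance; it is a seatbelt, not a substitute for reviewing the command yourself.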
## Contributing

We love pull requests, especially for:

- New prompt-to-command examples (the more edge cases, the better!).
- Model integration tests (does `deepseek-r1` handle `du | sort-by` correctly? Let’s find out!).
- Better error handling: help us make “silent failure on violation” actually silent.
- Documentation improvements: you’re reading it; feel free to make it even shinier.

1. Fork the repository.
2. Create a new branch:

   ```
   git checkout -b feature/amazing-new-command
   ```

3. Commit your changes:

   ```
   git commit -m "Add example: search JSON for key 'user'"
   ```

4. Push to your fork:

   ```
   git push origin feature/amazing-new-command
   ```

5. Open a Pull Request. We’ll review and (hopefully) merge it with maximum admiration.

> **Note:** We follow the “keep it fun” rule: treat others with kindness and respect. We promise not to reject your PR just because you used too many emojis (although we might ask you to dial it down to one per line).
## License

MIT License

Copyright (c) 2025 Mycin

You know the drill: see the LICENSE file for the full text.

Happy Commanding! – Mycin, somewhere in Dhaka, with a cup of chai and too many open terminals.