A free /codex-style code review skill for Claude Code, powered by the GitHub Models API.
Uses GPT-4.1, GPT-5, o3, DeepSeek R1, Grok-3, Llama-4, and 20+ other models via your GitHub account (free tier works, GitHub Copilot subscription gives better limits). Drop-in replacement for OpenAI's paid codex CLI when you just want a cross-model second opinion on your diff without paying OpenAI.
Three modes, invoked from inside Claude Code:
/copilot review → independent code review of your current branch diff
/copilot challenge → adversarial "try to break my code" review
/copilot <anything> → free-form consult about the codebase
Or from the shell, as a drop-in codex exec / codex review replacement:
copilot review "focus on SQL injection" --base main
copilot exec "what does auth.ts do?"
copilot exec "list the top 3 risks in this diff" -m openai/gpt-5

The gstack ecosystem for Claude Code uses OpenAI's codex CLI as the "second opinion" voice in /autoplan, /plan-eng-review, and /codex. Codex is great, but:
- Requires a ChatGPT Plus ($20/mo) subscription or OpenAI API credits
- You might already be paying for GitHub Copilot ($10/mo) and not want another AI bill
This skill bridges the gap. It uses your GitHub Copilot subscription (or free GitHub account) to run the same cross-model review flow via GitHub Models, a public chat completions API that is OpenAI-compatible by design.
codex is a full agentic runtime with a sandbox, tool use, and multi-turn reasoning. This skill is a stateless chat wrapper. Specifically:
- No sandbox. It can't run git diff, cat, ls, etc. The wrapper pre-gathers git context (diff, status, log) into the prompt before sending.
- No multi-turn agentic loop. One prompt, one response.
- No session continuity. Each call is independent.
- Rate limited. GitHub Models free tier has per-minute and daily caps.
- Input size capped by GitHub Models per model: 4k tokens for gpt-5, 8k for gpt-4.1/gpt-4o. Big diffs get truncated with a clear notice.
For huge review jobs, install the real codex CLI. For everything else, this is plenty.
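Because the API is OpenAI-compatible, a single request is easy to sketch by hand. The snippet below is a minimal illustration, not the wrapper's actual code: it uses python3 for JSON-escaping (the skill's only non-bash dependency) and assumes the documented GitHub Models inference route at models.github.ai; the sample diff is made up for demonstration.

```shell
# Sample diff standing in for real git output (hypothetical content).
diff_text='--- a/auth.ts
+++ b/auth.ts
@@ -1 +1 @@
-const ok = true
+const ok = false'

# Build the request body with python3 so the diff is JSON-escaped safely.
payload=$(python3 - "$diff_text" <<'PY'
import json, sys
diff = sys.argv[1]
print(json.dumps({
    "model": "openai/gpt-4.1",
    "messages": [
        {"role": "system", "content": "You are a strict code reviewer."},
        {"role": "user", "content": "Review this diff:\n" + diff},
    ],
}))
PY
)

# The real call would then be (requires a token):
# curl -sS https://models.github.ai/inference/chat/completions \
#   -H "Authorization: Bearer $GITHUB_MODELS_TOKEN" \
#   -H "Content-Type: application/json" \
#   -d "$payload"
echo "$payload"
```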
git clone https://github.com/oenco/claude-copilot-skill.git
cd claude-copilot-skill
./install.sh

The installer:
- Copies the skill to ~/.claude/skills/copilot/
- Puts a thin copilot wrapper on your PATH (~/.local/bin/copilot)
- Optionally installs a codex PATH shim so gstack's /autoplan transparently picks up copilot as a codex fallback (pass --no-shim to skip)
- Prompts you to create a GitHub PAT
One-liner for the brave:
curl -fsSL https://github.com/oenco/claude-copilot-skill/main/install.sh | bash -s --

Or clone and inspect first (recommended).
Create a GitHub Personal Access Token at https://github.com/settings/tokens/new.
No scopes are required; leave them all unchecked. GitHub Models authorizes based on your account, not the token's scopes. The token only identifies you to the API; it does not need repo access.
Then save the token one of two ways:
Option A: file (recommended)
mkdir -p ~/.config/claude-copilot
echo -n 'ghp_yourtoken' > ~/.config/claude-copilot/token
chmod 600 ~/.config/claude-copilot/token

Option B: environment variable
Add to ~/.bashrc, ~/.zshrc, or your shell init:
export GITHUB_MODELS_TOKEN=ghp_yourtoken

The wrapper checks $GITHUB_MODELS_TOKEN first, then $GITHUB_MODELS_TOKEN_FILE, then ~/.config/claude-copilot/token.
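That lookup order can be sketched as a small shell function (resolve_token is a hypothetical name for illustration; the wrapper's internals may differ):

```shell
# Resolve the token in the documented order:
# 1. $GITHUB_MODELS_TOKEN  2. $GITHUB_MODELS_TOKEN_FILE  3. the default file.
resolve_token() {
  if [ -n "${GITHUB_MODELS_TOKEN:-}" ]; then
    printf '%s' "$GITHUB_MODELS_TOKEN"
  elif [ -n "${GITHUB_MODELS_TOKEN_FILE:-}" ] && [ -f "$GITHUB_MODELS_TOKEN_FILE" ]; then
    cat "$GITHUB_MODELS_TOKEN_FILE"
  elif [ -f "$HOME/.config/claude-copilot/token" ]; then
    cat "$HOME/.config/claude-copilot/token"
  else
    echo "no token found" >&2
    return 1
  fi
}
```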
GitHub Copilot subscribers get higher rate limits automatically. A plain GitHub account works too with tighter caps.
Open any git repo, start a Claude Code session, and run:
/copilot review
Claude Code will invoke the skill, which:
- Computes the diff against your base branch
- Truncates it to fit the model's input cap (with a notice)
- Sends it to GitHub Models with a structured review prompt
- Parses findings tagged [P1] (critical) and [P2] (recommended)
- Sets a PASS/FAIL gate
- Shows you the full verbatim output
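The tag-parsing and gate steps can be approximated with plain grep. This is a sketch of the idea (review_gate is a made-up name), assuming any [P1] finding fails the gate; the skill's actual parsing may be stricter:

```shell
# Count [P1]/[P2] findings in a review transcript and derive PASS/FAIL:
# any [P1] (critical) finding fails the review.
review_gate() {
  local review="$1"
  local p1 p2
  p1=$(printf '%s\n' "$review" | grep -c '\[P1\]' || true)
  p2=$(printf '%s\n' "$review" | grep -c '\[P2\]' || true)
  echo "P1=$p1 P2=$p2"
  if [ "$p1" -gt 0 ]; then echo "FAIL"; else echo "PASS"; fi
}
```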
For an adversarial "try to break it" pass:
/copilot challenge
For a general question:
/copilot what's the risk profile of the auth flow in this repo?
copilot is a drop-in for codex:
copilot review "focus on security" --base main
copilot exec "summarize this diff in 3 bullets"
copilot exec "what does UserService.create do?" -m openai/o3
copilot --version

All codex flags (-C, -s, --enable, -c, --json) are accepted and either honored or logged as no-ops.
Default: openai/gpt-4.1.
Why gpt-4.1, not gpt-5? GitHub Models free tier caps gpt-5 at 4000 tokens of input per request. That's useless for real diffs. gpt-4.1 has a 1M-token context window and a more generous per-minute cap. For short consult prompts where you want maximum reasoning, override with -m openai/gpt-5.
Override defaults:
copilot review "..." -m openai/gpt-5
COPILOT_MODEL=openai/o3 copilot exec "..."

Available:
- OpenAI: gpt-5, gpt-5-mini, gpt-5-nano, gpt-4.1, gpt-4.1-mini, gpt-4o, o1, o3, o3-mini, o4-mini
- DeepSeek: deepseek-r1, deepseek-v3-0324
- xAI: grok-3, grok-3-mini
- Meta: llama-4-maverick-17b-128e-instruct-fp8, llama-3.3-70b-instruct
- Mistral: codestral-2501, mistral-medium-2505
- Microsoft: phi-4, phi-4-reasoning
Full catalog: https://github.com/marketplace/models
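The override precedence described above (-m flag beats COPILOT_MODEL, which beats the built-in default) can be sketched as follows; pick_model is a hypothetical name and the wrapper's real flag parsing is more complete:

```shell
# Pick the model: explicit -m value > COPILOT_MODEL env var > default.
pick_model() {
  local flag_model="${1:-}"
  if [ -n "$flag_model" ]; then
    printf '%s\n' "$flag_model"
  else
    printf '%s\n' "${COPILOT_MODEL:-openai/gpt-4.1}"
  fi
}
```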
If you use gstack and its /autoplan review pipeline, this skill integrates transparently.
The installer puts a codex shim at ~/.local/bin/codex that execs copilot. When /autoplan runs codex exec "..." for outside-voice review, it hits the shim, which delegates to the GitHub Models API via copilot. No edits to gstack files needed.
If you later install the real OpenAI codex CLI via npm install -g @openai/codex, it gets installed at ~/AppData/Roaming/npm/codex, which sits EARLIER in PATH than ~/.local/bin/codex. The real codex wins automatically, and the shim sits unused. No cleanup needed.
Review log entries from the shim are still tagged copilot-review in gstack's review log, distinct from real codex-review entries, so plan review reports can distinguish the two.
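The shim itself can be as small as a two-line exec script. This is a sketch of the idea; the installed shim may carry extra logging or the review-log tagging mentioned above:

```shell
# Write a codex shim that hands every invocation to copilot unchanged.
shim="$HOME/.local/bin/codex"
mkdir -p "$(dirname "$shim")"
cat > "$shim" <<'EOF'
#!/usr/bin/env bash
exec copilot "$@"
EOF
chmod +x "$shim"
```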
- Token stored with mode 600 at ~/.config/claude-copilot/token (never echoed, never logged).
- Env var alternative ($GITHUB_MODELS_TOKEN) for CI / secrets managers.
- Zero network calls except to models.github.ai (the public GitHub Models endpoint).
- Read-only: the skill never modifies code, only reads git state and sends prompts.
- No telemetry. No analytics. No third-party services.
The only dependency beyond bash and curl is python3 (for safe JSON encoding of diff payloads). Your token never leaves your machine except in the Authorization: Bearer header of a request to models.github.ai.
copilot: command not found
~/.local/bin is not on your PATH. Add export PATH="$HOME/.local/bin:$PATH" to ~/.bashrc or ~/.zshrc and restart your shell.
ERROR: HTTP 413 from GitHub Models — Request body too large
Your diff is bigger than the model's input cap. Solutions: (a) diff against a closer base branch with --base some-closer-branch, (b) review fewer commits with --base HEAD~5, or (c) switch to a model with a larger input cap, e.g. -m openai/gpt-4.1.
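If you want to catch this before the request leaves your machine, a rough byte-based pre-check works. Tokens average roughly 3-4 bytes each, so this is only a heuristic, and check_diff_size is a made-up helper for illustration:

```shell
# Rough pre-flight: warn when a diff likely exceeds a model's input cap.
# 4000 tokens * ~4 bytes/token is about 16000 bytes (estimate only).
check_diff_size() {
  local diff="$1" cap_bytes="${2:-16000}"
  local bytes
  bytes=$(printf '%s' "$diff" | wc -c | tr -d ' ')
  if [ "$bytes" -gt "$cap_bytes" ]; then
    echo "diff is ${bytes} bytes; likely over the cap, narrow --base or pick a bigger-cap model"
    return 1
  fi
  echo "diff is ${bytes} bytes; OK"
}
```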
ERROR: HTTP 401 — Bad credentials
Your token is invalid or expired. Regenerate at https://github.com/settings/tokens and update ~/.config/claude-copilot/token.
ERROR: HTTP 429 — Rate limited
You hit the GitHub Models rate limit. Wait 60 seconds. If you're a GitHub Copilot subscriber and still see this, your account may need to be re-linked to Models — check https://github.com/marketplace/models.
Response is empty or finish=length
The model hit the output token cap. For gpt-5/o-series, the reasoning tokens can eat your budget. Try -m openai/gpt-4.1 or shorten the prompt.
./install.sh --uninstall

This removes ~/.claude/skills/copilot/, ~/.local/bin/copilot, and the codex shim (only if we installed it). Your token file is left in place; delete it manually if you want.
MIT. See LICENSE.
Built for Claude Code + gstack. Uses GitHub Models.