The only always-visible quota instrument for Claude Code. A macOS menu-bar widget + granular SQLite tracker + pacing CLI, built for developers on the Claude Max plan who refuse to get surprised by "you've used 99% of your weekly quota" at 3pm on a Thursday.
A full-width Bloomberg-terminal strip pinned to the top of your screen. It shows where you are on your current 5-hour session AND your 7-day rolling window, side by side with where you should be if you want to land at 99% by the weekly reset. No tab-switching, no /status, no surprise lockouts.
Anthropic ships a /status command inside Claude Code. It's fine. It tells you a percentage. But it doesn't tell you:
- Am I on pace? If I'm 60% through my weekly quota and 40% through the week, I'm burning hot. By how much? Will I hit 100% by Thursday? By Tuesday?
- What burned it? Was it that one refactor on Monday? The subagent that got stuck in a Bash loop? A specific repo?
- What's my real rate per active hour? Not per wall-clock hour — per hour I was actually working — so I can calibrate "10% remaining = 6 more hours of normal work" vs "10% remaining = 30 minutes of what I was just doing."
- Is my local burn even matching what Anthropic says I used? (It usually is. But the drift-check tells you when it isn't.)
cc-usage answers all of these. And it puts the answer in your menu bar so you don't have to ask.
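The pacing arithmetic behind those questions is simple enough to sketch. A minimal illustration in Python (function and field names here are hypothetical, not cc-usage internals):

```python
def pacing(used_pct: float, hours_elapsed: float,
           window_hours: float = 168.0, target_pct: float = 99.0) -> dict:
    """Compare actual burn against the on-pace baseline for a rolling window."""
    # Where you "should" be right now to land exactly at target_pct by the reset
    baseline = target_pct * hours_elapsed / window_hours
    rate = used_pct / hours_elapsed if hours_elapsed else 0.0  # %/wall-clock-hour
    # Hours until this pace exhausts the quota (None: never at this pace)
    hours_to_100 = (100.0 - used_pct) / rate if rate > 0 else None
    return {"baseline": baseline, "delta": used_pct - baseline,
            "hours_to_100": hours_to_100}

# 60% used at 40% through the week (67.2h of 168h):
p = pacing(60.0, 67.2)  # baseline ≈ 39.6, delta ≈ +20.4 → burning hot
```

A positive delta means you are ahead of the burn line; divide what's left by your rate and you get the "by Tuesday or by Thursday?" answer.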
Built on Übersicht. Two stacked bars (session + weekly) with:
- Quarter-tick calibration marks on each bar (this is a measurement instrument, not a dashboard)
- A single hairline target marker on the weekly bar showing where you should be right now if you want to land at your target % by the reset — not at 100%, at 99% or whatever you set
- Color semantics: monochrome base + one electric cyan accent. Amber only as a warning semaphore. No red — colorblind-safe, signals critical state by underlining the number, not recoloring it
- A pulsing live dot next to the last-updated timestamp — the only moving element in the widget, so you can tell at a glance it's live
- Never stale, never errors: the widget keeps a locally-calibrated %-per-Mtok ratio and extrapolates session % and week % forward from the last successful API snapshot using per-turn token burn from your local SQLite — so when Anthropic rate-limits the /api/oauth/usage endpoint for hours at a time (yes, that happens), the widget still paints current-to-the-minute numbers. Session windows rolling over at the 5-hour boundary are detected and reset to 0% automatically. It will never paint a red error splash across your menu bar.
- Never dark: a launchd watchdog (com.ubersicht.keepalive, installed by install.sh) keeps Übersicht itself running. If it crashes, you Cmd-Q it by accident, or you reboot, it's back within ~10 seconds with no manual intervention. The widget being "always visible" isn't a hope, it's an enforced invariant.
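That "never stale" extrapolation is a linear projection from the last good snapshot. A rough sketch, assuming a pre-fitted ratio (names are illustrative, not the widget's actual code):

```python
def extrapolate_pct(snapshot_pct: float, tokens_since_snapshot: int,
                    pct_per_mtok: float) -> float:
    """Project quota % forward from the last successful API snapshot.

    snapshot_pct          -- quota % from the last good /api/oauth/usage poll
    tokens_since_snapshot -- tokens logged locally since that snapshot
    pct_per_mtok          -- locally calibrated quota % consumed per Mtok
    """
    projected = snapshot_pct + pct_per_mtok * tokens_since_snapshot / 1_000_000
    return min(projected, 100.0)  # quota can't exceed 100%

# Last snapshot said 38.2%; 700k tokens burned since, at ~1.1 %/Mtok:
now_pct = extrapolate_pct(38.2, 700_000, 1.1)  # ≈ 38.97
```

If the API is 429'd for hours, the projection just keeps running from the same anchor; a fresh snapshot snaps it back to ground truth.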
Click the weekly bar and you get the full calibration readout: how
much time has passed in the 168-hour window, how much quota you've
actually used, where "ideal now" is (the white hairline target
marker), your delta against ideal (with a heat label —
COOL / ON PACE / HOT / VERY HOT), and the projected landing if your
current pace holds. The cursor below the bar walks left or right of
the target marker to make the direction of drift unmistakable.
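The heat label is the signed delta against the ideal-now marker, bucketed into bands. A hypothetical sketch (the thresholds here are made up for illustration):

```python
def heat_label(used_pct: float, ideal_pct: float) -> str:
    """Bucket drift from the ideal-now marker into COOL / ON PACE / HOT / VERY HOT."""
    delta = used_pct - ideal_pct
    if delta <= -5:
        return "COOL"       # comfortably under pace
    if delta <= 5:
        return "ON PACE"
    if delta <= 15:
        return "HOT"
    return "VERY HOT"

label = heat_label(39.0, 17.56)  # delta +21.44 → "VERY HOT"
```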
Click through to see per-day quota consumption since the weekly reset — bar height proportional to active hours worked, turn counts annotated, and your pct_share of the weekly bucket computed by token-contribution since the API doesn't expose per-day % directly. Today is highlighted with an outlined bar.
Every time Claude Code writes a JSONL line into ~/.claude/projects/,
cc-usage parses it into six normalized tables:
| Table | What it holds |
|---|---|
| snapshots | Polls of /api/oauth/usage (Anthropic's authoritative quota %) — every 15 min by the launchd agent |
| turns | One row per assistant message: input/output/cache tokens, model, project, duration, stop reason |
| tool_calls | One row per tool_use block (tool name, input JSON, payload size) |
| tool_results | One row per tool_result (error flag, result size), paired with tool_calls via tool_use_id |
| user_prompts | One row per user event (real prompts + tool wrappers), with length and pasted-image counts |
| events | Catch-all for non-turn events (turn durations, permission-mode flips, attachments, last-prompt, etc.) |
Every table has a stable UUID as its UNIQUE key, so the backfill is
idempotent — rerun it against any JSONL history, any number of
times, and it never doubles up.
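UUID-keyed idempotency of this kind falls straight out of SQLite's conflict handling. A minimal sketch with illustrative column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE turns (
    uuid TEXT UNIQUE,          -- stable UUID from the JSONL line
    model TEXT,
    output_tokens INTEGER)""")

row = ("a1b2c3", "claude-sonnet", 1200)
# INSERT OR IGNORE: replaying the same JSONL history is a no-op on conflicts
for _ in range(3):
    con.execute("INSERT OR IGNORE INTO turns VALUES (?, ?, ?)", row)

count = con.execute("SELECT COUNT(*) FROM turns").fetchone()[0]  # 1, not 3
```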
```
cc-usage                    # full panel with pacing and constraint picker
cc-usage --charts           # + hourly + daily burn charts
cc-usage --report           # + per-model and per-project breakdowns
cc-usage --search my-repo   # filter everything by project substring
cc-usage --validate         # drift check: Anthropic quota Δ% vs local token burn
cc-usage --target 95        # recalibrate against a different weekly target %
```

Sample output:
```
Claude Code usage · Sat Apr 11, 11:47AM PDT · target 99%
────────────────────────────────────────────────────────────────────────
Current session (5h window)
██████████████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░
43.0% used · 57.0% left · 13m to reset (Sat Apr 11, 12PM)
safe: 272.01%/h · recent: 0.00%/h over 44m → plenty of headroom
session rate: 8.60%/active-hour (5h · 1633 turns) → 6.63h of work fits in the 57% remaining

Weekly — all models
████████████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
39.0% used · 61.0% left · 5.8d to reset (Fri Apr 17, 6AM)
on-pace-for-99% baseline: 17.56% (you're +21.44% AHEAD)
to land at 99% by Fri Apr 17, 6AM: 10.42%/day budget for 5.8d
recent burn: 0.00%/day over last 44m → projected landing 39.0%

Weekly — Sonnet only
█████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
10.0% used · 90.0% left · 2.8d to reset (Tue Apr 14, 7AM)
on-pace-for-99% baseline: 59.39% (you're -49.39% BEHIND)
────────────────────────────────────────────────────────────────────────
Constraint: Weekly — all models — keep burn under 10.42%/day to land at 99%.
Observed rate this week : 1.95%/active-hour (over 20 active hours · 9109 turns)
Tomorrow's budget       : 5.34 active hours (10.42%/day) → steady
```
(Full sample with charts: docs/sample-output.txt)
The Constraint picker is the secret sauce: it looks at all three quota dimensions (5h session, 7-day all-models, 7-day Sonnet) and picks whichever is actually going to bite you first given your current burn rate. That's the bar the widget anchors on.
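A simplified sketch of "whichever bites first": compare time-to-exhaustion across windows at their current burn rates and take the minimum. The real picker presumably also weighs each window's reset time; this sketch ignores resets, and every name and rate below is illustrative:

```python
def binding_constraint(windows: dict) -> str:
    """Pick the quota window that exhausts first at its current burn rate.

    windows maps a label to (pct_used, burn_rate_pct_per_day).
    """
    def days_to_limit(pct_used: float, rate: float) -> float:
        return (100.0 - pct_used) / rate if rate > 0 else float("inf")
    return min(windows, key=lambda k: days_to_limit(*windows[k]))

windows = {
    "5h session":        (43.0, 120.0),   # sessions burn fast (made-up rate)
    "weekly all-models": (39.0, 10.42),
    "weekly Sonnet":     (10.0, 3.0),
}
tightest = binding_constraint(windows)    # "5h session" at these rates

# A daily %-budget translates into active hours at the observed hourly rate,
# matching the sample output above: 10.42 %/day ÷ 1.95 %/active-hour
budget_hours = 10.42 / 1.95               # ≈ 5.34 active hours/day
```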
```
python stats.py                 # full report
python stats.py --days 7
python stats.py --project my-repo
python stats.py --today
```

Thirteen sections of "what did Claude actually do with my tokens":
- Overview (row counts, date range)
- Token burn — by day, by model, cache hit rate
- Projects — top by tokens, turns, tool calls, avg turns/session
- Tools — inventory, call counts, error rates
- Tool-specific — top Bash commands, top Grep patterns, Read hot files
- Turn behavior — stop_reason distribution, iterations, duration
- Thinking vs visible output ratio
- User activity — prompts per day, text length, screenshot pastes
- Sessions — length distribution, longest, turns per session
- Hourly heatmap — turns by hour-of-day
- Errors — API errors, tool result errors
- Permission modes — plan / accept_edits / default distribution
- Sidechain tax — token share of subagent turns
```
cc-usage --validate
```

Compares the Anthropic-reported Δ quota % against your locally-measured token burn over the same interval. If the API says "you went from 40% to 50%" and your local turns table says "I wrote 8M tokens in that window," you can compute a %/Mtok rate per model and detect whether Anthropic's metering drifts from what it should be.
Useful for two reasons:
- Sanity — confirms the /api/oauth/usage endpoint actually does what the docs say
- Forecasting — once you know your stable %/Mtok, the CLI can translate "I have 10% left" into "I have ~1.8M output tokens left"
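Both reduce to a single ratio. A sketch with hypothetical function names (the 40%→50% / 8 Mtok example is the one from the drift-check description):

```python
def pct_per_mtok(api_delta_pct: float, local_tokens: int) -> float:
    """Quota % consumed per million locally-metered tokens over one interval."""
    return api_delta_pct / (local_tokens / 1_000_000)

def tokens_remaining(pct_left: float, rate: float) -> float:
    """Translate remaining quota % into an approximate token budget."""
    return pct_left / rate * 1_000_000

rate = pct_per_mtok(10.0, 8_000_000)   # API moved 40% → 50% while 8 Mtok burned
budget = tokens_remaining(10.0, rate)  # 8,000,000 tokens left at that rate
```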
- macOS (the widget depends on Übersicht; the rest is cross-platform)
- Python 3.9+ with requests installed AND macOS Full Disk Access granted. The stock /usr/local/bin/python3 usually fails Full Disk Access — the easiest workaround is to use a virtualenv whose parent directory is already on the Full Disk Access list. Most devs have one.
- Übersicht — brew install --cask ubersicht
- An active Claude Code installation. cc-usage reads your OAuth token from the keychain (read-only, never refreshed) and your JSONL history from ~/.claude/projects/.
```
git clone https://github.com/<you>/cc-usage.git
cd cc-usage
./install.sh /absolute/path/to/your/python3
```

That script will:
- Initialize the SQLite schema at data/claude_usage.db
- Copy the Übersicht widget into ~/Library/Application Support/Übersicht/widgets/cc-usage.jsx, rewriting PYTHON_BIN and REPO_ROOT for your machine
- Render and install the launchd agent at ~/Library/LaunchAgents/com.cc-usage.snapshot.plist, then launchctl bootstrap it so it starts firing immediately
- Install the Übersicht keep-alive watchdog at ~/Library/LaunchAgents/com.ubersicht.keepalive.plist so Übersicht starts at login and auto-restarts within seconds of any crash / quit
- Smoke-test the widget JSON payload end-to-end

Afterwards, add a shell alias for interactive CLI use:

```
alias cc-usage='/path/to/python3 /path/to/cc-usage/claude_code_usage.py'
```

Then backfill your historical JSONL data (first run only — this can take a few minutes if your ~/.claude/projects/ has months of logs):

```
python claude_usage_backfill.py --since all
```

Configuration:
- CC_USAGE_TZ env var — IANA timezone for local-time displays. Default America/Los_Angeles (matches Anthropic's quota reset convention).
- --target N CLI flag — weekly target percentage (default 99). Lower it if you prefer to land earlier than the reset.
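Honoring CC_USAGE_TZ needs nothing beyond the standard library on Python 3.9+. A sketch of how a tool like this might read it (not necessarily cc-usage's exact code):

```python
import os
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9, matching the tool's floor

# CC_USAGE_TZ holds an IANA name; the default mirrors Anthropic's reset convention
tz = ZoneInfo(os.environ.get("CC_USAGE_TZ", "America/Los_Angeles"))
stamp = datetime.now(tz).strftime("%a %b %d, %I:%M%p %Z")
```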
```
Claude Code CLI                 Anthropic API
────────────────                ──────────────
writes JSONL turns              /api/oauth/usage
to ~/.claude/projects           (OAuth + beta hdr)
        │                              │
        ▼                              ▼
┌─────────────────┐          ┌─────────────────┐
│ backfill        │          │ snapshot poller │
│ (idempotent,    │          │ (launchd every  │
│ UUID-keyed)     │          │ 15 min)         │
└────────┬────────┘          └────────┬────────┘
         │                            │
         ▼                            ▼
┌─────────────────────────────────┐
│ SQLite — data/claude_usage.db   │
│ snapshots / turns / tool_calls  │
│ tool_results / user_prompts     │
│ events                          │
└──────────┬──────────┬───────────┘
           │          │
           ▼          ▼
   cc-usage CLI    --widget-json
   (panel, charts,      │
   reports, validate)   ▼
                 ┌─────────────┐
                 │  Übersicht  │
                 │   widget    │
                 └─────────────┘
```
Key design decisions:
- SQLite, not Postgres or DuckDB. One file. Zero daemons. Every query runs in single-digit milliseconds against months of history.
- Backfill is a separate binary. The 15-min launchd snapshot spawns it as a subprocess with a 2h overlap window — so a backfill failure never crashes the snapshot step (the more critical of the two), and the overlap is free because every row has a UNIQUE UUID.
- The widget talks to the DB, not the API. Its 60-second render loop reads the most recent snapshots row as a calibration anchor, then extrapolates session % and week % forward using turn-level token burn and an empirically-fit %-per-Mtok ratio. That means the widget is always live-to-the-minute AND never hits the API on its own render path — so it can't possibly contribute to rate limits. The 15-min launchd agent is the only thing that actually touches /api/oauth/usage, and when that gets 429'd, the extrapolation simply keeps projecting forward from whatever the most recent successful snapshot was.
- OAuth token is read-only, never refreshed. Refreshing rotates the token and kicks the live Claude Code CLI back to /login. We deliberately avoid that code path and just re-read from the keychain on every invocation.
Does this work with the Claude Pro plan, or only Max?
The /api/oauth/usage endpoint is exposed for both, but the quota math
(session × weekly × Sonnet-specific) is designed around the Max plan's
three-window structure. Pro users get one window; the widget will just
show the weekly bar in that case.
Does this modify anything in ~/.claude/?
No. Reads only. The OAuth token is read from the macOS keychain with
security find-generic-password. The JSONL files are read-only-mmap'd
during backfill.
What happens if Übersicht itself crashes or I accidentally quit it?
install.sh registers a launchd watchdog
(~/Library/LaunchAgents/com.ubersicht.keepalive.plist) with
KeepAlive=true and ThrottleInterval=10. Any exit — crash, Cmd-Q,
SIGKILL, reboot — gets relaunched within seconds. The throttle caps
restart attempts at 6/min so a genuinely broken binary can't
busy-loop. You can verify by running
pkill -9 -f /Applications/Übersicht.app/Contents/MacOS/Übersicht
and watching pgrep -lf bersicht — a new PID appears almost
immediately.
Why do I only ever see one Übersicht menu-bar icon, even though
both macOS and the watchdog want to launch it?
Übersicht silently registers itself as a macOS Background Login Item
on first launch (visible in sfltool dumpbtm under developer "Felix
Hageloh"). At login, both that Login Item and the
com.ubersicht.keepalive watchdog would otherwise spawn their own
copy — two processes, two menu-bar icons. The watchdog sidesteps
this by invoking a small dedupe shim that first checks for an
existing Übersicht via pgrep -f "MacOS/Übersicht" and only spawns
through open -g /Applications/Übersicht.app if missing;
LaunchServices then collapses any race into a single process by
bundle ID. The shim polls until Übersicht dies, then exits so
KeepAlive=true reruns it. Net effect: exactly one Übersicht, still
self-healing on crash / Cmd-Q / reboot. If you ever see two icons,
the watchdog has been reverted to a direct-exec form — don't do
that.
Will it blow up my rate limits?
The widget refreshes every 60s but its render path never touches the
API — it uses the local DB as a calibration anchor and extrapolates
forward from per-turn token burn. Only the 15-min launchd agent
actually polls /api/oauth/usage (96/day, 672/week), and that's well
under any published limit. In practice Anthropic will still
sometimes 429 you for hours at a time on this endpoint, and when that
happens the widget just keeps extrapolating from the last successful
snapshot — you'll see numbers that stay current to the minute even
while the API is locking the launchd agent out. You will never see a
red error bar.
Can I move the repo after install?
No — the widget, launchd plist, and shell alias all reference absolute
paths written at install time. Either re-run ./install.sh or
hand-edit those three files.
How big does the DB get?
Depends on how much you use Claude Code. A heavy-user year's worth of per-turn rows with full content snapshots is in the low hundreds of megabytes. The schema VACUUMs cleanly if you want to trim it.
MIT.
This is a third-party tool that reads your own local Claude Code state
and the public /api/oauth/usage endpoint. Don't @ Anthropic about it.


