How to Use GPT-5.5 Today at the CLI (Via Your Existing Codex Subscription)
Small 4/26 update before you run the commands below: Codex CLI 0.125.0 added GPT-5.5 support to codex exec (via -m gpt-5.5) and to the Codex MCP server. (For MCP, restart Claude Code after upgrading so the running server picks up the new binary.) The relay is no longer the only path to 5.5 from your terminal, though the OpenAI public API still hasn’t shipped 5.5 as of today.
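If you just want the new official path, it’s two commands (npm shown on the assumption you installed Codex that way; use your own package manager otherwise):
# Upgrade Codex CLI to 0.125.0 or later
npm install -g @openai/codex
# One-shot, non-interactive GPT-5.5 run
codex exec -m gpt-5.5 'Summarize the staged changes in 3 bullets.'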
I’m still keeping the relay for fast one-off pipes and as a backup connection method. In my 4/26 tests the same review prompt averaged 4.5s through the relay vs 10.7s through codex exec; the relay is a thin OAuth call, while codex exec boots a sandboxed session every time. One catch: after 50+ rapid calls, Plus rate limits probably kicked in and some prompts hung for up to 300s. For loops, space the calls. For multi-file agent work I generally switch to MCP.
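When I do need a loop over the relay, I now put a gap between calls. A minimal sketch (the 2-second sleep is my guess at a polite interval, not a documented limit):
# Review the last 5 commits one at a time, with breathing room between calls
for ref in $(git rev-list -n 5 HEAD); do
  git show "$ref" | llm -m openai-codex/gpt-5.5 'Review this patch in 3 bullets.'
  sleep 2  # tune to taste; rapid-fire calls eventually hit Plus rate limits
done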
TL;DR: You can use GPT-5.5 today from your terminal by running Simon Willison’s llm-openai-via-codex plugin on top of your existing ChatGPT/Codex login. No new API key, no separate API bill. 💪 OpenAI hasn’t shipped GPT-5.5 to the public API yet (as of 2026-04-24 midday), so this is a sweet, clean shell-side path until they do. 🚀 Thank you Simon!
I set this up on my dev test box and it took about two minutes end to end. I wrote the rest of this post after piping a git log -p into the new channel and getting back a clean 3-bullet patch review on the first try.
How to use GPT-5.5 today from your terminal
This works if you already subscribe to ChatGPT (Plus, Pro, Business, or Enterprise) and you’ve logged into the Codex CLI at least once on the box you’re installing on. It is not a way around the subscription. It reuses the auth you already have.
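A quick pre-flight check that the login actually exists on this box (the file path is the one the plugin reads, per its source; the test itself is mine):
# Should exist if you've ever run `codex login` on this machine
test -f ~/.codex/auth.json && echo 'codex auth found' || echo 'run: codex login'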
Four commands:
# 1. Install the LLM CLI as a uv-managed tool
uv tool install llm
# 2. Install Simon's plugin, pinned to the reviewed version
llm install 'llm-openai-via-codex==0.1a0'
# 3. Turn off LLM's default prompt/response logging
llm logs off
# 4. Run your first prompt
llm -m openai-codex/gpt-5.5 'In one sentence: what are you?'
If all four commands succeed, you’re done. The plugin reads your local ~/.codex/auth.json and routes prompts through the Codex backend (per the plugin source, it hits chatgpt.com/backend-api/codex the same way the official CLI does).
This is a third-party plugin. In Simon’s own words, it “hijacks your Codex CLI credentials” and calls the Codex backend directly. OpenAI’s public statements on using the Codex subscription outside their own clients are supportive but not formal. If that matters for your setup: pin the version (==0.1a0), leave ~/.codex/auth.json at mode 600, and skim the plugin source before any future upgrade.
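In practice that hygiene is two commands. A sketch (llm plugins prints installed plugins as JSON; the grep just surfaces the name and version lines):
# Keep the credential file readable by you alone
chmod 600 ~/.codex/auth.json
# Confirm the pinned plugin version is what's actually installed
llm plugins | grep -E '"(name|version)"'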
What you can actually do with it
The flagship move is piping. Anything stdin can reach, GPT-5.5 can read:
# Review your last commit diff
git diff HEAD~1 | llm -m openai-codex/gpt-5.5 'Review this patch in 3 bullets.'
# Summarize a chunk of a log file
tail -n 500 app.log | llm -m openai-codex/gpt-5.5 "What's the most common error pattern here?"
# Explain a scary bash script before you run it
cat some-installer.sh | llm -m openai-codex/gpt-5.5 'Flag anything here that a sane person would not run without reading.'
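If you end up typing that model name a lot, a one-line wrapper in your shell rc keeps the pipes short (the function name relay is just my nickname for this channel; pick anything):
# ~/.bashrc or ~/.zshrc
relay() { llm -m openai-codex/gpt-5.5 "$@"; }
# Then: git diff HEAD~1 | relay 'Review this patch in 3 bullets.'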
Three other knobs worth knowing:
- Chat mode. llm chat -m openai-codex/gpt-5.5 gives you a multi-turn REPL.
- Harder reasoning. -o reasoning_effort xhigh trades wall-clock minutes for visibly more thorough output (Simon’s SVG pelican run burned ~4 minutes and 9,000+ tokens for one image vs. 39 tokens at the default). Worth it for ambiguous debugging or architecture questions. Example invocations for both of these knobs follow this list.
- Pairs well with Claude Code. This is the workflow I care about most: even if one model gets a bit nerfed, the two usually keep each other honest, and Claude Code plus Codex make an awesome team. The “Codex relay” (my nickname for this channel) handles shell-pipe one-offs that don’t need an agent loop. Claude Code shells out to llm -m openai-codex/gpt-5.5 via its Bash tool, gets GPT-5.5’s answer back, and keeps going. I used exactly that pipe while drafting this post. 💯
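For copy-paste reference, the first two knobs look like this (the prompt text is just an illustration):
# Multi-turn REPL against GPT-5.5
llm chat -m openai-codex/gpt-5.5
# Crank reasoning effort for an ambiguous question; expect minutes, not seconds
llm -m openai-codex/gpt-5.5 -o reasoning_effort xhigh \
  'Two services deadlock under load, but only in staging. Where do I start?'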
If it doesn’t work on the first try
A few things to check in order:
- Codex auth mode. The plugin needs the ChatGPT OAuth flow, not an old API-key login. Quick check: grep '"auth_mode"' ~/.codex/auth.json should return "chatgpt". If it says "api_key" or the file doesn’t exist, run codex logout && codex login and pick the ChatGPT option.
- uv on PATH. If llm isn’t found after uv tool install llm, add $HOME/.local/bin to your PATH. uv installs tools there by default.
- Rollout window. GPT-5.5 rolled out to paid plans in waves. If your account hasn’t been flipped yet you’ll see a model-not-available error even with a valid subscription. Give it a few hours and retry.
- Prompt, don’t just list. llm models -q openai-codex returns results even when auth is broken, so treat it as navigation, not a health check. An actual prompt is the real test (a combined check follows this list).
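If you want all of that in one shot, here’s a throwaway health check cobbled together from the items above (the echo messages are mine, not from the plugin):
# Combined health check: auth mode, PATH, then a real prompt
grep '"auth_mode"' ~/.codex/auth.json || echo 'no ChatGPT OAuth login; run: codex login'
command -v llm >/dev/null || echo 'llm not on PATH; add $HOME/.local/bin'
llm -m openai-codex/gpt-5.5 'Reply with the single word: ok'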
Why this works even though the API isn’t live yet
Is GPT-5.5 in the API yet?
Not as of April 24, 2026. OpenAI announced GPT-5.5 on April 23 and made it available in ChatGPT and the Codex product. Pricing is public. The API is “coming very soon,” with no confirmed date. This post will age; check the announcement for current status.
GPT-5.4 still works great via MCP and exec, but Simon’s method is a fun way to get 5.5 on your command line today! 🔥
Is this an official OpenAI API path?
No. It’s a community plugin that reads your local Codex credentials and calls the Codex backend directly. It is not endorsed by OpenAI. The subscription-backed Codex CLI itself is official; the third-party access path we’re using here is not. Use with eyes open.
What’s next
I’ve got three ways to reach Codex from this box now: the Codex MCP server inside my Claude Code sessions, codex exec as a sandboxed CLI, and this new “Codex relay.” Very different shapes (agentic vs sandboxed vs pipeable). A head-to-head is coming.
Bottom Line
If you already pay for ChatGPT or Codex, GPT-5.5 is four commands away from your shell. Pin the plugin, keep logs off by default, and treat the local Codex credential file like the secret it is. That is the only real tax. For terminal-side one-offs and Claude Code handoffs, this is now my go-to channel until the official API ships. If you try it, drop a comment with what you piped into it first. 👍
Sources and Further Reading
- Simon Willison on GPT-5.5 – the context post and SVG pelican benchmark
- llm-openai-via-codex release notes – Simon’s own framing of the plugin
- llm-openai-via-codex on GitHub – source (worth skimming before install)
- Introducing GPT-5.5 – OpenAI’s announcement with API pricing
- LLM logging docs – how to toggle llm logs and why you probably want it off
Accurate at time of writing. Something off? Drop a comment.