Reverse Engineering Codex CLI: Generating Pelican Images with GPT-5-Codex-Mini
November 9, 2025
OpenAI partially released a new model — GPT‑5‑Codex‑Mini — describing it as:
> “…a more compact and cost‑efficient version of GPT‑5‑Codex.”
Currently, it’s only accessible via the Codex CLI tool and VS Code extension, with API access “coming soon”.
I wanted direct prompt access to this new model — so I used Codex itself to help reverse‑engineer Codex CLI.
▶️ Watch my full walkthrough and results: YouTube Video
---
Contents
- Cheeky Beginnings
- Codex CLI in Rust
- Designing the `codex prompt` Command
- Iterating on the Code
- Testing — Pelicans on Bicycles
- Bonus: `--debug` Mode
- Learnings from the Codex Private API
---
Cheeky Beginnings
OpenAI clearly did not intend for the public to hit the GPT‑5‑Codex‑Mini model directly.
Codex CLI:
- Talks to special backend endpoints
- Uses a custom authentication mechanism linked to a ChatGPT account
- Isn’t documented for public use
Rather than reimplementing their private API from scratch, I explored extending Codex CLI's existing open-source code (Apache 2.0 licensed) to add my own prompt support, reusing its legitimate API pipeline.
💡 Sometimes “loopholes” present interesting technical playgrounds — but weigh the ethics and security before attempting similar feats.
---
Codex CLI in Rust
OpenAI’s openai/codex repo contains the CLI source — recently rewritten in Rust.
❗ I barely know Rust — so I cloned the repository and let Codex itself handle compilation.
```
git clone git@github.com:simonw/codex
cd codex
```

Running Codex in danger mode:

```
codex --dangerously-bypass-approvals-and-sandbox
```

Prompt to Codex:
> Figure out how to build the Rust version of this tool and then build it
It compiled itself successfully, and having built the project once meant Codex could later test the code it generated for new features.
---
Designing the `codex prompt` Command {#designing-codex-prompt}
I asked Codex to implement:
- `codex prompt "prompt here"` — send prompt to Codex API using current model/auth
- `-m` — override model
- `-s/--system "system prompt"` — custom system message
- `--models` — list models available to Codex
The design is borrowed from my `llm` CLI.
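To pin down the intended flag semantics, here is a minimal sketch of that interface in Python's `argparse` (purely illustrative; the real subcommand is implemented in Rust inside Codex CLI):

```python
import argparse

# Hypothetical mirror of the `codex prompt` interface described above;
# the actual implementation lives in the Rust codebase.
parser = argparse.ArgumentParser(prog="codex prompt")
parser.add_argument("prompt", nargs="?", help="prompt text to send")
parser.add_argument("-m", "--model", default=None, help="override the model")
parser.add_argument("-s", "--system", default=None, help="custom system prompt")
parser.add_argument("--models", action="store_true", help="list available models")

args = parser.parse_args(
    ["-m", "gpt-5-codex-mini", "Generate an SVG of a pelican riding a bicycle"]
)
print(args.model)  # gpt-5-codex-mini
```

With no positional prompt and `--models` set, the command would list models instead of sending anything.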
---
Iterating on the Code
Codex produced a plan and working code.
Plan Summary:
✔ Inspect CLI structure and utilities for sending prompts
✔ Implement new `codex prompt` subcommand
✔ Format, lint, and test using `just` tasks

Notably, Codex:
- Found the `justfile`
- Ran `just fmt` and `just fix -p codex-cli` without me telling it to
---
First Test
```
./target/debug/codex prompt 'Generate an SVG of a pelican riding a bicycle' -m gpt-5-codex-mini
```

Issue:
The model began behaving as if it were in full "workspace mode," checking directories instead of simply answering the prompt.
---
Troubleshooting Tips
If you see similar misbehavior when prompting Codex models:
- Force tool‑free mode in system config
- Strip workspace context
- Manage streaming output to avoid multiline reasoning dumps
- Test with raw API calls
I revised:
> This command should not run any tools or use workspace context — send prompt only, stream response, stop.
Result: the API responded with 400 Bad Request; requests are rejected as invalid unless they include the expected baseline instructions.
Ultimately, we retained default “developer” instructions but disabled tool access.
---
Adding `--debug`
Finally, I requested:
> Include `--debug` flag to print JSON request & response to stderr, plus URL and HTTP verb.
That gave visibility into Codex’s undocumented private endpoint and request structure.
---
Testing Pelicans {#testing-pelicans}
With a working build, I ran several tests:
1. Default GPT‑5‑Codex
```
./target/debug/codex prompt "Generate an SVG of a pelican riding a bicycle"
```
---
2. GPT‑5
```
./target/debug/codex prompt "Generate an SVG of a pelican riding a bicycle" -m gpt-5
```
---
3. GPT‑5‑Codex‑Mini
```
./target/debug/codex prompt "Generate an SVG of a pelican riding a bicycle" -m gpt-5-codex-mini
```
---
Bonus: Debug Output {#bonus-debug}
Running with debug shows:
```
./target/debug/codex prompt -m gpt-5-codex-mini "Generate an SVG of a pelican riding a bicycle" --debug
```

Endpoint:

```
https://chatgpt.com/backend-api/codex/responses
```

Observations:
- Model Key: `"model": "gpt-5-codex-mini"`
- Instructions: Required default instructions block — likely hidden system prompt
- Developer Role: Pre‑prompt sets dev context before user input
- Tool Config: `"tool_choice": "auto"`, streaming enabled
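Putting those observations together, the request body looks roughly like the sketch below. This is my reconstruction from the `--debug` output, not a documented schema; field names beyond the ones observed (`model`, the instructions block, the developer role, `tool_choice`, streaming) are assumptions:

```python
import json

# Reconstruction of the request shape implied by the --debug output.
# The "input" key follows the public Responses API convention and is
# an assumption; placeholder strings stand in for the real content.
request = {
    "model": "gpt-5-codex-mini",
    "instructions": "<default Codex instructions block>",  # omitting this caused 400 Bad Request
    "input": [
        {"role": "developer", "content": "<pre-prompt developer context>"},
        {"role": "user", "content": "Generate an SVG of a pelican riding a bicycle"},
    ],
    "tool_choice": "auto",
    "stream": True,
}
print(json.dumps(request, indent=2))
```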
---
Key Learnings {#codex-private-api}
- Codex API structure parallels OpenAI Chat API — with extra "developer" role
- Default system instructions appear mandatory for execution
- Tool selection + streaming flags are part of payload
- Even undocumented APIs reveal patterns that help design your own AI orchestration
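Since the payload enables streaming, a client consuming this endpoint would need to reassemble deltas from the stream. Assuming it emits server-sent events with `data:` lines the way the public OpenAI APIs do (an assumption, not documented behavior of the Codex backend), a minimal parsing pattern looks like this:

```python
import json

# Minimal SSE-style parser sketch. Assumes "data: {...}" event lines
# terminated by "data: [DONE]"; the event field names are illustrative.
def iter_events(lines):
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        yield json.loads(payload)

sample = [
    'data: {"delta": "Hel"}',
    'data: {"delta": "lo"}',
    "data: [DONE]",
]
text = "".join(event["delta"] for event in iter_events(sample))
print(text)  # Hello
```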
---
Final Thoughts
This experiment shows:
- You can extend open‑source CLI tools to access models in new ways
- Debug flags reveal valuable API internals
- Maintaining the expected “developer instructions” is key to making Codex endpoints happy
Full working code is on my prompt‑subcommand branch.
---