Codex Proxy
Your Local Codex Coding Assistant Gateway
Expose Codex Desktop's capabilities as a standard OpenAI API, seamlessly connecting any AI client.
Quick Start • Features • Architecture • Client Setup • Configuration
简体中文 | English
Codex Proxy is a lightweight local gateway that translates the Codex Desktop Responses API into a standard OpenAI-compatible /v1/chat/completions endpoint. Use Codex coding models directly in Cursor, Continue, VS Code, or any OpenAI-compatible client.
Just a ChatGPT account and this proxy – your own personal AI coding assistant gateway, running locally.
Quick Start
Desktop App (Easiest)
Download the installer from GitHub Releases – no setup required:
| Platform | Installer |
|---|---|
| Windows | Codex Proxy Setup x.x.x.exe |
| macOS | Codex Proxy-x.x.x.dmg |
| Linux | Codex Proxy-x.x.x.AppImage |
Open the app and log in with your ChatGPT account. The desktop app listens on 127.0.0.1:8080 (local access only).
CLI / Server Deployment
git clone https://github.com/icebear0828/codex-proxy.git
cd codex-proxy
Docker (Recommended)
cp .env.example .env # Create env file (edit to configure)
docker compose up -d
# Open http://localhost:8080 to log in
macOS / Linux
npm install                      # Install backend deps + auto-download curl-impersonate
cd web && npm install && cd .. # Install frontend deps
npm run dev # Dev mode (hot reload)
# Or: npm run build && npm start # Production mode
Windows
npm install # Install backend deps
cd web && npm install && cd .. # Install frontend deps
npm run dev # Dev mode (hot reload)
On Windows, curl-impersonate is not available. The proxy falls back to system curl. For full TLS impersonation, use Docker or WSL.
Verify
# Open http://localhost:8080, log in with your ChatGPT account, then:
curl http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "codex",
"messages": [{"role": "user", "content": "Hello!"}],
"stream": true
}'
Cross-container access: If other Docker containers need to reach codex-proxy, use the host's LAN IP (e.g., http://192.168.x.x:8080/v1) instead of `host.docker.internal`.
Features
1. Full Protocol Compatibility
- Compatible with `/v1/chat/completions` (OpenAI), `/v1/messages` (Anthropic), and Gemini formats
- SSE streaming output, works with all OpenAI SDKs and clients
- Automatic bidirectional translation between Chat Completions and the Codex Responses API
- Structured Outputs – supports `response_format` (OpenAI `json_object` / `json_schema`) and Gemini `responseMimeType` for enforcing JSON output without prompt engineering
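The `response_format` support above takes a standard OpenAI-style request body. A minimal sketch of such a body, assuming the proxy's documented `codex` alias; the schema itself is illustrative, not part of codex-proxy:

```python
# Sketch: a Chat Completions request body asking the proxy to enforce JSON
# output via response_format (OpenAI json_schema style). POST this to
# http://localhost:8080/v1/chat/completions with your proxy API key.
payload = {
    "model": "codex",
    "messages": [{"role": "user", "content": "List two HTTP methods as JSON."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "http_methods",  # illustrative schema name
            "schema": {
                "type": "object",
                "properties": {
                    "methods": {"type": "array", "items": {"type": "string"}}
                },
                "required": ["methods"],
            },
        },
    },
}
```

For Gemini-format clients, the equivalent is setting `responseMimeType` to `application/json` in the generation config.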
2. Account Management & Smart Rotation
- OAuth PKCE login – one-click browser auth, no manual token copying
- Multi-account rotation – `least_used` and `round_robin` scheduling strategies
- Auto token refresh – JWT renewed automatically before expiry
- Real-time quota monitoring – dashboard shows remaining usage per account
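The two scheduling strategies can be sketched in a few lines. This is an illustration of the general idea, assuming each account carries a running request counter; field names and the proxy's internal account shape may differ:

```python
from itertools import count

def pick_least_used(accounts):
    # least_used: choose the account that has served the fewest requests.
    return min(accounts, key=lambda a: a["requests"])

_rr = count()  # shared cursor for round-robin

def pick_round_robin(accounts):
    # round_robin: cycle through accounts in order, one request each.
    return accounts[next(_rr) % len(accounts)]
```

With `least_used`, bursty clients spread load toward idle accounts; `round_robin` keeps per-account usage evenly paced regardless of burstiness.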
3. Proxy Pool
- Per-account proxy routing – assign different upstream proxies to different accounts for IP diversity and risk isolation
- Four assignment modes – Global Default, Direct (no proxy), Auto (round-robin rotation), or a specific proxy
- Health checks – scheduled (default every 5 min) plus manual, reports exit IP and latency via the ipify API
- Auto-mark unreachable – unreachable proxies are automatically flagged and excluded from auto-rotation
- Dashboard management – add/remove/check/enable/disable proxies, per-account proxy selector
4. Anti-Detection & Protocol Impersonation
- Chrome TLS fingerprint – curl-impersonate replicates the full Chrome 136 TLS handshake
- Desktop header replication – `originator`, `User-Agent`, and `sec-ch-*` headers in exact Codex Desktop order
- Desktop context injection – every request includes the Codex Desktop system prompt for full feature parity
- Cookie persistence – automatic Cloudflare cookie capture and replay
- Timing jitter – randomized delays on scheduled operations to eliminate mechanical patterns
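The timing-jitter idea is simple: add a random offset to each scheduled interval so repeated operations never fire at mechanically fixed times. A minimal sketch; the ±20% spread is an illustrative choice, not codex-proxy's actual value:

```python
import random

def jittered_interval(base_seconds: float, spread: float = 0.2) -> float:
    # Return base_seconds scaled by a uniform random factor in [1-spread, 1+spread],
    # so a 300 s schedule fires anywhere between 240 s and 360 s.
    return base_seconds * random.uniform(1 - spread, 1 + spread)
```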
5. Session & Version Management
- Multi-turn conversations – automatic `previous_response_id` for context continuity
- Appcast version tracking – polls the Codex Desktop update feed, auto-syncs `app_version` and `build_number`
- Web dashboard – account management, usage monitoring, and status overview in one place
Architecture
Codex Proxy
┌─────────────────────────────────────────────────────┐
│                                                     │
│   Client (Cursor / Continue / SDK)                  │
│        │                                            │
│        │ POST /v1/chat/completions                  │
│        ▼                                            │
│  ┌──────────┐   ┌───────────────┐   ┌──────────┐    │
│  │  Routes  │──▶│  Translation  │──▶│  Proxy   │    │
│  │  (Hono)  │   │ OpenAI→Codex  │   │ curl TLS │    │
│  └──────────┘   └───────────────┘   └────┬─────┘    │
│        ▲                                 │          │
│        │        ┌───────────────┐        │          │
│        └────────│  Translation  │◀───────┘          │
│                 │ Codex→OpenAI  │   SSE stream      │
│                 └───────────────┘                   │
│                                                     │
│  ┌───────────┐  ┌───────────────┐  ┌─────────────┐  │
│  │   Auth    │  │  Fingerprint  │  │   Session   │  │
│  │ OAuth/JWT │  │  Headers/UA   │  │   Manager   │  │
│  └───────────┘  └───────────────┘  └─────────────┘  │
│                                                     │
└─────────────────────────────────────────────────────┘
                      │
                      │ curl subprocess
                      │ (Chrome TLS)
                      ▼
                 chatgpt.com
                 /backend-api/codex/responses
Available Models
| Model ID | Alias | Reasoning Efforts | Description |
|---|---|---|---|
| `gpt-5.2-codex` | `codex` | low / medium / high / xhigh | Frontier agentic coding model (default) |
| `gpt-5.2` | – | low / medium / high / xhigh | Professional work & long-running agents |
| `gpt-5.1-codex-max` | – | low / medium / high / xhigh | Extended context / deepest reasoning |
| `gpt-5.1-codex` | – | low / medium / high | GPT-5.1 coding model |
| `gpt-5.1` | – | low / medium / high | General-purpose GPT-5.1 |
| `gpt-5-codex` | – | low / medium / high | GPT-5 coding model |
| `gpt-5` | – | minimal / low / medium / high | General-purpose GPT-5 |
| `gpt-oss-120b` | – | low / medium / high | Open-source 120B model |
| `gpt-oss-20b` | – | low / medium / high | Open-source 20B model |
| `gpt-5.1-codex-mini` | – | medium / high | Lightweight, fast coding model |
| `gpt-5-codex-mini` | – | medium / high | Lightweight coding model |
Model name suffixes: Append `-fast` to any model name to enable Fast mode, or `-high` / `-low` etc. to change reasoning effort. Examples: `codex-fast`, `gpt-5.2-codex-high-fast`.
Note: the `gpt-5.4` and `gpt-5.3-codex` families have been removed for free accounts; Plus and above accounts retain access. Models are fetched dynamically from the backend, so the latest available catalog syncs automatically.
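The suffix scheme above can be parsed by stripping a trailing `-fast`, then a trailing effort suffix. A sketch that mirrors the documented naming, not the proxy's actual parser:

```python
# Reasoning-effort suffixes documented in the model table above.
EFFORTS = ("minimal", "low", "medium", "high", "xhigh")

def parse_model_name(name: str) -> dict:
    # Strip "-fast" first (Fast mode), then a "-<effort>" suffix if present;
    # whatever remains is the base model ID or alias.
    fast = name.endswith("-fast")
    if fast:
        name = name[: -len("-fast")]
    effort = None
    for e in EFFORTS:
        if name.endswith("-" + e):
            effort = e
            name = name[: -len(e) - 1]
            break
    return {"model": name, "effort": effort, "fast": fast}
```

So `codex-high-fast` resolves to the `codex` alias with high reasoning effort and Fast mode enabled.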
Client Setup
Claude Code
Set environment variables to route Claude Code through codex-proxy:
export ANTHROPIC_BASE_URL=http://localhost:8080
export ANTHROPIC_API_KEY=your-api-key
# Default model is gpt-5.2-codex (codex alias), no need to set ANTHROPIC_MODEL
# To switch models or use suffixes:
# export ANTHROPIC_MODEL=codex-fast          # → gpt-5.2-codex + Fast mode
# export ANTHROPIC_MODEL=codex-high          # → gpt-5.2-codex + high reasoning
# export ANTHROPIC_MODEL=codex-high-fast     # → gpt-5.2-codex + high + Fast
# export ANTHROPIC_MODEL=gpt-5.2             # → General-purpose GPT-5.2
# export ANTHROPIC_MODEL=gpt-5.1-codex-mini  # → Lightweight, fast model
claude # Launch Claude Code
All Claude Code model names (Opus / Sonnet / Haiku) map to the configured default model (`gpt-5.2-codex`). To use a specific model, set the `ANTHROPIC_MODEL` environment variable to a Codex model name.
You can also copy the environment variables from the Anthropic SDK Setup card in the dashboard (http://localhost:8080).
Cursor
Settings → Models → OpenAI API Base:
http://localhost:8080/v1
API Key (from the dashboard):
codex-proxy-xxxxx
Continue (VS Code)
~/.continue/config.json:
{
"models": [{
"title": "Codex",
"provider": "openai",
"model": "codex",
"apiBase": "http://localhost:8080/v1",
"apiKey": "codex-proxy-xxxxx"
}]
}
OpenAI Python SDK
from openai import OpenAI
client = OpenAI(
base_url="http://localhost:8080/v1",
api_key="codex-proxy-xxxxx"
)
response = client.chat.completions.create(
model="codex",
messages=[{"role": "user", "content": "Hello!"}],
stream=True
)
for chunk in response:
print(chunk.choices[0].delta.content or "", end="")
OpenAI Node.js SDK
import OpenAI from "openai";
const client = new OpenAI({
baseURL: "http://localhost:8080/v1",
apiKey: "codex-proxy-xxxxx",
});
const stream = await client.chat.completions.create({
model: "codex",
messages: [{ role: "user", content: "Hello!" }],
stream: true,
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
Configuration
All configuration is in `config/default.yaml`:
| Section | Key Settings | Description |
|---|---|---|
| `server` | `host`, `port`, `proxy_api_key` | Listen address and API key |
| `api` | `base_url`, `timeout_seconds` | Upstream API URL and timeout |
| `client_identity` | `app_version`, `build_number` | Codex Desktop version to impersonate |
| `model` | `default`, `default_reasoning_effort`, `default_service_tier` | Default model, reasoning effort, and speed mode |
| `auth` | `rotation_strategy`, `rate_limit_backoff_seconds` | Rotation strategy and rate-limit backoff |
Environment Variable Overrides
| Variable | Overrides |
|---|---|
| `PORT` | `server.port` |
| `CODEX_PLATFORM` | `client_identity.platform` |
| `CODEX_ARCH` | `client_identity.arch` |
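The overrides in the table above can be thought of as a thin layer applied on top of the parsed YAML. A sketch, assuming the config has been loaded into a nested dict mirroring `config/default.yaml`:

```python
import os

def apply_env_overrides(config: dict) -> dict:
    # Each environment variable, when set, wins over the YAML value.
    if "PORT" in os.environ:
        config["server"]["port"] = int(os.environ["PORT"])
    if "CODEX_PLATFORM" in os.environ:
        config["client_identity"]["platform"] = os.environ["CODEX_PLATFORM"]
    if "CODEX_ARCH" in os.environ:
        config["client_identity"]["arch"] = os.environ["CODEX_ARCH"]
    return config
```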
API Endpoints
| Endpoint | Method | Description |
|---|---|---|
| `/v1/chat/completions` | POST | Chat completions (main endpoint) |
| `/v1/models` | GET | List available models |
| `/health` | GET | Health check |
| `/auth/accounts` | GET | Account list and quota |
| `/auth/login` | GET | OAuth login entry |
| `/debug/fingerprint` | GET | Debug: view current impersonation headers |
| `/api/proxies` | GET | Proxy pool list (with assignments) |
| `/api/proxies` | POST | Add proxy (HTTP/HTTPS/SOCKS5) |
| `/api/proxies/:id` | PUT | Update proxy config |
| `/api/proxies/:id` | DELETE | Remove proxy |
| `/api/proxies/:id/check` | POST | Health-check a single proxy |
| `/api/proxies/:id/enable` | POST | Enable proxy |
| `/api/proxies/:id/disable` | POST | Disable proxy |
| `/api/proxies/check-all` | POST | Health-check all proxies |
| `/api/proxies/assign` | POST | Assign proxy to account |
| `/api/proxies/assign/:accountId` | DELETE | Unassign proxy from account |
| `/api/proxies/settings` | PUT | Update proxy pool settings |
Commands
| Command | Description |
|---|---|
| `npm run dev` | Start dev server with hot reload |
| `npm run build` | Compile TypeScript to `dist/` |
| `npm start` | Run compiled production server |
Requirements
- Node.js 18+
- curl – system curl works out of the box; install curl-impersonate for full Chrome TLS fingerprinting
- ChatGPT account – a standard account is sufficient
Notes
- The Codex API is stream-only. When `stream: false` is set, the proxy streams internally and returns the assembled response as a single JSON object.
- This project relies on Codex Desktop's public API. Upstream version updates may cause breaking changes.
- Deploy on Linux / macOS for full TLS impersonation. On Windows, curl-impersonate is not available and the proxy falls back to system curl.
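The `stream: false` behavior described above amounts to draining the SSE stream and concatenating the delta chunks. A minimal sketch using the public OpenAI streaming chunk format; error handling and usage accounting omitted:

```python
import json

def assemble_sse(lines):
    # Consume SSE "data:" lines carrying Chat Completions chunks and
    # join the content deltas into one assistant message.
    text = []
    for line in lines:
        if not line.startswith("data: ") or line == "data: [DONE]":
            continue
        chunk = json.loads(line[len("data: "):])
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            text.append(delta)
    return "".join(text)
```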
License
This project is licensed under Non-Commercial terms:
- Allowed: Personal learning, research, self-hosted deployment
- Prohibited: Any commercial use, including but not limited to selling, reselling, paid proxy services, or integration into commercial products
This project is not affiliated with OpenAI. Users assume all risks and must comply with OpenAI's Terms of Service.