<div align="center">
<h1>Codex Proxy</h1>
<h3>Your Local Codex Coding Assistant Gateway</h3>
<p>Expose Codex Desktop's capabilities as a standard OpenAI API, seamlessly connecting any AI client.</p>
<p>
<img src="https://img.shields.io/badge/Runtime-Node.js_18+-339933?style=flat-square&logo=nodedotjs&logoColor=white" alt="Node.js">
<img src="https://img.shields.io/badge/Language-TypeScript-3178C6?style=flat-square&logo=typescript&logoColor=white" alt="TypeScript">
<img src="https://img.shields.io/badge/Framework-Hono-E36002?style=flat-square" alt="Hono">
<img src="https://img.shields.io/badge/Docker-Supported-2496ED?style=flat-square&logo=docker&logoColor=white" alt="Docker">
<img src="https://img.shields.io/badge/Desktop-Win%20%7C%20Mac%20%7C%20Linux-8A2BE2?style=flat-square&logo=electron&logoColor=white" alt="Desktop">
<img src="https://img.shields.io/badge/License-Non--Commercial-red?style=flat-square" alt="License">
</p>
<p>
<a href="#-quick-start">Quick Start</a> โข
<a href="#-features">Features</a> โข
<a href="#-architecture">Architecture</a> โข
<a href="#-client-setup">Client Setup</a> โข
<a href="#-configuration">Configuration</a>
</p>
<p>
<a href="./README.md">็ฎไฝไธญๆ</a> |
<strong>English</strong>
</p>
</div>
---
**Codex Proxy** is a lightweight local gateway that translates the [Codex Desktop](https://openai.com/codex) Responses API into a standard OpenAI-compatible `/v1/chat/completions` endpoint. Use Codex coding models directly in Cursor, Continue, VS Code, or any OpenAI-compatible client.
All it takes is a ChatGPT account and this proxy, and you have your own personal AI coding assistant gateway, running locally.
## 🚀 Quick Start
### Desktop App (Easiest)
Download the installer from [GitHub Releases](https://github.com/icebear0828/codex-proxy/releases) – no setup required:
| Platform | Installer |
|----------|-----------|
| Windows | `Codex Proxy Setup x.x.x.exe` |
| macOS | `Codex Proxy-x.x.x.dmg` |
| Linux | `Codex Proxy-x.x.x.AppImage` |
Open the app and log in with your ChatGPT account. The desktop app listens on `127.0.0.1:8080` (local access only).
### CLI / Server Deployment
```bash
git clone https://github.com/icebear0828/codex-proxy.git
cd codex-proxy
```
#### Docker (Recommended)
```bash
cp .env.example .env # Create env file (edit to configure)
docker compose up -d
# Open http://localhost:8080 to log in
```
#### macOS / Linux
```bash
npm install # Install backend deps + auto-download curl-impersonate
cd web && npm install && cd .. # Install frontend deps
npm run dev # Dev mode (hot reload)
# Or: npm run build && npm start # Production mode
```
#### Windows
```bash
npm install # Install backend deps
cd web && npm install && cd .. # Install frontend deps
npm run dev # Dev mode (hot reload)
```
> On Windows, curl-impersonate is not available. The proxy falls back to system curl. For full TLS impersonation, use Docker or WSL.
### Verify
```bash
# Open http://localhost:8080, log in with your ChatGPT account, then:
curl http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "codex",
"messages": [{"role": "user", "content": "Hello!"}],
"stream": true
}'
```
> **Cross-container access**: If other Docker containers need to connect to codex-proxy, use the host's LAN IP (e.g., `http://192.168.x.x:8080/v1`) instead of `host.docker.internal`.
## 🌟 Features
### 1. 🔗 Full Protocol Compatibility
- Compatible with `/v1/chat/completions` (OpenAI), `/v1/messages` (Anthropic), and Gemini formats
- SSE streaming output, works with all OpenAI SDKs and clients
- Automatic bidirectional translation between Chat Completions and Codex Responses API
- **Structured Outputs** – supports `response_format` (OpenAI `json_object` / `json_schema`) and Gemini `responseMimeType` for enforcing JSON output without prompt engineering
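To illustrate, here is a minimal sketch of a Structured Outputs request body for the proxy's `/v1/chat/completions` endpoint. The `release_info` schema and its fields are made up for the example; any JSON Schema accepted by the OpenAI `json_schema` response format should work the same way.

```python
import json

# Illustrative request body; the schema below is a hypothetical example.
payload = {
    "model": "codex",
    "messages": [{"role": "user", "content": "Extract the release info."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "release_info",
            "schema": {
                "type": "object",
                "properties": {
                    "version": {"type": "string"},
                    "breaking": {"type": "boolean"},
                },
                "required": ["version", "breaking"],
            },
        },
    },
}

print(json.dumps(payload, indent=2))
```

The same body can be sent through any OpenAI SDK pointed at `http://localhost:8080/v1`; the proxy enforces the schema upstream, so no "reply only in JSON" prompt engineering is needed.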
### 2. 🔄 Account Management & Smart Rotation
- **OAuth PKCE login** – one-click browser auth, no manual token copying
- **Multi-account rotation** – `least_used` and `round_robin` scheduling strategies
- **Auto token refresh** – JWT renewed automatically before expiry
- **Real-time quota monitoring** – dashboard shows remaining usage per account
### 3. 🌐 Proxy Pool
- **Per-account proxy routing** – assign different upstream proxies to different accounts for IP diversity and risk isolation
- **Four assignment modes** – Global Default, Direct (no proxy), Auto (round-robin rotation), or a specific proxy
- **Health checks** – scheduled (default every 5 min) + manual, reports exit IP and latency via the ipify API
- **Auto-mark unreachable** – unreachable proxies are automatically flagged and excluded from auto-rotation
- **Dashboard management** – add/remove/check/enable/disable proxies, per-account proxy selector
### 4. 🛡️ Anti-Detection & Protocol Impersonation
- **Chrome TLS fingerprint** – curl-impersonate replicates the full Chrome 136 TLS handshake
- **Desktop header replication** – `originator`, `User-Agent`, `sec-ch-*` headers in exact Codex Desktop order
- **Desktop context injection** – every request includes the Codex Desktop system prompt for full feature parity
- **Cookie persistence** – automatic Cloudflare cookie capture and replay
- **Timing jitter** – randomized delays on scheduled operations to eliminate mechanical patterns
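The timing-jitter idea can be sketched in a few lines. The interval and jitter range below are arbitrary illustrations, not the proxy's actual values:

```python
import random

def jittered_delay(base_seconds: float, jitter_fraction: float = 0.2) -> float:
    """Return base_seconds perturbed by up to ±jitter_fraction, so scheduled
    operations never fire on an exact, mechanical period."""
    jitter = random.uniform(-jitter_fraction, jitter_fraction)
    return base_seconds * (1.0 + jitter)

# A "5-minute" health check actually runs somewhere between 4 and 6 minutes.
delay = jittered_delay(300.0)
assert 240.0 <= delay <= 360.0
```

Spreading each tick across a range like this removes the fixed-period signature that automated clients otherwise exhibit.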
### 5. 📋 Session & Version Management
- **Multi-turn conversations** – automatic `previous_response_id` for context continuity
- **Appcast version tracking** – polls the Codex Desktop update feed, auto-syncs `app_version` and `build_number`
- **Web dashboard** – account management, usage monitoring, and status overview in one place
## 🏗️ Architecture
```
Codex Proxy
┌─────────────────────────────────────────────────────┐
│                                                     │
│  Client (Cursor / Continue / SDK)                   │
│        │                                            │
│        │ POST /v1/chat/completions                  │
│        ▼                                            │
│  ┌──────────┐   ┌───────────────┐   ┌──────────┐    │
│  │  Routes  │──▶│  Translation  │──▶│  Proxy   │    │
│  │  (Hono)  │   │ OpenAI→Codex  │   │ curl TLS │    │
│  └──────────┘   └───────────────┘   └────┬─────┘    │
│        ▲        ┌───────────────┐        │          │
│        └────────│  Translation  │◀───────┘          │
│                 │ Codex→OpenAI  │   SSE stream      │
│                 └───────────────┘                   │
│                                                     │
│  ┌──────────┐   ┌───────────────┐   ┌───────────┐   │
│  │   Auth   │   │  Fingerprint  │   │  Session  │   │
│  │ OAuth/JWT│   │  Headers/UA   │   │  Manager  │   │
│  └──────────┘   └───────────────┘   └───────────┘   │
│                                                     │
└─────────────────────────────────────────────────────┘
                         │
                  curl subprocess
                   (Chrome TLS)
                         │
                         ▼
                    chatgpt.com
           /backend-api/codex/responses
```
## 📦 Available Models
| Model ID | Alias | Reasoning Efforts | Description |
|----------|-------|-------------------|-------------|
| `gpt-5.2-codex` | `codex` | low / medium / high / xhigh | Frontier agentic coding model (default) |
| `gpt-5.2` | – | low / medium / high / xhigh | Professional work & long-running agents |
| `gpt-5.1-codex-max` | – | low / medium / high / xhigh | Extended context / deepest reasoning |
| `gpt-5.1-codex` | – | low / medium / high | GPT-5.1 coding model |
| `gpt-5.1` | – | low / medium / high | General-purpose GPT-5.1 |
| `gpt-5-codex` | – | low / medium / high | GPT-5 coding model |
| `gpt-5` | – | minimal / low / medium / high | General-purpose GPT-5 |
| `gpt-oss-120b` | – | low / medium / high | Open-source 120B model |
| `gpt-oss-20b` | – | low / medium / high | Open-source 20B model |
| `gpt-5.1-codex-mini` | – | medium / high | Lightweight, fast coding model |
| `gpt-5-codex-mini` | – | medium / high | Lightweight coding model |
> **Model name suffixes**: Append `-fast` to any model name to enable Fast mode, or `-high`/`-low` etc. to change reasoning effort.
> Examples: `codex-fast`, `gpt-5.2-codex-high-fast`.
>
> **Note**: `gpt-5.4` and `gpt-5.3-codex` families have been removed for free accounts. Plus and above accounts retain access.
> Models are dynamically fetched from the backend and will automatically sync the latest available catalog.
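The suffix convention above composes mechanically, as this hypothetical helper shows. It is an illustration of the naming scheme, not the proxy's actual parsing code, and the effort list mirrors the table:

```python
# Reasoning efforts documented in the models table.
EFFORTS = ("minimal", "low", "medium", "high", "xhigh")

def parse_model_name(name: str):
    """Split a suffixed model name into (base_model, reasoning_effort, fast_mode).
    Example: 'codex-high-fast' -> ('codex', 'high', True)."""
    fast = name.endswith("-fast")
    if fast:
        name = name[: -len("-fast")]
    effort = None
    for e in EFFORTS:
        if name.endswith("-" + e):
            effort = e
            name = name[: -len(e) - 1]
            break
    return name, effort, fast

print(parse_model_name("codex-high-fast"))  # ('codex', 'high', True)
print(parse_model_name("gpt-5.2-codex"))    # ('gpt-5.2-codex', None, False)
```

Note that `-fast` is stripped first, which is why compound names like `gpt-5.2-codex-high-fast` resolve cleanly.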
## 🔌 Client Setup
### Claude Code
Set environment variables to route Claude Code through codex-proxy:
```bash
export ANTHROPIC_BASE_URL=http://localhost:8080
export ANTHROPIC_API_KEY=your-api-key
# Default model is gpt-5.2-codex (codex alias), no need to set ANTHROPIC_MODEL
# To switch models or use suffixes:
# export ANTHROPIC_MODEL=codex-fast # → gpt-5.2-codex + Fast mode
# export ANTHROPIC_MODEL=codex-high # → gpt-5.2-codex + high reasoning
# export ANTHROPIC_MODEL=codex-high-fast # → gpt-5.2-codex + high + Fast
# export ANTHROPIC_MODEL=gpt-5.2 # → General-purpose GPT-5.2
# export ANTHROPIC_MODEL=gpt-5.1-codex-mini # → Lightweight, fast model
claude # Launch Claude Code
```
> All Claude Code model names (Opus / Sonnet / Haiku) map to the configured default model (`gpt-5.2-codex`).
> To use a specific model, set the `ANTHROPIC_MODEL` environment variable to a Codex model name.
> You can also copy environment variables from the **Anthropic SDK Setup** card in the dashboard (`http://localhost:8080`).
### Cursor
Settings → Models → OpenAI API Base:
```
http://localhost:8080/v1
```
API Key (from the dashboard):
```
codex-proxy-xxxxx
```
### Continue (VS Code)
`~/.continue/config.json`:
```json
{
"models": [{
"title": "Codex",
"provider": "openai",
"model": "codex",
"apiBase": "http://localhost:8080/v1",
"apiKey": "codex-proxy-xxxxx"
}]
}
```
### OpenAI Python SDK
```python
from openai import OpenAI
client = OpenAI(
base_url="http://localhost:8080/v1",
api_key="codex-proxy-xxxxx"
)
response = client.chat.completions.create(
model="codex",
messages=[{"role": "user", "content": "Hello!"}],
stream=True
)
for chunk in response:
print(chunk.choices[0].delta.content or "", end="")
```
### OpenAI Node.js SDK
```typescript
import OpenAI from "openai";
const client = new OpenAI({
baseURL: "http://localhost:8080/v1",
apiKey: "codex-proxy-xxxxx",
});
const stream = await client.chat.completions.create({
model: "codex",
messages: [{ role: "user", content: "Hello!" }],
stream: true,
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
```
## ⚙️ Configuration
All configuration is in `config/default.yaml`:
| Section | Key Settings | Description |
|---------|-------------|-------------|
| `server` | `host`, `port`, `proxy_api_key` | Listen address and API key |
| `api` | `base_url`, `timeout_seconds` | Upstream API URL and timeout |
| `client_identity` | `app_version`, `build_number` | Codex Desktop version to impersonate |
| `model` | `default`, `default_reasoning_effort`, `default_service_tier` | Default model, reasoning effort and speed mode |
| `auth` | `rotation_strategy`, `rate_limit_backoff_seconds` | Rotation strategy and rate limit backoff |
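The table above might map to a `config/default.yaml` along these lines. The key names follow the table, but the values shown are illustrative, not the shipped defaults:

```yaml
server:
  host: 0.0.0.0
  port: 8080
  proxy_api_key: ""            # optional; clients send this as the Bearer token

api:
  base_url: https://chatgpt.com   # upstream host (see the architecture diagram)
  timeout_seconds: 120

client_identity:
  app_version: "..."           # synced automatically from the appcast feed
  build_number: "..."

model:
  default: gpt-5.2-codex
  default_reasoning_effort: medium

auth:
  rotation_strategy: least_used   # or: round_robin
  rate_limit_backoff_seconds: 60
```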
### Environment Variable Overrides
| Variable | Overrides |
|----------|-----------|
| `PORT` | `server.port` |
| `CODEX_PLATFORM` | `client_identity.platform` |
| `CODEX_ARCH` | `client_identity.arch` |
## 📡 API Endpoints
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/v1/chat/completions` | POST | Chat completions (main endpoint) |
| `/v1/models` | GET | List available models |
| `/health` | GET | Health check |
| `/auth/accounts` | GET | Account list and quota |
| `/auth/login` | GET | OAuth login entry |
| `/debug/fingerprint` | GET | Debug: view current impersonation headers |
| `/api/proxies` | GET | Proxy pool list (with assignments) |
| `/api/proxies` | POST | Add proxy (HTTP/HTTPS/SOCKS5) |
| `/api/proxies/:id` | PUT | Update proxy config |
| `/api/proxies/:id` | DELETE | Remove proxy |
| `/api/proxies/:id/check` | POST | Health check single proxy |
| `/api/proxies/:id/enable` | POST | Enable proxy |
| `/api/proxies/:id/disable` | POST | Disable proxy |
| `/api/proxies/check-all` | POST | Health check all proxies |
| `/api/proxies/assign` | POST | Assign proxy to account |
| `/api/proxies/assign/:accountId` | DELETE | Unassign proxy from account |
| `/api/proxies/settings` | PUT | Update proxy pool settings |
## 🔧 Commands
| Command | Description |
|---------|-------------|
| `npm run dev` | Start dev server with hot reload |
| `npm run build` | Compile TypeScript to `dist/` |
| `npm start` | Run compiled production server |
## 📋 Requirements
- **Node.js** 18+
- **curl** – system curl works out of the box; install [curl-impersonate](https://github.com/lexiforest/curl-impersonate) for full Chrome TLS fingerprinting
- **ChatGPT account** – a standard account is sufficient
## ⚠️ Notes
- The Codex API is **stream-only**. When `stream: false` is set, the proxy streams internally and returns the assembled response as a single JSON object.
- This project relies on Codex Desktop's public API. Upstream version updates may cause breaking changes.
- Deploy on **Linux / macOS** for full TLS impersonation. On Windows, curl-impersonate is not available and the proxy falls back to system curl.
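The `stream: false` behavior described above amounts to concatenating streamed deltas into one response. A simplified sketch, assuming chunk shapes that follow the OpenAI Chat Completions streaming format (single choice, content only):

```python
def assemble_response(chunks: list[dict]) -> dict:
    """Collapse OpenAI-style streaming chunks into one non-streaming
    chat.completion object (simplified: one choice, text content only)."""
    content = []
    finish_reason = None
    for chunk in chunks:
        choice = chunk["choices"][0]
        delta = choice.get("delta", {})
        if delta.get("content"):
            content.append(delta["content"])
        if choice.get("finish_reason"):
            finish_reason = choice["finish_reason"]
    return {
        "object": "chat.completion",
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": "".join(content)},
            "finish_reason": finish_reason,
        }],
    }

# Minimal fabricated stream: role chunk, two content deltas, stop chunk.
chunks = [
    {"choices": [{"delta": {"role": "assistant"}, "finish_reason": None}]},
    {"choices": [{"delta": {"content": "Hel"}, "finish_reason": None}]},
    {"choices": [{"delta": {"content": "lo!"}, "finish_reason": None}]},
    {"choices": [{"delta": {}, "finish_reason": "stop"}]},
]
result = assemble_response(chunks)
print(result["choices"][0]["message"]["content"])  # Hello!
```

Clients that set `stream: false` therefore see normal non-streaming latency characteristics: nothing is returned until the upstream stream completes.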
## 📄 License
This project is licensed under **Non-Commercial** terms:
- **Allowed**: Personal learning, research, self-hosted deployment
- **Prohibited**: Any commercial use, including but not limited to selling, reselling, paid proxy services, or integration into commercial products
This project is not affiliated with OpenAI. Users assume all risks and must comply with OpenAI's Terms of Service.
---
<div align="center">
<sub>Built with Hono + TypeScript | Powered by Codex Desktop API</sub>
</div>