icebear0828 Claude Opus 4.6 committed on
Commit
bd64e44
·
1 Parent(s): 22a7de1

docs: add bilingual README (Chinese + English) with non-commercial license

Files changed (2)
  1. README.md +208 -110
  2. README_EN.md +265 -0
README.md CHANGED
@@ -1,41 +1,54 @@
1
- # Codex Proxy
2
 
3
- A reverse proxy that exposes the [Codex Desktop](https://openai.com/codex) API as an OpenAI-compatible `/v1/chat/completions` endpoint. Use any OpenAI-compatible client (Cursor, Continue, VS Code, etc.) with Codex models — for free.
 
 
4
 
5
- ## Architecture
 
 
 
 
 
6
 
7
- ```
8
- OpenAI-compatible client
9
-
10
- POST /v1/chat/completions
11
-
12
-
13
- ┌─────────────┐ POST /backend-api/codex/responses
14
- │ Codex Proxy │ ──────────────────────────────────────► chatgpt.com
15
- │ :8080 │ ◄────────────────────────────────────── (SSE stream)
16
- └─────────────┘
17
-
18
- SSE chat.completion.chunk
19
-
20
-
21
- Client
22
- ```
23
 
24
- The proxy translates OpenAI Chat Completions format to the Codex Responses API format, handles authentication (OAuth PKCE), multi-account rotation, and Cloudflare bypass via curl subprocess.
25
 
26
- ## Quick Start
 
 
27
 
28
  ```bash
29
- # 1. Install dependencies
 
 
 
 
30
  npm install
31
 
32
- # 2. Start the proxy (dev mode with hot reload)
33
  npm run dev
34
 
35
- # 3. Open the dashboard and log in with your ChatGPT account
36
  # http://localhost:8080
37
 
38
- # 4. Test a chat completion
39
  curl http://localhost:8080/v1/chat/completions \
40
  -H "Content-Type: application/json" \
41
  -d '{
@@ -45,98 +58,90 @@ curl http://localhost:8080/v1/chat/completions \
45
  }'
46
  ```
47
 
48
- ## Features
49
 
50
- - **OpenAI-compatible API** drop-in replacement for `/v1/chat/completions` and `/v1/models`
51
- - **OAuth PKCE login** native browser-based login, no manual token copying
52
- - **Multi-account rotation** add multiple ChatGPT accounts with automatic load balancing (`least_used` or `round_robin`)
53
- - **Auto token refresh** JWT tokens are refreshed automatically before expiry
54
- - **Cloudflare bypass** — all upstream requests use curl subprocess with native TLS
55
- - **Quota monitoring** — real-time Codex usage/quota display per account
56
- - **Web dashboard** — manage accounts, view usage, and monitor status at `http://localhost:8080`
57
- - **Auto-update detection** — polls the Codex Desktop appcast for new versions
58
 
59
- ## Available Models
 
 
 
 
60
 
61
- | Model ID | Alias | Description |
62
- |----------|-------|-------------|
63
- | `gpt-5.3-codex` | `codex` | Latest frontier agentic coding model (default) |
64
- | `gpt-5.2-codex` || Previous generation coding model |
65
- | `gpt-5.1-codex-max` | `codex-max` | Maximum capability coding model |
66
- | `gpt-5.2` || General-purpose model |
67
- | `gpt-5.1-codex-mini` | `codex-mini` | Lightweight, fast coding model |
68
 
69
- ## API Usage
 
 
 
70
 
71
- ### Chat Completions (streaming)
72
 
73
- ```bash
74
- curl http://localhost:8080/v1/chat/completions \
75
- -H "Content-Type: application/json" \
76
- -d '{
77
- "model": "codex",
78
- "messages": [
79
- {"role": "system", "content": "You are a helpful coding assistant."},
80
- {"role": "user", "content": "Write a Python function to check if a number is prime."}
81
- ],
82
- "stream": true
83
- }'
84
  ```
85
-
86
- ### Chat Completions (non-streaming)
87
-
88
- ```bash
89
- curl http://localhost:8080/v1/chat/completions \
90
- -H "Content-Type: application/json" \
91
- -d '{
92
- "model": "codex",
93
- "messages": [{"role": "user", "content": "Hello!"}],
94
- "stream": false
95
- }'
96
- ```
97
-
98
- ### List Models
99
-
100
- ```bash
101
- curl http://localhost:8080/v1/models
102
- ```
103
-
104
- ### Check Account Quota
105
-
106
- ```bash
107
- curl "http://localhost:8080/auth/accounts?quota=true"
 
 
 
 
 
 
 
 
108
  ```
109
 
110
- ## Configuration
111
-
112
- All configuration is in `config/default.yaml`:
113
-
114
- | Section | Key Settings |
115
- |---------|-------------|
116
- | `api` | `base_url`, `timeout_seconds` |
117
- | `client` | `originator`, `app_version`, `platform`, `arch` |
118
- | `model` | `default` model, `default_reasoning_effort` |
119
- | `auth` | `oauth_client_id`, `rotation_strategy`, `rate_limit_backoff_seconds` |
120
- | `server` | `host`, `port`, `proxy_api_key` |
121
 
122
- Environment variable overrides:
 
 
 
123
 
124
- | Variable | Overrides |
125
- |----------|-----------|
126
- | `PORT` | `server.port` |
127
- | `CODEX_PLATFORM` | `client.platform` |
128
- | `CODEX_ARCH` | `client.arch` |
129
- | `CODEX_JWT_TOKEN` | `auth.jwt_token` |
130
 
131
- ## Client Setup Examples
132
 
133
  ### Cursor
134
 
135
- Settings > Models > OpenAI API Base:
136
  ```
137
  http://localhost:8080/v1
138
  ```
139
 
 
 
 
 
 
140
  ### Continue (VS Code)
141
 
142
  `~/.continue/config.json`:
@@ -146,22 +151,115 @@ http://localhost:8080/v1
146
  "title": "Codex",
147
  "provider": "openai",
148
  "model": "codex",
149
- "apiBase": "http://localhost:8080/v1"
 
150
  }]
151
  }
152
  ```
153
 
154
- ## Scripts
155
 
156
- | Command | Description |
157
- |---------|-------------|
158
- | `npm run dev` | Start dev server with hot reload |
159
- | `npm run build` | Compile TypeScript to `dist/` |
160
- | `npm start` | Run compiled server |
161
- | `npm run check-update` | Check for new Codex Desktop versions |
162
- | `npm run extract -- --path <asar>` | Extract fingerprint from Codex app |
163
- | `npm run apply-update` | Apply extracted fingerprint updates |
164
 
165
- ## License
166
 
167
- For personal use only. This project is not affiliated with OpenAI.
 
 
 
1
+ <div align="center">
+
+ <h1>Codex Proxy</h1>
+ <h3>Your Local Codex Coding Assistant Gateway</h3>
+ <p>Expose Codex Desktop's capabilities over the standard OpenAI protocol and plug in any AI client seamlessly.</p>
+
+ <p>
+ <img src="https://img.shields.io/badge/Runtime-Node.js_18+-339933?style=flat-square&logo=nodedotjs&logoColor=white" alt="Node.js">
+ <img src="https://img.shields.io/badge/Language-TypeScript-3178C6?style=flat-square&logo=typescript&logoColor=white" alt="TypeScript">
+ <img src="https://img.shields.io/badge/Framework-Hono-E36002?style=flat-square" alt="Hono">
+ <img src="https://img.shields.io/badge/License-Non--Commercial-red?style=flat-square" alt="License">
+ </p>
+
+ <p>
+ <a href="#-quick-start">Quick Start</a> •
+ <a href="#-features">Features</a> •
+ <a href="#-architecture">Architecture</a> •
+ <a href="#-client-setup">Client Setup</a> •
+ <a href="#-configuration">Configuration</a>
+ </p>
+
+ <p>
+ <strong>Simplified Chinese</strong> |
+ <a href="./README_EN.md">English</a>
+ </p>
+
+ </div>
+
+ ---
+
+ **Codex Proxy** is a lightweight local gateway that translates the [Codex Desktop](https://openai.com/codex) Responses API into a standard OpenAI `/v1/chat/completions` endpoint. With it, you can use Codex coding models directly from Cursor, Continue, VS Code, or any other OpenAI-compatible client.
+
+ All it takes is one ChatGPT account and this proxy to run your own local AI coding assistant gateway.
+
+ ## 🚀 Quick Start
36
 
37
  ```bash
38
+ # 1. Clone the repository
+ git clone https://github.com/icebear0828/codex-proxy.git
+ cd codex-proxy
+
+ # 2. Install dependencies
  npm install

+ # 3. Start the proxy (dev mode with hot reload)
  npm run dev

+ # 4. Open the dashboard in your browser and log in with your ChatGPT account
  # http://localhost:8080

+ # 5. Test a request
52
  curl http://localhost:8080/v1/chat/completions \
53
  -H "Content-Type: application/json" \
54
  -d '{
 
58
  }'
59
  ```
60
 
61
+ ## 🌟 Features
+
+ ### 1. 🔌 Full Protocol Compatibility
+ - Full support for the `/v1/chat/completions` and `/v1/models` endpoints
+ - SSE streaming output; works with all OpenAI SDKs and clients
+ - Automatic bidirectional translation between Chat Completions and the Codex Responses API
 
 
 
 
67
 
68
+ ### 2. 🔐 Account Management & Smart Rotation
+ - **OAuth PKCE login** — one-click browser authorization, no manual token copying
+ - **Multi-account rotation** — two scheduling strategies: `least_used` and `round_robin`
+ - **Auto token refresh** — JWTs are renewed automatically before expiry, no manual intervention
+ - **Real-time quota monitoring** — the dashboard shows each account's remaining usage
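The two rotation strategies can be sketched in a few lines of TypeScript. This is an illustrative sketch only; the `Account` shape and function names are ours, not the proxy's actual internals:

```typescript
// Minimal sketch of the two scheduling strategies; real account
// objects would also carry tokens and quota data.
interface Account {
  id: string;
  useCount: number;
}

// least_used: pick the account that has served the fewest requests
export function pickLeastUsed(accounts: Account[]): Account {
  return accounts.reduce((a, b) => (b.useCount < a.useCount ? b : a));
}

// round_robin: cycle through accounts in order
let cursor = 0;
export function pickRoundRobin(accounts: Account[]): Account {
  const account = accounts[cursor % accounts.length];
  cursor += 1;
  return account;
}
```

`least_used` smooths quota consumption across accounts of different ages, while `round_robin` is predictable and needs no usage counters.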
73
 
74
+ ### 3. 🛡️ Anti-Detection & Protocol Impersonation
+ - **Chrome TLS fingerprint** — curl-impersonate replicates the full Chrome 136 TLS handshake
+ - **Desktop header replication** — `originator`, `User-Agent`, and `sec-ch-*` headers are ordered exactly as in the real Codex Desktop
+ - **Desktop context injection** — every request automatically includes the Codex Desktop system prompt for full feature parity
+ - **Cookie persistence** — Cloudflare cookies are captured and replayed automatically to maintain session continuity
+ - **Timing jitter** — scheduled operations get a random offset to eliminate mechanical timing patterns
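The jitter idea reduces to one helper: shift a fixed interval by a random offset each time it is used, so scheduled calls never fire on an exact cadence. A sketch (assumed helper, not the proxy's code):

```typescript
// Return baseMs shifted by a random offset in [-spreadMs, +spreadMs],
// clamped so the interval never goes negative.
export function withJitter(baseMs: number, spreadMs: number): number {
  const offset = (Math.random() * 2 - 1) * spreadMs;
  return Math.max(0, Math.round(baseMs + offset));
}

// e.g. poll roughly every minute: setTimeout(poll, withJitter(60_000, 5_000))
```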
 
80
 
81
+ ### 4. 🔄 Session & Version Management
+ - **Multi-turn conversations** — `previous_response_id` is maintained automatically for context continuity
+ - **Appcast version tracking** — polls the Codex Desktop update feed and auto-syncs `app_version` and `build_number`
+ - **Web dashboard** — account management, usage monitoring, and status overview in one place
85
 
86
+ ## 🏗️ Architecture
87
 
 
 
 
 
 
 
 
 
 
 
 
88
  ```
89
+                      Codex Proxy
+ ┌───────────────────────────────────────────────────┐
+ │                                                   │
+ │   Client (Cursor / Continue / SDK)                │
+ │        │                                          │
+ │        │ POST /v1/chat/completions                │
+ │        ▼                                          │
+ │  ┌──────────┐   ┌───────────────┐   ┌──────────┐  │
+ │  │  Routes  │──▶│  Translation  │──▶│  Proxy   │  │
+ │  │  (Hono)  │   │ OpenAI→Codex  │   │ curl TLS │  │
+ │  └──────────┘   └───────────────┘   └────┬─────┘  │
+ │       ▲                                  │        │
+ │       │         ┌───────────────┐        │        │
+ │       └─────────│  Translation  │◀───────┘        │
+ │                 │ Codex→OpenAI  │   SSE stream    │
+ │                 └───────────────┘                 │
+ │                                                   │
+ │  ┌──────────┐  ┌───────────────┐  ┌─────────────┐ │
+ │  │   Auth   │  │  Fingerprint  │  │   Session   │ │
+ │  │ OAuth/JWT│  │  Headers/UA   │  │   Manager   │ │
+ │  └──────────┘  └───────────────┘  └─────────────┘ │
+ │                                                   │
+ └───────────────────────────────────────────────────┘
+                      │
+                      │  curl subprocess
+                      │  (Chrome TLS)
+                      ▼
+                 chatgpt.com
+        /backend-api/codex/responses
  ```
121
 
122
+ ## 📦 Available Models
 
 
 
 
 
 
 
 
 
 
123
 
124
+ | Model ID | Alias | Description |
+ |----------|-------|-------------|
+ | `gpt-5.2-codex` | `codex` | Latest agentic coding model (default) |
+ | `gpt-5.1-codex-mini` | `codex-mini` | Lightweight, fast coding model |
+
+ > The model list syncs automatically as Codex Desktop versions are updated.
 
 
 
 
 
130
 
131
+ ## 🔗 Client Setup
132
 
133
  ### Cursor
134
 
135
+ Settings → Models → OpenAI API Base:
136
  ```
137
  http://localhost:8080/v1
138
  ```
139
 
140
+ API Key (get it from the dashboard):
141
+ ```
142
+ codex-proxy-xxxxx
143
+ ```
144
+
145
  ### Continue (VS Code)
146
 
147
  `~/.continue/config.json`:
 
151
  "title": "Codex",
152
  "provider": "openai",
153
  "model": "codex",
154
+ "apiBase": "http://localhost:8080/v1",
155
+ "apiKey": "codex-proxy-xxxxx"
156
  }]
157
  }
158
  ```
159
 
160
+ ### OpenAI Python SDK
161
+
162
+ ```python
163
+ from openai import OpenAI
164
+
165
+ client = OpenAI(
+     base_url="http://localhost:8080/v1",
+     api_key="codex-proxy-xxxxx"
+ )
+
+ response = client.chat.completions.create(
+     model="codex",
+     messages=[{"role": "user", "content": "Hello!"}],
+     stream=True
+ )
+
+ for chunk in response:
+     print(chunk.choices[0].delta.content or "", end="")
178
+ ```
179
+
180
+ ### OpenAI Node.js SDK
181
+
182
+ ```typescript
183
+ import OpenAI from "openai";
184
+
185
+ const client = new OpenAI({
+   baseURL: "http://localhost:8080/v1",
+   apiKey: "codex-proxy-xxxxx",
+ });
+
+ const stream = await client.chat.completions.create({
+   model: "codex",
+   messages: [{ role: "user", content: "Hello!" }],
+   stream: true,
+ });
+
+ for await (const chunk of stream) {
+   process.stdout.write(chunk.choices[0]?.delta?.content || "");
+ }
199
+ ```
200
+
201
+ ## ⚙️ Configuration
+
+ All configuration lives in `config/default.yaml`:
+
+ | Section | Key Settings | Description |
+ |---------|-------------|-------------|
+ | `server` | `host`, `port`, `proxy_api_key` | Listen address and API key |
+ | `api` | `base_url`, `timeout_seconds` | Upstream API URL and request timeout |
+ | `client_identity` | `app_version`, `build_number` | Codex Desktop version to impersonate |
+ | `model` | `default`, `default_reasoning_effort` | Default model and reasoning effort |
+ | `auth` | `rotation_strategy`, `rate_limit_backoff_seconds` | Rotation strategy and rate-limit backoff |
+
+ ### Environment Variable Overrides
+
+ | Variable | Overrides |
+ |----------|-----------|
+ | `PORT` | `server.port` |
+ | `CODEX_PLATFORM` | `client_identity.platform` |
+ | `CODEX_ARCH` | `client_identity.arch` |
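A minimal sketch of how such overrides can be layered on top of the YAML values. The field names follow the tables above; the merge helper itself is illustrative, not the proxy's actual loader:

```typescript
// Apply the documented environment overrides on top of loaded config.
interface Config {
  server: { host: string; port: number };
  client_identity: { platform: string; arch: string };
}

export function applyEnvOverrides(
  cfg: Config,
  env: Record<string, string | undefined>
): Config {
  return {
    ...cfg,
    server: {
      ...cfg.server,
      // PORT -> server.port
      port: env.PORT ? Number(env.PORT) : cfg.server.port,
    },
    client_identity: {
      ...cfg.client_identity,
      // CODEX_PLATFORM -> client_identity.platform
      platform: env.CODEX_PLATFORM ?? cfg.client_identity.platform,
      // CODEX_ARCH -> client_identity.arch
      arch: env.CODEX_ARCH ?? cfg.client_identity.arch,
    },
  };
}
```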
220
+
221
+ ## 📡 API Endpoints
+
+ | Endpoint | Method | Description |
+ |----------|--------|-------------|
+ | `/v1/chat/completions` | POST | Chat completions (core endpoint) |
+ | `/v1/models` | GET | List available models |
+ | `/health` | GET | Health check |
+ | `/auth/accounts` | GET | Account list and quota query |
+ | `/auth/login` | GET | OAuth login entry point |
+ | `/debug/fingerprint` | GET | Debug: view current impersonation headers |
231
+
232
+ ## 🔧 Commands
+
+ | Command | Description |
+ |---------|-------------|
+ | `npm run dev` | Start in dev mode (hot reload) |
+ | `npm run build` | Compile TypeScript to `dist/` |
+ | `npm start` | Run the compiled production build |
239
+
240
+ ## 📋 Requirements
+
+ - **Node.js** 18+
+ - **curl** — the system curl is enough to run; install [curl-impersonate](https://github.com/lexiforest/curl-impersonate) for full Chrome TLS impersonation
+ - **ChatGPT account** — a standard account is sufficient
245
+
246
+ ## ⚠️ Notes
+
+ - The Codex API is **stream-only**; when `stream: false` is set, the proxy streams internally and returns the assembled result as a single JSON response
+ - This project depends on Codex Desktop's public API; upstream version updates may change the interface
+ - Deploy on **Linux / macOS** for full TLS impersonation (curl-impersonate is not currently available on Windows)
251
+
252
+ ## 📄 License
+
+ This project is distributed under a **Non-Commercial** license:
+
+ - **Allowed**: personal learning, research, and self-hosted personal use
+ - **Prohibited**: any form of commercial use, including but not limited to selling, reselling, paid proxy services, or integration into commercial products
+
+ This project is not affiliated with OpenAI. Users assume all risk and must comply with OpenAI's Terms of Service.
 
 
 
 
 
 
 
260
 
261
+ ---
262
 
263
+ <div align="center">
264
+ <sub>Built with Hono + TypeScript | Powered by Codex Desktop API</sub>
265
+ </div>
README_EN.md ADDED
@@ -0,0 +1,265 @@
1
+ <div align="center">
2
+
3
+ <h1>Codex Proxy</h1>
4
+ <h3>Your Local Codex Coding Assistant Gateway</h3>
5
+ <p>Expose Codex Desktop's capabilities as a standard OpenAI API, seamlessly connecting any AI client.</p>
6
+
7
+ <p>
8
+ <img src="https://img.shields.io/badge/Runtime-Node.js_18+-339933?style=flat-square&logo=nodedotjs&logoColor=white" alt="Node.js">
9
+ <img src="https://img.shields.io/badge/Language-TypeScript-3178C6?style=flat-square&logo=typescript&logoColor=white" alt="TypeScript">
10
+ <img src="https://img.shields.io/badge/Framework-Hono-E36002?style=flat-square" alt="Hono">
11
+ <img src="https://img.shields.io/badge/License-Non--Commercial-red?style=flat-square" alt="License">
12
+ </p>
13
+
14
+ <p>
15
+ <a href="#-quick-start">Quick Start</a> •
16
+ <a href="#-features">Features</a> •
17
+ <a href="#-architecture">Architecture</a> •
18
+ <a href="#-client-setup">Client Setup</a> •
19
+ <a href="#-configuration">Configuration</a>
20
+ </p>
21
+
22
+ <p>
23
+ <a href="./README.md">简体中文</a> |
24
+ <strong>English</strong>
25
+ </p>
26
+
27
+ </div>
28
+
29
+ ---
30
+
31
+ **Codex Proxy** is a lightweight local gateway that translates the [Codex Desktop](https://openai.com/codex) Responses API into a standard OpenAI-compatible `/v1/chat/completions` endpoint. Use Codex coding models directly in Cursor, Continue, VS Code, or any OpenAI-compatible client.
32
+
33
+ Just a ChatGPT account and this proxy — your own personal AI coding assistant gateway, running locally.
34
+
35
+ ## 🚀 Quick Start
36
+
37
+ ```bash
38
+ # 1. Clone the repo
39
+ git clone https://github.com/icebear0828/codex-proxy.git
40
+ cd codex-proxy
41
+
42
+ # 2. Install dependencies
43
+ npm install
44
+
45
+ # 3. Start the proxy (dev mode with hot reload)
46
+ npm run dev
47
+
48
+ # 4. Open the dashboard and log in with your ChatGPT account
49
+ # http://localhost:8080
50
+
51
+ # 5. Test a request
52
+ curl http://localhost:8080/v1/chat/completions \
53
+ -H "Content-Type: application/json" \
54
+ -d '{
55
+ "model": "codex",
56
+ "messages": [{"role": "user", "content": "Hello!"}],
57
+ "stream": true
58
+ }'
59
+ ```
60
+
61
+ ## 🌟 Features
62
+
63
+ ### 1. 🔌 Full Protocol Compatibility
64
+ - Complete `/v1/chat/completions` and `/v1/models` endpoint support
65
+ - SSE streaming output, works with all OpenAI SDKs and clients
66
+ - Automatic bidirectional translation between Chat Completions and Codex Responses API
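The request direction of that translation can be pictured as follows. This is a deliberate simplification: the real Codex payload carries more fields than shown, and the names below are illustrative:

```typescript
// Sketch of the OpenAI -> Codex Responses direction only: system
// messages become "instructions", the rest become input items.
interface ChatMessage {
  role: string;
  content: string;
}
interface ChatRequest {
  model: string;
  messages: ChatMessage[];
  stream?: boolean;
}

export function toResponsesPayload(req: ChatRequest) {
  const system = req.messages.filter((m) => m.role === "system");
  const rest = req.messages.filter((m) => m.role !== "system");
  return {
    model: req.model,
    instructions: system.map((m) => m.content).join("\n"),
    input: rest.map((m) => ({ role: m.role, content: m.content })),
    stream: true, // the upstream is stream-only regardless of the client's flag
  };
}
```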
67
+
68
+ ### 2. 🔐 Account Management & Smart Rotation
69
+ - **OAuth PKCE login** — one-click browser auth, no manual token copying
70
+ - **Multi-account rotation** — `least_used` and `round_robin` scheduling strategies
71
+ - **Auto token refresh** — JWT renewed automatically before expiry
72
+ - **Real-time quota monitoring** — dashboard shows remaining usage per account
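The PKCE part of the login follows RFC 7636: generate a random verifier, derive its SHA-256 challenge, send the challenge with the authorize request, and present the verifier at token exchange. Generating the pair takes only a few lines (a sketch, not the proxy's actual code):

```typescript
import { createHash, randomBytes } from "node:crypto";

// RFC 7636 code_verifier / code_challenge (S256 method).
export function generatePkcePair(): { verifier: string; challenge: string } {
  // 32 random bytes -> 43-char base64url verifier (within the 43-128 range)
  const verifier = randomBytes(32).toString("base64url");
  // challenge = BASE64URL(SHA256(verifier)), sent with the authorize request
  const challenge = createHash("sha256").update(verifier).digest("base64url");
  return { verifier, challenge };
}
```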
73
+
74
+ ### 3. 🛡️ Anti-Detection & Protocol Impersonation
75
+ - **Chrome TLS fingerprint** — curl-impersonate replicates the full Chrome 136 TLS handshake
76
+ - **Desktop header replication** — `originator`, `User-Agent`, `sec-ch-*` headers in exact Codex Desktop order
77
+ - **Desktop context injection** — every request includes the Codex Desktop system prompt for full feature parity
78
+ - **Cookie persistence** — automatic Cloudflare cookie capture and replay
79
+ - **Timing jitter** — randomized delays on scheduled operations to eliminate mechanical patterns
80
+
81
+ ### 4. 🔄 Session & Version Management
82
+ - **Multi-turn conversations** — automatic `previous_response_id` for context continuity
83
+ - **Appcast version tracking** — polls Codex Desktop update feed, auto-syncs `app_version` and `build_number`
84
+ - **Web dashboard** — account management, usage monitoring, and status overview in one place
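Multi-turn chaining boils down to remembering the last response id per conversation and attaching it to the next request. A sketch with illustrative names:

```typescript
// Last response id seen for each conversation.
const lastResponse = new Map<string, string>();

// Fields to merge into the next upstream request for this conversation.
export function nextTurnFields(convId: string): { previous_response_id?: string } {
  const prev = lastResponse.get(convId);
  return prev ? { previous_response_id: prev } : {};
}

// Call after each completed response to update the chain.
export function recordResponse(convId: string, responseId: string): void {
  lastResponse.set(convId, responseId);
}
```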
85
+
86
+ ## 🏗️ Architecture
87
+
88
+ ```
89
+                      Codex Proxy
+ ┌───────────────────────────────────────────────────┐
+ │                                                   │
+ │   Client (Cursor / Continue / SDK)                │
+ │        │                                          │
+ │        │ POST /v1/chat/completions                │
+ │        ▼                                          │
+ │  ┌──────────┐   ┌───────────────┐   ┌──────────┐  │
+ │  │  Routes  │──▶│  Translation  │──▶│  Proxy   │  │
+ │  │  (Hono)  │   │ OpenAI→Codex  │   │ curl TLS │  │
+ │  └──────────┘   └───────────────┘   └────┬─────┘  │
+ │       ▲                                  │        │
+ │       │         ┌───────────────┐        │        │
+ │       └─────────│  Translation  │◀───────┘        │
+ │                 │ Codex→OpenAI  │   SSE stream    │
+ │                 └───────────────┘                 │
+ │                                                   │
+ │  ┌──────────┐  ┌───────────────┐  ┌─────────────┐ │
+ │  │   Auth   │  │  Fingerprint  │  │   Session   │ │
+ │  │ OAuth/JWT│  │  Headers/UA   │  │   Manager   │ │
+ │  └──────────┘  └───────────────┘  └─────────────┘ │
+ │                                                   │
+ └───────────────────────────────────────────────────┘
+                      │
+                      │  curl subprocess
+                      │  (Chrome TLS)
+                      ▼
+                 chatgpt.com
+        /backend-api/codex/responses
120
+ ```
121
+
122
+ ## 📦 Available Models
123
+
124
+ | Model ID | Alias | Description |
125
+ |----------|-------|-------------|
126
+ | `gpt-5.2-codex` | `codex` | Latest agentic coding model (default) |
127
+ | `gpt-5.1-codex-mini` | `codex-mini` | Lightweight, fast coding model |
128
+
129
+ > Models are automatically synced when new Codex Desktop versions are released.
130
+
131
+ ## 🔗 Client Setup
132
+
133
+ ### Cursor
134
+
135
+ Settings → Models → OpenAI API Base:
136
+ ```
137
+ http://localhost:8080/v1
138
+ ```
139
+
140
+ API Key (from the dashboard):
141
+ ```
142
+ codex-proxy-xxxxx
143
+ ```
144
+
145
+ ### Continue (VS Code)
146
+
147
+ `~/.continue/config.json`:
148
+ ```json
149
+ {
150
+ "models": [{
151
+ "title": "Codex",
152
+ "provider": "openai",
153
+ "model": "codex",
154
+ "apiBase": "http://localhost:8080/v1",
155
+ "apiKey": "codex-proxy-xxxxx"
156
+ }]
157
+ }
158
+ ```
159
+
160
+ ### OpenAI Python SDK
161
+
162
+ ```python
163
+ from openai import OpenAI
164
+
165
+ client = OpenAI(
+     base_url="http://localhost:8080/v1",
+     api_key="codex-proxy-xxxxx"
+ )
+
+ response = client.chat.completions.create(
+     model="codex",
+     messages=[{"role": "user", "content": "Hello!"}],
+     stream=True
+ )
+
+ for chunk in response:
+     print(chunk.choices[0].delta.content or "", end="")
178
+ ```
179
+
180
+ ### OpenAI Node.js SDK
181
+
182
+ ```typescript
183
+ import OpenAI from "openai";
184
+
185
+ const client = new OpenAI({
+   baseURL: "http://localhost:8080/v1",
+   apiKey: "codex-proxy-xxxxx",
+ });
+
+ const stream = await client.chat.completions.create({
+   model: "codex",
+   messages: [{ role: "user", content: "Hello!" }],
+   stream: true,
+ });
+
+ for await (const chunk of stream) {
+   process.stdout.write(chunk.choices[0]?.delta?.content || "");
+ }
199
+ ```
200
+
201
+ ## ⚙️ Configuration
202
+
203
+ All configuration is in `config/default.yaml`:
204
+
205
+ | Section | Key Settings | Description |
206
+ |---------|-------------|-------------|
207
+ | `server` | `host`, `port`, `proxy_api_key` | Listen address and API key |
208
+ | `api` | `base_url`, `timeout_seconds` | Upstream API URL and timeout |
209
+ | `client_identity` | `app_version`, `build_number` | Codex Desktop version to impersonate |
210
+ | `model` | `default`, `default_reasoning_effort` | Default model and reasoning effort |
211
+ | `auth` | `rotation_strategy`, `rate_limit_backoff_seconds` | Rotation strategy and rate limit backoff |
212
+
213
+ ### Environment Variable Overrides
214
+
215
+ | Variable | Overrides |
216
+ |----------|-----------|
217
+ | `PORT` | `server.port` |
218
+ | `CODEX_PLATFORM` | `client_identity.platform` |
219
+ | `CODEX_ARCH` | `client_identity.arch` |
220
+
221
+ ## 📡 API Endpoints
222
+
223
+ | Endpoint | Method | Description |
224
+ |----------|--------|-------------|
225
+ | `/v1/chat/completions` | POST | Chat completions (main endpoint) |
226
+ | `/v1/models` | GET | List available models |
227
+ | `/health` | GET | Health check |
228
+ | `/auth/accounts` | GET | Account list and quota |
229
+ | `/auth/login` | GET | OAuth login entry |
230
+ | `/debug/fingerprint` | GET | Debug: view current impersonation headers |
231
+
232
+ ## 🔧 Commands
233
+
234
+ | Command | Description |
235
+ |---------|-------------|
236
+ | `npm run dev` | Start dev server with hot reload |
237
+ | `npm run build` | Compile TypeScript to `dist/` |
238
+ | `npm start` | Run compiled production server |
239
+
240
+ ## 📋 Requirements
241
+
242
+ - **Node.js** 18+
243
+ - **curl** — system curl works out of the box; install [curl-impersonate](https://github.com/lexiforest/curl-impersonate) for full Chrome TLS fingerprinting
244
+ - **ChatGPT account** — standard account is sufficient
245
+
246
+ ## ⚠️ Notes
247
+
248
+ - The Codex API is **stream-only**. When `stream: false` is set, the proxy streams internally and returns the assembled response as a single JSON object.
249
+ - This project relies on Codex Desktop's public API. Upstream version updates may cause breaking changes.
250
+ - Deploy on **Linux / macOS** for full TLS impersonation. On Windows, curl-impersonate is not available and the proxy falls back to system curl.
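The first note amounts to: collect every streamed content delta, then wrap the concatenation in a single `chat.completion` object. A minimal sketch of the assembly step, with illustrative names and a reduced response shape:

```typescript
// Assemble a non-streaming response from collected SSE content deltas.
export function assembleFromDeltas(deltas: string[]): {
  object: string;
  choices: { message: { role: string; content: string } }[];
} {
  const content = deltas.join("");
  return {
    object: "chat.completion",
    choices: [{ message: { role: "assistant", content } }],
  };
}
```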
251
+
252
+ ## 📄 License
253
+
254
+ This project is licensed under **Non-Commercial** terms:
255
+
256
+ - **Allowed**: Personal learning, research, self-hosted deployment
257
+ - **Prohibited**: Any commercial use, including but not limited to selling, reselling, paid proxy services, or integration into commercial products
258
+
259
+ This project is not affiliated with OpenAI. Users assume all risks and must comply with OpenAI's Terms of Service.
260
+
261
+ ---
262
+
263
+ <div align="center">
264
+ <sub>Built with Hono + TypeScript | Powered by Codex Desktop API</sub>
265
+ </div>