maltose1 committed
Commit 733a08d · verified · 1 Parent(s): aee8847

Upload 9 files

Files changed (9)
  1. Dockerfile +54 -0
  2. README.md +80 -11
  3. getCaptcha.py +29 -0
  4. main.py +688 -0
  5. models.json +38 -0
  6. requirements.txt +4 -0
  7. resigner.py +135 -0
  8. start.sh +25 -0
  9. tenbin.json +5 -0
Dockerfile ADDED
@@ -0,0 +1,54 @@
+ # Use the official Python image as the parent image
+ FROM python:3.9-slim
+
+ # Set the working directory inside the container
+ WORKDIR /app
+
+ # Install system dependencies
+ # git is needed to clone the solver repository
+ # the other libraries are required by the headless browser (chromium)
+ RUN apt-get update && apt-get install -y --no-install-recommends \
+     git \
+     libnss3 \
+     libnspr4 \
+     libdbus-1-3 \
+     libatk1.0-0 \
+     libatk-bridge2.0-0 \
+     libcups2 \
+     libdrm2 \
+     libxkbcommon0 \
+     libxcomposite1 \
+     libxdamage1 \
+     libxfixes3 \
+     libxrandr2 \
+     libgbm1 \
+     libpango-1.0-0 \
+     libcairo2 \
+     libasound2 \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Clone the Turnstile-Solver repository
+ RUN git clone https://github.com/Theyka/Turnstile-Solver.git /app/Turnstile-Solver
+
+ # Install dependencies for Turnstile-Solver
+ RUN pip install --no-cache-dir -r /app/Turnstile-Solver/requirements.txt
+
+ # Install the browser for the solver
+ # This downloads and patches chromium
+ RUN python -m patchright install chromium
+
+ # Copy the rest of the application code into the container
+ COPY . .
+
+ # Install the main application's dependencies
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Make the start script executable
+ RUN chmod +x ./start.sh
+
+ # Expose the port the application will run on
+ # Hugging Face Spaces uses this port to route traffic to the app.
+ EXPOSE 7860
+
+ # Specify the command to run when the container starts
+ CMD ["/app/start.sh"]
README.md CHANGED
@@ -1,11 +1,80 @@
- ---
- title: Tianping
- emoji: 🐠
- colorFrom: purple
- colorTo: indigo
- sdk: docker
- pinned: false
- license: mit
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ ---
+ title: Tenbin OpenAI API Adapter
+ emoji: 🚀
+ colorFrom: blue
+ colorTo: green
+ sdk: docker
+ app_port: 7860
+ pinned: false
+ ---
+
+ # Tenbin OpenAI API Adapter
+
+ This project converts the service interface of [Tenbin.ai](https://tenbin.ai/) (Oshiete AI) into a format fully compatible with the OpenAI API.
+
+ It runs the FastAPI core service and a [Turnstile-Solver](https://github.com/Theyka/Turnstile-Solver) instance together in one Docker container, providing fully automatic Tenbin account-pool management, request translation, streaming responses, and CAPTCHA handling.
+
+ ## 🚀 Using this Space
+
+ Once deployed, you can use this service from any compatible client, exactly like the official OpenAI API.
+
+ - **API Base URL**: `https://<your-space-name>.hf.space/v1`
+ - **API Key**: any of the API keys you configured in `Secrets`.
+ - **Models**: see the list of available models in `models.json`.
+
+ ### cURL example
+
+ ```bash
+ curl "https://<your-space-name>.hf.space/v1/chat/completions" \
+   -H "Content-Type: application/json" \
+   -H "Authorization: Bearer YOUR_API_KEY" \
+   -d '{
+     "model": "Claude-3.7-Sonnet",
+     "messages": [
+       {
+         "role": "user",
+         "content": "Hello, please introduce yourself"
+       }
+     ],
+     "stream": false
+   }'
+ ```
+ > **Note**: replace `<your-space-name>` and `YOUR_API_KEY` with your own values. Model names are case-sensitive and must match the keys in `models.json`.
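The same request can be issued from Python. Below is a minimal sketch using `requests`; the `build_chat_request` helper, the Space URL, the API key, and the model name are illustrative placeholders, not part of this project:

```python
def build_chat_request(base_url: str, api_key: str, model: str,
                       user_content: str, stream: bool = False):
    """Assemble the URL, headers, and JSON body for a chat completion call."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
        "stream": stream,
    }
    return url, headers, body

url, headers, body = build_chat_request(
    "https://your-space-name.hf.space/v1", "sk-mykey-1",
    "Claude-3.7-Sonnet", "Hello, please introduce yourself")
print(url)  # https://your-space-name.hf.space/v1/chat/completions

# To actually send it (requires a deployed Space):
#   import requests
#   resp = requests.post(url, headers=headers, json=body, timeout=120)
#   print(resp.json()["choices"][0]["message"]["content"])
```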
+
+ ## ⚙️ Configuration (Secrets)
+
+ For security, all sensitive values should be configured through Hugging Face **Secrets**. Add the following on your Space's **Settings -> Secrets** page:
+
+ 1. **`SESSION_IDS`** (required)
+    - **Description**: the list of `session_id` values from your Tenbin/Oshiete AI accounts.
+    - **Format**: comma-separated.
+    - **Example**: `id_abc123,id_def456,id_ghi789`
+
+ 2. **`API_KEYS`** (required)
+    - **Description**: the client API keys used to access this proxy service.
+    - **Format**: comma-separated.
+    - **Example**: `sk-mykey-1,sk-mykey-2`
+
+ 3. **`DEBUG_MODE`** (optional)
+    - **Description**: set to `true` to enable verbose debug logging.
+    - **Format**: `true` or `false`.
+    - **Example**: `true`
+
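Both comma-separated secrets are parsed the same way at startup. A standalone sketch of that parsing (the `parse_csv_env` helper is illustrative; the filter on empty items is a small hardening not present in the server code):

```python
import os

def parse_csv_env(name: str) -> list:
    """Split a comma-separated environment variable into trimmed, non-empty values."""
    raw = os.environ.get(name, "")
    return [item.strip() for item in raw.split(",") if item.strip()]

# Simulate the two required Secrets.
os.environ["SESSION_IDS"] = "id_abc123, id_def456,id_ghi789"
os.environ["API_KEYS"] = "sk-mykey-1,sk-mykey-2"

session_ids = parse_csv_env("SESSION_IDS")
api_keys = parse_csv_env("API_KEYS")
print(session_ids)  # ['id_abc123', 'id_def456', 'id_ghi789']
print(api_keys)     # ['sk-mykey-1', 'sk-mykey-2']
```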
+ ## 📝 File overview
+
+ - `main.py`: the core FastAPI application.
+ - `getCaptcha.py`: interacts with the Turnstile CAPTCHA solver.
+ - `Dockerfile`: the blueprint for building the Docker image.
+ - `start.sh`: the script that starts all services inside the container.
+ - `requirements.txt`: Python dependencies.
+ - `models.json`: the model-name mapping table.
+ - `resigner.py`: (local tool) a script for automatically registering new `session_id`s.
+
+ ## Practice notes
+ No design write-up this time; I'm in a hurry and want everyone to be able to use this right away!
+
+ Passing Turnstile for free. I'm on Windows; if you are too, you can install Turnstile-Solver with the steps below. On other systems, refer to the original repo, Turnstile-Solver.
+ 1. Run `git clone https://github.com/Theyka/Turnstile-Solver.git`.
+ 2. Inside the folder (Python 3.8+ required), run `python -m venv venv`, activate the environment with `venv\Scripts\activate`, then install dependencies with `pip install -r requirements.txt`.
+ 3. I chose to install chromium for browser emulation; for other browsers see the original GitHub repo: `python -m patchright install chromium`.
+ 4. Start the service with `python api_solver.py`; it listens on port 5000 by default.
getCaptcha.py ADDED
@@ -0,0 +1,29 @@
+ import requests
+ import time
+ import os
+
+ TURNSTILE_SOLVER_URL = os.environ.get("TURNSTILE_SOLVER_URL", "http://127.0.0.1:5000")
+
+ def getTaskId():
+     url = f"{TURNSTILE_SOLVER_URL}/turnstile?url=https://tenbin.ai/workspace&sitekey=0x4AAAAAABGR2exxRproizri&action=issue_execution_token"
+
+     response = requests.get(url, timeout=30)  # timeout avoids hanging on a dead solver
+     response.raise_for_status()
+     return response.json()['task_id']
+
+ def getCaptcha(task_id):
+
+     url = f"{TURNSTILE_SOLVER_URL}/result?id={task_id}"
+
+     while True:
+         try:
+             response = requests.get(url, timeout=30)
+             response.raise_for_status()
+             captcha = response.json().get('value', None)
+             if captcha:
+                 return captcha
+             else:
+                 time.sleep(1)
+         except Exception as e:
+             print(e)
+             time.sleep(1)
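The solver interaction above is a plain poll-until-ready loop. The same pattern can be sketched with the HTTP fetch injected, so it can be exercised without a running solver; `poll_until_value` and the bounded `max_attempts` are illustrative additions, not part of the project (the real `getCaptcha` loops forever):

```python
import time

def poll_until_value(fetch, interval: float = 1.0, max_attempts: int = 10):
    """Call fetch() repeatedly until it returns a truthy value, sleeping between attempts."""
    for _ in range(max_attempts):
        try:
            value = fetch()
            if value:
                return value
        except Exception as e:
            print(e)  # transient errors are logged and retried, as in getCaptcha()
        time.sleep(interval)
    raise TimeoutError("solver did not return a token in time")

# Simulate a solver that needs three polls before the token is ready.
responses = iter([None, None, "turnstile-token-xyz"])
token = poll_until_value(lambda: next(responses), interval=0.0)
print(token)  # turnstile-token-xyz
```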
main.py ADDED
@@ -0,0 +1,688 @@
+ import json
+ import os
+ import time
+ import uuid
+ import threading
+ from typing import Any, Dict, List, Optional, TypedDict, Union
+
+ import requests
+ import websocket
+ from fastapi import FastAPI, HTTPException, Depends, Query
+ from fastapi.responses import StreamingResponse
+ from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
+ from pydantic import BaseModel, Field
+
+ from getCaptcha import getCaptcha, getTaskId
+
+
+ # Tenbin Account Management
+ class TenbinAccount(TypedDict):
+     session_id: str
+     is_valid: bool
+     last_used: float
+     error_count: int
+
+
+ # Global variables
+ VALID_CLIENT_KEYS: set = set()
+ TENBIN_ACCOUNTS: List[TenbinAccount] = []
+ TENBIN_MODELS: Dict[str, str] = {}  # Model mapping table: key is the model name, value is the internal model ID
+ account_rotation_lock = threading.Lock()
+ MAX_ERROR_COUNT = 3
+ ERROR_COOLDOWN = 300  # 5 minutes cooldown for accounts with errors
+ DEBUG_MODE = os.environ.get("DEBUG_MODE", "false").lower() == "true"
+ REQUEST_TIMEOUT = 120.0  # Request timeout in seconds
+
+
+ # Pydantic Models
+ class ChatMessage(BaseModel):
+     role: str
+     content: Union[str, List[Dict[str, Any]]]
+     reasoning_content: Optional[str] = None
+
+
+ class ChatCompletionRequest(BaseModel):
+     model: str
+     messages: List[ChatMessage]
+     stream: bool = True
+     temperature: Optional[float] = None
+     max_tokens: Optional[int] = None
+     top_p: Optional[float] = None
+     raw_response: bool = False  # Whether to return the raw response
+
+
+ class ModelInfo(BaseModel):
+     id: str
+     object: str = "model"
+     created: int
+     owned_by: str = "tenbin"
+
+
+ class ModelList(BaseModel):
+     object: str = "list"
+     data: List[ModelInfo]
+
+
+ class ChatCompletionChoice(BaseModel):
+     message: ChatMessage
+     index: int = 0
+     finish_reason: str = "stop"
+
+
+ class ChatCompletionResponse(BaseModel):
+     id: str = Field(default_factory=lambda: f"chatcmpl-{uuid.uuid4().hex}")
+     object: str = "chat.completion"
+     created: int = Field(default_factory=lambda: int(time.time()))
+     model: str
+     choices: List[ChatCompletionChoice]
+     usage: Dict[str, int] = Field(
+         default_factory=lambda: {
+             "prompt_tokens": 0,
+             "completion_tokens": 0,
+             "total_tokens": 0,
+         }
+     )
+
+
+ class StreamChoice(BaseModel):
+     delta: Dict[str, Any] = Field(default_factory=dict)
+     index: int = 0
+     finish_reason: Optional[str] = None
+
+
+ class StreamResponse(BaseModel):
+     id: str = Field(default_factory=lambda: f"chatcmpl-{uuid.uuid4().hex}")
+     object: str = "chat.completion.chunk"
+     created: int = Field(default_factory=lambda: int(time.time()))
+     model: str
+     choices: List[StreamChoice]
+
+
+ # FastAPI App
102
+ app = FastAPI(title="Tenbin OpenAI API Adapter")
103
+ security = HTTPBearer(auto_error=False)
104
+
105
+
106
+ def log_debug(message: str):
107
+ """Debug日志函数"""
108
+ if DEBUG_MODE:
109
+ print(f"[DEBUG] {message}")
110
+
111
+
112
+ def load_client_api_keys():
113
+ """Load client API keys from environment variable (comma-separated) or client_api_keys.json"""
114
+ global VALID_CLIENT_KEYS
115
+ keys_str = os.environ.get("API_KEYS")
116
+ try:
117
+ if keys_str:
118
+ print("Loading client API keys from API_KEYS environment variable.")
119
+ keys = [key.strip() for key in keys_str.split(',')]
120
+ else:
121
+ print("Loading client API keys from file: client_api_keys.json")
122
+ with open("client_api_keys.json", "r", encoding="utf-8") as f:
123
+ keys = json.load(f)
124
+
125
+ VALID_CLIENT_KEYS = set(keys) if isinstance(keys, list) else set()
126
+ print(f"Successfully loaded {len(VALID_CLIENT_KEYS)} client API keys.")
127
+ except FileNotFoundError:
128
+ print("Error: client_api_keys.json not found and API_KEYS not set. Client authentication will fail.")
129
+ VALID_CLIENT_KEYS = set()
130
+ except Exception as e:
131
+ print(f"Error loading client API keys: {e}")
132
+ VALID_CLIENT_KEYS = set()
133
+
134
+
135
+ def load_tenbin_accounts():
136
+ """Load Tenbin accounts from environment variable (comma-separated session_ids) or tenbin.json"""
137
+ global TENBIN_ACCOUNTS
138
+ TENBIN_ACCOUNTS = []
139
+ session_ids_str = os.environ.get("SESSION_IDS")
140
+ try:
141
+ accounts_to_process = []
142
+ if session_ids_str:
143
+ print("Loading Tenbin accounts from SESSION_IDS environment variable.")
144
+ session_ids = [sid.strip() for sid in session_ids_str.split(',')]
145
+ accounts_to_process = [{"session_id": sid} for sid in session_ids]
146
+ else:
147
+ print("Loading Tenbin accounts from file: tenbin.json")
148
+ with open("tenbin.json", "r", encoding="utf-8") as f:
149
+ accounts_to_process = json.load(f)
150
+
151
+ if not isinstance(accounts_to_process, list):
152
+ print("Warning: Account data should be a list of objects.")
153
+ return
154
+
155
+ for acc in accounts_to_process:
156
+ session_id = acc.get("session_id")
157
+ if session_id:
158
+ TENBIN_ACCOUNTS.append({
159
+ "session_id": session_id,
160
+ "is_valid": True,
161
+ "last_used": 0,
162
+ "error_count": 0
163
+ })
164
+ print(f"Successfully loaded {len(TENBIN_ACCOUNTS)} Tenbin accounts.")
165
+ except FileNotFoundError:
166
+ print("Error: tenbin.json not found and SESSION_IDS not set. API calls will fail.")
167
+ except Exception as e:
168
+ print(f"Error loading tenbin.json: {e}")
169
+
170
+
171
+ def load_tenbin_models():
172
+ """Load Tenbin models from models.json"""
173
+ global TENBIN_MODELS
174
+ try:
175
+ with open("models.json", "r", encoding="utf-8") as f:
176
+ models_data = json.load(f)
177
+ if isinstance(models_data, dict):
178
+ TENBIN_MODELS = models_data
179
+ print(f"Successfully loaded {len(TENBIN_MODELS)} models.")
180
+ else:
181
+ print("Warning: models.json should contain a dictionary of model mappings.")
182
+ TENBIN_MODELS = {}
183
+ except FileNotFoundError:
184
+ print("Error: models.json not found. Model list will be empty.")
185
+ TENBIN_MODELS = {}
186
+ except Exception as e:
187
+ print(f"Error loading models.json: {e}")
188
+ TENBIN_MODELS = {}
189
+
190
+
191
+ def get_best_tenbin_account() -> Optional[TenbinAccount]:
192
+ """Get the best available Tenbin account using a smart selection algorithm."""
193
+ with account_rotation_lock:
194
+ now = time.time()
195
+ valid_accounts = [
196
+ acc for acc in TENBIN_ACCOUNTS
197
+ if acc["is_valid"] and (
198
+ acc["error_count"] < MAX_ERROR_COUNT or
199
+ now - acc["last_used"] > ERROR_COOLDOWN
200
+ )
201
+ ]
202
+
203
+ if not valid_accounts:
204
+ return None
205
+
206
+ # Reset error count for accounts that have been in cooldown
207
+ for acc in valid_accounts:
208
+ if acc["error_count"] >= MAX_ERROR_COUNT and now - acc["last_used"] > ERROR_COOLDOWN:
209
+ acc["error_count"] = 0
210
+
211
+ # Sort by last used (oldest first) and error count (lowest first)
212
+ valid_accounts.sort(key=lambda x: (x["last_used"], x["error_count"]))
213
+ account = valid_accounts[0]
214
+ account["last_used"] = now
215
+ return account
216
+
217
+
218
+ def build_tenbin_prompt(messages: List[ChatMessage]) -> str:
219
+ """将 OpenAI 格式的消息列表转换为 Tenbin 格式的单个字符串"""
220
+ prompt = ""
221
+ for msg in messages:
222
+ role = msg.role
223
+ content = msg.content
224
+ if isinstance(content, list):
225
+ # 简单处理多模态内容,只提取文本部分
226
+ content = " ".join([
227
+ item.get("text", "")
228
+ for item in content
229
+ if item.get("type") == "text"
230
+ ])
231
+
232
+ # 添加到提示中
233
+ if role == "system":
234
+ # 系统消息作为 Human 消息的前缀
235
+ prompt += f"\n\nHuman: <system>{content}</system>"
236
+ elif role == "user":
237
+ prompt += f"\n\nHuman: {content}"
238
+ elif role == "assistant":
239
+ prompt += f"\n\nAssistant: {content}"
240
+ # 忽略其他角色
241
+
242
+ # 添加最后的 "Assistant:" 提示
243
+ prompt += "\n\nAssistant:"
244
+ return prompt
245
+
246
+
+ async def authenticate_client(
+     auth: Optional[HTTPAuthorizationCredentials] = Depends(security),
+ ):
+     """Authenticate client based on API key in Authorization header"""
+     if not VALID_CLIENT_KEYS:
+         raise HTTPException(
+             status_code=503,
+             detail="Service unavailable: Client API keys not configured on server.",
+         )
+
+     if not auth or not auth.credentials:
+         raise HTTPException(
+             status_code=401,
+             detail="API key required in Authorization header.",
+             headers={"WWW-Authenticate": "Bearer"},
+         )
+
+     if auth.credentials not in VALID_CLIENT_KEYS:
+         raise HTTPException(status_code=403, detail="Invalid client API key.")
+
+
+ @app.on_event("startup")
+ async def startup():
+     """Initialize configuration on application startup."""
+     print("Starting Tenbin OpenAI API Adapter server...")
+     load_client_api_keys()
+     load_tenbin_accounts()
+     load_tenbin_models()
+     print("Server initialization completed.")
+
+
+ def get_models_list_response() -> ModelList:
+     """Helper to construct ModelList response from cached models."""
+     model_infos = [
+         ModelInfo(
+             id=model_id,
+             created=int(time.time()),
+             owned_by="tenbin"
+         )
+         for model_id in TENBIN_MODELS.keys()
+     ]
+     return ModelList(data=model_infos)
+
+
+ @app.get("/v1/models", response_model=ModelList)
+ async def list_v1_models(_: None = Depends(authenticate_client)):
+     """List available models - authenticated"""
+     return get_models_list_response()
+
+
+ @app.get("/models", response_model=ModelList)
+ async def list_models_no_auth():
+     """List available models without authentication - for client compatibility"""
+     return get_models_list_response()
+
+
+ @app.get("/debug")
+ async def toggle_debug(enable: bool = Query(None)):
+     """Toggle debug mode."""
+     global DEBUG_MODE
+     if enable is not None:
+         DEBUG_MODE = enable
+     return {"debug_mode": DEBUG_MODE}
+
+
+ @app.post("/v1/chat/completions")
+ async def chat_completions(
+     request: ChatCompletionRequest, _: None = Depends(authenticate_client)
+ ):
+     """Create a chat completion via the Tenbin API."""
+     # Check that the requested model exists
+     if request.model not in TENBIN_MODELS:
+         raise HTTPException(status_code=404, detail=f"Model '{request.model}' not found.")
+
+     # Resolve the internal model ID
+     internal_model_id = TENBIN_MODELS[request.model]
+
+     if not request.messages:
+         raise HTTPException(status_code=400, detail="No messages provided in the request.")
+
+     log_debug(f"Processing request for model: {request.model} (internal ID: {internal_model_id})")
+
+     # Build the Tenbin-format prompt
+     prompt = build_tenbin_prompt(request.messages)
+     log_debug(f"Built prompt with length: {len(prompt)}")
+
+     # Try every account in turn
+     for attempt in range(len(TENBIN_ACCOUNTS)):
+         account = get_best_tenbin_account()
+         if not account:
+             raise HTTPException(
+                 status_code=503,
+                 detail="No valid Tenbin accounts available."
+             )
+
+         session_id = account["session_id"]
+         log_debug(f"Using account with session_id ending in ...{session_id[-4:]}")
+
+         try:
+             # Obtain an execution token
+             execution_token = get_tenbin_execution_token(internal_model_id, session_id)
+
+             if request.stream:
+                 log_debug("Returning stream response")
+                 return StreamingResponse(
+                     tenbin_stream_generator(request.model, prompt, session_id, execution_token),
+                     media_type="text/event-stream",
+                     headers={
+                         "Cache-Control": "no-cache",
+                         "Connection": "keep-alive",
+                         "X-Accel-Buffering": "no",
+                     },
+                 )
+             else:
+                 log_debug("Building non-stream response")
+                 return build_tenbin_non_stream_response(request.model, prompt, session_id, execution_token)
+
+         except Exception as e:
+             error_detail = str(e)
+             log_debug(f"Tenbin API error: {error_detail}")
+
+             with account_rotation_lock:
+                 # Increment the error count
+                 account["error_count"] += 1
+                 log_debug(f"Account ...{session_id[-4:]} error count: {account['error_count']}")
+
+                 # If the error looks like an authentication problem, mark the account invalid
+                 if "authentication" in error_detail.lower() or "unauthorized" in error_detail.lower():
+                     account["is_valid"] = False
+                     log_debug(f"Account ...{session_id[-4:]} marked as invalid due to auth error.")
+
+     # All attempts failed
+     if request.stream:
+         return StreamingResponse(
+             error_stream_generator("All attempts to contact Tenbin API failed.", 503),
+             media_type="text/event-stream",
+             status_code=503,
+         )
+     else:
+         raise HTTPException(status_code=503, detail="All attempts to contact Tenbin API failed.")
+
+
+ def get_tenbin_execution_token(model: str, session_id: str) -> str:
+     """Obtain a Tenbin execution token."""
+     try:
+         task_id = getTaskId()
+         captcha = getCaptcha(task_id)
+         url = "https://graphql.tenbin.ai/graphql"
+
+         payload = {
+             "operationName": "IssueExecutionTokensMultiple",
+             "variables": {
+                 "turnstileToken": captcha,
+                 "models": [model],
+             },
+             "query": "query IssueExecutionTokensMultiple($turnstileToken: String!, $models: [ChatModel!]!) {\n executionTokens: issueExecutionTokensMultiple(\n turnstileToken: $turnstileToken\n models: $models\n )\n}",
+         }
+
+         headers = {
+             "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36",
+             "Accept-Encoding": "gzip, deflate, br, zstd",
+             "Content-Type": "application/json",
+             "Cookie": f"sessionId={session_id}",
+         }
+
+         log_debug(f"Getting execution token for model: {model}")
+         response = requests.post(url, data=json.dumps(payload), headers=headers, timeout=REQUEST_TIMEOUT)
+         response.raise_for_status()
+
+         execution_token = response.json()["data"]["executionTokens"][0]
+         log_debug(f"Got execution token: {execution_token[:10]}...")
+         return execution_token
+     except Exception as e:
+         log_debug(f"Error getting execution token: {e}")
+         raise
+
+ def tenbin_stream_generator(model: str, prompt: str, session_id: str, execution_token: str):
+     """Tenbin WebSocket streaming response generator."""
+     stream_id = f"chatcmpl-{uuid.uuid4().hex}"
+     created_time = int(time.time())
+
+     # Send the initial role delta
+     yield f"data: {StreamResponse(id=stream_id, created=created_time, model=model, choices=[StreamChoice(delta={'role': 'assistant'})]).json()}\n\n"
+
+     # Connect to the WebSocket
+     url = "wss://graphql.tenbin.ai/graphql"
+     headers = {
+         "Host": "graphql.tenbin.ai",
+         "Connection": "Upgrade",
+         "Pragma": "no-cache",
+         "Cache-Control": "no-cache",
+         "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36 Edg/137.0.0.0",
+         "Upgrade": "websocket",
+         "Origin": "https://tenbin.ai",
+         "Sec-WebSocket-Version": "13",
+         "Accept-Encoding": "gzip, deflate, br, zstd",
+         "Accept-Language": "zh-CN,zh;q=0.9",
+         "Cookie": f"sessionId={session_id}",
+         "Sec-WebSocket-Key": "I/tTy5psJkboWYQfCypjVA==",
+         "Sec-WebSocket-Extensions": "permessage-deflate; client_max_window_bits",
+         "Sec-WebSocket-Protocol": "graphql-transport-ws",
+     }
+
+     ws = None
+     try:
+         log_debug("Connecting to WebSocket...")
+         ws = websocket.create_connection(url, header=headers)
+         ws.send(json.dumps({"type": "connection_init"}))
+         init_response = ws.recv()
+         log_debug(f"WebSocket init response: {init_response}")
+
+         # Send the subscription request
+         payload = {
+             "id": str(uuid.uuid4()),
+             "type": "subscribe",
+             "payload": {
+                 "variables": {
+                     "prompt": prompt,
+                     "executionToken": execution_token,
+                     "stateToken": "",
+                 },
+                 "extensions": {},
+                 "operationName": "StartConversation",
+                 "query": "subscription StartConversation($executionToken: String!, $itemId: String, $itemDraftId: String, $systemPrompt: String, $prompt: String, $stateToken: String, $variables: [ConversationVariableInput!], $itemCallOption: ItemCallOption, $fileKey: String, $fileUploadIds: [String!], $selectedToolsByUser: [ToolType!]) {\n startConversation(\n executionToken: $executionToken\n itemId: $itemId\n itemDraftId: $itemDraftId\n systemPrompt: $systemPrompt\n prompt: $prompt\n stateToken: $stateToken\n variables: $variables\n itemCallOption: $itemCallOption\n fileKey: $fileKey\n fileUploadIds: $fileUploadIds\n selectedToolsByUser: $selectedToolsByUser\n ) {\n ...DeltaConversation\n __typename\n }\n}\n\nfragment DeltaConversation on AIConversationStreamResult {\n seq\n deltaToken\n isFinished\n newStateToken\n error\n fileUploadIds\n toolResult {\n id\n title\n url\n faviconUrl\n summary\n __typename\n }\n action\n activity\n toolError\n __typename\n}",
+             },
+         }
+
+         log_debug("Sending subscription request...")
+         ws.send(json.dumps(payload))
+
+         # Process responses
+         accumulated_thinking = ""
+         thinking_mode = False
+         thinking_separator = "\n\n---\n\n"
+         is_thinking_model = model == "Claude-3.7-Sonnet-Extended"
+
+         while True:
+             try:
+                 msg = ws.recv()
+                 log_debug(f"Received message: {msg[:100]}..." if len(msg) > 100 else msg)
+
+                 if msg.endswith('"type":"complete"}'):
+                     log_debug("Received complete message")
+                     break
+
+                 try:
+                     data = json.loads(msg)
+                     if data.get("type") != "next":
+                         continue
+
+                     payload_data = data.get("payload", {}).get("data", {})
+                     conversation = payload_data.get("startConversation", {})
+
+                     delta_token = conversation.get("deltaToken", "")
+                     is_finished = conversation.get("isFinished", False)
+
+                     if delta_token:
+                         if is_thinking_model:
+                             # Check for the thinking/answer separator
+                             if thinking_separator in accumulated_thinking + delta_token:
+                                 # Separator found: switch to answer mode
+                                 if not thinking_mode:
+                                     # No thinking content sent yet: emit the accumulated thinking first
+                                     parts = (accumulated_thinking + delta_token).split(thinking_separator, 1)
+                                     thinking_content = parts[0]
+                                     answer_content = parts[1] if len(parts) > 1 else ""
+
+                                     if thinking_content:
+                                         yield f"data: {StreamResponse(id=stream_id, created=created_time, model=model, choices=[StreamChoice(delta={'reasoning_content': thinking_content})]).json()}\n\n"
+
+                                     if answer_content:
+                                         yield f"data: {StreamResponse(id=stream_id, created=created_time, model=model, choices=[StreamChoice(delta={'content': answer_content})]).json()}\n\n"
+
+                                     thinking_mode = True
+                                     accumulated_thinking = ""
+                                 else:
+                                     # Already in answer mode: send the content directly
+                                     yield f"data: {StreamResponse(id=stream_id, created=created_time, model=model, choices=[StreamChoice(delta={'content': delta_token})]).json()}\n\n"
+                             else:
+                                 # No separator found
+                                 if thinking_mode:
+                                     # Already in answer mode: send the content directly
+                                     yield f"data: {StreamResponse(id=stream_id, created=created_time, model=model, choices=[StreamChoice(delta={'content': delta_token})]).json()}\n\n"
+                                 else:
+                                     # Keep accumulating thinking content
+                                     accumulated_thinking += delta_token
+                         else:
+                             # Not a thinking model: send the content directly
+                             yield f"data: {StreamResponse(id=stream_id, created=created_time, model=model, choices=[StreamChoice(delta={'content': delta_token})]).json()}\n\n"
+
+                     if is_finished:
+                         # Flush any thinking content that has not been sent yet
+                         if is_thinking_model and not thinking_mode and accumulated_thinking:
+                             yield f"data: {StreamResponse(id=stream_id, created=created_time, model=model, choices=[StreamChoice(delta={'reasoning_content': accumulated_thinking})]).json()}\n\n"
+
+                         # Send the completion signal
+                         log_debug("Stream finished")
+                         yield f"data: {StreamResponse(id=stream_id, created=created_time, model=model, choices=[StreamChoice(delta={}, finish_reason='stop')]).json()}\n\n"
+                         yield "data: [DONE]\n\n"
+                         break
+
+                 except json.JSONDecodeError as e:
+                     log_debug(f"JSON decode error: {e}")
+                     continue
+
+             except websocket.WebSocketConnectionClosedException:
+                 log_debug("WebSocket connection closed")
+                 break
+
+             except Exception as e:
+                 log_debug(f"Error processing message: {e}")
+                 yield f"data: {json.dumps({'error': str(e)})}\n\n"
+                 break
+
+     except Exception as e:
+         log_debug(f"WebSocket error: {e}")
+         yield f"data: {json.dumps({'error': str(e)})}\n\n"
+         yield "data: [DONE]\n\n"
+
+     finally:
+         if ws:
+             try:
+                 ws.close()
+                 log_debug("WebSocket connection closed")
+             except:
+                 pass
+
+
+ def build_tenbin_non_stream_response(model: str, prompt: str, session_id: str, execution_token: str) -> ChatCompletionResponse:
+     """Build a non-streaming response."""
+     full_content = ""
+     full_reasoning_content = None
+
+     # Reuse the streaming generator, but accumulate all content
+     for chunk in tenbin_stream_generator(model, prompt, session_id, execution_token):
+         if not chunk.startswith("data: ") or chunk.strip() == "data: [DONE]":
+             continue
+
+         try:
+             data = json.loads(chunk[6:])  # strip the "data: " prefix
+             if "choices" not in data:
+                 continue
+
+             delta = data["choices"][0].get("delta", {})
+
+             if "content" in delta and delta["content"]:
+                 full_content += delta["content"]
+
+             if "reasoning_content" in delta and delta["reasoning_content"]:
+                 if full_reasoning_content is None:
+                     full_reasoning_content = ""
+                 full_reasoning_content += delta["reasoning_content"]
+
+         except json.JSONDecodeError:
+             continue
+
+     return ChatCompletionResponse(
+         model=model,
+         choices=[
+             ChatCompletionChoice(
+                 message=ChatMessage(
+                     role="assistant",
+                     content=full_content,
+                     reasoning_content=full_reasoning_content,
+                 )
+             )
+         ],
+     )
+
+
+ async def error_stream_generator(error_detail: str, status_code: int):
+     """Generate error stream response"""
+     yield f'data: {json.dumps({"error": {"message": error_detail, "type": "tenbin_api_error", "code": status_code}})}\n\n'
+     yield "data: [DONE]\n\n"
+
+
+ if __name__ == "__main__":
+     import uvicorn
+
+     # Read the port from the environment, defaulting to 8000
+     port = int(os.environ.get("PORT", 8000))
+
+     # Enable debug mode via environment variable
+     if os.environ.get("DEBUG_MODE", "").lower() == "true":
+         DEBUG_MODE = True
+         print("Debug mode enabled via environment variable")
+
+     if not os.path.exists("tenbin.json") and not os.environ.get("SESSION_IDS"):
+         print("Warning: tenbin.json not found and SESSION_IDS not set. Creating a dummy file.")
+         dummy_data = [
+             {
+                 "session_id": "your_session_id_here",
+             }
+         ]
+         with open("tenbin.json", "w", encoding="utf-8") as f:
+             json.dump(dummy_data, f, indent=4)
+         print("Created dummy tenbin.json. Please replace with valid Tenbin data or set SESSION_IDS secret.")
+
+     if not os.path.exists("client_api_keys.json") and not os.environ.get("API_KEYS"):
+         print("Warning: client_api_keys.json not found and API_KEYS not set. Creating a dummy file.")
+         dummy_key = f"sk-dummy-{uuid.uuid4().hex}"
+         with open("client_api_keys.json", "w", encoding="utf-8") as f:
+             json.dump([dummy_key], f, indent=2)
+         print(f"Created dummy client_api_keys.json with key: {dummy_key}. Or set API_KEYS secret.")
+
+     if not os.path.exists("models.json"):
+         print("Warning: models.json not found. Creating a dummy file.")
+         dummy_models = {
+             "claude-3.7-sonnet": "AnthropicClaude37Sonnet",
+             "claude-3.7-sonnet-extended": "AnthropicClaude37SonnetExtended"
+         }
+         with open("models.json", "w", encoding="utf-8") as f:
+             json.dump(dummy_models, f, indent=4)
+         print("Created dummy models.json.")
+
+     load_client_api_keys()
+     load_tenbin_accounts()
+     load_tenbin_models()
+
+     print("\n--- Tenbin OpenAI API Adapter ---")
+     print(f"Debug Mode: {DEBUG_MODE}")
+     print("Endpoints:")
+     print("  GET  /v1/models (Client API Key Auth)")
+     print("  GET  /models (No Auth)")
+     print("  POST /v1/chat/completions (Client API Key Auth)")
+     print("  GET  /debug?enable=[true|false] (Toggle Debug Mode)")
+
+     print(f"\nClient API Keys: {len(VALID_CLIENT_KEYS)}")
+     if TENBIN_ACCOUNTS:
+         print(f"Tenbin Accounts: {len(TENBIN_ACCOUNTS)}")
+     else:
+         print("Tenbin Accounts: None loaded. Check tenbin.json.")
+     if TENBIN_MODELS:
+         models = sorted(list(TENBIN_MODELS.keys()))
+         print(f"Tenbin Models: {len(TENBIN_MODELS)}")
+         print(f"Available models: {', '.join(models[:5])}{'...' if len(models) > 5 else ''}")
+     else:
+         print("Tenbin Models: None loaded. Check models.json.")
+     print("------------------------------------")
+
+     uvicorn.run(app, host="0.0.0.0", port=port)
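As a sanity check on the prompt format produced by `build_tenbin_prompt` above, here is the same transformation re-implemented standalone on plain dict messages (a sketch; the real function operates on the `ChatMessage` Pydantic model and also flattens multimodal content):

```python
def build_prompt(messages):
    """Standalone re-implementation of main.py's build_tenbin_prompt for dict messages."""
    prompt = ""
    for msg in messages:
        role, content = msg["role"], msg["content"]
        if role == "system":
            prompt += f"\n\nHuman: <system>{content}</system>"
        elif role == "user":
            prompt += f"\n\nHuman: {content}"
        elif role == "assistant":
            prompt += f"\n\nAssistant: {content}"
    return prompt + "\n\nAssistant:"  # trailing cue for the model to continue

prompt = build_prompt([
    {"role": "system", "content": "Be concise."},
    {"role": "user", "content": "Hi"},
])
print(repr(prompt))
# '\n\nHuman: <system>Be concise.</system>\n\nHuman: Hi\n\nAssistant:'
```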
models.json ADDED
@@ -0,0 +1,38 @@
1
+ {
2
+ "Claude-3-Haiku": "AnthropicClaude3Haiku",
3
+ "Claude-3-Opus": "AnthropicClaude3Opus",
4
+ "Claude-3-Sonnet": "AnthropicClaude3Sonnet",
5
+ "Claude-4-Opus": "AnthropicClaude4Opus",
6
+ "Claude-4-Sonnet": "AnthropicClaude4Sonnet",
7
+ "Claude-3.5-Haiku": "AnthropicClaude35Haiku",
8
+ "Claude-3.5-Sonnet": "AnthropicClaude35Sonnet",
9
+ "Claude-3.7-Sonnet": "AnthropicClaude37Sonnet",
10
+ "Claude-3.7-Sonnet-Extended": "AnthropicClaude37SonnetExtended",
11
+ "DeepSeek-R1": "DeepSeekR1",
12
+ "DeepSeek-V3": "DeepSeekV3",
13
+ "Gemini-1.5-Flash": "GeminiFlash15",
14
+ "Gemini-2.0-Flash": "GeminiFlash20",
15
+ "Gemini-2.5-Flash": "GeminiFlash25",
16
+ "Gemini-1.0-Pro": "GeminiPro10",
17
+ "Gemini-1.5-Pro": "GeminiPro15",
18
+ "Gemini-2.5-Pro": "GeminiPro25",
19
+ "Llama-3-Sonar-Large-32K-Online": "Llama3SonarLarge32kOnline",
20
+ "Llama-3.1-Sonar-Large-128K-Chat": "Llama31SonarLarge128kChat",
21
+ "Llama-3.1-Sonar-Large-128K-Online": "Llama31SonarLarge128kOnline",
22
+ "Llama-3-70B-Instruct": "Llama370bInstruct",
23
+ "Mixtral-8x7B-Instruct": "Mixtral8x7bInstruct",
24
+ "GPT-4": "OpenAIgpt4",
25
+ "GPT-4o-mini": "OpenAIgpt4oMini",
26
+ "GPT-3.5-Turbo": "OpenAIgpt35",
27
+ "GPT-4.1": "OpenAIgpt41",
28
+ "GPT-4.1-mini": "OpenAIgpt41Mini",
29
+ "GPT-4.1-nano": "OpenAIgpt41Nano",
30
+ "GPT-4.5": "OpenAIgpt45",
31
+ "o1": "OpenAIo1",
32
+ "o1-mini": "OpenAIo1Mini",
33
+ "o3": "OpenAIo3",
34
+ "o3-mini": "OpenAIo3Mini",
35
+ "o4-mini": "OpenAIo4Mini",
36
+ "Perplexity-Sonar": "PerplexitySonar",
37
+ "Plamo-1.0-Prime": "Plamo10Prime"
38
+ }
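
`models.json` maps the client-facing model names to Tenbin's internal identifiers. A sketch of how such a lookup might work (a hypothetical `resolve_model` helper, not the adapter's actual code; only two entries from the mapping above are reproduced here):

```python
# Two entries taken from the models.json mapping above
MODELS = {
    "Claude-3.7-Sonnet": "AnthropicClaude37Sonnet",
    "GPT-4o-mini": "OpenAIgpt4oMini",
}

def resolve_model(name: str, mapping: dict) -> str:
    """Translate a client-facing model name to Tenbin's internal id,
    raising a clear error for names the mapping does not know."""
    try:
        return mapping[name]
    except KeyError:
        raise ValueError(f"Unknown model: {name!r}")
```

A request for `GPT-4o-mini` would thus resolve to the internal id `OpenAIgpt4oMini` before being forwarded upstream.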
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ fastapi
+ uvicorn[standard]
+ requests
+ websocket-client
resigner.py ADDED
@@ -0,0 +1,135 @@
+ import time
+ import requests
+ import uuid
+ import random
+ import string
+
+ TURNSTILE_URL = "http://"  # self-hosted Turnstile solver service
+ API_KEY = ""  # API key for mail.eleme.uk
+
+ def create_task():
+     url = f"{TURNSTILE_URL}/turnstile"
+     resp = requests.get(url, params={
+         "url": "https://oshiete.ai/email_confirmation",
+         "sitekey": "0x4AAAAAABGR2exxRproizri",
+         "action": "request_registration_link"
+     })
+     print(resp.json())
+     return resp.json()['task_id']
+
+
+ def get_result(task_id):
+     url = f"{TURNSTILE_URL}/result"
+     resp = requests.get(url, params={
+         "id": task_id
+     })
+     if 'value' in resp.text:
+         return resp.json()['value']
+     return None
+
+ def solve_turnstile():
+     task_id = create_task()
+     while True:
+         result = get_result(task_id)
+         if result:
+             return result
+         time.sleep(1)
+
+ def generate_random_string(length=10):
+     characters = string.ascii_letters + string.digits
+     random_string = ''.join(random.choice(characters) for _ in range(length))
+     return random_string
+
+
+ def get_email():
+     url = "https://mail.eleme.uk/api/emails/generate"
+     headers = {
+         "X-API-Key": API_KEY,
+         "Content-Type": "application/json"
+     }
+     data = {
+         "name": generate_random_string(8),
+         "expiryTime": 3600000,
+         "domain": "ele.edu.kg"
+     }
+
+     try:
+         response = requests.post(url, headers=headers, json=data)
+         response.raise_for_status()  # check the response status
+         email_data = response.json()
+         return email_data.get("email"), email_data.get("id")  # assumes the API returns JSON with email/id fields
+     except requests.exceptions.RequestException as e:
+         print(f"Error creating mailbox: {e}")
+         return None, None
+
+ def get_code(id: str):
+     url = f"https://mail.eleme.uk/api/emails/{id}"
+     headers = {
+         "X-API-Key": API_KEY
+     }
+
+     try:
+         response = requests.get(url, headers=headers)
+         response.raise_for_status()
+         emails_data = response.json()
+         # The code-extraction logic may need adjusting to the actual email content
+         for email in emails_data['messages']:
+             if 'yGMO' in email['subject']:
+                 # Fetch the full message body
+                 message_id = email['id']
+                 url_message = f"https://mail.eleme.uk/api/emails/{id}/{message_id}"
+                 message_response = requests.get(url_message, headers=headers)
+                 message_response.raise_for_status()
+                 message_data = message_response.json()
+                 return message_data['message']['html'].split('code=')[1].split('<br>')[0]
+     except requests.exceptions.RequestException as e:
+         print(f"Error fetching verification code: {e}")
+     return None
+
+ def send_email(email: str):
+     url = "https://graphql.oshiete.ai/graphql"
+     resp = requests.post(url, json={
+         "operationName": "RequestRegistrationLink",
+         "variables": {
+             "email": email,
+             "turnstileToken": solve_turnstile()
+         },
+         "query": """
+             mutation RequestRegistrationLink($email: String!, $turnstileToken: String!) {
+                 requestRegistrationLink(email: $email, turnstileToken: $turnstileToken)
+             }
+         """
+     })
+     return resp
+
+ def register(code: str):
+     url = "https://graphql.oshiete.ai/graphql"
+     resp = requests.post(url, json={
+         "operationName": "RegisterUser",
+         "variables": {
+             "code": code,
+             "dti": str(uuid.uuid4()),
+             "password": "Aa123321."
+         },
+         "query": """
+             mutation RegisterUser($code: String!, $password: String!, $dti: String) {
+                 registerUser(code: $code, password: $password, dti: $dti) {
+                     id
+                     __typename
+                 }
+             }
+         """
+     })
+     # Extract the sessionId cookie
+     session_id = resp.cookies.get('sessionId')
+     return session_id
+
+ if __name__ == "__main__":
+     email, id = get_email()
+     send_email(email)
+     code = None
+     while not code:
+         code = get_code(id)
+         time.sleep(1)
+     session_id = register(code)
+     print(session_id)
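
Both the Turnstile loop and the verification-code loop above poll forever. A timeout-bounded helper — a sketch, not part of the original script — could cap how long either loop waits:

```python
import time

def poll_until(fn, interval: float = 1.0, timeout: float = 120.0):
    """Call fn() every `interval` seconds until it returns a truthy value
    or `timeout` seconds elapse. Returns the value, or None on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        value = fn()
        if value:
            return value
        time.sleep(interval)
    return None
```

With such a helper, `while not code: code = get_code(id); time.sleep(1)` becomes `code = poll_until(lambda: get_code(id))`, and a hung solver or lost email fails after the timeout instead of blocking indefinitely.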
start.sh ADDED
@@ -0,0 +1,25 @@
+ #!/bin/bash
+
+ # Copyright (c) 2024, I am your father.
+ # I am your father.
+
+ # Exit immediately if a command exits with a non-zero status.
+ set -e
+
+ # Start the Turnstile Solver in the background
+ echo "Starting Turnstile Solver..."
+ # The solver must be started from its own directory
+ (cd /app/Turnstile-Solver && python api_solver.py) &
+ SOLVER_PID=$!
+ echo "Turnstile Solver started with PID $SOLVER_PID"
+
+ # Wait a few seconds for the solver to initialize
+ # This is important because the solver might need to download browser components on first run
+ echo "Waiting for solver to initialize..."
+ sleep 15
+
+ # Start the FastAPI app in the foreground
+ echo "Starting FastAPI app..."
+ # The main app expects to be run from the root directory
+ # Use the PORT variable that Hugging Face provides, default to 7860 for Spaces
+ exec uvicorn main:app --host 0.0.0.0 --port ${PORT:-7860}
tenbin.json ADDED
@@ -0,0 +1,5 @@
+ [
+     {
+         "session_id": "jYPSNKxxxxxx"
+     }
+ ]
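
`tenbin.json` holds the upstream account sessions as a JSON array of objects with a `session_id` field. A sketch of how the adapter might load it (the filtering of empty entries is an assumption, not necessarily what `load_tenbin_accounts()` does):

```python
import json

# Inline copy of the tenbin.json sample above
sample = '[{"session_id": "jYPSNKxxxxxx"}]'

accounts = json.loads(sample)
# Keep only entries that actually carry a session id
session_ids = [a["session_id"] for a in accounts if a.get("session_id")]
```

Adding more objects to the array would give the adapter multiple accounts to draw from; `jYPSNKxxxxxx` is a placeholder to be replaced with a real session cookie value.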