zhaoxiaozhao07 committed
Commit 962759f · Parent: ed96778

feat(core): refactor the config module and add new features

- Refactored the config module, simplifying its structure and improving maintainability
- Added new config options, such as a backup access token and a thinking-content processing strategy
- Removed token-pool-related configuration and functionality
- Updated the environment variable example and documentation

.env.example CHANGED
@@ -2,41 +2,36 @@
 # Copy this file to .env and adjust the values as needed

 # ========== Basic API configuration ==========
+# Z.ai API endpoint
+API_ENDPOINT=https://chat.z.ai/api/chat/completions
+
 # Client authentication key (your own API key, used by clients to access this service)
 AUTH_TOKEN=sk-your-api-key

 # Skip client authentication (development only)
 SKIP_AUTH_TOKEN=false

-# ========== Token pool configuration ==========
-# Token failure threshold (mark a token unavailable after this many failures)
-TOKEN_FAILURE_THRESHOLD=3
-
-# Token recovery timeout (seconds; a failed token is retried after this period)
-TOKEN_RECOVERY_TIMEOUT=1800
-
-# Token health-check interval (seconds; token status is checked periodically)
-TOKEN_HEALTH_CHECK_INTERVAL=300
-
-# Z.ai auth token configuration (used when anonymous mode fails)
-#
-# Tokens are configured in a standalone file:
-# create tokens.txt in the project root, one token per line or comma-separated
-AUTH_TOKENS_FILE=tokens.txt
+# Z.ai backup access token (used when anonymous mode fails)
+# Note: this token is for accessing the Z.ai service; it is not the client auth key
+BACKUP_TOKEN=eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6IjMxNmJjYjQ4LWZmMmYtNGExNS04NTNkLWYyYTI5YjY3ZmYwZiIsImVtYWlsIjoiR3Vlc3QtMTc1NTg0ODU4ODc4OEBndWVzdC5jb20ifQ.PktllDySS3trlyuFpTeIZf-7hl8Qu1qYF3BxjgIul0BrNux2nX9hVzIjthLXKMWAf9V0qM8Vm_iyDqkjPGsaiQ

 # ========== Server configuration ==========
 # Service listen port
 LISTEN_PORT=8080

-# Service name (used for process uniqueness checks)
-SERVICE_NAME=z-ai2api-server
-
-# Debug logging
+# Debug logging switch
 DEBUG_LOGGING=true

-# Anonymous user mode
-# false: use an authenticated user token
+# ========== Feature configuration ==========
+# Thinking-content processing strategy
+# think: convert to <span> tags (OpenAI compatible)
+# strip: remove thinking content
+# raw: keep the original format
+THINKING_PROCESSING=think
+
+# Anonymous mode switch (recommended)
 # true: automatically fetch a temporary access token from Z.ai to avoid shared conversation history
+# false: use the fixed token BACKUP_TOKEN
 ANONYMOUS_MODE=true

 # Function Call feature switch
@@ -44,10 +39,3 @@ TOOL_SUPPORT=true

 # Tool-call scan limit (characters)
 SCAN_LIMIT=200000
-
-# ========== Error 400 handling ==========
-
-# Retry count
-MAX_RETRIES=6
-# Initial retry delay
-RETRY_DELAY=1
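The three `THINKING_PROCESSING` strategies described above can be sketched as a single transform over one thinking-phase delta. This is an illustrative sketch, not the project's actual implementation; the function name and the exact `<details><summary>` markup shape are assumptions based on the comments in this diff.

```python
def process_thinking(delta: str, strategy: str = "think") -> str:
    """Apply a THINKING_PROCESSING strategy to one thinking-phase delta.

    strip -> drop the reasoning content entirely
    think -> unwrap the assumed <details><summary>...</summary> markup and
             re-emit the reasoning text inside a <span> tag
    raw   -> pass the delta through unchanged
    """
    if strategy == "raw":
        return delta
    if strategy == "strip":
        return ""
    # "think": keep only the text after the summary marker, if present
    marker = "</summary>\n>"
    text = delta.split(marker)[-1].strip() if marker in delta else delta
    return f"<span>{text}</span>"
```

A caller would pick the strategy from `settings.THINKING_PROCESSING` and apply this per streamed chunk.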
 
.github/workflows/docker-build-push.yml ADDED
@@ -0,0 +1,73 @@
+name: Build and Push Docker Image
+
+on:
+  # Trigger on push to main branch
+  push:
+    branches:
+      - main
+      - master
+    tags:
+      - 'v*'
+
+  # Allow manual trigger
+  workflow_dispatch:
+    inputs:
+      tag:
+        description: 'Docker image tag'
+        required: false
+        default: 'latest'
+
+env:
+  REGISTRY: docker.io
+  IMAGE_NAME: julienol/z-ai2api-python
+
+jobs:
+  build-and-push:
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@v4
+
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v3
+
+      - name: Log in to Docker Hub
+        uses: docker/login-action@v3
+        with:
+          username: ${{ secrets.DOCKER_USERNAME }}
+          password: ${{ secrets.DOCKER_PASSWORD }}
+
+      - name: Extract metadata
+        id: meta
+        uses: docker/metadata-action@v5
+        with:
+          images: ${{ env.IMAGE_NAME }}
+          tags: |
+            type=ref,event=branch
+            type=ref,event=pr
+            type=semver,pattern={{version}}
+            type=semver,pattern={{major}}.{{minor}}
+            type=semver,pattern={{major}}
+            type=raw,value=latest,enable={{is_default_branch}}
+            type=raw,value=${{ github.event.inputs.tag }},enable=${{ github.event_name == 'workflow_dispatch' }}
+
+      - name: Build and push Docker image
+        uses: docker/build-push-action@v5
+        with:
+          context: .
+          file: ./Dockerfile
+          push: true
+          tags: ${{ steps.meta.outputs.tags }}
+          labels: ${{ steps.meta.outputs.labels }}
+          platforms: linux/amd64,linux/arm64
+          cache-from: type=gha
+          cache-to: type=gha,mode=max
+
+      - name: Update Docker Hub description
+        uses: peter-evans/dockerhub-description@v4
+        with:
+          username: ${{ secrets.DOCKER_USERNAME }}
+          password: ${{ secrets.DOCKER_PASSWORD }}
+          repository: ${{ env.IMAGE_NAME }}
+          readme-filepath: ./README.md
.gitignore CHANGED
@@ -5,7 +5,6 @@
 .conda/
 *.zip
 *.txt
-*.pid
 docs/
 output/
 main.build/
@@ -14,8 +13,6 @@ main.onefile-build/
 *report.xml
 *.yaml
 logs/
-backup/
-uv.lock

 # AI Toolset
 .augment/
Dockerfile ADDED
@@ -0,0 +1,46 @@
+# Use the Python 3.12 slim image for better performance and a smaller size
+FROM python:3.12-slim
+
+# Set environment variables
+ENV PYTHONUNBUFFERED=1 \
+    PYTHONDONTWRITEBYTECODE=1 \
+    PIP_NO_CACHE_DIR=1 \
+    PIP_DISABLE_PIP_VERSION_CHECK=1
+
+# Set work directory
+WORKDIR /app
+
+# Install system dependencies
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends curl && \
+    rm -rf /var/lib/apt/lists/*
+
+# Copy requirements first for better caching
+COPY requirements.txt .
+
+# Install Python dependencies with Brotli support
+RUN pip install --no-cache-dir -r requirements.txt && \
+    pip install --no-cache-dir brotli
+
+# Copy application code
+COPY . .
+
+# Create non-root user for security
+RUN useradd --create-home --shell /bin/bash app && \
+    chown -R app:app /app
+
+# Create data directory and set permissions
+RUN mkdir -p /app/data && \
+    chown -R app:app /app/data
+
+USER app
+
+# Expose port
+EXPOSE 8080
+
+# Health check
+HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
+    CMD curl -f http://localhost:8080/ || exit 1
+
+# Run the application
+CMD ["python", "main.py"]
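The `HEALTHCHECK` instruction above shells out to `curl -f` against the service root. The same probe can be expressed in Python, e.g. for a host-side monitoring script. A minimal sketch (the function name is ours; it only assumes the service answers `GET /` on its listen port):

```python
import urllib.request
from urllib.error import URLError


def service_healthy(url: str = "http://localhost:8080/", timeout: float = 10.0) -> bool:
    """Return True if the service answers GET with a non-error status,
    mirroring the Dockerfile's `curl -f` health check."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError):
        return False
```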
README.md CHANGED
@@ -1,34 +1,30 @@
 # Z.AI OpenAI API Proxy Service

 ![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)
-![Python: 3.9-3.12](https://img.shields.io/badge/python-3.9--3.12-green.svg)
+![Python: 3.8+](https://img.shields.io/badge/python-3.8+-green.svg)
 ![FastAPI](https://img.shields.io/badge/framework-FastAPI-009688.svg)
-![Version: 0.1.0](https://img.shields.io/badge/version-0.1.0-brightgreen.svg)
+![Version: 1.2.0](https://img.shields.io/badge/version-1.2.0-brightgreen.svg)

-> 🎯 **Project vision**: provide a fully OpenAI-API-compatible Z.AI proxy service, so users can access the GLM-4.5 model family without changing existing code.
-
-A lightweight, high-performance OpenAI-API-compatible proxy service that connects to Z.AI via Claude Code Router and supports the full feature set of the GLM-4.5 model family.
+A lightweight OpenAI-API-compatible proxy service that connects to Z.AI via Claude Code Router and supports the full feature set of the GLM-4.5 model family.

 ## ✨ Core features

 - 🔌 **Fully OpenAI API compatible** - integrates seamlessly with existing applications
 - 🤖 **Claude Code support** - access Claude Code through Claude Code Router (**upgrade the CCR tool to v1.0.47 or later**)
 - 🚀 **High-performance streaming** - Server-Sent Events (SSE) support
-- 🛠️ **Enhanced tool calling** - improved Function Call implementation with support for complex tool chains
+- 🛠️ **Enhanced tool calling** - improved Function Call implementation
 - 🧠 **Thinking mode support** - smart handling of model reasoning output
-- 🐳 **Docker deployment** - one-command containerized deployment (see `.env.example` for environment variables)
+- 🔍 **Search model integration** - GLM-4.5-Search web search capability
+- 🐳 **Docker deployment** - one-command containerized deployment
 - 🛡️ **Session isolation** - anonymous mode protects privacy
 - 🔧 **Flexible configuration** - flexible setup via environment variables
 - 📊 **Multi-model mapping** - smart upstream model routing
-- 🔄 **Token pool management** - automatic rotation, fault recovery, dynamic updates
-- 🛡️ **Error handling** - thorough exception capture and retry mechanisms
-- 🔒 **Service uniqueness** - process-name (pname) based uniqueness check to prevent duplicate startups

 ## 🚀 Quick start

 ### Requirements

-- Python 3.9-3.12
+- Python 3.8+
 - pip or uv (recommended)

 ### Install and run
@@ -48,9 +44,7 @@ pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
 python main.py
 ```

-> After the service starts, visit the API docs at http://localhost:8080/docs
-> 💡 **Tip**: the default port is 8080 and can be changed via the `LISTEN_PORT` environment variable
-> ⚠️ **Note**: do not leak `AUTH_TOKEN` to others; use `AUTH_TOKENS` to configure multiple auth tokens
+After the service starts, visit: http://localhost:8080/docs

 ### Basic usage
@@ -148,51 +142,21 @@ for chunk in response:
 | Variable              | Default                                   | Description            |
 | --------------------- | ----------------------------------------- | ---------------------- |
 | `AUTH_TOKEN`          | `sk-your-api-key`                         | Client authentication key |
+| `API_ENDPOINT`        | `https://chat.z.ai/api/chat/completions`  | Upstream API address |
 | `LISTEN_PORT`         | `8080`                                    | Service listen port |
 | `DEBUG_LOGGING`       | `true`                                    | Debug logging switch |
-| `ANONYMOUS_MODE`      | `true`                                    | Anonymous user mode switch |
+| `THINKING_PROCESSING` | `think`                                   | Thinking-content processing strategy |
+| `ANONYMOUS_MODE`      | `true`                                    | Anonymous mode switch |
 | `TOOL_SUPPORT`        | `true`                                    | Function Call feature switch |
 | `SKIP_AUTH_TOKEN`     | `false`                                   | Skip auth token validation |
 | `SCAN_LIMIT`          | `200000`                                  | Scan limit |
-| `AUTH_TOKENS_FILE`    | `tokens.txt`                              | Auth token file path |
-
-> 💡 See `.env.example` for detailed configuration
-
-## 🔄 Token pool mechanism
-
-### Features
-
-- **Load balancing**: rotate across multiple auth tokens to spread request load
-- **Automatic failover**: switch to the next available token when one fails
-- **Health monitoring**: validate token type precisely via the role field of the Z.AI API
-- **Automatic recovery**: failed tokens are retried after a timeout
-- **Dynamic management**: the token pool can be updated at runtime
-- **Smart deduplication**: duplicate tokens are detected and removed automatically
-- **Type validation**: only authenticated user tokens (role: "user") are accepted; anonymous tokens (role: "guest") are rejected
-
-### Token configuration
-
-Create a `tokens.txt` file; mixed formats are supported:
-1. One token per line (newline-separated)
-2. Comma-separated tokens
-3. Mixed format (newline- and comma-separated at the same time)
-
-## Monitoring API
-
-```bash
-# Check token pool status
-curl http://localhost:8080/v1/token-pool/status
-
-# Trigger a manual health check
-curl -X POST http://localhost:8080/v1/token-pool/health-check
-
-# Update the token pool dynamically
-curl -X POST http://localhost:8080/v1/token-pool/update \
-  -H "Content-Type: application/json" \
-  -d '["new_token1", "new_token2"]'
-```
-
-For details, see: [Token pool documentation](TOKEN_POOL_README.md)
+| `BACKUP_TOKEN`        | `eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9...` | Fixed Z.ai access token |
+
+### Thinking-content processing strategies
+
+- `think` - convert to `<thinking>` tags (OpenAI compatible)
+- `strip` - remove thinking content
+- `raw` - keep the original format

 ## 🎯 Use cases
@@ -239,19 +203,6 @@ if response.choices[0].message.tool_calls:
 **Q: How do I get AUTH_TOKEN?**
 A: `AUTH_TOKEN` is an API key you define yourself, configured via environment variable; it must match between client and server.

-**Q: What about the "Illegal header value b'Bearer '" error?**
-A: This usually means token acquisition failed. Check that:
-- anonymous mode is configured correctly (`ANONYMOUS_MODE=true`)
-- the token file exists and is well formed (`tokens.txt`)
-- the network connection is fine and the Z.AI API is reachable
-
-**Q: Startup says "the service is already running" - what now?**
-A: This is the service uniqueness check, which prevents duplicate startups. To resolve:
-- check for a running instance: `ps aux | grep z-ai2api-server`
-- stop the existing instance before starting a new one
-- if no instance is actually running, delete the PID file: `rm z-ai2api-server.pid`
-- set the `SERVICE_NAME` environment variable to a custom name to avoid conflicts
-
 **Q: How do I use this service through Claude Code?**

 A: Put the [zai.js](https://gist.githubusercontent.com/musistudio/b35402d6f9c95c64269c7666b8405348/raw/f108d66fa050f308387938f149a2b14a295d29e9/gistfile1.txt) ccr plugin in the `./.claude-code-router/plugins` directory, point `./.claude-code-router/config.json` at this service's address, and authenticate with `AUTH_TOKEN`.
@@ -336,25 +287,32 @@ A: Configure via environment variables; a `.env` file is recommended.

 To use the full multimodal features, obtain an official Z.ai API token:

+### Option 1: via the Z.ai website
+
+1. Visit the [Z.ai website](https://chat.z.ai)
+2. Register and log in, open the [Z.ai API Keys](https://z.ai/manage-apikey/apikey-list) settings page, and create a _**personal API token**_ there
+3. Put the token in the `BACKUP_TOKEN` environment variable
+
+### Option 2: browser developer tools (temporary)
+
 1. Open the [Z.ai chat UI](https://chat.z.ai)
 2. Press F12 to open the developer tools
 3. Switch to the "Application" (or "Storage") tab
 4. Find the auth token in Local Storage
 5. Copy the token value into the environment variable

-> **Important**: the obtained token may expire; multimodal models require an **official, non-anonymous Z.ai API token**, and anonymous tokens do not support media processing
+> ⚠️ **Note**: a token obtained via option 2 may expire; option 1 is recommended for a long-lived API token
+> ❗ **Important**: multimodal models require an **official, non-anonymous Z.ai API token**; anonymous tokens do not support media processing.

 ## 🛠️ Tech stack

 | Component | Technology | Version | Description |
 | --------------- | --------------------------------------------------------------------------------- | ------- | ------------------------------------------ |
-| **Web framework** | [FastAPI](https://fastapi.tiangolo.com/) | 0.116.1 | High-performance async web framework with automatic API docs |
+| **Web framework** | [FastAPI](https://fastapi.tiangolo.com/) | 0.104.1 | High-performance async web framework with automatic API docs |
 | **ASGI server** | [Granian](https://github.com/emmett-framework/granian) | 2.5.2 | Rust-based high-performance ASGI server with hot reload |
-| **HTTP client** | [HTTPX](https://www.python-httpx.org/) / [Requests](https://requests.readthedocs.io/) | 0.27.0 / 2.32.5 | Async/sync HTTP libraries for upstream API calls |
+| **HTTP client** | [Requests](https://requests.readthedocs.io/) | 2.32.5 | Simple, easy-to-use HTTP library for upstream API calls |
 | **Data validation** | [Pydantic](https://pydantic.dev/) | 2.11.7 | Type-safe data validation and serialization |
 | **Configuration** | [Pydantic Settings](https://docs.pydantic.dev/latest/concepts/pydantic_settings/) | 2.10.1 | Pydantic-based configuration management |
-| **Logging** | [Loguru](https://loguru.readthedocs.io/) | 0.7.3 | High-performance structured logging |
-| **User agent** | [Fake UserAgent](https://pypi.org/project/fake-useragent/) | 2.2.0 | Dynamic user-agent generation |

 ## 🏗️ Architecture
@@ -380,36 +338,29 @@

 ```
 z.ai2api_python/
-├── app/                       # main application module
-│   ├── core/                  # core modules
-│   │   ├── config.py          # configuration (Pydantic Settings)
-│   │   ├── openai.py          # OpenAI API compatibility layer
-│   │   └── zai_transformer.py # Z.AI request/response transformer
-│   ├── models/                # data models
-│   │   └── schemas.py         # Pydantic data models
-│   └── utils/                 # utility modules
-│       ├── logger.py          # Loguru logging
-│       ├── reload_config.py   # hot-reload configuration
-│       ├── sse_tool_handler.py # SSE tool-call handler
-│       └── token_pool.py      # token pool management
-├── tests/                     # tests
-├── deploy/                    # deployment configs
-│   ├── Dockerfile             # Docker image build
-│   └── docker-compose.yml     # container orchestration
-├── main.py                    # FastAPI entry point
-├── requirements.txt           # dependency list
-├── pyproject.toml             # project configuration
-├── tokens.txt.example         # token file example
-└── .env.example               # environment variable example
+├── app/
+│   ├── core/
+│   │   ├── __init__.py
+│   │   ├── config.py            # configuration management
+│   │   ├── openai.py            # OpenAI API implementation
+│   │   └── response_handlers.py # response handlers
+│   ├── models/
+│   │   ├── __init__.py
+│   │   └── schemas.py           # Pydantic model definitions
+│   ├── utils/
+│   │   ├── __init__.py
+│   │   ├── helpers.py           # helper functions
+│   │   ├── tools.py             # enhanced tool-call handling
+│   │   └── sse_parser.py        # SSE stream parser
+│   └── __init__.py
+├── tests/                       # unit tests
+├── deploy/                      # Docker deployment configs
+├── main.py                      # FastAPI entry point
+├── requirements.txt             # Python dependencies
+├── .env.example                 # environment variable example
+└── README.md                    # project documentation
 ```

-## ⭐ Star History
-
-If you like this project, please give it a star ⭐
-
-[![Star History Chart](https://api.star-history.com/svg?repos=ZyphrZero/z.ai2api_python&type=Date)](https://star-history.com/#ZyphrZero/z.ai2api_python&Date)
-
 ## 🤝 Contribution guide

 We welcome all forms of contribution!
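The quick-start and basic-usage sections above boil down to one OpenAI-style POST against `/v1/chat/completions`, authenticated with `AUTH_TOKEN`. A minimal sketch of assembling such a request (the helper name is ours; the endpoint path and header shape follow the README):

```python
import json


def build_chat_request(auth_token: str, prompt: str,
                       model: str = "GLM-4.5", stream: bool = True):
    """Build (headers, body) for POST http://localhost:8080/v1/chat/completions."""
    headers = {
        "Content-Type": "application/json",
        # Must match the AUTH_TOKEN configured on the server
        "Authorization": f"Bearer {auth_token}",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    })
    return headers, body
```

The result can be passed to any HTTP client, e.g. `requests.post(url, headers=headers, data=body, stream=True)`.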
app/__init__.py CHANGED
@@ -1,5 +1,6 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
+"""
+Application package initialization
+"""

 from app import core, models, utils
app/core/__init__.py CHANGED
@@ -1,6 +1,7 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
+"""
+Core module initialization
+"""

-from app.core import config, zai_transformer, openai
+from app.core import config, response_handlers, openai

-__all__ = ["config", "zai_transformer", "openai"]
+__all__ = ["config", "response_handlers", "openai"]
app/core/config.py CHANGED
@@ -1,125 +1,37 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
+"""
+FastAPI application configuration module
+"""

 import os
-from typing import Dict, List, Optional
+from typing import Dict, Optional
 from pydantic_settings import BaseSettings
-from app.utils.logger import logger


 class Settings(BaseSettings):
     """Application settings"""

     # API Configuration
-    API_ENDPOINT: str = "https://chat.z.ai/api/chat/completions"
+    API_ENDPOINT: str = os.getenv("API_ENDPOINT", "https://chat.z.ai/api/chat/completions")
     AUTH_TOKEN: str = os.getenv("AUTH_TOKEN", "sk-your-api-key")
-
-    # Auth token file path
-    AUTH_TOKENS_FILE: str = os.getenv("AUTH_TOKENS_FILE", "tokens.txt")
-
-    # Token pool configuration
-    TOKEN_HEALTH_CHECK_INTERVAL: int = int(os.getenv("TOKEN_HEALTH_CHECK_INTERVAL", "300"))  # 5 minutes
-    TOKEN_FAILURE_THRESHOLD: int = int(os.getenv("TOKEN_FAILURE_THRESHOLD", "3"))  # mark unavailable after 3 failures
-    TOKEN_RECOVERY_TIMEOUT: int = int(os.getenv("TOKEN_RECOVERY_TIMEOUT", "1800"))  # retry failed tokens after 30 minutes
-
-    def _load_tokens_from_file(self, file_path: str) -> List[str]:
-        """
-        Load the token list from a file.
-
-        Mixed formats are supported:
-        1. One token per line (newline-separated)
-        2. Comma-separated tokens
-        3. Mixed (newline- and comma-separated at the same time)
-        """
-        tokens = []
-        try:
-            if os.path.exists(file_path):
-                with open(file_path, 'r', encoding='utf-8') as f:
-                    content = f.read().strip()
-
-                if not content:
-                    logger.debug(f"📄 Token file is empty: {file_path}")
-                    return tokens
-
-                logger.debug(f"📄 Parsing token file: {file_path}")
-
-                # Smart parsing: support both newline and comma separators
-                # 1. Split on newlines and handle each line
-                lines = content.split('\n')
-
-                for line in lines:
-                    line = line.strip()
-                    # Skip empty lines and comment lines
-                    if not line or line.startswith('#'):
-                        continue
-
-                    # 2. Check whether the line is comma-separated
-                    if ',' in line:
-                        # Split the line on commas
-                        comma_tokens = line.split(',')
-                        for token in comma_tokens:
-                            token = token.strip()
-                            if token:  # skip empty tokens
-                                tokens.append(token)
-                    else:
-                        # The whole line is one token
-                        tokens.append(line)
-
-                logger.info(f"📄 Loaded {len(tokens)} tokens from file: {file_path}")
-            else:
-                logger.debug(f"📄 Token file does not exist: {file_path}")
-        except Exception as e:
-            logger.error(f"❌ Failed to read token file {file_path}: {e}")
-        return tokens
-
-    @property
-    def auth_token_list(self) -> List[str]:
-        """
-        Resolve the auth token list.
-
-        Tokens are loaded only from the file specified by AUTH_TOKENS_FILE.
-        """
-        # Load tokens from the file
-        tokens = self._load_tokens_from_file(self.AUTH_TOKENS_FILE)
-
-        # Deduplicate while preserving order
-        if tokens:
-            seen = set()
-            unique_tokens = []
-            for token in tokens:
-                if token not in seen:
-                    unique_tokens.append(token)
-                    seen.add(token)
-
-            # Log deduplication info
-            duplicate_count = len(tokens) - len(unique_tokens)
-            if duplicate_count > 0:
-                logger.warning(f"⚠️ Detected {duplicate_count} duplicate tokens; deduplicated automatically")
-
-            return unique_tokens
-
-        return []
+    BACKUP_TOKEN: str = os.getenv("BACKUP_TOKEN", "eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6IjMxNmJjYjQ4LWZmMmYtNGExNS04NTNkLWYyYTI5YjY3ZmYwZiIsImVtYWlsIjoiR3Vlc3QtMTc1NTg0ODU4ODc4OEBndWVzdC5jb20ifQ.PktllDySS3trlyuFpTeIZf-7hl8Qu1qYF3BxjgIul0BrNux2nX9hVzIjthLXKMWAf9V0qM8Vm_iyDqkjPGsaiQ")

     # Model Configuration
     PRIMARY_MODEL: str = os.getenv("PRIMARY_MODEL", "GLM-4.5")
     THINKING_MODEL: str = os.getenv("THINKING_MODEL", "GLM-4.5-Thinking")
     SEARCH_MODEL: str = os.getenv("SEARCH_MODEL", "GLM-4.5-Search")
     AIR_MODEL: str = os.getenv("AIR_MODEL", "GLM-4.5-Air")

     # Server Configuration
     LISTEN_PORT: int = int(os.getenv("LISTEN_PORT", "8080"))
     DEBUG_LOGGING: bool = os.getenv("DEBUG_LOGGING", "true").lower() == "true"
-    SERVICE_NAME: str = os.getenv("SERVICE_NAME", "z-ai2api-server")

+    # Feature Configuration
+    THINKING_PROCESSING: str = os.getenv("THINKING_PROCESSING", "think")  # strip: drop <details> tags; think: convert to <span> tags; raw: keep as-is
     ANONYMOUS_MODE: bool = os.getenv("ANONYMOUS_MODE", "true").lower() == "true"
     TOOL_SUPPORT: bool = os.getenv("TOOL_SUPPORT", "true").lower() == "true"
     SCAN_LIMIT: int = int(os.getenv("SCAN_LIMIT", "200000"))
     SKIP_AUTH_TOKEN: bool = os.getenv("SKIP_AUTH_TOKEN", "false").lower() == "true"
-
-    # Retry Configuration
-    MAX_RETRIES: int = int(os.getenv("MAX_RETRIES", "5"))
-    RETRY_DELAY: float = float(os.getenv("RETRY_DELAY", "1.0"))  # initial retry delay (seconds)

     # Browser Headers
     CLIENT_HEADERS: Dict[str, str] = {
         "Content-Type": "application/json",
@@ -132,9 +44,9 @@ class Settings(BaseSettings):
         "X-FE-Version": "prod-fe-1.0.70",
         "Origin": "https://chat.z.ai",
     }

     class Config:
         env_file = ".env"


 settings = Settings()
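Settings values such as `DEBUG_LOGGING`, `ANONYMOUS_MODE`, and `LISTEN_PORT` above all follow the same parsing pattern: read the environment variable as a string, then coerce it. A standalone sketch of that pattern (the helper names are ours, not part of the codebase):

```python
import os


def env_bool(name: str, default: str = "false") -> bool:
    """A flag is on only when the variable equals 'true' in any letter case."""
    return os.getenv(name, default).lower() == "true"


def env_int(name: str, default: str) -> int:
    """Integer settings such as LISTEN_PORT and SCAN_LIMIT."""
    return int(os.getenv(name, default))
```

Note that any value other than `true` (for example `1` or `yes`) disables the flag, which is worth keeping in mind when writing a `.env` file.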
app/core/openai.py CHANGED
@@ -1,29 +1,24 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-

 import time
-import json
-import asyncio
 from datetime import datetime
-from typing import List, Dict, Any
 from fastapi import APIRouter, Header, HTTPException
 from fastapi.responses import StreamingResponse
-import httpx

 from app.core.config import settings
-from app.models.schemas import OpenAIRequest, Message, ModelsResponse, Model
-from app.utils.logger import get_logger
-from app.core.zai_transformer import ZAITransformer, generate_uuid
-from app.utils.sse_tool_handler import SSEToolHandler
-from app.utils.token_pool import get_token_pool
-
-logger = get_logger()

 router = APIRouter()

-# Global transformer instance
-transformer = ZAITransformer()
-

 @router.get("/v1/models")
 async def list_models():
@@ -31,564 +26,150 @@ async def list_models():
     current_time = int(time.time())
     response = ModelsResponse(
         data=[
-            Model(id=settings.PRIMARY_MODEL, created=current_time, owned_by="z.ai"),
-            Model(id=settings.THINKING_MODEL, created=current_time, owned_by="z.ai"),
-            Model(id=settings.SEARCH_MODEL, created=current_time, owned_by="z.ai"),
-            Model(id=settings.AIR_MODEL, created=current_time, owned_by="z.ai"),
         ]
     )
     return response


 @router.post("/v1/chat/completions")
-async def chat_completions(request: OpenAIRequest, authorization: str = Header(...)):
-    """Handle chat completion requests with ZAI transformer"""
-    role = request.messages[0].role if request.messages else "unknown"
-    logger.info(f"😶‍🌫️ Received client request - model: {request.model}, stream: {request.stream}, messages: {len(request.messages)}, role: {role}, tools: {len(request.tools) if request.tools else 0}")
-
     try:
         # Validate API key (skip if SKIP_AUTH_TOKEN is enabled)
         if not settings.SKIP_AUTH_TOKEN:
             if not authorization.startswith("Bearer "):
                 raise HTTPException(status_code=401, detail="Missing or invalid Authorization header")
-
             api_key = authorization[7:]
             if api_key != settings.AUTH_TOKEN:
                 raise HTTPException(status_code=401, detail="Invalid API key")
-
-        # Transform the request with the new transformer
-        request_dict = request.model_dump()
-        logger.info("🔄 Transforming request format: OpenAI -> Z.AI")

-        transformed = await transformer.transform_request_in(request_dict)
-        # logger.debug(f"🔄 Transformed Z.AI request body: {json.dumps(transformed['body'], ensure_ascii=False, indent=2)}")
-
-        # Call the upstream API
-        async def stream_response():
-            """Streaming response generator (with retry logic)"""
-            retry_count = 0
-            last_error = None
-            current_token = transformed.get("token", "")  # token currently in use
-
-            while retry_count <= settings.MAX_RETRIES:
-                try:
-                    # On retry, fetch a new token and update the request
-                    if retry_count > 0:
-                        delay = settings.RETRY_DELAY
-                        logger.warning(f"Retrying request ({retry_count}/{settings.MAX_RETRIES}) - waiting {delay:.1f}s")
-                        await asyncio.sleep(delay)
-
-                        # Mark the previous token as failed (unless in anonymous mode)
-                        if current_token and not settings.ANONYMOUS_MODE:
-                            transformer.mark_token_failure(current_token, Exception(f"Retry {retry_count}: {last_error}"))
-
-                        # Fetch a new token
-                        logger.info("🔑 Fetching a new token for the retry...")
-                        new_token = await transformer.get_token()
-                        if not new_token:
-                            logger.error("❌ Could not obtain a valid auth token for the retry")
-                            raise Exception("Could not obtain a valid auth token for the retry")
-                        transformed["config"]["headers"]["Authorization"] = f"Bearer {new_token}"
-                        current_token = new_token
-
-                    async with httpx.AsyncClient(timeout=60.0) as client:
-                        # Send the request upstream
-                        logger.info(f"🎯 Sending request to Z.AI: {transformed['config']['url']}")
-                        async with client.stream(
-                            "POST",
-                            transformed["config"]["url"],
-                            json=transformed["body"],
-                            headers=transformed["config"]["headers"],
-                        ) as response:
-                            # Check the response status code
-                            if response.status_code == 400:
-                                # 400 error: trigger a retry
-                                error_text = await response.aread()
-                                error_msg = error_text.decode('utf-8', errors='ignore')
-                                logger.warning(f"❌ Upstream returned 400 (attempt {retry_count + 1}/{settings.MAX_RETRIES + 1})")
-
-                                retry_count += 1
-                                last_error = f"400 Bad Request: {error_msg}"
-
-                                # Continue the loop if retries remain
-                                if retry_count <= settings.MAX_RETRIES:
-                                    continue
-                                else:
-                                    # Max retries reached; report failure
-                                    logger.error(f"❌ Reached max retries ({settings.MAX_RETRIES}); request failed")
-                                    error_response = {
-                                        "error": {
-                                            "message": f"Request failed after {settings.MAX_RETRIES} retries: {last_error}",
-                                            "type": "upstream_error",
-                                            "code": 400
-                                        }
-                                    }
-                                    yield f"data: {json.dumps(error_response)}\n\n"
-                                    yield "data: [DONE]\n\n"
-                                    return
-
-                            elif response.status_code != 200:
-                                # Other errors: return immediately
-                                logger.error(f"❌ Upstream returned an error: {response.status_code}")
-                                error_text = await response.aread()
-                                error_msg = error_text.decode('utf-8', errors='ignore')
-                                logger.error(f"❌ Error details: {error_msg}")
-
-                                error_response = {
-                                    "error": {
-                                        "message": f"Upstream error: {response.status_code}",
-                                        "type": "upstream_error",
-                                        "code": response.status_code
-                                    }
-                                }
-                                yield f"data: {json.dumps(error_response)}\n\n"
-                                yield "data: [DONE]\n\n"
-                                return
-
-                            # 200 success: process the response
-                            logger.info(f"✅ Z.AI responded successfully; processing the SSE stream")
-                            if retry_count > 0:
-                                logger.info(f"✨ Retry {retry_count} succeeded")
-
-                            # Mark the token as successful (unless in anonymous mode)
-                            if current_token and not settings.ANONYMOUS_MODE:
-                                transformer.mark_token_success(current_token)
-
-                            # Initialize the tool handler (if needed)
-                            has_tools = transformed["body"].get("tools") is not None
-                            has_mcp_servers = bool(transformed["body"].get("mcp_servers"))
-                            tool_handler = None
-
-                            # A tool handler is needed for either tool definitions or MCP servers
-                            if has_tools or has_mcp_servers:
-                                chat_id = transformed["body"]["chat_id"]
-                                model = request.model
-                                tool_handler = SSEToolHandler(chat_id, model)
-
-                                if has_tools and has_mcp_servers:
-                                    logger.info(f"🔧 Initializing tool handler: {len(transformed['body'].get('tools', []))} OpenAI tools + {len(transformed['body'].get('mcp_servers', []))} MCP servers")
-                                elif has_tools:
-                                    logger.info(f"🔧 Initializing tool handler: {len(transformed['body'].get('tools', []))} OpenAI tools")
-                                elif has_mcp_servers:
-                                    logger.info(f"🔧 Initializing tool handler: {len(transformed['body'].get('mcp_servers', []))} MCP servers")
-
-                            # Processing state
-                            has_thinking = False
-                            thinking_signature = None
-
-                            # Process the SSE stream
-                            buffer = ""
-                            line_count = 0
-                            logger.debug("📡 Receiving SSE stream data...")
-
-                            async for line in response.aiter_lines():
-                                line_count += 1
-                                if not line:
-                                    continue
-
-                                # Accumulate into the buffer to handle complete data lines
-                                buffer += line + "\n"
-
-                                # Check for complete data lines
-                                while "\n" in buffer:
-                                    current_line, buffer = buffer.split("\n", 1)
-                                    if not current_line.strip():
-                                        continue
-
-                                    if current_line.startswith("data:"):
-                                        chunk_str = current_line[5:].strip()
-                                        if not chunk_str or chunk_str == "[DONE]":
-                                            if chunk_str == "[DONE]":
-                                                yield "data: [DONE]\n\n"
-                                            continue
-
-                                        logger.debug(f"📦 Parsing chunk: {chunk_str[:1000]}..." if len(chunk_str) > 1000 else f"📦 Parsing chunk: {chunk_str}")
-
-                                        try:
-                                            chunk = json.loads(chunk_str)
-
-                                            if chunk.get("type") == "chat:completion":
-                                                data = chunk.get("data", {})
-                                                phase = data.get("phase")
-
-                                                # Log each phase (only when it changes)
-                                                if phase and phase != getattr(stream_response, '_last_phase', None):
-                                                    logger.info(f"📈 SSE phase: {phase}")
-                                                    stream_response._last_phase = phase
-
-                                                # Handle tool calls
-                                                if phase == "tool_call" and tool_handler:
-                                                    for output in tool_handler.process_tool_call_phase(data, True):
-                                                        yield output
-
-                                                # Handle the "other" phase (tool end)
-                                                elif phase == "other" and tool_handler:
-                                                    for output in tool_handler.process_other_phase(data, True):
-                                                        yield output
-
-                                                # Handle thinking content
-                                                elif phase == "thinking":
-                                                    if not has_thinking:
-                                                        has_thinking = True
-                                                        # Send the initial role
-                                                        role_chunk = {
-                                                            "choices": [
-                                                                {
-                                                                    "delta": {"role": "assistant"},
-                                                                    "finish_reason": None,
-                                                                    "index": 0,
-                                                                    "logprobs": None,
-                                                                }
-                                                            ],
-                                                            "created": int(time.time()),
-                                                            "id": transformed["body"]["chat_id"],
-                                                            "model": request.model,
-                                                            "object": "chat.completion.chunk",
-                                                            "system_fingerprint": "fp_zai_001",
-                                                        }
-                                                        yield f"data: {json.dumps(role_chunk)}\n\n"
-
-                                                    delta_content = data.get("delta_content", "")
-                                                    if delta_content:
-                                                        # Normalize the thinking-content format
-                                                        if delta_content.startswith("<details"):
-                                                            content = (
-                                                                delta_content.split("</summary>\n>")[-1].strip()
-                                                                if "</summary>\n>" in delta_content
-                                                                else delta_content
-                                                            )
-                                                        else:
-                                                            content = delta_content
-
-                                                        thinking_chunk = {
-                                                            "choices": [
-                                                                {
-                                                                    "delta": {
-                                                                        "role": "assistant",
-                                                                        "thinking": {"content": content},
-                                                                    },
-                                                                    "finish_reason": None,
-                                                                    "index": 0,
-                                                                    "logprobs": None,
-                                                                }
-                                                            ],
-                                                            "created": int(time.time()),
-                                                            "id": transformed["body"]["chat_id"],
-                                                            "model": request.model,
-                                                            "object": "chat.completion.chunk",
-                                                            "system_fingerprint": "fp_zai_001",
-                                                        }
-                                                        yield f"data: {json.dumps(thinking_chunk)}\n\n"
-
-                                                # Handle answer content
-                                                elif phase == "answer":
-                                                    edit_content = data.get("edit_content", "")
-                                                    delta_content = data.get("delta_content", "")
-
-                                                    # Handle the transition from thinking to answer
-                                                    if edit_content and "</details>\n" in edit_content:
-                                                        if has_thinking:
-                                                            # Send the thinking signature
-                                                            thinking_signature = str(int(time.time() * 1000))
294
- sig_chunk = {
295
- "choices": [
296
- {
297
- "delta": {
298
- "role": "assistant",
299
- "thinking": {
300
- "content": "",
301
- "signature": thinking_signature,
302
- },
303
- },
304
- "finish_reason": None,
305
- "index": 0,
306
- "logprobs": None,
307
- }
308
- ],
309
- "created": int(time.time()),
310
- "id": transformed["body"]["chat_id"],
311
- "model": request.model,
312
- "object": "chat.completion.chunk",
313
- "system_fingerprint": "fp_zai_001",
314
- }
315
- yield f"data: {json.dumps(sig_chunk)}\n\n"
316
-
317
- # 提取答案内容
318
- content_after = edit_content.split("</details>\n")[-1]
319
- if content_after:
320
- content_chunk = {
321
- "choices": [
322
- {
323
- "delta": {
324
- "role": "assistant",
325
- "content": content_after,
326
- },
327
- "finish_reason": None,
328
- "index": 0,
329
- "logprobs": None,
330
- }
331
- ],
332
- "created": int(time.time()),
333
- "id": transformed["body"]["chat_id"],
334
- "model": request.model,
335
- "object": "chat.completion.chunk",
336
- "system_fingerprint": "fp_zai_001",
337
- }
338
- yield f"data: {json.dumps(content_chunk)}\n\n"
339
-
340
- # 处理增量内容
341
- elif delta_content:
342
- # 如果还没有发送角色
343
- if not has_thinking:
344
- role_chunk = {
345
- "choices": [
346
- {
347
- "delta": {"role": "assistant"},
348
- "finish_reason": None,
349
- "index": 0,
350
- "logprobs": None,
351
- }
352
- ],
353
- "created": int(time.time()),
354
- "id": transformed["body"]["chat_id"],
355
- "model": request.model,
356
- "object": "chat.completion.chunk",
357
- "system_fingerprint": "fp_zai_001",
358
- }
359
- yield f"data: {json.dumps(role_chunk)}\n\n"
360
-
361
- content_chunk = {
362
- "choices": [
363
- {
364
- "delta": {
365
- "role": "assistant",
366
- "content": delta_content,
367
- },
368
- "finish_reason": None,
369
- "index": 0,
370
- "logprobs": None,
371
- }
372
- ],
373
- "created": int(time.time()),
374
- "id": transformed["body"]["chat_id"],
375
- "model": request.model,
376
- "object": "chat.completion.chunk",
377
- "system_fingerprint": "fp_zai_001",
378
- }
379
- output_data = f"data: {json.dumps(content_chunk)}\n\n"
380
- logger.debug(f"➡️ 输出内容块到客户端: {output_data}")
381
- yield output_data
382
-
383
- # 处理完成
384
- if data.get("usage"):
385
- logger.info(f"📦 完成响应 - 使用统计: {json.dumps(data['usage'])}")
386
-
387
- # 只有在非工具调用模式下才发送普通完成信号
388
- if not tool_handler or not tool_handler.has_tool_call:
389
- finish_chunk = {
390
- "choices": [
391
- {
392
- "delta": {"role": "assistant", "content": ""},
393
- "finish_reason": "stop",
394
- "index": 0,
395
- "logprobs": None,
396
- }
397
- ],
398
- "usage": data["usage"],
399
- "created": int(time.time()),
400
- "id": transformed["body"]["chat_id"],
401
- "model": request.model,
402
- "object": "chat.completion.chunk",
403
- "system_fingerprint": "fp_zai_001",
404
- }
405
- finish_output = f"data: {json.dumps(finish_chunk)}\n\n"
406
- logger.debug(f"➡️ 发送完成信号: {finish_output[:1000]}...")
407
- yield finish_output
408
- logger.debug("➡️ 发送 [DONE]")
409
- yield "data: [DONE]\n\n"
410
-
411
- except json.JSONDecodeError as e:
412
- logger.debug(f"❌ JSON解析错误: {e}, 内容: {chunk_str[:1000]}")
413
- except Exception as e:
414
- logger.error(f"❌ 处理chunk错误: {e}")
415
-
416
- # 确保发送结束信号
417
- if not tool_handler or not tool_handler.has_tool_call:
418
- logger.debug("📤 发送最终 [DONE] 信号")
419
- yield "data: [DONE]\n\n"
420
-
421
- logger.info(f"✅ SSE 流处理完成,共处理 {line_count} 行数据")
422
- # 成功处理完成,退出重试循环
423
- return
424
-
425
- except Exception as e:
426
- logger.error(f"❌ 流处理错误: {e}")
427
- import traceback
428
- logger.error(traceback.format_exc())
429
-
430
- # 标记token失败(如果不是匿名模式)
431
- if current_token and not settings.ANONYMOUS_MODE:
432
- transformer.mark_token_failure(current_token, e)
433
-
434
- # 检查是否还可以重试
435
- retry_count += 1
436
- last_error = str(e)
437
-
438
- if retry_count > settings.MAX_RETRIES:
439
- # 达到最大重试次数,返回错误
440
- logger.error(f"❌ 达到最大重试次数 ({settings.MAX_RETRIES}),流处理失败")
441
- error_response = {
442
- "error": {
443
- "message": f"Stream processing failed after {settings.MAX_RETRIES} retries: {last_error}",
444
- "type": "stream_error"
445
- }
446
- }
447
- yield f"data: {json.dumps(error_response)}\n\n"
448
- yield "data: [DONE]\n\n"
449
- return
450
-
451
- # 返回流式响应
452
- logger.info("🚀 启动 SSE 流式响应")
453
 
454
- # 创建一个包装的生成器来追踪数据流
455
- async def logged_stream():
456
- chunk_count = 0
457
- try:
458
- logger.debug("📤 开始向客户端流式传输数据...")
459
- async for chunk in stream_response():
460
- chunk_count += 1
461
- logger.debug(f"📤 发送块[{chunk_count}]: {chunk[:1000]}..." if len(chunk) > 1000 else f" 📤 发送块[{chunk_count}]: {chunk}")
462
- yield chunk
463
- logger.info(f"✅ 流式传输完成,共发送 {chunk_count} 个数据块")
464
- except Exception as e:
465
- logger.error(f"❌ 流式传输中断: {e}")
466
- raise
467
 
468
- return StreamingResponse(
469
- logged_stream(),
470
- media_type="text/event-stream",
471
- headers={
472
- "Cache-Control": "no-cache",
473
- "Connection": "keep-alive",
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
474
  },
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
475
  )
476
-
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
477
  except HTTPException:
478
  raise
479
  except Exception as e:
480
- logger.error(f"处理请求时发生错误: {str(e)}")
481
  import traceback
482
-
483
- logger.error(f" 错误堆栈: {traceback.format_exc()}")
484
- raise HTTPException(status_code=500, detail=f"Internal server error: {str(e)}")
485
-
486
-
487
- @router.get("/v1/token-pool/status")
488
- async def get_token_pool_status():
489
- """获取token池状态信息"""
490
- try:
491
- token_pool = get_token_pool()
492
- if not token_pool:
493
- return {
494
- "status": "disabled",
495
- "message": "Token池未初始化,当前仅使用匿名模式",
496
- "anonymous_mode": settings.ANONYMOUS_MODE,
497
- "auth_tokens_file": settings.AUTH_TOKENS_FILE,
498
- "auth_tokens_configured": len(settings.auth_token_list) > 0
499
- }
500
-
501
- pool_status = token_pool.get_pool_status()
502
- return {
503
- "status": "active",
504
- "pool_info": pool_status,
505
- "config": {
506
- "anonymous_mode": settings.ANONYMOUS_MODE,
507
- "failure_threshold": settings.TOKEN_FAILURE_THRESHOLD,
508
- "recovery_timeout": settings.TOKEN_RECOVERY_TIMEOUT,
509
- "health_check_interval": settings.TOKEN_HEALTH_CHECK_INTERVAL
510
- }
511
- }
512
- except Exception as e:
513
- logger.error(f"获取token池状态失败: {e}")
514
- raise HTTPException(status_code=500, detail=f"Failed to get token pool status: {str(e)}")
515
-
516
-
517
- @router.post("/v1/token-pool/health-check")
518
- async def trigger_health_check():
519
- """手动触发token池健康检查"""
520
- try:
521
- token_pool = get_token_pool()
522
- if not token_pool:
523
- raise HTTPException(status_code=404, detail="Token池未初始化")
524
-
525
- # 记录开始时间
526
- import time
527
- start_time = time.time()
528
-
529
- logger.info("🔍 API触发Token池健康检查...")
530
- await token_pool.health_check_all()
531
-
532
- # 计算耗时
533
- duration = time.time() - start_time
534
-
535
- pool_status = token_pool.get_pool_status()
536
-
537
- # 统计健康检查结果 - 基于实际的健康状态
538
- total_tokens = pool_status['total_tokens']
539
- healthy_tokens = sum(1 for token_info in pool_status['tokens'] if token_info['is_healthy'])
540
- unhealthy_tokens = total_tokens - healthy_tokens
541
-
542
- # 构建响应
543
- response = {
544
- "status": "completed",
545
- "message": f"健康检查已完成,耗时 {duration:.2f} 秒",
546
- "summary": {
547
- "total_tokens": total_tokens,
548
- "healthy_tokens": healthy_tokens,
549
- "unhealthy_tokens": unhealthy_tokens,
550
- "health_rate": f"{(healthy_tokens/total_tokens*100):.1f}%" if total_tokens > 0 else "0%",
551
- "duration_seconds": round(duration, 2)
552
- },
553
- "pool_info": pool_status
554
- }
555
-
556
- # 添加建议
557
- if unhealthy_tokens > 0:
558
- response["recommendations"] = []
559
- if unhealthy_tokens == total_tokens:
560
- response["recommendations"].append("所有token都不健康,请检查token配置和网络连接")
561
- else:
562
- response["recommendations"].append(f"有 {unhealthy_tokens} 个token不健康,建议检查这些token的有效性")
563
-
564
- logger.info(f"✅ API健康检查完成: {healthy_tokens}/{total_tokens} 个token健康")
565
- return response
566
- except Exception as e:
567
- logger.error(f"健康检查失败: {e}")
568
- raise HTTPException(status_code=500, detail=f"Health check failed: {str(e)}")
569
-
570
-
571
- @router.post("/v1/token-pool/update")
572
- async def update_token_pool(tokens: List[str]):
573
- """动态更新token池"""
574
- try:
575
- from app.utils.token_pool import update_token_pool
576
-
577
- # 过滤空token
578
- valid_tokens = [token.strip() for token in tokens if token.strip()]
579
- if not valid_tokens:
580
- raise HTTPException(status_code=400, detail="至少需要提供一个有效的token")
581
-
582
- update_token_pool(valid_tokens)
583
-
584
- token_pool = get_token_pool()
585
- pool_status = token_pool.get_pool_status() if token_pool else None
586
-
587
- return {
588
- "status": "updated",
589
- "message": f"Token池已更新,共 {len(valid_tokens)} 个token",
590
- "pool_info": pool_status
591
- }
592
- except Exception as e:
593
- logger.error(f"更新token池失败: {e}")
594
- raise HTTPException(status_code=500, detail=f"Failed to update token pool: {str(e)}")
 
+ """
+ OpenAI API endpoints
+ """

  import time

  from datetime import datetime
+ from typing import List
  from fastapi import APIRouter, Header, HTTPException
  from fastapi.responses import StreamingResponse

  from app.core.config import settings
+ from app.models.schemas import (
+     OpenAIRequest, Message, UpstreamRequest, ModelItem,
+     ModelsResponse, Model
+ )
+ from app.utils.helpers import debug_log, generate_request_ids, get_auth_token
+ from app.utils.tools import process_messages_with_tools, content_to_string
+ from app.core.response_handlers import StreamResponseHandler, NonStreamResponseHandler

  router = APIRouter()


  @router.get("/v1/models")
  async def list_models():
      current_time = int(time.time())
      response = ModelsResponse(
          data=[
+             Model(
+                 id=settings.PRIMARY_MODEL,
+                 created=current_time,
+                 owned_by="z.ai"
+             ),
+             Model(
+                 id=settings.THINKING_MODEL,
+                 created=current_time,
+                 owned_by="z.ai"
+             ),
+             Model(
+                 id=settings.SEARCH_MODEL,
+                 created=current_time,
+                 owned_by="z.ai"
+             ),
+             Model(
+                 id=settings.AIR_MODEL,
+                 created=current_time,
+                 owned_by="z.ai"
+             ),
          ]
      )
      return response


  @router.post("/v1/chat/completions")
+ async def chat_completions(
+     request: OpenAIRequest,
+     authorization: str = Header(...)
+ ):
+     """Handle chat completion requests"""
+     debug_log("Received chat completions request")
+
      try:
          # Validate API key (skip if SKIP_AUTH_TOKEN is enabled)
          if not settings.SKIP_AUTH_TOKEN:
              if not authorization.startswith("Bearer "):
+                 debug_log("Missing or invalid Authorization header")
                  raise HTTPException(status_code=401, detail="Missing or invalid Authorization header")
+
              api_key = authorization[7:]
              if api_key != settings.AUTH_TOKEN:
+                 debug_log(f"Invalid API key: {api_key}")
                  raise HTTPException(status_code=401, detail="Invalid API key")
+
+             debug_log(f"API key validated, AUTH_TOKEN={api_key[:8]}......")
+         else:
+             debug_log("SKIP_AUTH_TOKEN enabled, skipping API key validation")
+         debug_log(f"Request parsed - model: {request.model}, stream: {request.stream}, messages: {len(request.messages)}")

+         # Generate IDs
+         chat_id, msg_id = generate_request_ids()

+         # Process messages with tools
+         processed_messages = process_messages_with_tools(
+             [m.model_dump() for m in request.messages],
+             request.tools,
+             request.tool_choice
+         )

+         # Convert back to Message objects
+         upstream_messages: List[Message] = []
+         for msg in processed_messages:
+             content = content_to_string(msg.get("content"))
+
+             upstream_messages.append(Message(
+                 role=msg["role"],
+                 content=content,
+                 reasoning_content=msg.get("reasoning_content")
+             ))
+
+         # Determine model features
+         is_thinking = request.model == settings.THINKING_MODEL
+         is_search = request.model == settings.SEARCH_MODEL
+         is_air = request.model == settings.AIR_MODEL
+         search_mcp = "deep-web-search" if is_search else ""
+
+         # Determine upstream model ID based on requested model
+         if is_air:
+             upstream_model_id = "0727-106B-API"  # AIR model upstream ID
+             upstream_model_name = "GLM-4.5-Air"
+         else:
+             upstream_model_id = "0727-360B-API"  # Default upstream model ID
+             upstream_model_name = "GLM-4.5"
+
+         # Build upstream request
+         upstream_req = UpstreamRequest(
+             stream=True,  # Always use streaming from upstream
+             chat_id=chat_id,
+             id=msg_id,
+             model=upstream_model_id,  # Dynamic upstream model ID
+             messages=upstream_messages,
+             params={},
+             features={
+                 "enable_thinking": is_thinking,
+                 "web_search": is_search,
+                 "auto_web_search": is_search,
              },
+             background_tasks={
+                 "title_generation": False,
+                 "tags_generation": False,
+             },
+             mcp_servers=[search_mcp] if search_mcp else [],
+             model_item=ModelItem(
+                 id=upstream_model_id,
+                 name=upstream_model_name,
+                 owned_by="openai"
+             ),
+             tool_servers=[],
+             variables={
+                 "{{USER_NAME}}": "User",
+                 "{{USER_LOCATION}}": "Unknown",
+                 "{{CURRENT_DATETIME}}": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
+             }
          )
+
+         # Get authentication token
+         auth_token = get_auth_token()
+
+         # Check if tools are enabled and present
+         has_tools = (settings.TOOL_SUPPORT and
+                      request.tools and
+                      len(request.tools) > 0 and
+                      request.tool_choice != "none")
+
+         # Handle response based on stream flag
+         if request.stream:
+             handler = StreamResponseHandler(upstream_req, chat_id, auth_token, has_tools)
+             return StreamingResponse(
+                 handler.handle(),
+                 media_type="text/event-stream",
+                 headers={
+                     "Cache-Control": "no-cache",
+                     "Connection": "keep-alive",
+                 }
+             )
+         else:
+             handler = NonStreamResponseHandler(upstream_req, chat_id, auth_token, has_tools)
+             return handler.handle()
+
      except HTTPException:
          raise
      except Exception as e:
+         debug_log(f"Error while handling the request: {str(e)}")
          import traceback
+         debug_log(f"Traceback: {traceback.format_exc()}")
+         raise HTTPException(status_code=500, detail=f"Internal server error: {str(e)}")
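For reference, the streaming branch above emits standard OpenAI-style SSE: `data: {json}` lines carrying `chat.completion.chunk` payloads, terminated by `data: [DONE]`. A minimal client-side sketch of how such a stream could be reassembled (the sample payloads below are illustrative, not captured server output):

```python
import json

def collect_stream(lines):
    """Concatenate delta.content from OpenAI-style SSE 'data:' lines."""
    parts = []
    for line in lines:
        if not line.startswith("data:"):
            continue
        payload = line[5:].strip()
        if not payload or payload == "[DONE]":
            continue
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)

sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}, "index": 0}]}',
    'data: {"choices": [{"delta": {"content": "Hello"}, "index": 0}]}',
    'data: {"choices": [{"delta": {"content": ", world"}, "index": 0}]}',
    "data: [DONE]",
]
print(collect_stream(sample))  # → Hello, world
```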
app/core/response_handlers.py ADDED
@@ -0,0 +1,333 @@
+ """
+ Response handlers for streaming and non-streaming responses
+ """
+
+ import json
+ import time
+ from typing import Generator, Optional
+ import requests
+ from fastapi import HTTPException
+ from fastapi.responses import JSONResponse, StreamingResponse
+
+ from app.core.config import settings
+ from app.models.schemas import (
+     Message, Delta, Choice, Usage, OpenAIResponse,
+     UpstreamRequest, UpstreamData, UpstreamError, ModelItem
+ )
+ from app.utils.helpers import debug_log, call_upstream_api, transform_thinking_content
+ from app.utils.sse_parser import SSEParser
+ from app.utils.tools import extract_tool_invocations, remove_tool_json_content
+
+
+ def create_openai_response_chunk(
+     model: str,
+     delta: Optional[Delta] = None,
+     finish_reason: Optional[str] = None
+ ) -> OpenAIResponse:
+     """Create OpenAI response chunk for streaming"""
+     return OpenAIResponse(
+         id=f"chatcmpl-{int(time.time())}",
+         object="chat.completion.chunk",
+         created=int(time.time()),
+         model=model,
+         choices=[Choice(
+             index=0,
+             delta=delta or Delta(),
+             finish_reason=finish_reason
+         )]
+     )
+
+
+ def handle_upstream_error(error: UpstreamError) -> Generator[str, None, None]:
+     """Handle upstream error response"""
+     debug_log(f"Upstream error: code={error.code}, detail={error.detail}")
+
+     # Send end chunk
+     end_chunk = create_openai_response_chunk(
+         model=settings.PRIMARY_MODEL,
+         finish_reason="stop"
+     )
+     yield f"data: {end_chunk.model_dump_json()}\n\n"
+     yield "data: [DONE]\n\n"
+
+
+ class ResponseHandler:
+     """Base class for response handling"""
+
+     def __init__(self, upstream_req: UpstreamRequest, chat_id: str, auth_token: str):
+         self.upstream_req = upstream_req
+         self.chat_id = chat_id
+         self.auth_token = auth_token
+
+     def _call_upstream(self) -> requests.Response:
+         """Call upstream API with error handling"""
+         try:
+             return call_upstream_api(self.upstream_req, self.chat_id, self.auth_token)
+         except Exception as e:
+             debug_log(f"Upstream call failed: {e}")
+             raise
+
+     def _handle_upstream_error(self, response: requests.Response) -> None:
+         """Handle upstream error response"""
+         debug_log(f"Upstream returned error status: {response.status_code}")
+         if settings.DEBUG_LOGGING:
+             debug_log(f"Upstream error response: {response.text}")
+
+
+ class StreamResponseHandler(ResponseHandler):
+     """Handler for streaming responses"""
+
+     def __init__(self, upstream_req: UpstreamRequest, chat_id: str, auth_token: str, has_tools: bool = False):
+         super().__init__(upstream_req, chat_id, auth_token)
+         self.has_tools = has_tools
+         self.buffered_content = ""
+         self.tool_calls = None
+
+     def handle(self) -> Generator[str, None, None]:
+         """Handle streaming response"""
+         debug_log(f"Start handling streaming response (chat_id={self.chat_id})")
+
+         try:
+             response = self._call_upstream()
+         except Exception:
+             yield "data: {\"error\": \"Failed to call upstream\"}\n\n"
+             return
+
+         if response.status_code != 200:
+             self._handle_upstream_error(response)
+             yield "data: {\"error\": \"Upstream error\"}\n\n"
+             return
+
+         # Send initial role chunk
+         first_chunk = create_openai_response_chunk(
+             model=settings.PRIMARY_MODEL,
+             delta=Delta(role="assistant")
+         )
+         yield f"data: {first_chunk.model_dump_json()}\n\n"
+
+         # Process stream
+         debug_log("Start reading upstream SSE stream")
+         sent_initial_answer = False
+
+         with SSEParser(response, debug_mode=settings.DEBUG_LOGGING) as parser:
+             for event in parser.iter_json_data(UpstreamData):
+                 upstream_data = event['data']
+
+                 # Check for errors
+                 if self._has_error(upstream_data):
+                     error = self._get_error(upstream_data)
+                     yield from handle_upstream_error(error)
+                     break
+
+                 debug_log(f"Parsed - type: {upstream_data.type}, phase: {upstream_data.data.phase}, "
+                           f"content length: {len(upstream_data.data.delta_content)}, done: {upstream_data.data.done}")
+
+                 # Process content
+                 yield from self._process_content(upstream_data, sent_initial_answer)
+
+                 # Check if done
+                 if upstream_data.data.done or upstream_data.data.phase == "done":
+                     debug_log("Stream end signal detected")
+                     yield from self._send_end_chunk()
+                     break
+
+     def _has_error(self, upstream_data: UpstreamData) -> bool:
+         """Check if upstream data contains error"""
+         return bool(
+             upstream_data.error or
+             upstream_data.data.error or
+             (upstream_data.data.inner and upstream_data.data.inner.error)
+         )
+
+     def _get_error(self, upstream_data: UpstreamData) -> UpstreamError:
+         """Get error from upstream data"""
+         return (
+             upstream_data.error or
+             upstream_data.data.error or
+             (upstream_data.data.inner.error if upstream_data.data.inner else None)
+         )
+
+     def _process_content(
+         self,
+         upstream_data: UpstreamData,
+         sent_initial_answer: bool
+     ) -> Generator[str, None, None]:
+         """Process content from upstream data"""
+         content = upstream_data.data.delta_content or upstream_data.data.edit_content
+
+         if not content:
+             return
+
+         # Transform thinking content
+         if upstream_data.data.phase == "thinking":
+             content = transform_thinking_content(content)
+
+         # Buffer content if tools are enabled
+         if self.has_tools:
+             self.buffered_content += content
+         else:
+             # Handle initial answer content
+             if (not sent_initial_answer and
+                     upstream_data.data.edit_content and
+                     upstream_data.data.phase == "answer"):
+
+                 content = self._extract_edit_content(upstream_data.data.edit_content)
+                 if content:
+                     debug_log(f"Sending regular content: {content}")
+                     chunk = create_openai_response_chunk(
+                         model=settings.PRIMARY_MODEL,
+                         delta=Delta(content=content)
+                     )
+                     yield f"data: {chunk.model_dump_json()}\n\n"
+                     sent_initial_answer = True
+
+             # Handle delta content
+             if upstream_data.data.delta_content:
+                 if content:
+                     if upstream_data.data.phase == "thinking":
+                         debug_log(f"Sending thinking content: {content}")
+                         chunk = create_openai_response_chunk(
+                             model=settings.PRIMARY_MODEL,
+                             delta=Delta(reasoning_content=content)
+                         )
+                     else:
+                         debug_log(f"Sending regular content: {content}")
+                         chunk = create_openai_response_chunk(
+                             model=settings.PRIMARY_MODEL,
+                             delta=Delta(content=content)
+                         )
+                     yield f"data: {chunk.model_dump_json()}\n\n"
+
+     def _extract_edit_content(self, edit_content: str) -> str:
+         """Extract content from edit_content field"""
+         parts = edit_content.split("</details>")
+         return parts[1] if len(parts) > 1 else ""
+
+     def _send_end_chunk(self) -> Generator[str, None, None]:
+         """Send end chunk and DONE signal"""
+         finish_reason = "stop"
+
+         if self.has_tools:
+             # Try to extract tool calls from buffered content
+             self.tool_calls = extract_tool_invocations(self.buffered_content)
+
+             if self.tool_calls:
+                 # Send tool calls with proper format
+                 for i, tc in enumerate(self.tool_calls):
+                     tool_call_delta = {
+                         "index": i,
+                         "id": tc.get("id"),
+                         "type": tc.get("type", "function"),
+                         "function": tc.get("function", {}),
+                     }
+
+                     out_chunk = create_openai_response_chunk(
+                         model=settings.PRIMARY_MODEL,
+                         delta=Delta(tool_calls=[tool_call_delta])
+                     )
+                     yield f"data: {out_chunk.model_dump_json()}\n\n"
+
+                 finish_reason = "tool_calls"
+             else:
+                 # Send regular content
+                 trimmed_content = remove_tool_json_content(self.buffered_content)
+                 if trimmed_content:
+                     content_chunk = create_openai_response_chunk(
+                         model=settings.PRIMARY_MODEL,
+                         delta=Delta(content=trimmed_content)
+                     )
+                     yield f"data: {content_chunk.model_dump_json()}\n\n"
+
+         # Send final chunk
+         end_chunk = create_openai_response_chunk(
+             model=settings.PRIMARY_MODEL,
+             finish_reason=finish_reason
+         )
+         yield f"data: {end_chunk.model_dump_json()}\n\n"
+         yield "data: [DONE]\n\n"
+         debug_log("Streaming response finished")
+
+
+ class NonStreamResponseHandler(ResponseHandler):
+     """Handler for non-streaming responses"""
+
+     def __init__(self, upstream_req: UpstreamRequest, chat_id: str, auth_token: str, has_tools: bool = False):
+         super().__init__(upstream_req, chat_id, auth_token)
+         self.has_tools = has_tools
+
+     def handle(self) -> JSONResponse:
+         """Handle non-streaming response"""
+         debug_log(f"Start handling non-streaming response (chat_id={self.chat_id})")
+
+         try:
+             response = self._call_upstream()
+         except Exception as e:
+             debug_log(f"Upstream call failed: {e}")
+             raise HTTPException(status_code=502, detail="Failed to call upstream")
+
+         if response.status_code != 200:
+             self._handle_upstream_error(response)
+             raise HTTPException(status_code=502, detail="Upstream error")
+
+         # Collect full response
+         full_content = []
+         debug_log("Start collecting the full response content")
+
+         with SSEParser(response, debug_mode=settings.DEBUG_LOGGING) as parser:
+             for event in parser.iter_json_data(UpstreamData):
+                 upstream_data = event['data']
+
+                 if upstream_data.data.delta_content:
+                     content = upstream_data.data.delta_content
+
+                     if upstream_data.data.phase == "thinking":
+                         content = transform_thinking_content(content)
+
+                     if content:
+                         full_content.append(content)
+
+                 if upstream_data.data.done or upstream_data.data.phase == "done":
+                     debug_log("Completion signal detected, stopping collection")
+                     break
+
+         final_content = "".join(full_content)
+         debug_log(f"Content collection finished, final length: {len(final_content)}")
+
+         # Handle tool calls for non-streaming
+         tool_calls = None
+         finish_reason = "stop"
+         message_content = final_content
+
+         if self.has_tools:
+             tool_calls = extract_tool_invocations(final_content)
+             if tool_calls:
+                 # Content must be null when tool_calls are present (OpenAI spec)
+                 message_content = None
+                 finish_reason = "tool_calls"
+                 debug_log(f"Extracted tool calls: {json.dumps(tool_calls, ensure_ascii=False)}")
+             else:
+                 # Remove tool JSON from content
+                 message_content = remove_tool_json_content(final_content)
+                 if not message_content:
+                     message_content = final_content  # keep the original content if cleanup left it empty
+
+         # Build response
+         response_data = OpenAIResponse(
+             id=f"chatcmpl-{int(time.time())}",
+             object="chat.completion",
+             created=int(time.time()),
+             model=settings.PRIMARY_MODEL,
+             choices=[Choice(
+                 index=0,
+                 message=Message(
+                     role="assistant",
+                     content=message_content,
+                     tool_calls=tool_calls
+                 ),
+                 finish_reason=finish_reason
+             )],
+             usage=Usage()
+         )
+
+         debug_log("Non-streaming response sent")
+         return JSONResponse(content=response_data.model_dump(exclude_none=True))
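The `_extract_edit_content` helper above keeps only the text following the first `</details>` block (the upstream wraps thinking output in a `<details>` element before the answer starts). A standalone restatement of that splitting rule:

```python
def extract_edit_content(edit_content: str) -> str:
    # Mirrors the handler's rule: keep the text after the first </details>
    # closing tag; return "" when no closing tag is present.
    parts = edit_content.split("</details>")
    return parts[1] if len(parts) > 1 else ""

print(extract_edit_content("<details>thinking...</details>\nThe answer."))  # → "\nThe answer."
print(extract_edit_content("no details block"))  # → ""
```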
app/core/zai_transformer.py DELETED
@@ -1,730 +0,0 @@
1
- #!/usr/bin/env python
2
- # -*- coding: utf-8 -*-
3
-
4
- import json
5
- import time
6
- import uuid
7
- import random
8
- from datetime import datetime
9
- from typing import Dict, List, Any, Optional, Generator, AsyncGenerator
10
- import httpx
11
- import asyncio
12
- from fake_useragent import UserAgent
13
-
14
- from app.core.config import settings
15
- from app.utils.logger import get_logger
16
- from app.utils.token_pool import get_token_pool, initialize_token_pool
17
-
18
- logger = get_logger()
19
-
20
- # 全局 UserAgent 实例(单例模式)
21
- _user_agent_instance = None
22
-
23
-
24
- def get_user_agent_instance() -> UserAgent:
25
- """获取或创建 UserAgent 实例(单例模式)"""
26
- global _user_agent_instance
27
- if _user_agent_instance is None:
28
- _user_agent_instance = UserAgent()
29
- return _user_agent_instance
30
-
31
-
32
- def get_dynamic_headers(chat_id: str = "") -> Dict[str, str]:
33
- """生成动态浏览器headers,包含随机User-Agent"""
34
- ua = get_user_agent_instance()
35
-
36
- # 随机选择浏览器类型,偏向Chrome和Edge
37
- browser_choices = ["chrome", "chrome", "chrome", "edge", "edge", "firefox", "safari"]
38
- browser_type = random.choice(browser_choices)
39
-
40
- try:
41
- if browser_type == "chrome":
42
- user_agent = ua.chrome
43
- elif browser_type == "edge":
44
- user_agent = ua.edge
45
- elif browser_type == "firefox":
46
- user_agent = ua.firefox
47
- elif browser_type == "safari":
48
- user_agent = ua.safari
49
- else:
50
- user_agent = ua.random
51
- except:
52
- user_agent = ua.random
53
-
54
- # 提取版本信息
55
- chrome_version = "139"
56
- edge_version = "139"
57
-
58
- if "Chrome/" in user_agent:
59
- try:
60
- chrome_version = user_agent.split("Chrome/")[1].split(".")[0]
61
- except:
62
- pass
63
-
64
- if "Edg/" in user_agent:
65
- try:
66
- edge_version = user_agent.split("Edg/")[1].split(".")[0]
67
- sec_ch_ua = f'"Microsoft Edge";v="{edge_version}", "Chromium";v="{chrome_version}", "Not_A Brand";v="24"'
68
- except:
69
- sec_ch_ua = f'"Not_A Brand";v="8", "Chromium";v="{chrome_version}", "Google Chrome";v="{chrome_version}"'
70
- elif "Firefox/" in user_agent:
71
- sec_ch_ua = None # Firefox不使用sec-ch-ua
72
- else:
73
- sec_ch_ua = f'"Not_A Brand";v="8", "Chromium";v="{chrome_version}", "Google Chrome";v="{chrome_version}"'
74
-
75
- headers = {
76
- "Content-Type": "application/json",
77
- "Accept": "application/json, text/event-stream",
78
- "User-Agent": user_agent,
79
- "Accept-Language": "zh-CN,zh;q=0.9,en;q=0.8",
80
- "X-FE-Version": "prod-fe-1.0.79",
81
- "Origin": "https://chat.z.ai",
82
- }
83
-
84
- if sec_ch_ua:
85
- headers["sec-ch-ua"] = sec_ch_ua
86
- headers["sec-ch-ua-mobile"] = "?0"
87
- headers["sec-ch-ua-platform"] = '"Windows"'
88
-
89
- if chat_id:
90
- headers["Referer"] = f"https://chat.z.ai/c/{chat_id}"
91
- else:
92
- headers["Referer"] = "https://chat.z.ai/"
93
-
94
- return headers
95
-
96
-
97
- def generate_uuid() -> str:
98
- """生成UUID v4"""
99
- return str(uuid.uuid4())
100
-
101
-
102
- def get_auth_token_sync() -> str:
103
- """同步获取认证令牌(用于非异步场景)"""
104
- if settings.ANONYMOUS_MODE:
105
- try:
106
- headers = get_dynamic_headers()
107
- with httpx.Client() as client:
108
- response = client.get("https://chat.z.ai/api/v1/auths/", headers=headers, timeout=10.0)
109
- if response.status_code == 200:
110
- data = response.json()
111
- token = data.get("token", "")
112
- if token:
113
- logger.debug(f"获取访客令牌成功: {token[:20]}...")
114
- return token
115
- except Exception as e:
116
- logger.warning(f"获取访客令牌失败: {e}")
117
-
118
- # 使用token池获取备份令牌
119
- token_pool = get_token_pool()
120
- if token_pool:
121
- token = token_pool.get_next_token()
122
- if token:
123
- logger.debug(f"从token池获取令牌: {token[:20]}...")
124
- return token
125
-
126
- # 没有可用的token
127
- logger.warning("⚠️ 没有可用的备份token")
128
- return ""
129
-
130
-
131
- class ZAITransformer:
132
- """ZAI转换器类"""
133
-
134
- def __init__(self):
135
- """初始化转换器"""
136
- self.name = "zai"
137
- self.base_url = "https://chat.z.ai"
138
- self.api_url = settings.API_ENDPOINT
139
- self.auth_url = f"{self.base_url}/api/v1/auths/"
140
-
141
- # 模型映射
142
- self.model_mapping = {
143
- settings.PRIMARY_MODEL: "0727-360B-API", # GLM-4.5
144
- settings.THINKING_MODEL: "0727-360B-API", # GLM-4.5-Thinking
145
- settings.SEARCH_MODEL: "0727-360B-API", # GLM-4.5-Search
146
- settings.AIR_MODEL: "0727-106B-API", # GLM-4.5-Air
147
- }
148
-
149
- async def get_token(self) -> str:
150
- """异步获取认证令牌"""
151
- if settings.ANONYMOUS_MODE:
152
- try:
153
-
154
- headers = get_dynamic_headers()
155
- async with httpx.AsyncClient() as client:
156
- response = await client.get(self.auth_url, headers=headers, timeout=10.0)
157
- if response.status_code == 200:
158
- data = response.json()
159
- token = data.get("token", "")
160
- if token:
161
- logger.debug(f"获取访客令牌成功: {token[:20]}...")
162
- return token
163
- except Exception as e:
164
- logger.warning(f"异步获取访客令牌失败: {e}")
165
-
166
- # 使用token池获取备份令牌
167
- token_pool = get_token_pool()
168
- if token_pool:
169
- token = token_pool.get_next_token()
170
- if token:
171
- logger.debug(f"从token池获取令牌: {token[:20]}...")
172
- return token
173
-
174
- # 没有可用的token
175
- logger.warning("⚠️ 没有可用的备份token")
176
- return ""
177
-
178
- def mark_token_success(self, token: str):
179
- """标记token使用成功"""
180
- token_pool = get_token_pool()
181
- if token_pool:
182
- token_pool.mark_token_success(token)
183
-
184
- def mark_token_failure(self, token: str, error: Exception = None):
185
- """标记token使用失败"""
186
- token_pool = get_token_pool()
187
- if token_pool:
188
- token_pool.mark_token_failure(token, error)
189
-
190
- async def transform_request_in(self, request: Dict[str, Any]) -> Dict[str, Any]:
191
- """
192
- 转换OpenAI请求为z.ai格式
193
- 整合现有功能:模型映射、MCP服务器等
194
- """
195
- logger.info(f"🔄 开始转换 OpenAI 请求到 Z.AI 格式: {request.get('model', settings.PRIMARY_MODEL)} -> Z.AI")
196
-
197
- # 获取认证令牌
198
- token = await self.get_token()
199
- logger.debug(f" 使用令牌: {token[:20] if token else 'None'}...")
200
-
201
- # 检查token是否有效
202
- if not token:
203
- logger.error("❌ 无法获取有效的认证令牌")
204
- raise Exception("无法获取有效的认证令牌,请检查匿名模式配置或token池配置")
205
-
206
- # 确定请求的模型特性
207
- requested_model = request.get("model", settings.PRIMARY_MODEL)
208
- is_thinking = requested_model == settings.THINKING_MODEL or request.get("reasoning", False)
209
- is_search = requested_model == settings.SEARCH_MODEL
210
- is_air = requested_model == settings.AIR_MODEL
211
-
212
- # 获取上游模型ID(使用模型映射)
213
- upstream_model_id = self.model_mapping.get(requested_model, "0727-360B-API")
214
- logger.debug(f" 模型映射: {requested_model} -> {upstream_model_id}")
215
- logger.debug(f" 模型特性检测: is_search={is_search}, is_thinking={is_thinking}, is_air={is_air}")
216
- logger.debug(f" SEARCH_MODEL配置: {settings.SEARCH_MODEL}")
217
-
218
- # 处理消息列表
219
- logger.debug(f" 开始处理 {len(request.get('messages', []))} 条消息")
220
- messages = []
221
- for idx, orig_msg in enumerate(request.get("messages", [])):
222
- msg = orig_msg.copy()
223
-
224
- # 处理system角色转换
225
- if msg.get("role") == "system":
226
-
227
- msg["role"] = "user"
228
- content = msg.get("content")
229
-
230
- if isinstance(content, list):
231
- msg["content"] = [
232
- {"type": "text", "text": "This is a system command, you must enforce compliance."}
233
- ] + content
234
- elif isinstance(content, str):
235
- msg["content"] = f"This is a system command, you must enforce compliance.{content}"
236
-
237
- # 处理user角色的图片内容
238
- elif msg.get("role") == "user":
239
- content = msg.get("content")
240
- if isinstance(content, list):
241
- new_content = []
242
- for part_idx, part in enumerate(content):
243
- # 处理图片URL(支持base64和http URL)
244
- if (
245
- part.get("type") == "image_url"
246
- and part.get("image_url", {}).get("url")
247
- and isinstance(part["image_url"]["url"], str)
248
- ):
249
- logger.debug(f" 消息[{idx}]内容[{part_idx}]: 检测到图片URL")
250
- # 直接传递图片内容
251
- new_content.append(part)
252
- else:
253
- new_content.append(part)
254
- msg["content"] = new_content
255
-
256
- # 处理assistant消息中的reasoning_content
257
- elif msg.get("role") == "assistant" and msg.get("reasoning_content"):
258
-
259
- # 如果有reasoning_content,保留它
260
- pass
261
-
262
- messages.append(msg)
263
-
264
- # 构建MCP服务器列表
265
- mcp_servers = []
266
- if is_search:
267
- mcp_servers.append("deep-web-search")
268
- logger.info(f"🔍 检测到搜索模型,添加 deep-web-search MCP 服务器")
269
- else:
270
- logger.debug(f" 非搜索模型,不添加 MCP 服务器")
271
-
272
- logger.debug(f" MCP服务器列表: {mcp_servers}")
273
-
274
- # 构建上游请求体
275
- chat_id = generate_uuid()
276
-
277
- body = {
278
- "stream": True, # 总是使用流式
279
- "model": upstream_model_id, # 使用映射后的模型ID
280
- "messages": messages,
281
- "params": {},
282
- "features": {
283
- "image_generation": False,
284
- "web_search": is_search,
285
- "auto_web_search": is_search,
286
- "preview_mode": False,
287
- "flags": [],
288
- "features": [],
289
- "enable_thinking": is_thinking,
290
- },
291
- "background_tasks": {
292
- "title_generation": False,
293
- "tags_generation": False,
294
- },
295
- "mcp_servers": mcp_servers, # 保留MCP服务器支持
296
- "variables": {
297
- "{{USER_NAME}}": "Guest",
298
- "{{USER_LOCATION}}": "Unknown",
299
- "{{CURRENT_DATETIME}}": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
300
- "{{CURRENT_DATE}}": datetime.now().strftime("%Y-%m-%d"),
301
- "{{CURRENT_TIME}}": datetime.now().strftime("%H:%M:%S"),
302
- "{{CURRENT_WEEKDAY}}": datetime.now().strftime("%A"),
303
- "{{CURRENT_TIMEZONE}}": "Asia/Shanghai", # 使用更合适的时区
304
- "{{USER_LANGUAGE}}": "zh-CN",
305
- },
306
- "model_item": {
307
- "id": upstream_model_id,
308
- "name": requested_model,
309
- "owned_by": "z.ai"
310
- },
311
- "chat_id": chat_id,
312
- "id": generate_uuid(),
313
- }
314
-
315
- # 处理工具支持
316
- if settings.TOOL_SUPPORT and not is_thinking and request.get("tools"):
317
- body["tools"] = request["tools"]
318
- logger.info(f"启用工具支持: {len(request['tools'])} 个工具")
319
- else:
320
- body["tools"] = None
321
-
322
- # 构建请求配置
323
- dynamic_headers = get_dynamic_headers(chat_id)
324
-
325
- config = {
326
- "url": self.api_url, # 使用原始URL
327
- "headers": {
328
- **dynamic_headers, # 使用动态生成的headers
329
- "Authorization": f"Bearer {token}",
330
- "Cache-Control": "no-cache",
331
- "Connection": "keep-alive",
332
- "Pragma": "no-cache",
333
- "Sec-Fetch-Dest": "empty",
334
- "Sec-Fetch-Mode": "cors",
335
- "Sec-Fetch-Site": "same-origin",
336
- },
337
- }
338
-
339
- logger.info("✅ 请求转换完成")
340
-
341
- # 记录关键的请求信息用于调试
342
- logger.debug(f" 📋 发送到Z.AI的关键信息:")
343
- logger.debug(f" - 上游模型: {body['model']}")
344
- logger.debug(f" - MCP服务器: {body['mcp_servers']}")
345
- logger.debug(f" - web_search: {body['features']['web_search']}")
346
- logger.debug(f" - auto_web_search: {body['features']['auto_web_search']}")
347
- logger.debug(f" - 消息数量: {len(body['messages'])}")
348
- tools_count = len(body.get('tools') or [])
349
- logger.debug(f" - 工具数量: {tools_count}")
350
-
351
- async def transform_response_out(
352
- self, response_stream: Generator, context: Dict[str, Any]
353
- ) -> AsyncGenerator[str, None]:
354
- """
355
- 转换z.ai响应为OpenAI格式
356
- 支持流式和非流式输出
357
- """
358
- is_stream = context.get("req", {}).get("body", {}).get("stream", True)
359
-
360
- # 初始化结果对象(用于非流式)
361
- result = {
362
- "id": "",
363
- "choices": [
364
- {
365
- "finish_reason": None,
366
- "index": 0,
367
- "message": {
368
- "content": "",
369
- "role": "assistant",
370
- },
371
- }
372
- ],
373
- "created": int(time.time()),
374
- "model": context.get("req", {}).get("body", {}).get("model", ""),
375
- "object": "chat.completion",
376
- "usage": {
377
- "completion_tokens": 0,
378
- "prompt_tokens": 0,
379
- "total_tokens": 0,
380
- },
381
- }
382
-
383
- # 状态变量
384
- current_id = ""
385
- current_model = context.get("req", {}).get("body", {}).get("model", "")
386
- has_tool_call = False
387
- tool_args = ""
388
- tool_id = ""
389
- tool_call_usage = None
390
- content_index = 0
391
- has_thinking = False
392
-
393
- async for line in response_stream:
394
- if not line.strip():
395
- continue
396
-
397
- if line.startswith("data:"):
398
- chunk_str = line[5:].strip()
399
- if not chunk_str:
400
- continue
401
-
402
- try:
403
- chunk = json.loads(chunk_str)
404
-
405
- if chunk.get("type") == "chat:completion":
406
- data = chunk.get("data", {})
407
-
408
- # 保存ID和模型信息
409
- if data.get("id"):
410
- current_id = data["id"]
411
- if data.get("model"):
412
- current_model = data["model"]
413
-
414
- # 处理不同阶段
415
- phase = data.get("phase")
416
-
417
- if phase == "tool_call":
418
- # 处理工具调用
419
- if not has_tool_call:
420
- has_tool_call = True
421
-
422
- if is_stream:
423
- # 发送初始角色
424
- role_chunk = {
425
- "choices": [
426
- {
427
- "delta": {"role": "assistant"},
428
- "finish_reason": None,
429
- "index": 0,
430
- }
431
- ],
432
- "created": int(time.time()),
433
- "id": current_id,
434
- "model": current_model,
435
- "object": "chat.completion.chunk",
436
- }
437
- yield f"data: {json.dumps(role_chunk)}\n\n"
438
-
439
- # 处理工具调用块
440
- tool_call_id = data.get("tool_call", {}).get("id", "")
441
- tool_name = data.get("tool_call", {}).get("name", "")
442
- delta_args = data.get("delta_tool_call", {}).get("arguments", "")
443
-
444
- if tool_call_id and tool_call_id != tool_id:
445
- # 新工具调用
446
- if tool_id and is_stream:
447
- # 关闭前一个工具调用
448
- close_chunk = {
449
- "choices": [
450
- {
451
- "delta": {
452
- "tool_calls": [
453
- {"index": content_index, "function": {"arguments": ""}}
454
- ]
455
- },
456
- "finish_reason": None,
457
- "index": 0,
458
- }
459
- ],
460
- "created": int(time.time()),
461
- "id": current_id,
462
- "model": current_model,
463
- "object": "chat.completion.chunk",
464
- }
465
- yield f"data: {json.dumps(close_chunk)}\n\n"
466
- content_index += 1
467
-
468
- tool_id = tool_call_id
469
- tool_args = ""
470
-
471
- if is_stream:
472
- # 发送新工具调用
473
- new_tool_chunk = {
474
- "choices": [
475
- {
476
- "delta": {
477
- "tool_calls": [
478
- {
479
- "index": content_index,
480
- "id": tool_call_id,
481
- "type": "function",
482
- "function": {"name": tool_name, "arguments": ""},
483
- }
484
- ]
485
- },
486
- "finish_reason": None,
487
- "index": 0,
488
- }
489
- ],
490
- "created": int(time.time()),
491
- "id": current_id,
492
- "model": current_model,
493
- "object": "chat.completion.chunk",
494
- }
495
- yield f"data: {json.dumps(new_tool_chunk)}\n\n"
496
-
497
- # 处理参数增量
498
- if delta_args:
499
- tool_args += delta_args
500
- if is_stream:
501
- args_chunk = {
502
- "choices": [
503
- {
504
- "delta": {
505
- "tool_calls": [
506
- {
507
- "index": content_index,
508
- "function": {"arguments": delta_args},
509
- }
510
- ]
511
- },
512
- "finish_reason": None,
513
- "index": 0,
514
- }
515
- ],
516
- "created": int(time.time()),
517
- "id": current_id,
518
- "model": current_model,
519
- "object": "chat.completion.chunk",
520
- }
521
- yield f"data: {json.dumps(args_chunk)}\n\n"
522
-
523
- elif phase == "thinking":
524
- # 处理思考内容
525
- if not has_thinking:
526
- has_thinking = True
527
- # 初始化thinking字段
528
- if not is_stream:
529
- result["choices"][0]["message"]["thinking"] = {"content": ""}
530
-
531
- if is_stream:
532
- # 发送初始角色
533
- role_chunk = {
534
- "choices": [
535
- {
536
- "delta": {"role": "assistant"},
537
- "finish_reason": None,
538
- "index": 0,
539
- }
540
- ],
541
- "created": int(time.time()),
542
- "id": current_id,
543
- "model": current_model,
544
- "object": "chat.completion.chunk",
545
- }
546
- yield f"data: {json.dumps(role_chunk)}\n\n"
547
-
548
- delta_content = data.get("delta_content", "")
549
- if delta_content:
550
- # 处理思考内容格式
551
- if delta_content.startswith("<details"):
552
- content = (
553
- delta_content.split("</summary>\n>")[-1].strip()
554
- if "</summary>\n>" in delta_content
555
- else delta_content
556
- )
557
- else:
558
- content = delta_content
559
-
560
- if is_stream:
561
- thinking_chunk = {
562
- "choices": [
563
- {
564
- "delta": {"thinking": {"content": content}},
565
- "finish_reason": None,
566
- "index": 0,
567
- }
568
- ],
569
- "created": int(time.time()),
570
- "id": current_id,
571
- "model": current_model,
572
- "object": "chat.completion.chunk",
573
- }
574
- yield f"data: {json.dumps(thinking_chunk)}\n\n"
575
- else:
576
- result["choices"][0]["message"]["thinking"]["content"] += content
577
-
578
- elif phase == "answer":
579
- # 处理答案内容
580
- edit_content = data.get("edit_content", "")
581
- delta_content = data.get("delta_content", "")
582
-
583
- # 处理思考结束和答案开始
584
- if edit_content and "</details>\n" in edit_content:
585
- if has_thinking:
586
- signature = str(int(time.time() * 1000))
587
-
588
- if is_stream:
589
- # 发送思考签名
590
- sig_chunk = {
591
- "choices": [
592
- {
593
- "delta": {
594
- "role": "assistant",
595
- "thinking": {"content": "", "signature": signature},
596
- },
597
- "finish_reason": None,
598
- "index": 0,
599
- }
600
- ],
601
- "created": int(time.time()),
602
- "id": current_id,
603
- "model": current_model,
604
- "object": "chat.completion.chunk",
605
- }
606
- yield f"data: {json.dumps(sig_chunk)}\n\n"
607
- content_index += 1
608
- else:
609
- result["choices"][0]["message"]["thinking"]["signature"] = signature
610
-
611
- # 提取答案内容
612
- content_after = edit_content.split("</details>\n")[-1]
613
- if content_after:
614
- if is_stream:
615
- content_chunk = {
616
- "choices": [
617
- {
618
- "delta": {"role": "assistant", "content": content_after},
619
- "finish_reason": None,
620
- "index": 0,
621
- }
622
- ],
623
- "created": int(time.time()),
624
- "id": current_id,
625
- "model": current_model,
626
- "object": "chat.completion.chunk",
627
- }
628
- yield f"data: {json.dumps(content_chunk)}\n\n"
629
- else:
630
- result["choices"][0]["message"]["content"] += content_after
631
-
632
- # 处理增量内容
633
- elif delta_content:
634
- if is_stream:
635
- # 如果还没有发送角色
636
- if not has_thinking and not has_tool_call:
637
- role_chunk = {
638
- "choices": [
639
- {
640
- "delta": {"role": "assistant"},
641
- "finish_reason": None,
642
- "index": 0,
643
- }
644
- ],
645
- "created": int(time.time()),
646
- "id": current_id,
647
- "model": current_model,
648
- "object": "chat.completion.chunk",
649
- }
650
- yield f"data: {json.dumps(role_chunk)}\n\n"
651
-
652
- content_chunk = {
653
- "choices": [
654
- {
655
- "delta": {"role": "assistant", "content": delta_content},
656
- "finish_reason": None,
657
- "index": 0,
658
- }
659
- ],
660
- "created": int(time.time()),
661
- "id": current_id,
662
- "model": current_model,
663
- "object": "chat.completion.chunk",
664
- }
665
- yield f"data: {json.dumps(content_chunk)}\n\n"
666
- else:
667
- result["choices"][0]["message"]["content"] += delta_content
668
-
669
- # 处理完成
670
- if data.get("usage"):
671
- usage = data["usage"]
672
- if is_stream:
673
- finish_chunk = {
674
- "choices": [
675
- {
676
- "delta": {"role": "assistant", "content": ""},
677
- "finish_reason": "stop",
678
- "index": 0,
679
- }
680
- ],
681
- "usage": usage,
682
- "created": int(time.time()),
683
- "id": current_id,
684
- "model": current_model,
685
- "object": "chat.completion.chunk",
686
- }
687
- yield f"data: {json.dumps(finish_chunk)}\n\n"
688
- yield "data: [DONE]\n\n"
689
- else:
690
- result["id"] = current_id
691
- result["model"] = current_model
692
- result["usage"] = usage
693
- result["choices"][0]["finish_reason"] = "stop"
694
-
695
- elif phase == "other":
696
- # 处理其他阶段(可能包含usage信息)
697
- if data.get("usage"):
698
- tool_call_usage = data["usage"]
699
- if has_tool_call and is_stream:
700
- # 关闭最后一个工具调用并发送完成
701
- if tool_id:
702
- close_chunk = {
703
- "choices": [
704
- {
705
- "delta": {
706
- "tool_calls": [
707
- {"index": content_index, "function": {"arguments": ""}}
708
- ]
709
- },
710
- "finish_reason": "tool_calls",
711
- "index": 0,
712
- }
713
- ],
714
- "usage": tool_call_usage,
715
- "created": int(time.time()),
716
- "id": current_id,
717
- "model": current_model,
718
- "object": "chat.completion.chunk",
719
- }
720
- yield f"data: {json.dumps(close_chunk)}\n\n"
721
- yield "data: [DONE]\n\n"
722
-
723
- except json.JSONDecodeError as e:
724
- logger.debug(f"JSON解析错误: {e}")
725
- except Exception as e:
726
- logger.error(f"处理chunk错误: {e}")
727
-
728
- # 非流式模式返回完整结果
729
- if not is_stream:
730
- yield json.dumps(result)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
app/models/__init__.py CHANGED
@@ -1,6 +1,7 @@
1
- #!/usr/bin/env python
2
- # -*- coding: utf-8 -*-
 
3
 
4
  from app.models import schemas
5
 
6
- __all__ = ["schemas"]
 
1
+ """
2
+ Models module initialization
3
+ """
4
 
5
  from app.models import schemas
6
 
7
+ __all__ = ["schemas"]
app/models/schemas.py CHANGED
@@ -1,5 +1,6 @@
1
- #!/usr/bin/env python
2
- # -*- coding: utf-8 -*-
 
3
 
4
  from typing import Dict, List, Optional, Any, Union, Literal
5
  from pydantic import BaseModel
@@ -53,8 +54,8 @@ class UpstreamRequest(BaseModel):
53
  chat_id: Optional[str] = None
54
  id: Optional[str] = None
55
  mcp_servers: Optional[List[str]] = None
56
- model_item: Optional[Dict[str, Any]] = {} # Model item dictionary
57
- tools: Optional[List[Dict[str, Any]]] = None # Add tools field for OpenAI compatibility
58
  variables: Optional[Dict[str, str]] = None
59
  model_config = {"protected_namespaces": ()}
60
 
 
1
+ """
2
+ Application data models
3
+ """
4
 
5
  from typing import Dict, List, Optional, Any, Union, Literal
6
  from pydantic import BaseModel
 
54
  chat_id: Optional[str] = None
55
  id: Optional[str] = None
56
  mcp_servers: Optional[List[str]] = None
57
+ model_item: Optional[ModelItem] = None
58
+ tool_servers: Optional[List[str]] = None
59
  variables: Optional[Dict[str, str]] = None
60
  model_config = {"protected_namespaces": ()}
61
 
app/utils/__init__.py CHANGED
@@ -1,6 +1,7 @@
1
- #!/usr/bin/env python
2
- # -*- coding: utf-8 -*-
 
3
 
4
- from app.utils import sse_tool_handler, reload_config, logger
5
 
6
- __all__ = ["sse_tool_handler", "reload_config", "logger"]
 
1
+ """
2
+ Utils module initialization
3
+ """
4
 
5
+ from app.utils import helpers, sse_parser, tools, reload_config
6
 
7
+ __all__ = ["helpers", "sse_parser", "tools", "reload_config"]
app/utils/helpers.py ADDED
@@ -0,0 +1,211 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """
2
+ Utility functions for the application
3
+ """
4
+
5
+ import json
6
+ import re
7
+ import time
8
+ import random
9
+ from typing import Dict, List, Optional, Any, Tuple, Generator
10
+ import requests
11
+ from fake_useragent import UserAgent
12
+
13
+ from app.core.config import settings
14
+
15
+ # 全局 UserAgent 实例,避免每次调用都创建新实例
16
+ _user_agent_instance = None
17
+
18
+ def get_user_agent_instance() -> UserAgent:
19
+ """获取或创建 UserAgent 实例(单例模式)"""
20
+ global _user_agent_instance
21
+ if _user_agent_instance is None:
22
+ _user_agent_instance = UserAgent()
23
+ return _user_agent_instance
24
+
25
+
26
+ def debug_log(message: str, *args) -> None:
27
+ """Log debug message if debug mode is enabled"""
28
+ if settings.DEBUG_LOGGING:
29
+ if args:
30
+ print(f"[DEBUG] {message % args}")
31
+ else:
32
+ print(f"[DEBUG] {message}")
33
+
34
+
35
+ def generate_request_ids() -> Tuple[str, str]:
36
+ """Generate unique IDs for chat and message"""
37
+ timestamp = int(time.time())
38
+ chat_id = f"{timestamp * 1000}-{timestamp}"
39
+ msg_id = str(timestamp * 1000000)
40
+ return chat_id, msg_id
41
+
42
+
43
+ def get_browser_headers(referer_chat_id: str = "") -> Dict[str, str]:
44
+ """Get browser headers for API requests with dynamic User-Agent"""
45
+
46
+ # 获取 UserAgent 实例
47
+ ua = get_user_agent_instance()
48
+
49
+ # 随机选择一个浏览器类型,偏向使用 Chrome 和 Edge
50
+ browser_choices = ['chrome', 'chrome', 'chrome', 'edge', 'edge', 'firefox', 'safari']
51
+ browser_type = random.choice(browser_choices)
52
+
53
+ try:
54
+ # 根据浏览器类型获取 User-Agent
55
+ if browser_type == 'chrome':
56
+ user_agent = ua.chrome
57
+ elif browser_type == 'edge':
58
+ user_agent = ua.edge
59
+ elif browser_type == 'firefox':
60
+ user_agent = ua.firefox
61
+ elif browser_type == 'safari':
62
+ user_agent = ua.safari
63
+ else:
64
+ user_agent = ua.random
65
+ except:
66
+ # 如果获取失败,使用随机 User-Agent
67
+ user_agent = ua.random
68
+
69
+ # 提取浏览器版本信息
70
+ chrome_version = "139" # 默认版本
71
+ edge_version = "139"
72
+
73
+ if "Chrome/" in user_agent:
74
+ try:
75
+ chrome_version = user_agent.split("Chrome/")[1].split(".")[0]
76
+ except:
77
+ pass
78
+
79
+ if "Edg/" in user_agent:
80
+ try:
81
+ edge_version = user_agent.split("Edg/")[1].split(".")[0]
82
+ # Edge 基于 Chromium,使用 Edge 特定的 sec-ch-ua
83
+ sec_ch_ua = f'"Microsoft Edge";v="{edge_version}", "Chromium";v="{chrome_version}", "Not_A Brand";v="24"'
84
+ except:
85
+ sec_ch_ua = f'"Not_A Brand";v="8", "Chromium";v="{chrome_version}", "Google Chrome";v="{chrome_version}"'
86
+ elif "Firefox/" in user_agent:
87
+ # Firefox 不使用 sec-ch-ua
88
+ sec_ch_ua = None
89
+ else:
90
+ # Chrome 或其他基于 Chromium 的浏览器
91
+ sec_ch_ua = f'"Not_A Brand";v="8", "Chromium";v="{chrome_version}", "Google Chrome";v="{chrome_version}"'
92
+
93
+ # 构建动态 Headers
94
+ headers = {
95
+ "Content-Type": "application/json",
96
+ "Accept": "application/json, text/event-stream",
97
+ "User-Agent": user_agent,
98
+ "Accept-Language": "zh-CN,zh;q=0.9,en;q=0.8,en-US;q=0.7",
99
+ "sec-ch-ua-mobile": "?0",
100
+ "sec-ch-ua-platform": '"Windows"',
101
+ "sec-fetch-dest": "empty",
102
+ "sec-fetch-mode": "cors",
103
+ "sec-fetch-site": "same-origin",
104
+ "X-FE-Version": "prod-fe-1.0.70",
105
+ "Origin": settings.CLIENT_HEADERS["Origin"],
106
+ "Cache-Control": "no-cache",
107
+ "Pragma": "no-cache",
108
+ }
109
+
110
+ # 只有基于 Chromium 的浏览器才添加 sec-ch-ua
111
+ if sec_ch_ua:
112
+ headers["sec-ch-ua"] = sec_ch_ua
113
+
114
+ # 添加 Referer
115
+ if referer_chat_id:
116
+ headers["Referer"] = f"{settings.CLIENT_HEADERS['Origin']}/c/{referer_chat_id}"
117
+
118
+ # 调试日志
119
+ if settings.DEBUG_LOGGING:
120
+ debug_log(f"使用 User-Agent: {user_agent[:100]}...")
121
+
122
+ return headers
123
+
124
+
125
+ def get_anonymous_token() -> str:
126
+ """Get anonymous token for authentication"""
127
+ headers = get_browser_headers()
128
+ headers.update({
129
+ "Accept": "*/*",
130
+ "Accept-Language": "zh-CN,zh;q=0.9",
131
+ "Referer": f"{settings.CLIENT_HEADERS['Origin']}/",
132
+ })
133
+
134
+ try:
135
+ response = requests.get(
136
+ f"{settings.CLIENT_HEADERS['Origin']}/api/v1/auths/",
137
+ headers=headers,
138
+ timeout=10.0
139
+ )
140
+
141
+ if response.status_code != 200:
142
+ raise Exception(f"anon token status={response.status_code}")
143
+
144
+ data = response.json()
145
+ token = data.get("token")
146
+ if not token:
147
+ raise Exception("anon token empty")
148
+
149
+ return token
150
+ except Exception as e:
151
+ debug_log(f"获取匿名token失败: {e}")
152
+ raise
153
+
154
+
155
+ def get_auth_token() -> str:
156
+ """Get authentication token (anonymous or fixed)"""
157
+ if settings.ANONYMOUS_MODE:
158
+ try:
159
+ token = get_anonymous_token()
160
+ debug_log(f"匿名token获取成功: {token[:10]}...")
161
+ return token
162
+ except Exception as e:
163
+ debug_log(f"匿名token获取失败,回退固定token: {e}")
164
+
165
+ return settings.BACKUP_TOKEN
166
+
167
+
168
+ def transform_thinking_content(content: str) -> str:
169
+ """Transform thinking content according to configuration"""
170
+ # Remove summary tags
171
+ content = re.sub(r'(?s)<summary>.*?</summary>', '', content)
172
+ # Clean up remaining tags
173
+ content = content.replace("</thinking>", "").replace("<Full>", "").replace("</Full>", "")
174
+ content = content.strip()
175
+
176
+ if settings.THINKING_PROCESSING == "think":
177
+ content = re.sub(r'<details[^>]*>', '<span>', content)
178
+ content = content.replace("</details>", "</span>")
179
+ elif settings.THINKING_PROCESSING == "strip":
180
+ content = re.sub(r'<details[^>]*>', '', content)
181
+ content = content.replace("</details>", "")
182
+
183
+ # Remove line prefixes
184
+ content = content.lstrip("> ")
185
+ content = content.replace("\n> ", "\n")
186
+
187
+ return content.strip()
188
+
189
+
190
+ def call_upstream_api(
191
+ upstream_req: Any,
192
+ chat_id: str,
193
+ auth_token: str
194
+ ) -> requests.Response:
195
+ """Call upstream API with proper headers"""
196
+ headers = get_browser_headers(chat_id)
197
+ headers["Authorization"] = f"Bearer {auth_token}"
198
+
199
+ debug_log(f"调用上游API: {settings.API_ENDPOINT}")
200
+ debug_log(f"上游请求体: {upstream_req.model_dump_json()}")
201
+
202
+ response = requests.post(
203
+ settings.API_ENDPOINT,
204
+ json=upstream_req.model_dump(exclude_none=True),
205
+ headers=headers,
206
+ timeout=60.0,
207
+ stream=True
208
+ )
209
+
210
+ debug_log(f"上游响应状态: {response.status_code}")
211
+ return response
app/utils/logger.py DELETED
@@ -1,104 +0,0 @@
1
- #!/usr/bin/env python
2
- # -*- coding: utf-8 -*-
3
-
4
- import sys
5
- from pathlib import Path
6
- from loguru import logger
7
-
8
- # Global logger instance
9
- app_logger = None
10
-
11
-
12
- def setup_logger(log_dir, log_retention_days=7, log_rotation="1 day", debug_mode=False):
13
- """
14
- Create a logger instance
15
-
16
- Parameters:
17
- log_dir (str): 日志目录
18
- log_retention_days (int): 日志保留天数
19
- log_rotation (str): 日志轮转间隔
20
- debug_mode (bool): 是否开启调试模式
21
- """
22
- global app_logger
23
-
24
- try:
25
- logger.remove()
26
-
27
- log_level = "DEBUG" if debug_mode else "INFO"
28
-
29
- console_format = (
30
- "<green>{time:HH:mm:ss}</green> | <level>{level: <8}</level> | <level>{message}</level>"
31
- if not debug_mode
32
- else "<green>{time:YYYY-MM-DD HH:mm:ss}</green> | <level>{level: <8}</level> | "
33
- "<cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> | <level>{message}</level>"
34
- )
35
-
36
- logger.add(sys.stderr, level=log_level, format=console_format, colorize=True)
37
-
38
- if debug_mode:
39
- log_path = Path(log_dir)
40
- log_path.mkdir(parents=True, exist_ok=True)
41
-
42
- log_file = log_path / "{time:YYYY-MM-DD}.log"
43
- file_format = "{time:YYYY-MM-DD HH:mm:ss.SSS} | {level: <8} | {name}:{function}:{line} | {message}"
44
-
45
- logger.add(
46
- str(log_file),
47
- level=log_level,
48
- format=file_format,
49
- rotation=log_rotation,
50
- retention=f"{log_retention_days} days",
51
- encoding="utf-8",
52
- compression="zip",
53
- enqueue=True,
54
- catch=True,
55
- )
56
-
57
- app_logger = logger
58
-
59
- return logger
60
-
61
- except Exception as e:
62
- logger.remove()
63
- logger.add(sys.stderr, level="ERROR")
64
- logger.error(f"日志系统配置失败: {e}")
65
- raise
66
-
67
-
68
- def get_logger():
69
- """Get the logger instance"""
70
- global app_logger
71
- if app_logger is None:
72
-
73
- app_logger = logger
74
- logger.add(sys.stderr, level="INFO")
75
- return app_logger
76
-
77
-
78
- if __name__ == "__main__":
79
- """Test the logger"""
80
- import tempfile
81
-
82
- with tempfile.TemporaryDirectory() as temp_dir:
83
- try:
84
- setup_logger(temp_dir, debug_mode=True)
85
-
86
- logger.debug("这是一条调试日志")
87
- logger.info("这是一条信息日志")
88
- logger.warning("这是一条警告日志")
89
- logger.error("这是一条错误日志")
90
- logger.critical("这是一条严重日志")
91
-
92
- try:
93
- 1 / 0
94
- except ZeroDivisionError:
95
- logger.exception("发生了除零异常")
96
-
97
- print("✅ 日志测试完成")
98
-
99
- logger.remove()
100
-
101
- except Exception as e:
102
- print(f"❌ 日志测试失败: {e}")
103
- logger.remove()
104
- raise
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
app/utils/process_manager.py DELETED
@@ -1,303 +0,0 @@
1
- #!/usr/bin/env python
2
- # -*- coding: utf-8 -*-
3
-
4
- """
5
- 进程管理模块
6
- 提供服务唯一性验证和进程管理功能
7
- """
8
-
9
- import os
10
- import sys
11
- import time
12
- import psutil
13
- from typing import Optional, List
14
- from pathlib import Path
15
-
16
- from app.utils.logger import get_logger
17
-
18
- logger = get_logger()
19
-
20
-
21
- class ProcessManager:
22
- """进程管理器 - 负责服务唯一性验证和进程管理"""
23
-
24
- def __init__(self, service_name: str = "z-ai2api-server", port: int = 8080):
25
- """
26
- 初始化进程管理器
27
-
28
- Args:
29
- service_name: 服务名称,用于进程名称标识
30
- port: 服务端口,用于唯一性检查
31
- """
32
- self.service_name = service_name
33
- self.port = port
34
- self.current_pid = os.getpid()
35
- self.pid_file = Path(f"{service_name}.pid")
36
-
37
- def check_service_uniqueness(self) -> bool:
38
- """
39
- 检查服务唯一性
40
-
41
- 通过以下方式验证:
42
- 1. 检查 PID 文件
43
- 2. 检查端口是否被占用
44
- 3. 检查进程名称 (pname) 是否已存在(可选)
45
-
46
- Returns:
47
- bool: True 表示可以启动服务,False 表示已有实例运行
48
- """
49
- logger.info(f"🔍 检查服务唯一性: {self.service_name} (端口: {self.port})")
50
-
51
- # 1. 优先检查 PID 文件(最可靠)
52
- if self._check_pid_file():
53
- return False
54
-
55
- # 2. 检查端口占用
56
- if self._check_port_usage():
57
- return False
58
-
59
- # 3. 检查进程名称(作为额外保障)
60
- if self._check_process_by_name():
61
- return False
62
-
63
- logger.info("✅ 服务唯一性检查通过,可以启动服务")
64
- return True
65
-
66
- def _check_process_by_name(self) -> bool:
67
- """
68
- 通过进程名称检查是否已有实例运行
69
-
70
- 这是一个保守的检查,只检查明确的服务进程标识
71
-
72
- Returns:
73
- bool: True 表示发现同名进程,False 表示未发现
74
- """
75
- try:
76
- running_processes = []
77
-
78
- for proc in psutil.process_iter(['pid', 'name', 'cmdline']):
79
- try:
80
- proc_info = proc.info
81
-
82
- # 跳过当前进程
83
- if proc_info['pid'] == self.current_pid:
84
- continue
85
-
86
- # 只检查进程名称直接匹配服务名称的情况
87
- # 这通常发生在使用 Granian 的 process_name 参数时
88
- if proc_info['name'] and proc_info['name'] == self.service_name:
89
- running_processes.append(proc_info)
90
- continue
91
-
92
- # 检查命令行参数中是否包含明确的服务标识
93
- cmdline = proc_info.get('cmdline', [])
94
- if cmdline and len(cmdline) >= 2:
95
- cmdline_str = ' '.join(cmdline)
96
-
97
- # 只检查通过 Granian 启动且明确指定了进程名称的服务
98
- if (f'--process-name={self.service_name}' in cmdline_str or
99
- f'process_name={self.service_name}' in cmdline_str):
100
- running_processes.append(proc_info)
101
-
102
- except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
103
- # 进程可能已经结束或无权限访问
104
- continue
105
-
106
- if running_processes:
107
- logger.warning(f"⚠️ 发现 {len(running_processes)} 个同名进程正在运行:")
108
- for proc_info in running_processes:
109
- cmdline = proc_info.get('cmdline', [])
110
- cmdline_preview = ' '.join(cmdline[:3]) + '...' if len(cmdline) > 3 else ' '.join(cmdline)
111
- logger.warning(f" PID: {proc_info['pid']}, 名称: {proc_info['name']}, 命令: {cmdline_preview}")
112
- logger.warning(f"❌ 服务 {self.service_name} 已在运行,请先停止现有实例")
113
- return True
114
-
115
- return False
116
-
117
- except Exception as e:
118
- logger.error(f"❌ 检查进程名称时发生错误: {e}")
119
- return False
120
-
121
- def _check_port_usage(self) -> bool:
122
- """
123
- 检查端口是否被占用
124
-
125
- Returns:
126
- bool: True 表示端口被占用,False 表示端口可用
127
- """
128
- try:
129
- # 获取所有网络连接
130
- connections = psutil.net_connections(kind='inet')
131
-
132
- for conn in connections:
133
- if (conn.laddr.port == self.port and
134
- conn.status in [psutil.CONN_LISTEN, psutil.CONN_ESTABLISHED]):
135
-
136
- # 尝试获取占用端口的进程信息
137
- try:
138
- proc = psutil.Process(conn.pid) if conn.pid else None
139
- proc_name = proc.name() if proc else "未知进程"
140
- logger.warning(f"⚠️ 端口 {self.port} 已被占用")
141
- logger.warning(f" 占用进程: PID {conn.pid}, 名称: {proc_name}")
142
- logger.warning(f"❌ 无法启动服务,端口 {self.port} 不可用")
143
- return True
144
- except (psutil.NoSuchProcess, psutil.AccessDenied):
145
- logger.warning(f"⚠️ 端口 {self.port} 已被占用(无法获取进程信息)")
146
- return True
147
-
148
- return False
149
-
150
- except Exception as e:
151
- logger.error(f"❌ 检查端口占用时发生错误: {e}")
152
- return False
153
-
154
- def _check_pid_file(self) -> bool:
155
- """
156
- 检查 PID 文件
157
-
158
- Returns:
159
- bool: True 表示发现有效的 PID 文件,False 表示无冲突
160
- """
161
- try:
162
- if not self.pid_file.exists():
163
- return False
164
-
165
- # 读取 PID 文件
166
- pid_content = self.pid_file.read_text().strip()
167
- if not pid_content.isdigit():
168
- logger.warning(f"⚠️ PID 文件格式无效: {self.pid_file}")
169
- self._cleanup_pid_file()
170
- return False
171
-
172
- old_pid = int(pid_content)
173
-
174
- # 检查进程是否仍在运行
175
- try:
176
- proc = psutil.Process(old_pid)
177
- if proc.is_running():
178
- logger.warning(f"⚠️ 发现有效的 PID 文件: {self.pid_file}")
179
- logger.warning(f" 进程 PID {old_pid} 仍在运行: {proc.name()}")
180
- logger.warning(f"❌ 服务可能已在运行,请检查进程或删除 PID 文件")
181
- return True
182
- else:
183
- logger.info(f"🧹 清理无效的 PID 文件: {self.pid_file}")
184
- self._cleanup_pid_file()
185
- return False
186
- except psutil.NoSuchProcess:
187
- logger.info(f"🧹 清理过期的 PID 文件: {self.pid_file}")
188
- self._cleanup_pid_file()
189
- return False
190
-
191
- except Exception as e:
192
- logger.error(f"❌ 检查 PID 文件时发生错误: {e}")
193
- return False
194
-
195
- def _cleanup_pid_file(self):
196
- """清理 PID 文件"""
197
- try:
198
- if self.pid_file.exists():
199
- self.pid_file.unlink()
200
- logger.debug(f"🧹 已删除 PID 文件: {self.pid_file}")
201
- except Exception as e:
202
- logger.error(f"❌ 删除 PID 文件失败: {e}")
203
-
204
- def create_pid_file(self):
205
- """创建 PID 文件"""
206
- try:
207
- self.pid_file.write_text(str(self.current_pid))
208
- logger.info(f"📝 创建 PID 文件: {self.pid_file} (PID: {self.current_pid})")
209
- except Exception as e:
210
- logger.error(f"❌ 创建 PID 文件失败: {e}")
211
-
212
- def cleanup_on_exit(self):
213
- """退出时清理资源"""
214
- logger.info(f"🧹 清理进程资源 (PID: {self.current_pid})")
215
- self._cleanup_pid_file()
216
-
217
- def get_running_instances(self) -> List[dict]:
218
- """
219
- 获取所有运行中的服务实例
220
-
221
- Returns:
222
- List[dict]: 运行中的实例信息列表
223
- """
224
- instances = []
225
-
226
- try:
227
- for proc in psutil.process_iter(['pid', 'name', 'cmdline', 'create_time']):
228
- try:
229
- proc_info = proc.info
230
-
231
- # 跳过当前进程
232
- if proc_info['pid'] == self.current_pid:
233
- continue
234
-
235
- # 使用与 _check_process_by_name 相同的保守逻辑
236
- is_service = False
237
-
238
- # 只检查进程名称直接匹配服务名称的情况
239
- if proc_info['name'] and proc_info['name'] == self.service_name:
240
- is_service = True
241
-
242
- # 检查命令行参数中是否包含明确的服务标识
243
- cmdline = proc_info.get('cmdline', [])
244
- if cmdline and len(cmdline) >= 2:
245
- cmdline_str = ' '.join(cmdline)
246
-
247
- # 只检查通过 Granian 启动且明确指定了进程名称的服务
248
- if (f'--process-name={self.service_name}' in cmdline_str or
249
- f'process_name={self.service_name}' in cmdline_str):
250
- is_service = True
251
-
252
- if is_service:
253
- instances.append({
254
- 'pid': proc_info['pid'],
255
- 'name': proc_info['name'],
256
- 'cmdline': cmdline,
257
- 'create_time': proc_info['create_time'],
258
- 'start_time': time.strftime('%Y-%m-%d %H:%M:%S',
259
- time.localtime(proc_info['create_time']))
260
- })
261
-
262
- except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
263
- continue
264
-
265
- except Exception as e:
266
- logger.error(f"❌ 获取运行实例时发生错误: {e}")
267
-
268
- return instances
269
-
270
-
271
- def ensure_service_uniqueness(service_name: str = "z-ai2api-server", port: int = 8080) -> bool:
272
- """
273
- 确保服务唯一性的便捷函数
274
-
275
- Args:
276
- service_name: 服务名称
277
- port: 服务端口
278
-
279
- Returns:
280
- bool: True 表示可以启动,False 表示应该退出
281
- """
282
- manager = ProcessManager(service_name, port)
283
-
284
- if not manager.check_service_uniqueness():
285
- logger.error("❌ 服务唯一性检查失败,程序退出")
286
-
287
- # 显示运行中的实例
288
- instances = manager.get_running_instances()
289
- if instances:
290
- logger.info("📋 当前运行的实例:")
291
- for instance in instances:
292
- logger.info(f" PID: {instance['pid']}, 启动时间: {instance['start_time']}")
293
-
294
- return False
295
-
296
- # 创建 PID 文件
297
- manager.create_pid_file()
298
-
299
- # 注册退出清理
300
- import atexit
301
- atexit.register(manager.cleanup_on_exit)
302
-
303
- return True
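The deleted module layered three checks (PID file, port, process name). The PID-file part can be sketched with the stdlib alone — a hypothetical helper, since the original relied on psutil, and `os.kill(pid, 0)` as an existence probe is POSIX behavior:

```python
import os
from pathlib import Path

def pid_file_conflict(pid_file: Path) -> bool:
    """Stdlib sketch of the deleted _check_pid_file. Returns True when
    another live process holds the PID file; invalid or stale files
    are removed so a fresh start can proceed."""
    if not pid_file.exists():
        return False
    content = pid_file.read_text().strip()
    if not content.isdigit():
        pid_file.unlink()              # invalid content -> clean up
        return False
    try:
        os.kill(int(content), 0)       # signal 0: existence check only (POSIX)
        return True                    # process is alive -> conflict
    except PermissionError:
        return True                    # exists but owned by another user
    except ProcessLookupError:
        pid_file.unlink()              # stale file -> clean up
        return False

pid_file = Path("demo.pid")
pid_file.write_text(str(os.getpid()))
print(pid_file_conflict(pid_file))     # True: this very process is alive
pid_file.unlink()
```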
app/utils/reload_config.py CHANGED
@@ -1,6 +1,3 @@
- #!/usr/bin/env python
- # -*- coding: utf-8 -*-
-
  """
  热重载配置模块
  定义 Granian 服务器热重载时需要忽略的目录和文件模式
app/utils/sse_parser.py ADDED
@@ -0,0 +1,127 @@
+ """
+ SSE (Server-Sent Events) parser for streaming responses
+ """
+
+ import json
+ from typing import Dict, Any, Generator, Optional, Type
+ import requests
+
+
+ class SSEParser:
+ """Server-Sent Events parser for streaming responses"""
+
+ def __init__(self, response: requests.Response, debug_mode: bool = False):
+ """Initialize SSE parser
+
+ Args:
+ response: requests.Response object with stream=True
+ debug_mode: Enable debug logging
+ """
+ self.response = response
+ self.debug_mode = debug_mode
+ self.buffer = ""
+ self.line_count = 0
+
+ def debug_log(self, format_str: str, *args) -> None:
+ """Log debug message if debug mode is enabled"""
+ if self.debug_mode:
+ if args:
+ print(f"[SSE_PARSER] {format_str % args}")
+ else:
+ print(f"[SSE_PARSER] {format_str}")
+
+ def iter_events(self) -> Generator[Dict[str, Any], None, None]:
+ """Iterate over SSE events
+
+ Yields:
+ dict: Parsed SSE event data
+ """
+ self.debug_log("开始解析 SSE 流")
+
+ for line in self.response.iter_lines():
+ self.line_count += 1
+
+ # Skip empty lines
+ if not line:
+ continue
+
+ # Decode bytes
+ if isinstance(line, bytes):
+ try:
+ line = line.decode("utf-8")
+ except UnicodeDecodeError:
+ self.debug_log(f"第{self.line_count}行解码失败,跳过")
+ continue
+
+ # Skip comment lines
+ if line.startswith(":"):
+ continue
+
+ # Parse field-value pairs
+ if ":" in line:
+ field, value = line.split(":", 1)
+ field = field.strip()
+ value = value.lstrip()
+
+ if field == "data":
+ self.debug_log(f"收到数据 (第{self.line_count}行): {value}")
+
+ # Try to parse JSON
+ try:
+ data = json.loads(value)
+ yield {"type": "data", "data": data, "raw": value}
+ except json.JSONDecodeError:
+ yield {"type": "data", "data": value, "raw": value, "is_json": False}
+
+ elif field == "event":
+ yield {"type": "event", "event": value}
+
+ elif field == "id":
+ yield {"type": "id", "id": value}
+
+ elif field == "retry":
+ try:
+ retry = int(value)
+ yield {"type": "retry", "retry": retry}
+ except ValueError:
+ self.debug_log(f"无效的 retry 值: {value}")
+
+ def iter_data_only(self) -> Generator[Dict[str, Any], None, None]:
+ """Iterate only over data events"""
+ for event in self.iter_events():
+ if event["type"] == "data":
+ yield event
+
+ def iter_json_data(self, model_class: Optional[Type] = None) -> Generator[Dict[str, Any], None, None]:
+ """Iterate only over JSON data events with optional validation
+
+ Args:
+ model_class: Optional Pydantic model class for validation
+
+ Yields:
+ dict: JSON data events
+ """
+ for event in self.iter_events():
+ if event["type"] == "data" and event.get("is_json", True):
+ try:
+ if model_class:
+ data = model_class.model_validate_json(event["raw"])
+ yield {"type": "data", "data": data, "raw": event["raw"]}
+ else:
+ yield event
+ except Exception as e:
+ self.debug_log(f"数据验证失败: {e}")
+ continue
+
+ def close(self) -> None:
+ """Close the response connection"""
+ if hasattr(self.response, "close"):
+ self.response.close()
+
+ def __enter__(self):
+ """Context manager entry"""
+ return self
+
+ def __exit__(self, exc_type, exc_val, exc_tb) -> None:
+ """Context manager exit"""
+ self.close()
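A rough usage sketch of the new parser's core loop. `FakeResponse` is a hypothetical stand-in for a streaming `requests.Response`; the data-line handling below mirrors `iter_events` for the `data:` field only:

```python
import json

class FakeResponse:
    """Stand-in for requests.Response(stream=True): only iter_lines()."""
    def __init__(self, lines):
        self._lines = lines
    def iter_lines(self):
        yield from self._lines
    def close(self):
        pass

def parse_sse_data(response):
    """Yield parsed payloads of 'data:' lines, as SSEParser.iter_events does."""
    for line in response.iter_lines():
        if not line:
            continue                      # empty lines delimit events
        if isinstance(line, bytes):
            line = line.decode("utf-8")
        if line.startswith(":"):
            continue                      # SSE comment / keep-alive
        if ":" in line:
            field, value = line.split(":", 1)
            if field.strip() == "data":
                value = value.lstrip()
                try:
                    yield json.loads(value)
                except json.JSONDecodeError:
                    yield value           # non-JSON data passes through raw

resp = FakeResponse([b'data: {"delta": "hello"}', b"", b": keep-alive", b"data: [DONE]"])
events = list(parse_sse_data(resp))
print(events)  # [{'delta': 'hello'}, '[DONE]']
```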
app/utils/sse_tool_handler.py DELETED
@@ -1,694 +0,0 @@
1
- #!/usr/bin/env python
2
- # -*- coding: utf-8 -*-
3
-
4
- """
5
- SSE Tool Handler - 处理工具调用的SSE流
6
- 基于 Z.AI 原生的 edit_index 和 edit_content 机制,更原生地处理工具调用
7
- """
8
-
9
- import json
10
- import re
11
- import time
12
- from typing import Dict, Any, Optional, Generator, List
13
-
14
- from app.utils.logger import get_logger
15
-
16
- logger = get_logger()
17
-
18
-
19
- class SSEToolHandler:
20
-
21
- def __init__(self, chat_id: str, model: str):
22
- self.chat_id = chat_id
23
- self.model = model
24
-
25
- # 工具调用状态
26
- self.has_tool_call = False
27
- self.tool_call_usage = None # 工具调用的usage信息
28
- self.content_index = 0
29
- self.has_thinking = False
30
-
31
- self.content_buffer = bytearray() # 使用字节数组提高性能
32
- self.last_edit_index = 0 # 上次编辑的位置
33
-
34
- # 工具调用解析状态
35
- self.active_tools = {} # 活跃的工具调用 {tool_id: tool_info}
36
- self.completed_tools = [] # 已完成的工具调用
37
- self.tool_blocks_cache = {} # 缓存解析的工具块
38
-
39
- def process_tool_call_phase(self, data: Dict[str, Any], is_stream: bool = True) -> Generator[str, None, None]:
40
- """
41
- 处理tool_call阶段
42
- """
43
- if not self.has_tool_call:
44
- self.has_tool_call = True
45
- logger.debug("🔧 进入工具调用阶段")
46
-
47
- edit_content = data.get("edit_content", "")
48
- edit_index = data.get("edit_index", 0)
49
-
50
- if not edit_content:
51
- return
52
-
53
- # logger.debug(f"📦 接收内容片段 [index={edit_index}]: {edit_content[:1000]}...")
54
-
55
- # 更新内容缓冲区
56
- self._apply_edit_to_buffer(edit_index, edit_content)
57
-
58
- # 尝试解析和处理工具调用
59
- yield from self._process_tool_calls_from_buffer(is_stream)
60
-
61
- def _apply_edit_to_buffer(self, edit_index: int, edit_content: str):
62
- """
63
- 在指定位置替换/插入内容更新内容缓冲区
64
- """
65
- edit_bytes = edit_content.encode('utf-8')
66
- required_length = edit_index + len(edit_bytes)
67
-
68
- # 扩展缓冲区到所需长度(如果需要)
69
- if len(self.content_buffer) < edit_index:
70
- # 如果edit_index超出当前缓冲区,用空字节填充
71
- self.content_buffer.extend(b'\x00' * (edit_index - len(self.content_buffer)))
72
-
73
- # 确保缓冲区足够长以容纳新内容
74
- if len(self.content_buffer) < required_length:
75
- self.content_buffer.extend(b'\x00' * (required_length - len(self.content_buffer)))
76
-
77
- # 在指定位置替换内容(不是插入,而是覆盖)
78
- end_index = edit_index + len(edit_bytes)
79
- self.content_buffer[edit_index:end_index] = edit_bytes
80
-
81
- # logger.debug(f"📝 缓冲区更新 [index={edit_index}, 长度={len(self.content_buffer)}]")
82
-
83
- def _process_tool_calls_from_buffer(self, is_stream: bool) -> Generator[str, None, None]:
84
- """
85
- 从内容缓冲区中解析和处理工具调用
86
- """
87
- try:
88
- # 解码内容并清理空字节
89
- content_str = self.content_buffer.decode('utf-8', errors='ignore').replace('\x00', '')
90
- yield from self._extract_and_process_tools(content_str, is_stream)
91
- except Exception as e:
92
- logger.debug(f"📦 内容解析暂时失败,等待更多数据: {e}")
93
- # 不抛出异常,继续等待更多数据
94
-
95
- def _extract_and_process_tools(self, content_str: str, is_stream: bool) -> Generator[str, None, None]:
96
- """
97
- 从内容字符串中提取和处理工具调用
98
- """
99
- # 查找所有 glm_block,包括不完整的
100
- pattern = r'<glm_block\s*>(.*?)(?:</glm_block>|$)'
101
- matches = re.findall(pattern, content_str, re.DOTALL)
102
-
103
- for block_content in matches:
104
- # 尝试解析每个块
105
- yield from self._process_single_tool_block(block_content, is_stream)
106
-
107
- def _process_single_tool_block(self, block_content: str, is_stream: bool) -> Generator[str, None, None]:
108
- """
109
- 处理单个工具块,支持增量解析
110
- """
111
- try:
112
- # 尝试修复和解析完整的JSON
113
- fixed_content = self._fix_json_structure(block_content)
114
- tool_data = json.loads(fixed_content)
115
- metadata = tool_data.get("data", {}).get("metadata", {})
116
-
117
- tool_id = metadata.get("id", "")
118
- tool_name = metadata.get("name", "")
119
- arguments_raw = metadata.get("arguments", "{}")
120
-
121
- if not tool_id or not tool_name:
122
- return
123
-
124
- logger.debug(f"🎯 解析完整工具块: {tool_name}(id={tool_id}), 参数: {arguments_raw}")
125
-
126
- # 检查是否是新工具或更新的工具
127
- yield from self._handle_tool_update(tool_id, tool_name, arguments_raw, is_stream)
128
-
129
- except json.JSONDecodeError as e:
130
- logger.debug(f"📦 JSON解析失败: {e}, 尝试部分解析")
131
- # JSON 不完整,尝试部分解析
132
- yield from self._handle_partial_tool_block(block_content, is_stream)
133
- except Exception as e:
134
- logger.debug(f"📦 工具块处理失败: {e}")
135
-
136
- def _fix_json_structure(self, content: str) -> str:
137
- """
138
- 修复JSON结构中的常见问题
139
- """
140
- if not content:
141
- return content
142
-
143
- # 计算括号平衡
144
- open_braces = content.count('{')
145
- close_braces = content.count('}')
146
-
147
- # 如果闭括号多于开括号,移除多余的闭括号
148
- if close_braces > open_braces:
149
- excess = close_braces - open_braces
150
- fixed_content = content
151
- for _ in range(excess):
152
- # 从右侧移除多余的闭括号
153
- last_brace_pos = fixed_content.rfind('}')
154
- if last_brace_pos != -1:
155
- fixed_content = fixed_content[:last_brace_pos] + fixed_content[last_brace_pos + 1:]
156
- return fixed_content
157
-
158
- return content
159
-
160
- def _handle_tool_update(self, tool_id: str, tool_name: str, arguments_raw: str, is_stream: bool) -> Generator[str, None, None]:
161
- """
162
- 处理工具的创建或更新 - 更可靠的参数完整性检查
163
- """
164
- # 解析参数
165
- try:
166
- if isinstance(arguments_raw, str):
167
- # 先处理转义和清理
168
- cleaned_args = self._clean_arguments_string(arguments_raw)
169
- arguments = json.loads(cleaned_args) if cleaned_args.strip() else {}
170
- else:
171
- arguments = arguments_raw
172
- except json.JSONDecodeError:
173
- logger.debug(f"📦 参数解析失败,暂不处理: {arguments_raw}")
174
- # 参数解析失败时,不创建或更新工具,等待更完整的数据
175
- return
176
-
177
- # 检查参数是否看起来完整(基本的完整性验证)
178
- is_args_complete = self._is_arguments_complete(arguments, arguments_raw)
179
-
180
- # 检查是否是新工具
181
- if tool_id not in self.active_tools:
182
- logger.debug(f"🎯 发现新工具: {tool_name}(id={tool_id}), 参数完整性: {is_args_complete}")
183
-
184
- self.active_tools[tool_id] = {
185
- "id": tool_id,
186
- "name": tool_name,
187
- "arguments": arguments,
188
- "arguments_raw": arguments_raw,
189
- "status": "active",
190
- "sent_start": False,
191
- "last_sent_args": {}, # 跟踪上次发送的参数
192
- "args_complete": is_args_complete,
193
- "pending_send": True # 标记需要发送
194
- }
195
-
196
- # 只有在参数看起来完整时才发送工具开始信号
197
- if is_stream and is_args_complete:
198
- yield self._create_tool_start_chunk(tool_id, tool_name, arguments)
199
- self.active_tools[tool_id]["sent_start"] = True
200
- self.active_tools[tool_id]["last_sent_args"] = arguments.copy()
201
- self.active_tools[tool_id]["pending_send"] = False
202
- logger.debug(f"📤 发送完整工具开始: {tool_name}(id={tool_id})")
203
-
204
- else:
205
- # 更新现有工具
206
- current_tool = self.active_tools[tool_id]
207
-
208
- # 检查是否有实质性改进
209
- if self._is_significant_improvement(current_tool["arguments"], arguments,
210
- current_tool["arguments_raw"], arguments_raw):
211
- logger.debug(f"🔄 工具参数有实质性改进: {tool_name}(id={tool_id})")
212
-
213
- current_tool["arguments"] = arguments
214
- current_tool["arguments_raw"] = arguments_raw
215
- current_tool["args_complete"] = is_args_complete
216
-
217
- # 如果之前没有发送过开始信号,且现在参数完整,发送开始信号
218
- if is_stream and not current_tool["sent_start"] and is_args_complete:
219
- yield self._create_tool_start_chunk(tool_id, tool_name, arguments)
220
- current_tool["sent_start"] = True
221
- current_tool["last_sent_args"] = arguments.copy()
222
- current_tool["pending_send"] = False
223
- logger.debug(f"📤 发送延迟的工具开始: {tool_name}(id={tool_id})")
224
-
225
- # 如果已经发送过开始信号,且参数有显著改进,发送参数更新
226
- elif is_stream and current_tool["sent_start"] and is_args_complete:
227
- if self._should_send_argument_update(current_tool["last_sent_args"], arguments):
228
- yield self._create_tool_arguments_chunk(tool_id, arguments)
229
- current_tool["last_sent_args"] = arguments.copy()
230
- logger.debug(f"📤 发送参数更新: {tool_name}(id={tool_id})")
231
-
232
- def _is_arguments_complete(self, arguments: Dict[str, Any], arguments_raw: str) -> bool:
233
- """
234
- 检查参数是否看起来完整
235
- """
236
- if not arguments:
237
- return False
238
-
239
- # 检查原始字符串是否看起来完整
240
- if not arguments_raw or not arguments_raw.strip():
241
- return False
242
-
243
- # 检查是否有明显的截断迹象
244
- raw_stripped = arguments_raw.strip()
245
-
246
- # 如果原始字符串不以}结尾,可能是截断的
247
- if not raw_stripped.endswith('}') and not raw_stripped.endswith('"'):
248
- return False
249
-
250
- # 检查是否有不完整的URL(常见的截断情况)
251
- for key, value in arguments.items():
252
- if isinstance(value, str):
253
- # 检查URL是否看起来完整
254
- if 'http' in value.lower():
255
- # 如果URL太短或以不完整的域名结尾,可能是截断的
256
- if len(value) < 10 or value.endswith('.go') or value.endswith('.goo'):
257
- return False
258
-
259
- # 检查其他可能的截断迹象
260
- if len(value) > 0 and value[-1] in ['.', '/', ':', '=']:
261
- # 以这些字符结尾可能表示截断
262
- return False
263
-
264
- return True
265
-
266
- def _is_significant_improvement(self, old_args: Dict[str, Any], new_args: Dict[str, Any],
267
- old_raw: str, new_raw: str) -> bool:
268
- """
269
- 检查新参数是否比旧参数有显著改进
270
- """
271
- # 如果新参数为空,不是改进
272
- if not new_args:
273
- return False
274
-
275
- if len(new_args) > len(old_args):
276
- return True
277
-
278
- # 检查值的改进
279
- for key, new_value in new_args.items():
280
- old_value = old_args.get(key, "")
281
-
282
- if isinstance(new_value, str) and isinstance(old_value, str):
283
- # 如果新值明显更长且更完整,是改进
284
- if len(new_value) > len(old_value) + 5: # 至少长5个字符才算显著改进
285
- return True
286
-
287
- # 如果旧值看起来是截断的,新值更完整,是改进
288
- if old_value.endswith(('.go', '.goo', '.com/', 'http')) and len(new_value) > len(old_value):
289
- return True
290
-
291
- # 检查原始字符串的改进
292
- if len(new_raw) > len(old_raw) + 10: # 原始字符串显著增长
293
- return True
294
-
295
- return False
296
-
297
- def _should_send_argument_update(self, last_sent: Dict[str, Any], new_args: Dict[str, Any]) -> bool:
298
- """
299
- 判断是否应该发送参数更新 - 更严格的标准
300
- """
301
- # 如果参数完全相同,不发送
302
- if last_sent == new_args:
303
- return False
304
-
305
- # 如果新参数为空但之前有参数,不发送(避免倒退)
306
- if not new_args and last_sent:
307
- return False
308
-
309
- # 如果新参数有更多键,发送更新
310
- if len(new_args) > len(last_sent):
311
- return True
312
-
313
- # 检查是否有值变得显著更完整
314
- for key, new_value in new_args.items():
315
- last_value = last_sent.get(key, "")
316
- if isinstance(new_value, str) and isinstance(last_value, str):
317
- # 只有在值显著增长时才发送更新(避免微小变化)
318
- if len(new_value) > len(last_value) + 5:
319
- return True
320
- elif new_value != last_value and new_value: # 确保新值不为空
321
- return True
322
-
323
- return False
324
-
325
- def _handle_partial_tool_block(self, block_content: str, is_stream: bool) -> Generator[str, None, None]:
326
- """
327
- 处理不完整的工具块,尝试提取可用信息
328
- """
329
- try:
330
- # 尝试提取工具ID和名称
331
- id_match = re.search(r'"id":\s*"([^"]+)"', block_content)
332
- name_match = re.search(r'"name":\s*"([^"]+)"', block_content)
333
-
334
- if id_match and name_match:
335
- tool_id = id_match.group(1)
336
- tool_name = name_match.group(1)
337
-
338
- # 尝试提取参数部分
339
- args_match = re.search(r'"arguments":\s*"([^"]*)', block_content)
340
- partial_args = args_match.group(1) if args_match else ""
341
-
342
- logger.debug(f"📦 部分工具块: {tool_name}(id={tool_id}), 部分参数: {partial_args[:50]}")
343
-
344
- # 如果是新工具,先创建记录
345
- if tool_id not in self.active_tools:
346
- # 尝试解析部分参数为字典
347
- partial_args_dict = self._parse_partial_arguments(partial_args)
348
-
349
- self.active_tools[tool_id] = {
350
- "id": tool_id,
351
- "name": tool_name,
352
- "arguments": partial_args_dict,
353
- "status": "partial",
354
- "sent_start": False,
355
- "last_sent_args": {},
356
- "args_complete": False,
357
- "partial_args": partial_args
358
- }
359
-
360
- if is_stream:
361
- yield self._create_tool_start_chunk(tool_id, tool_name, partial_args_dict)
362
- self.active_tools[tool_id]["sent_start"] = True
363
- self.active_tools[tool_id]["last_sent_args"] = partial_args_dict.copy()
364
- else:
365
- # 更新部分参数
366
- self.active_tools[tool_id]["partial_args"] = partial_args
367
- # 尝试更新解析的参数
368
- new_partial_dict = self._parse_partial_arguments(partial_args)
369
- if new_partial_dict != self.active_tools[tool_id]["arguments"]:
370
- self.active_tools[tool_id]["arguments"] = new_partial_dict
371
-
372
- except Exception as e:
373
- logger.debug(f"📦 部分块解析失败: {e}")
374
-
375
- def _clean_arguments_string(self, arguments_raw: str) -> str:
376
- """
377
- 清理和标准化参数字符串,改进对不完整JSON的处理
378
- """
379
- if not arguments_raw:
380
- return "{}"
381
-
382
- # 移除首尾空白
383
- cleaned = arguments_raw.strip()
384
-
385
- # 处理特殊值
386
- if cleaned.lower() == "null":
387
- return "{}"
388
-
389
- # 处理转义的JSON字符串
390
- if cleaned.startswith('{\\"') and cleaned.endswith('\\"}'):
391
- # 这是一个转义的JSON字符串,需要反转义
392
- cleaned = cleaned.replace('\\"', '"')
393
- elif cleaned.startswith('"{\\"') and cleaned.endswith('\\"}'):
394
- # 双重转义的情况
395
- cleaned = cleaned[1:-1].replace('\\"', '"')
396
- elif cleaned.startswith('"') and cleaned.endswith('"'):
397
- # 简单的引号包围,去除外层引号
398
- cleaned = cleaned[1:-1]
399
-
400
- # 处理不完整的JSON字符串
401
- cleaned = self._fix_incomplete_json(cleaned)
402
-
403
- # 标准化空格(移除JSON中的多余空格,但保留字符串值中的空格)
404
- try:
405
- # 先尝试解析,然后重新序列化以标准化格式
406
- parsed = json.loads(cleaned)
407
- if parsed is None:
408
- return "{}"
409
- cleaned = json.dumps(parsed, ensure_ascii=False, separators=(',', ':'))
410
- except json.JSONDecodeError:
411
- # 如果解析失败,只做基本的空格清理
412
- logger.debug(f"📦 JSON标准化失败,保持原样: {cleaned[:50]}...")
413
-
414
- return cleaned
415
-
416
- def _fix_incomplete_json(self, json_str: str) -> str:
417
- """
418
- 修复不完整的JSON字符串
419
- """
420
- if not json_str:
421
- return "{}"
422
-
423
- # 确保以{开头
424
- if not json_str.startswith('{'):
425
- json_str = '{' + json_str
426
-
427
- # 处理不完整的字符串值
428
- if json_str.count('"') % 2 != 0:
429
- # 奇数个引号,可能有未闭合的字符串
430
- json_str += '"'
431
-
432
- # 确保以}结尾
433
- if not json_str.endswith('}'):
434
- json_str += '}'
435
-
436
- return json_str
437
-
438
- def _parse_partial_arguments(self, arguments_raw: str) -> Dict[str, Any]:
439
- """
440
- 解析不完整的参数字符串,尽可能提取有效信息
441
- """
442
- if not arguments_raw or arguments_raw.strip() == "" or arguments_raw.strip().lower() == "null":
443
- return {}
444
-
445
- try:
446
- # 先尝试清理字符串
447
- cleaned = self._clean_arguments_string(arguments_raw)
448
- result = json.loads(cleaned)
449
- # 确保返回字典类型
450
- return result if isinstance(result, dict) else {}
451
- except json.JSONDecodeError:
452
- pass
453
-
454
- try:
455
- # 尝试修复常见的JSON问题
456
- fixed_args = arguments_raw.strip()
457
-
458
- # 处理转义字符
459
- if '\\' in fixed_args:
460
- fixed_args = fixed_args.replace('\\"', '"')
461
-
462
- # 如果不是以{开头,添加{
463
- if not fixed_args.startswith('{'):
464
- fixed_args = '{' + fixed_args
465
-
466
- # 如果不是以}结尾,尝试添加}
467
- if not fixed_args.endswith('}'):
468
- # 计算未闭合的引号和括号
469
- quote_count = fixed_args.count('"') - fixed_args.count('\\"')
470
- if quote_count % 2 != 0:
471
- fixed_args += '"'
472
- fixed_args += '}'
473
-
474
- return json.loads(fixed_args)
475
- except json.JSONDecodeError:
476
- # 尝试提取键值对
477
- return self._extract_key_value_pairs(arguments_raw)
478
- except Exception:
479
- # 如果所有方法都失败,返回空字典
480
- return {}
481
-
482
- def _extract_key_value_pairs(self, text: str) -> Dict[str, Any]:
483
- """
484
- 从文本中提取键值对,作为最后的解析尝试
485
- """
486
- result = {}
487
- try:
488
- # 使用正则表达式提取简单的键值对
489
- import re
490
-
491
- # 匹配 "key": "value" 或 "key": value 格式
492
- pattern = r'"([^"]+)":\s*"([^"]*)"'
493
- matches = re.findall(pattern, text)
494
-
495
- for key, value in matches:
496
- result[key] = value
497
-
498
- # 匹配数字值
499
- pattern = r'"([^"]+)":\s*(\d+)'
500
- matches = re.findall(pattern, text)
501
-
502
- for key, value in matches:
503
- try:
504
- result[key] = int(value)
505
- except ValueError:
506
- result[key] = value
507
-
508
- # 匹配布尔值
509
- pattern = r'"([^"]+)":\s*(true|false)'
510
- matches = re.findall(pattern, text)
511
-
512
- for key, value in matches:
513
- result[key] = value.lower() == 'true'
514
-
515
- except Exception:
516
- pass
517
-
518
- return result
519
-
520
- def _complete_active_tools(self, is_stream: bool) -> Generator[str, None, None]:
521
- """
522
- 完成所有活跃的工具调用 - 处理待发送的工具
523
- """
524
- tools_to_send = []
525
-
526
- for tool_id, tool in self.active_tools.items():
527
- # 如果工具还没有发送过且参数看起来完整,现在发送
528
- if is_stream and tool.get("pending_send", False) and not tool.get("sent_start", False):
529
- if tool.get("args_complete", False):
530
- logger.debug(f"📤 完成时发送待发送工具: {tool['name']}(id={tool_id})")
531
- yield self._create_tool_start_chunk(tool_id, tool["name"], tool["arguments"])
532
- tool["sent_start"] = True
533
- tool["pending_send"] = False
534
- tools_to_send.append(tool)
535
- else:
536
- logger.debug(f"⚠️ 跳过不完整的工具: {tool['name']}(id={tool_id})")
537
-
538
- tool["status"] = "completed"
539
- self.completed_tools.append(tool)
540
- logger.debug(f"✅ 完成工具调用: {tool['name']}(id={tool_id})")
541
-
542
- self.active_tools.clear()
543
-
544
- if is_stream and (self.completed_tools or tools_to_send):
545
- # 发送工具完成信号
546
- yield self._create_tool_finish_chunk()
547
-
548
- def process_other_phase(self, data: Dict[str, Any], is_stream: bool = True) -> Generator[str, None, None]:
549
- """
550
- 处理other阶段 - 检测工具调用结束和状态更新
551
- """
552
- edit_content = data.get("edit_content", "")
553
- edit_index = data.get("edit_index", 0)
554
- usage = data.get("usage")
555
-
556
- # 保存usage信息
557
- if self.has_tool_call and usage:
558
- self.tool_call_usage = usage
559
- logger.debug(f"💾 保存工具调用usage: {usage}")
560
-
561
- # 如果有edit_content,继续更新内容缓冲区
562
- if edit_content:
563
- self._apply_edit_to_buffer(edit_index, edit_content)
564
- # 继续处理可能的工具调用更新
565
- yield from self._process_tool_calls_from_buffer(is_stream)
566
-
567
- # 检测工具调用结束的多种标记
568
- if self.has_tool_call and self._is_tool_call_finished(edit_content):
569
- logger.debug("🏁 检测到工具调用结束")
570
-
571
- # 完成所有活跃的工具
572
- yield from self._complete_active_tools(is_stream)
573
-
574
- if is_stream:
575
- logger.info("🏁 发送工具调用完成信号")
576
- yield "data: [DONE]"
577
-
578
- # 重置工具调用状态
579
- self.has_tool_call = False
580
-
581
- def _is_tool_call_finished(self, edit_content: str) -> bool:
582
- """
583
- 检测工具调用是否结束的多种标记
584
- """
585
- if not edit_content:
586
- return False
587
-
588
- # 检测各种结束标记
589
- end_markers = [
590
- "null,", # 原有的结束标记
591
- '"status": "completed"', # 状态完成标记
592
- '"is_error": false', # 错误状态标记
593
- ]
594
-
595
- for marker in end_markers:
596
- if marker in edit_content:
597
- logger.debug(f"🔍 检测到结束标记: {marker}")
598
- return True
599
-
600
- # 检查是否所有工具都有完整的结构
601
- if self.active_tools and '"status": "completed"' in self.content_buffer:
602
- return True
603
-
604
- return False
605
-
606
- def _reset_all_state(self):
607
- """重置所有状态"""
608
- self.has_tool_call = False
609
- self.tool_call_usage = None
610
- self.content_index = 0
611
-        self.content_buffer = bytearray()
-        self.last_edit_index = 0
-        self.active_tools.clear()
-        self.completed_tools.clear()
-        self.tool_blocks_cache.clear()
-
-    def _create_tool_start_chunk(self, tool_id: str, tool_name: str, initial_args: Dict[str, Any] = None) -> str:
-        """创建工具调用开始的chunk,支持初始参数"""
-        # 使用提供的初始参数,如果没有则使用空字典
-        args_dict = initial_args or {}
-        args_str = json.dumps(args_dict, ensure_ascii=False)
-
-        chunk = {
-            "choices": [
-                {
-                    "delta": {
-                        "role": "assistant",
-                        "content": None,
-                        "tool_calls": [
-                            {
-                                "id": tool_id,
-                                "type": "function",
-                                "function": {"name": tool_name, "arguments": args_str},
-                            }
-                        ],
-                    },
-                    "finish_reason": None,
-                    "index": self.content_index,
-                    "logprobs": None,
-                }
-            ],
-            "created": int(time.time()),
-            "id": self.chat_id,
-            "model": self.model,
-            "object": "chat.completion.chunk",
-            "system_fingerprint": "fp_zai_001",
-        }
-        return f"data: {json.dumps(chunk, ensure_ascii=False)}\n\n"
-
-    def _create_tool_arguments_chunk(self, tool_id: str, arguments: Dict) -> str:
-        """创建工具参数的chunk - 只包含参数更新,不包含函数名"""
-        chunk = {
-            "choices": [
-                {
-                    "delta": {
-                        "tool_calls": [
-                            {
-                                "id": tool_id,
-                                "function": {"arguments": json.dumps(arguments, ensure_ascii=False)},
-                            }
-                        ],
-                    },
-                    "finish_reason": None,
-                    "index": self.content_index,
-                    "logprobs": None,
-                }
-            ],
-            "created": int(time.time()),
-            "id": self.chat_id,
-            "model": self.model,
-            "object": "chat.completion.chunk",
-            "system_fingerprint": "fp_zai_001",
-        }
-        return f"data: {json.dumps(chunk, ensure_ascii=False)}\n\n"
-
-    def _create_tool_finish_chunk(self) -> str:
-        """创建工具调用完成的chunk"""
-        chunk = {
-            "choices": [
-                {
-                    "delta": {"role": "assistant", "content": None, "tool_calls": []},
-                    "finish_reason": "tool_calls",
-                    "index": 0,
-                    "logprobs": None,
-                }
-            ],
-            "created": int(time.time()),
-            "id": self.chat_id,
-            "usage": self.tool_call_usage or None,
-            "model": self.model,
-            "object": "chat.completion.chunk",
-            "system_fingerprint": "fp_zai_001",
-        }
-        return f"data: {json.dumps(chunk, ensure_ascii=False)}\n\n"
app/utils/token_pool.py DELETED
@@ -1,453 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-
-"""
-Token池管理器
-实现AUTH_TOKEN的轮询机制,提供负载均衡和容错功能
-"""
-
-import asyncio
-import time
-from typing import Dict, List, Optional, Tuple
-from dataclasses import dataclass, field
-from threading import Lock
-import httpx
-
-from app.utils.logger import logger
-
-
-@dataclass
-class TokenStatus:
-    """Token状态信息"""
-    token: str
-    is_available: bool = True
-    failure_count: int = 0
-    last_failure_time: float = 0.0
-    last_success_time: float = 0.0
-    total_requests: int = 0
-    successful_requests: int = 0
-    token_type: str = "unknown"  # "user", "guest", "unknown"
-
-    @property
-    def success_rate(self) -> float:
-        """成功率"""
-        if self.total_requests == 0:
-            return 1.0
-        return self.successful_requests / self.total_requests
-
-    @property
-    def is_healthy(self) -> bool:
-        """
-        是否健康
-
-        健康的定义:
-        1. 必须是认证用户token (token_type = "user")
-        2. 当前可用 (is_available = True)
-        3. 成功率 >= 50% 或者总请求数 <= 3(新token容错)
-
-        注意:guest token不应该在AUTH_TOKENS中
-        """
-        # guest token永远不健康
-        if self.token_type == "guest":
-            return False
-
-        # 未知类型token不健康
-        if self.token_type != "user":
-            return False
-
-        # 不可用的token不健康
-        if not self.is_available:
-            return False
-
-        # 对于认证用户token,基于成功率判断
-        # 新token或请求数很少时,给予容错
-        if self.total_requests <= 3:
-            return self.failure_count == 0
-
-        # 基于成功率判断健康状态
-        return self.success_rate >= 0.5
-
-
-class TokenPool:
-    """Token池管理器"""
-
-    def __init__(self, tokens: List[str], failure_threshold: int = 3, recovery_timeout: int = 1800):
-        """
-        初始化Token池
-
-        Args:
-            tokens: token列表
-            failure_threshold: 失败阈值,超过此次数将标记为不可用
-            recovery_timeout: 恢复超时时间(秒),失败token在此时间后重新尝试
-        """
-        self.failure_threshold = failure_threshold
-        self.recovery_timeout = recovery_timeout
-        self._lock = Lock()
-        self._current_index = 0
-
-        # 初始化token状态
-        self.token_statuses: Dict[str, TokenStatus] = {}
-        original_count = len(tokens)
-        unique_tokens = []
-
-        # 去重处理
-        for token in tokens:
-            if token and token not in self.token_statuses:  # 过滤空token和重复token
-                self.token_statuses[token] = TokenStatus(token=token)
-                unique_tokens.append(token)
-
-        duplicate_count = original_count - len(unique_tokens)
-        if duplicate_count > 0:
-            logger.warning(f"⚠️ 检测到 {duplicate_count} 个重复token,已自动去重")
-
-        if not self.token_statuses:
-            logger.warning("⚠️ Token池为空,将依赖匿名模式")
-        else:
-            logger.info(f"🔧 初始化Token池,共 {len(self.token_statuses)} 个token")
-
-    def get_next_token(self) -> Optional[str]:
-        """
-        获取下一个可用的token(轮询算法)
-
-        Returns:
-            可用的token,如果没有可用token则返回None
-        """
-        with self._lock:
-            if not self.token_statuses:
-                return None
-
-            available_tokens = self._get_available_tokens()
-            if not available_tokens:
-                # 尝试恢复过期的失败token
-                self._try_recover_failed_tokens()
-                available_tokens = self._get_available_tokens()
-
-            if not available_tokens:
-                logger.warning("⚠️ 没有可用的token")
-                return None
-
-            # 轮询选择token
-            token = available_tokens[self._current_index % len(available_tokens)]
-            self._current_index = (self._current_index + 1) % len(available_tokens)
-
-            return token
-
-    def _get_available_tokens(self) -> List[str]:
-        """
-        获取当前可用的认证用户token列表
-
-        只返回满足以下条件的token:
-        1. is_available = True (可用状态)
-        2. token_type = "user" (认证用户token)
-
-        这确保轮询机制只会选择有效的认证用户token,跳过匿名用户token
-        """
-        available_user_tokens = [
-            status.token for status in self.token_statuses.values()
-            if status.is_available and status.token_type == "user"
-        ]
-
-        # 如果没有可用的认证用户token
-        if not available_user_tokens and self.token_statuses:
-            guest_tokens = [
-                status.token for status in self.token_statuses.values()
-                if status.token_type == "guest"
-            ]
-            if guest_tokens:
-                logger.warning(f"⚠️ 检测到 {len(guest_tokens)} 个匿名用户token,轮询机制将跳过这些token")
-
-        return available_user_tokens
-
-    def _try_recover_failed_tokens(self):
-        """尝试恢复失败的token"""
-        current_time = time.time()
-        recovered_count = 0
-
-        for status in self.token_statuses.values():
-            if (not status.is_available and
-                    current_time - status.last_failure_time > self.recovery_timeout):
-                status.is_available = True
-                status.failure_count = 0
-                recovered_count += 1
-                logger.info(f"🔄 恢复失败token: {status.token[:20]}...")
-
-        if recovered_count > 0:
-            logger.info(f"✅ 恢复了 {recovered_count} 个失败的token")
-
-    def mark_token_success(self, token: str):
-        """标记token使用成功"""
-        with self._lock:
-            if token in self.token_statuses:
-                status = self.token_statuses[token]
-                status.total_requests += 1
-                status.successful_requests += 1
-                status.last_success_time = time.time()
-                status.failure_count = 0  # 重置失败计数
-
-                if not status.is_available:
-                    status.is_available = True
-                    logger.info(f"✅ Token恢复可用: {token[:20]}...")
-
-    def mark_token_failure(self, token: str, error: Exception = None):
-        """标记token使用失败"""
-        with self._lock:
-            if token in self.token_statuses:
-                status = self.token_statuses[token]
-                status.total_requests += 1
-                status.failure_count += 1
-                status.last_failure_time = time.time()
-
-                if status.failure_count >= self.failure_threshold:
-                    status.is_available = False
-                    logger.warning(f"🚫 Token已禁用: {token[:20]}... (失败 {status.failure_count} 次)")
-
-    def get_pool_status(self) -> Dict:
-        """获取token池状态信息"""
-        with self._lock:
-            available_count = len(self._get_available_tokens())
-            total_count = len(self.token_statuses)
-
-            # 统计健康token数量
-            healthy_count = sum(1 for status in self.token_statuses.values() if status.is_healthy)
-
-            status_info = {
-                "total_tokens": total_count,
-                "available_tokens": available_count,
-                "unavailable_tokens": total_count - available_count,
-                "healthy_tokens": healthy_count,
-                "unhealthy_tokens": total_count - healthy_count,
-                "current_index": self._current_index,
-                "tokens": []
-            }
-
-            for token, status in self.token_statuses.items():
-                status_info["tokens"].append({
-                    "token": f"{token[:10]}...{token[-10:]}",
-                    "token_type": status.token_type,
-                    "is_available": status.is_available,
-                    "failure_count": status.failure_count,
-                    "success_count": status.successful_requests,
-                    "success_rate": f"{status.success_rate:.2%}",
-                    "total_requests": status.total_requests,
-                    "is_healthy": status.is_healthy,
-                    "last_failure_time": status.last_failure_time,
-                    "last_success_time": status.last_success_time
-                })
-
-            return status_info
-
-    def update_tokens(self, new_tokens: List[str]):
-        """动态更新token列表"""
-        with self._lock:
-            # 保留现有token的状态信息
-            old_statuses = self.token_statuses.copy()
-            self.token_statuses.clear()
-
-            original_count = len(new_tokens)
-            unique_tokens = []
-
-            # 去重并添加新token,保留已存在token的状态
-            for token in new_tokens:
-                if token and token not in self.token_statuses:  # 过滤空token和重复token
-                    if token in old_statuses:
-                        self.token_statuses[token] = old_statuses[token]
-                    else:
-                        self.token_statuses[token] = TokenStatus(token=token)
-                    unique_tokens.append(token)
-
-            # 记录去重信息
-            duplicate_count = original_count - len(unique_tokens)
-            if duplicate_count > 0:
-                logger.warning(f"⚠️ 更新时检测到 {duplicate_count} 个重复token,已自动去重")
-
-            # 重置索引
-            self._current_index = 0
-
-            logger.info(f"🔄 更新Token池,共 {len(self.token_statuses)} 个token")
-
-    async def health_check_token(self, token: str, auth_url: str = "https://chat.z.ai/api/v1/auths/") -> bool:
-        """
-        异步健康检查单个token
-
-        使用Z.AI认证API验证token的有效性,通过检查响应内容判断token是否有效
-
-        Args:
-            token: 要检查的token
-            auth_url: 认证URL
-
-        Returns:
-            token是否健康
-        """
-        try:
-            # 构建完整的请求头,模拟真实浏览器请求
-            headers = {
-                "Accept": "*/*",
-                "Accept-Language": "zh-CN,zh;q=0.9",
-                "Authorization": f"Bearer {token}",
-                "Connection": "keep-alive",
-                "Content-Type": "application/json",
-                "DNT": "1",
-                "Referer": "https://chat.z.ai/",
-                "Sec-Fetch-Dest": "empty",
-                "Sec-Fetch-Mode": "cors",
-                "Sec-Fetch-Site": "same-origin",
-                "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/140.0.0.0 Safari/537.36",
-                "sec-ch-ua": '"Chromium";v="140", "Not=A?Brand";v="24", "Google Chrome";v="140"',
-                "sec-ch-ua-mobile": "?0",
-                "sec-ch-ua-platform": "Windows"
-            }
-
-            async with httpx.AsyncClient(timeout=15.0) as client:
-                response = await client.get(auth_url, headers=headers)
-
-            # 验证token有效性并获取类型
-            token_type, is_healthy = self._validate_token_response(response)
-
-            # 更新token类型
-            if token in self.token_statuses:
-                self.token_statuses[token].token_type = token_type
-
-            if is_healthy:
-                self.mark_token_success(token)
-            else:
-                # 简化错误信息,只记录关键错误类型
-                if token_type == "guest":
-                    error_msg = "匿名用户token"
-                elif response.status_code != 200:
-                    error_msg = f"HTTP {response.status_code}"
-                else:
-                    error_msg = "认证失败"
-
-                self.mark_token_failure(token, Exception(error_msg))
-
-            return is_healthy
-
-        except (httpx.TimeoutException, httpx.ConnectError, Exception) as e:
-            self.mark_token_failure(token, e)
-            return False
-
-    def _validate_token_response(self, response: httpx.Response) -> bool:
-        """
-        基于Z.AI API响应中的role字段验证token类型
-
-        验证规则:
-        - role: "user" = 认证用户token(有效,可用于AUTH_TOKENS)
-        - role: "guest" = 匿名用户token(无效,不应在AUTH_TOKENS中)
-        - 无role字段或其他值 = 无效token
-
-        Args:
-            response: HTTP响应对象
-
-        Returns:
-            token是否为有效的认证用户token
-        """
-        # 首先检查HTTP状态码
-        if response.status_code != 200:
-            return ("unknown", False)
-
-        try:
-            # 尝试解析JSON响应
-            response_data = response.json()
-
-            if not isinstance(response_data, dict):
-                return ("unknown", False)
-
-            # 检查是否包含错误信息
-            if "error" in response_data:
-                return ("unknown", False)
-
-            if "message" in response_data and "error" in response_data.get("message", "").lower():
-                return ("unknown", False)
-
-            # 核心验证:检查role字段
-            role = response_data.get("role")
-
-            if role == "user":
-                return ("user", True)
-            elif role == "guest":
-
-                if not hasattr(self, '_guest_token_warned'):
-                    logger.warning("⚠️ 检测到匿名用户token,建议仅在AUTH_TOKENS中配置认证用户token")
-                    self._guest_token_warned = True
-                return ("guest", False)
-            else:
-                return ("unknown", False)
-
-        except (ValueError, Exception):
-            return ("unknown", False)
-
-    async def health_check_all(self, auth_url: str = "https://chat.z.ai/api/v1/auths/"):
-        """异步健康检查所有token"""
-        if not self.token_statuses:
-            logger.warning("⚠️ Token池为空,跳过健康检查")
-            return
-
-        total_tokens = len(self.token_statuses)
-        logger.info(f"🔍 开始Token池健康检查... (共 {total_tokens} 个token)")
-
-        # 并发执行所有token的健康检查
-        tasks = []
-        token_list = list(self.token_statuses.keys())
-
-        for token in token_list:
-            task = self.health_check_token(token, auth_url)
-            tasks.append(task)
-
-        # 执行并收集结果
-        results = await asyncio.gather(*tasks, return_exceptions=True)
-
-        # 统计结果
-        healthy_count = 0
-        failed_count = 0
-        exception_count = 0
-
-        for i, result in enumerate(results):
-            if result is True:
-                healthy_count += 1
-            elif result is False:
-                failed_count += 1
-            else:
-                # 异常情况
-                exception_count += 1
-                token = token_list[i]
-                logger.error(f"💥 Token {token[:20]}... 健康检查异常: {result}")
-
-        health_rate = (healthy_count / total_tokens) * 100 if total_tokens > 0 else 0
-
-        if healthy_count == 0 and total_tokens > 0:
-            logger.warning(f"⚠️ 健康检查完成: 0/{total_tokens} 个token健康 - 请检查token配置")
-        elif failed_count > 0:
-            logger.warning(f"⚠️ 健康检查完成: {healthy_count}/{total_tokens} 个token健康 ({health_rate:.1f}%)")
-        else:
-            logger.info(f"✅ 健康检查完成: {healthy_count}/{total_tokens} 个token健康")
-
-        if exception_count > 0:
-            logger.error(f"💥 {exception_count} 个token检查异常")
-
-
-# 全局token池实例
-_token_pool: Optional[TokenPool] = None
-_pool_lock = Lock()
-
-
-def get_token_pool() -> Optional[TokenPool]:
-    """获取全局token池实例"""
-    return _token_pool
-
-
-def initialize_token_pool(tokens: List[str], failure_threshold: int = 3, recovery_timeout: int = 1800) -> TokenPool:
-    """初始化全局token池"""
-    global _token_pool
-    with _pool_lock:
-        _token_pool = TokenPool(tokens, failure_threshold, recovery_timeout)
-        return _token_pool
-
-
-def update_token_pool(tokens: List[str]):
-    """更新全局token池"""
-    global _token_pool
-    with _pool_lock:
-        if _token_pool:
-            _token_pool.update_tokens(tokens)
-        else:
-            _token_pool = TokenPool(tokens)
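Stripped of status tracking and locking, the selection logic that this deleted `TokenPool.get_next_token` implemented reduces to a plain round-robin over the available tokens. A minimal standalone sketch for reference (class and variable names here are illustrative, not from the repo):

```python
from typing import List, Optional


class RoundRobin:
    """Round-robin selection over a token list, mirroring TokenPool.get_next_token."""

    def __init__(self, tokens: List[str]):
        self.tokens = list(tokens)
        self._i = 0  # corresponds to TokenPool._current_index

    def next(self) -> Optional[str]:
        if not self.tokens:
            return None  # pool empty: caller falls back to anonymous mode
        tok = self.tokens[self._i % len(self.tokens)]
        self._i = (self._i + 1) % len(self.tokens)
        return tok


rr = RoundRobin(["a", "b", "c"])
print([rr.next() for _ in range(4)])  # → ['a', 'b', 'c', 'a']
```

In the deleted implementation the list passed to this cycle was recomputed on every call (only healthy, authenticated-user tokens), which is what made the failure/recovery bookkeeping above necessary.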
app/utils/tools.py ADDED
@@ -0,0 +1,325 @@
+"""
+Tool processing utilities
+"""
+
+import json
+import re
+import time
+from typing import Dict, List, Optional, Any
+
+from app.core.config import settings
+
+
+def content_to_string(content: Any) -> str:
+    """Convert content from various formats to string (following app.py pattern)"""
+    if isinstance(content, str):
+        return content
+    if isinstance(content, list):
+        parts = []
+        for p in content:
+            if isinstance(p, dict) and p.get("type") == "text":
+                parts.append(p.get("text", ""))
+            elif isinstance(p, str):
+                parts.append(p)
+        return " ".join(parts)
+    return ""
+
+
+def generate_tool_prompt(tools: List[Dict[str, Any]]) -> str:
+    """Generate tool injection prompt with enhanced formatting"""
+    if not tools:
+        return ""
+
+    tool_definitions = []
+    for tool in tools:
+        if tool.get("type") != "function":
+            continue
+
+        function_spec = tool.get("function", {}) or {}
+        function_name = function_spec.get("name", "unknown")
+        function_description = function_spec.get("description", "")
+        parameters = function_spec.get("parameters", {}) or {}
+
+        # Create structured tool definition
+        tool_info = [f"## {function_name}", f"**Purpose**: {function_description}"]
+
+        # Add parameter details
+        parameter_properties = parameters.get("properties", {}) or {}
+        required_parameters = set(parameters.get("required", []) or [])
+
+        if parameter_properties:
+            tool_info.append("**Parameters**:")
+            for param_name, param_details in parameter_properties.items():
+                param_type = (param_details or {}).get("type", "any")
+                param_desc = (param_details or {}).get("description", "")
+                requirement_flag = "**Required**" if param_name in required_parameters else "*Optional*"
+                tool_info.append(f"- `{param_name}` ({param_type}) - {requirement_flag}: {param_desc}")
+
+        tool_definitions.append("\n".join(tool_info))
+
+    if not tool_definitions:
+        return ""
+
+    # Build comprehensive tool prompt
+    prompt_template = (
+        "\n\n# AVAILABLE FUNCTIONS\n" + "\n\n---\n".join(tool_definitions) + "\n\n# USAGE INSTRUCTIONS\n"
+        "When you need to execute a function, respond ONLY with a JSON object containing tool_calls:\n"
+        "```json\n"
+        "{\n"
+        '  "tool_calls": [\n'
+        "    {\n"
+        '      "id": "call_xxx",\n'
+        '      "type": "function",\n'
+        '      "function": {\n'
+        '        "name": "function_name",\n'
+        '        "arguments": "{\\"param1\\": \\"value1\\"}"\n'
+        "      }\n"
+        "    }\n"
+        "  ]\n"
+        "}\n"
+        "```\n"
+        "Important: No explanatory text before or after the JSON. The 'arguments' field must be a JSON string, not an object.\n"
+    )
+
+    return prompt_template
+
+
+def process_messages_with_tools(
+    messages: List[Dict[str, Any]], tools: Optional[List[Dict[str, Any]]] = None, tool_choice: Optional[Any] = None
+) -> List[Dict[str, Any]]:
+    """Process messages and inject tool prompts"""
+    processed: List[Dict[str, Any]] = []
+
+    if tools and settings.TOOL_SUPPORT and (tool_choice != "none"):
+        tools_prompt = generate_tool_prompt(tools)
+        has_system = any(m.get("role") == "system" for m in messages)
+
+        if has_system:
+            for m in messages:
+                if m.get("role") == "system":
+                    mm = dict(m)
+                    content = content_to_string(mm.get("content", ""))
+                    mm["content"] = content + tools_prompt
+                    processed.append(mm)
+                else:
+                    processed.append(m)
+        else:
+            processed = [{"role": "system", "content": "你是一个有用的助手。" + tools_prompt}] + messages
+
+        # Add tool choice hints
+        if tool_choice in ("required", "auto"):
+            if processed and processed[-1].get("role") == "user":
+                last = dict(processed[-1])
+                content = content_to_string(last.get("content", ""))
+                last["content"] = content + "\n\n请根据需要使用提供的工具函数。"
+                processed[-1] = last
+        elif isinstance(tool_choice, dict) and tool_choice.get("type") == "function":
+            fname = (tool_choice.get("function") or {}).get("name")
+            if fname and processed and processed[-1].get("role") == "user":
+                last = dict(processed[-1])
+                content = content_to_string(last.get("content", ""))
+                last["content"] = content + f"\n\n请使用 {fname} 函数来处理这个请求。"
+                processed[-1] = last
+    else:
+        processed = list(messages)
+
+    # Handle tool/function messages
+    final_msgs: List[Dict[str, Any]] = []
+    for m in processed:
+        role = m.get("role")
+        if role in ("tool", "function"):
+            tool_name = m.get("name", "unknown")
+            tool_content = content_to_string(m.get("content", ""))
+            if isinstance(tool_content, dict):
+                tool_content = json.dumps(tool_content, ensure_ascii=False)
+
+            # 确保内容不为空且不包含 None
+            content = f"工具 {tool_name} 返回结果:\n```json\n{tool_content}\n```"
+            if not content.strip():
+                content = f"工具 {tool_name} 执行完成"
+
+            final_msgs.append(
+                {
+                    "role": "assistant",
+                    "content": content,
+                }
+            )
+        else:
+            # For regular messages, ensure content is string format
+            final_msg = dict(m)
+            content = content_to_string(final_msg.get("content", ""))
+            final_msg["content"] = content
+            final_msgs.append(final_msg)
+
+    return final_msgs
+
+
+# Tool Extraction Patterns
+TOOL_CALL_FENCE_PATTERN = re.compile(r"```json\s*(\{.*?\})\s*```", re.DOTALL)
+# 注意:TOOL_CALL_INLINE_PATTERN 已被移除,因为它会导致过度匹配
+# 现在在 remove_tool_json_content 函数中使用基于括号平衡的方法
+FUNCTION_CALL_PATTERN = re.compile(r"调用函数\s*[::]\s*([\w\-\.]+)\s*(?:参数|arguments)[::]\s*(\{.*?\})", re.DOTALL)
+
+
+def extract_tool_invocations(text: str) -> Optional[List[Dict[str, Any]]]:
+    """Extract tool invocations from response text"""
+    if not text:
+        return None
+
+    # Limit scan size for performance
+    scannable_text = text[: settings.SCAN_LIMIT]
+
+    # Attempt 1: Extract from JSON code blocks
+    json_blocks = TOOL_CALL_FENCE_PATTERN.findall(scannable_text)
+    for json_block in json_blocks:
+        try:
+            parsed_data = json.loads(json_block)
+            tool_calls = parsed_data.get("tool_calls")
+            if tool_calls and isinstance(tool_calls, list):
+                # Ensure arguments field is a string
+                for tc in tool_calls:
+                    if "function" in tc:
+                        func = tc["function"]
+                        if "arguments" in func:
+                            if isinstance(func["arguments"], dict):
+                                # Convert dict to JSON string
+                                func["arguments"] = json.dumps(func["arguments"], ensure_ascii=False)
+                            elif not isinstance(func["arguments"], str):
+                                func["arguments"] = json.dumps(func["arguments"], ensure_ascii=False)
+                return tool_calls
+        except (json.JSONDecodeError, AttributeError):
+            continue
+
+    # Attempt 2: Extract inline JSON objects using bracket balance method
+    # 查找包含 "tool_calls" 的 JSON 对象
+    i = 0
+    while i < len(scannable_text):
+        if scannable_text[i] == '{':
+            # 尝试找到匹配的右括号
+            brace_count = 1
+            j = i + 1
+            in_string = False
+            escape_next = False
+
+            while j < len(scannable_text) and brace_count > 0:
+                if escape_next:
+                    escape_next = False
+                elif scannable_text[j] == '\\':
+                    escape_next = True
+                elif scannable_text[j] == '"' and not escape_next:
+                    in_string = not in_string
+                elif not in_string:
+                    if scannable_text[j] == '{':
+                        brace_count += 1
+                    elif scannable_text[j] == '}':
+                        brace_count -= 1
+                j += 1
+
+            if brace_count == 0:
+                # 找到了完整的 JSON 对象
+                json_str = scannable_text[i:j]
+                try:
+                    parsed_data = json.loads(json_str)
+                    tool_calls = parsed_data.get("tool_calls")
+                    if tool_calls and isinstance(tool_calls, list):
+                        # Ensure arguments field is a string
+                        for tc in tool_calls:
+                            if "function" in tc:
+                                func = tc["function"]
+                                if "arguments" in func:
+                                    if isinstance(func["arguments"], dict):
+                                        # Convert dict to JSON string
+                                        func["arguments"] = json.dumps(func["arguments"], ensure_ascii=False)
+                                    elif not isinstance(func["arguments"], str):
+                                        func["arguments"] = json.dumps(func["arguments"], ensure_ascii=False)
+                        return tool_calls
+                except (json.JSONDecodeError, AttributeError):
+                    pass
+
+            i += 1
+        else:
+            i += 1
+
+    # Attempt 3: Parse natural language function calls
+    natural_lang_match = FUNCTION_CALL_PATTERN.search(scannable_text)
+    if natural_lang_match:
+        function_name = natural_lang_match.group(1).strip()
+        arguments_str = natural_lang_match.group(2).strip()
+        try:
+            # Validate JSON format
+            json.loads(arguments_str)
+            return [
+                {
+                    "id": f"call_{int(time.time() * 1000000)}",
+                    "type": "function",
+                    "function": {"name": function_name, "arguments": arguments_str},
+                }
+            ]
+        except json.JSONDecodeError:
+            return None
+
+    return None
+
+
+def remove_tool_json_content(text: str) -> str:
+    """Remove tool JSON content from response text - using bracket balance method"""
+
+    def remove_tool_call_block(match: re.Match) -> str:
+        json_content = match.group(1)
+        try:
+            parsed_data = json.loads(json_content)
+            if "tool_calls" in parsed_data:
+                return ""
+        except (json.JSONDecodeError, AttributeError):
+            pass
+        return match.group(0)
+
+    # Step 1: Remove fenced tool JSON blocks
+    cleaned_text = TOOL_CALL_FENCE_PATTERN.sub(remove_tool_call_block, text)
+
+    # Step 2: Remove inline tool JSON - 使用基于括号平衡的智能方法
+    # 查找所有可能的 JSON 对象并精确删除包含 tool_calls 的对象
+    result = []
+    i = 0
+    while i < len(cleaned_text):
+        if cleaned_text[i] == '{':
+            # 尝试找到匹配的右括号
+            brace_count = 1
+            j = i + 1
+            in_string = False
+            escape_next = False
+
+            while j < len(cleaned_text) and brace_count > 0:
+                if escape_next:
+                    escape_next = False
+                elif cleaned_text[j] == '\\':
+                    escape_next = True
+                elif cleaned_text[j] == '"' and not escape_next:
+                    in_string = not in_string
+                elif not in_string:
+                    if cleaned_text[j] == '{':
+                        brace_count += 1
+                    elif cleaned_text[j] == '}':
+                        brace_count -= 1
+                j += 1
+
+            if brace_count == 0:
+                # 找到了完整的 JSON 对象
+                json_str = cleaned_text[i:j]
+                try:
+                    parsed = json.loads(json_str)
+                    if "tool_calls" in parsed:
+                        # 这是一个工具调用,跳过它
+                        i = j
+                        continue
+                except:
+                    pass
+
+            # 不是工具调用或无法解析,保留这个字符
+            result.append(cleaned_text[i])
+            i += 1
+        else:
+            result.append(cleaned_text[i])
+            i += 1
+
+    return ''.join(result).strip()
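The brace-balance scan that the new tools.py relies on (in both `extract_tool_invocations` and `remove_tool_json_content`) can be exercised on its own. A self-contained sketch, with the `settings.SCAN_LIMIT` dependency replaced by a local constant:

```python
import json
from typing import Any, Dict, List, Optional

SCAN_LIMIT = 200_000  # stand-in for settings.SCAN_LIMIT


def find_tool_calls(text: str) -> Optional[List[Dict[str, Any]]]:
    """Return the "tool_calls" list from the first balanced {...} object that has one."""
    text = text[:SCAN_LIMIT]
    i = 0
    while i < len(text):
        if text[i] != '{':
            i += 1
            continue
        # Walk forward, tracking brace depth while ignoring braces inside strings.
        brace_count, j = 1, i + 1
        in_string = escape_next = False
        while j < len(text) and brace_count > 0:
            ch = text[j]
            if escape_next:
                escape_next = False
            elif ch == '\\':
                escape_next = True
            elif ch == '"':
                in_string = not in_string
            elif not in_string:
                if ch == '{':
                    brace_count += 1
                elif ch == '}':
                    brace_count -= 1
            j += 1
        if brace_count == 0:
            try:
                parsed = json.loads(text[i:j])
                calls = parsed.get("tool_calls")
                if isinstance(calls, list) and calls:
                    return calls
            except (json.JSONDecodeError, AttributeError):
                pass
        i += 1
    return None


reply = ('Sure, calling it now: {"tool_calls": [{"id": "call_1", "type": "function", '
         '"function": {"name": "get_weather", "arguments": "{\\"city\\": \\"Berlin\\"}"}}]} done.')
calls = find_tool_calls(reply)
print(calls[0]["function"]["name"])  # → get_weather
```

This is why the over-eager `TOOL_CALL_INLINE_PATTERN` regex could be dropped: depth tracking with string/escape awareness finds the exact extent of the JSON object even when arguments contain nested braces or escaped quotes.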
deploy/Dockerfile CHANGED
@@ -5,6 +5,6 @@ WORKDIR /app
 COPY requirements.txt .
 RUN pip install --no-cache-dir -r requirements.txt
 
-COPY .. .
+COPY . .
 
 CMD ["python", "main.py"]
deploy/docker-compose.yml CHANGED
@@ -2,19 +2,19 @@ version: '3.8'
 
 services:
   api-server:
-    build:
-      context: ..
-      dockerfile: deploy/Dockerfile
+    image: julienol/z-ai2api-python:latest
     container_name: z-ai-api-server
    ports:
-      - "8080:8080"
+      - "8084:8080"
     environment:
       # Auth Configuration
-      - AUTH_TOKEN=sk-your-api-key
+      - AUTH_TOKEN=sk-123456
       # 是否跳过api key验证
       - SKIP_AUTH_TOKEN=false
       # Server Configurations
       - DEBUG_LOGGING=true
+      # Feature Configuration
+      - THINKING_PROCESSING=think
       - ANONYMOUS_MODE=true
       - TOOL_SUPPORT=true
       - SCAN_LIMIT=200000
docker-compose.yml ADDED
@@ -0,0 +1,31 @@
+version: '3.8'
+
+services:
+  z-ai2api:
+    image: julienol/z-ai2api-python:latest
+    container_name: z-ai2api-python
+    ports:
+      - "8084:8080"
+    env_file:
+      - .env
+    volumes:
+      # 挂载token文件(如果使用token池功能)
+      - ./tokens.txt:/app/tokens.txt:ro
+      # 可选:挂载数据目录用于持久化token状态
+      - ./data:/app/data
+    restart: unless-stopped
+    # 添加宿主机网络访问支持
+    # extra_hosts:
+    #   - "host.docker.internal:host-gateway"
+    healthcheck:
+      test: ["CMD", "curl", "-f", "http://localhost:8080/"]
+      interval: 30s
+      timeout: 10s
+      retries: 3
+      start_period: 40s
+    networks:
+      - z-ai2api-network
+
+networks:
+  z-ai2api-network:
+    driver: bridge
main.py CHANGED
@@ -1,44 +1,25 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-import os
-import sys
-import psutil
-from contextlib import asynccontextmanager
-from fastapi import FastAPI, Response
+"""
+Main application entry point
+"""
+
+from fastapi import FastAPI, Request, Response
 from fastapi.middleware.cors import CORSMiddleware
 
 from app.core.config import settings
 from app.core import openai
 from app.utils.reload_config import RELOAD_CONFIG
-from app.utils.logger import setup_logger
-from app.utils.token_pool import initialize_token_pool
-from app.utils.process_manager import ensure_service_uniqueness
 
 from granian import Granian
 
-
-# Setup logger
-logger = setup_logger(log_dir="logs", debug_mode=settings.DEBUG_LOGGING)
-
-
-@asynccontextmanager
-async def lifespan(app: FastAPI):
-    token_list = settings.auth_token_list
-    if token_list:
-        token_pool = initialize_token_pool(
-            tokens=token_list,
-            failure_threshold=settings.TOKEN_FAILURE_THRESHOLD,
-            recovery_timeout=settings.TOKEN_RECOVERY_TIMEOUT
-        )
-
-    yield
-
-    logger.info("🔄 应用正在关闭...")
-
-
-# Create FastAPI app with lifespan
-app = FastAPI(lifespan=lifespan)
+# Create FastAPI app
+app = FastAPI(
+    title="OpenAI Compatible API Server",
+    description="An OpenAI-compatible API server for Z.AI chat service",
+    version="1.0.0",
+)
 
 # Add CORS middleware
 app.add_middleware(
@@ -66,32 +47,14 @@ async def root():
 
 
 def run_server():
-    # 服务唯一性检查
-    service_name = settings.SERVICE_NAME
-    if not ensure_service_uniqueness(service_name=service_name, port=settings.LISTEN_PORT):
-        logger.error("❌ 服务已在运行,程序退出")
-        sys.exit(1)
-
-    logger.info(f"🚀 启动 {service_name} 服务...")
-    logger.info(f"📡 监听地址: 0.0.0.0:{settings.LISTEN_PORT}")
-    logger.info(f"🔧 调试模式: {'开启' if settings.DEBUG_LOGGING else '关闭'}")
-    logger.info(f"🔐 匿名模式: {'开启' if settings.ANONYMOUS_MODE else '关闭'}")
-
-    try:
-        Granian(
-            "main:app",
-            interface="asgi",
-            address="0.0.0.0",
-            port=settings.LISTEN_PORT,
-            reload=True,  # 生产环境请关闭热重载
-            process_name=service_name,  # 设置进程名称
-            **RELOAD_CONFIG,
-        ).serve()
-    except KeyboardInterrupt:
-        logger.info("🛑 收到中断信号,正在关闭服务...")
-    except Exception as e:
-        logger.error(f"❌ 服务启动失败: {e}")
-        sys.exit(1)
+    Granian(
+        "main:app",
+        interface="asgi",
+        address="0.0.0.0",
+        port=settings.LISTEN_PORT,
+        reload=False,  # 生产环境请关闭热重载
+        **RELOAD_CONFIG,
+    ).serve()
 
 
 if __name__ == "__main__":
pyproject.toml CHANGED
@@ -24,16 +24,14 @@ classifiers = [
      "Topic :: Software Development :: Libraries :: Python Modules",
  ]
  dependencies = [
-     "fastapi==0.116.1",
-     "granian[reload,pname]==2.5.2",
-     "httpx==0.28.1",
+     "fastapi==0.104.1",
+     "granian[reload]==2.5.2",
+     "requests==2.32.5",
      "pydantic==2.11.7",
      "pydantic-settings==2.10.1",
      "pydantic-core==2.33.2",
      "typing-inspection==0.4.1",
      "fake-useragent==2.2.0",
-     "loguru==0.7.3",
-     "psutil>=7.0.0",
  ]
 
  [project.scripts]
requirements.txt CHANGED
@@ -1,10 +1,8 @@
- fastapi==0.116.1
- granian[reload,pname]==2.5.2
- httpx==0.28.1
+ fastapi==0.104.1
+ granian[reload]==2.5.2
+ requests==2.32.5
  pydantic==2.11.7
  pydantic-settings==2.10.1
  pydantic-core==2.33.2
  typing-inspection==0.4.1
- fake-useragent==2.2.0
- loguru==0.7.3
- psutil>=7.0.0
+ fake-useragent==2.2.0
tests/test_comprehensive_tool_calls.py DELETED
@@ -1,254 +0,0 @@
- #!/usr/bin/env python
- # -*- coding: utf-8 -*-
-
- """
- 全面的工具调用测试套件
- 覆盖各种工具类型、参数格式、传输模式和边界情况
- """
-
- import json
- import time
- from typing import Dict, Any, List
- from app.utils.sse_tool_handler import SSEToolHandler
- from app.utils.logger import get_logger
-
- logger = get_logger()
-
- class TestResult:
-     """测试结果统计"""
-     def __init__(self, test_name: str):
-         self.test_name = test_name
-         self.passed = 0
-         self.failed = 0
-         self.errors = []
-
-     def add_pass(self):
-         self.passed += 1
-
-     def add_fail(self, error_msg: str):
-         self.failed += 1
-         self.errors.append(error_msg)
-
-     def print_summary(self):
-         total = self.passed + self.failed
-         success_rate = (self.passed / total * 100) if total > 0 else 0
-
-         print(f"\n📊 {self.test_name} 测试汇总:")
-         print(f"  总测试数: {total}")
-         print(f"  ✅ 通过: {self.passed}")
-         print(f"  ❌ 失败: {self.failed}")
-         print(f"  📈 成功率: {success_rate:.1f}%")
-
-         if self.errors:
-             print(f"\n❌ 失败详情:")
-             for i, error in enumerate(self.errors, 1):
-                 print(f"  {i}. {error}")
-
- def test_various_tool_types():
-     """测试各种类型的工具调用"""
-
-     result = TestResult("工具类型测试")
-
-     # 定义各种工具类型的测试用例
-     tool_scenarios = [
-         {
-             "name": "浏览器导航工具",
-             "tool_name": "browser_navigate",
-             "arguments": '{"url": "https://www.google.com"}',
-             "expected_args": {"url": "https://www.google.com"},
-             "description": "测试浏览器导航工具的URL参数"
-         },
-         {
-             "name": "天气查询工具",
-             "tool_name": "get_weather",
-             "arguments": '{"city": "北京", "unit": "celsius"}',
-             "expected_args": {"city": "北京", "unit": "celsius"},
-             "description": "测试天气查询工具的城市和单位参数"
-         },
-         {
-             "name": "文件操作工具",
-             "tool_name": "file_write",
-             "arguments": '{"path": "/tmp/test.txt", "content": "Hello World", "encoding": "utf-8"}',
-             "expected_args": {"path": "/tmp/test.txt", "content": "Hello World", "encoding": "utf-8"},
-             "description": "测试文件写入工具的多参数"
-         },
-         {
-             "name": "搜索工具",
-             "tool_name": "web_search",
-             "arguments": '{"query": "Python编程", "limit": 10, "safe_search": true}',
-             "expected_args": {"query": "Python编程", "limit": 10, "safe_search": True},
-             "description": "测试搜索工具的混合类型参数"
-         },
-         {
-             "name": "数据库查询工具",
-             "tool_name": "db_query",
-             "arguments": '{"sql": "SELECT * FROM users WHERE age > ?", "params": [18], "timeout": 30.5}',
-             "expected_args": {"sql": "SELECT * FROM users WHERE age > ?", "params": [18], "timeout": 30.5},
-             "description": "测试数据库工具的复杂参数结构"
-         },
-         {
-             "name": "API调用工具",
-             "tool_name": "api_call",
-             "arguments": '{"method": "POST", "url": "https://api.example.com/data", "headers": {"Content-Type": "application/json"}, "body": {"key": "value"}}',
-             "expected_args": {"method": "POST", "url": "https://api.example.com/data", "headers": {"Content-Type": "application/json"}, "body": {"key": "value"}},
-             "description": "测试API调用工具的嵌套对象参数"
-         },
-         {
-             "name": "图像处理工具",
-             "tool_name": "image_resize",
-             "arguments": '{"input_path": "image.jpg", "output_path": "resized.jpg", "width": 800, "height": 600, "maintain_aspect": false}',
-             "expected_args": {"input_path": "image.jpg", "output_path": "resized.jpg", "width": 800, "height": 600, "maintain_aspect": False},
-             "description": "测试图像处理工具的数值和布尔参数"
-         },
-         {
-             "name": "邮件发送工具",
-             "tool_name": "send_email",
-             "arguments": '{"to": ["user1@example.com", "user2@example.com"], "subject": "测试邮件", "body": "这是一封测试邮件\\n包含换行符", "attachments": []}',
-             "expected_args": {"to": ["user1@example.com", "user2@example.com"], "subject": "测试邮件", "body": "这是一封测试邮件\n包含换行符", "attachments": []},
-             "description": "测试邮件工具的数组参数和转义字符"
-         }
-     ]
-
-     print("🔧 测试各种类型的工具调用")
-     print("=" * 80)
-
-     for i, scenario in enumerate(tool_scenarios, 1):
-         print(f"\n测试 {i}: {scenario['name']}")
-         print(f"描述: {scenario['description']}")
-
-         try:
-             handler = SSEToolHandler("test_chat_id", "GLM-4.5")
-
-             # 构造完整的工具调用数据
-             tool_data = {
-                 "edit_index": 0,
-                 "edit_content": f'<glm_block >{{"type": "mcp", "data": {{"metadata": {{"id": "call_{i}", "name": "{scenario["tool_name"]}", "arguments": "{scenario["arguments"]}", "result": "", "status": "completed"}}}}, "thought": null}}</glm_block>',
-                 "phase": "tool_call"
-             }
-
-             # 处理工具调用
-             chunks = list(handler.process_tool_call_phase(tool_data, is_stream=False))
-
-             # 验证结果
-             if handler.active_tools:
-                 tool = list(handler.active_tools.values())[0]
-                 actual_args = tool["arguments"]
-                 expected_args = scenario["expected_args"]
-
-                 if actual_args == expected_args:
-                     print(f"  ✅ 参数解析正确: {actual_args}")
-                     result.add_pass()
-                 else:
-                     error_msg = f"{scenario['name']}: 参数不匹配 - 期望: {expected_args}, 实际: {actual_args}"
-                     print(f"  ❌ {error_msg}")
-                     result.add_fail(error_msg)
-             else:
-                 error_msg = f"{scenario['name']}: 未检测到工具调用"
-                 print(f"  ❌ {error_msg}")
-                 result.add_fail(error_msg)
-
-         except Exception as e:
-             error_msg = f"{scenario['name']}: 处理异常 - {str(e)}"
-             print(f"  ❌ {error_msg}")
-             result.add_fail(error_msg)
-
-     result.print_summary()
-     return result
-
- def test_parameter_formats():
-     """测试各种参数格式"""
-
-     result = TestResult("参数格式测试")
-
-     # 定义各种参数格式的测试用例
-     format_scenarios = [
-         {
-             "name": "空参数",
-             "arguments": "{}",
-             "expected": {},
-             "description": "测试空参数对象"
-         },
-         {
-             "name": "null参数",
-             "arguments": "null",
-             "expected": {},
-             "description": "测试null参数值"
-         },
-         {
-             "name": "转义JSON字符串",
-             "arguments": '{\\"key\\": \\"value\\"}',
-             "expected": {"key": "value"},
-             "description": "测试转义的JSON字符串"
-         },
-         {
-             "name": "包含特殊字符",
-             "arguments": '{"text": "Hello\\nWorld\\t!", "emoji": "😀🎉", "unicode": "中文测试"}',
-             "expected": {"text": "Hello\nWorld\t!", "emoji": "😀🎉", "unicode": "中文测试"},
-             "description": "测试包含换行符、制表符、emoji和中文的参数"
-         },
-         {
-             "name": "数值类型",
-             "arguments": '{"int": 42, "float": 3.14159, "negative": -100, "zero": 0}',
-             "expected": {"int": 42, "float": 3.14159, "negative": -100, "zero": 0},
-             "description": "测试各种数值类型参数"
-         },
-         {
-             "name": "布尔类型",
-             "arguments": '{"true_val": true, "false_val": false}',
-             "expected": {"true_val": True, "false_val": False},
-             "description": "测试布尔类型参数"
-         },
-         {
-             "name": "数组参数",
-             "arguments": '{"empty_array": [], "string_array": ["a", "b", "c"], "mixed_array": [1, "two", true, null]}',
-             "expected": {"empty_array": [], "string_array": ["a", "b", "c"], "mixed_array": [1, "two", True, None]},
-             "description": "测试各种数组类型参数"
-         },
-         {
-             "name": "嵌套对象",
-             "arguments": '{"nested": {"level1": {"level2": {"value": "deep"}}}, "array_of_objects": [{"id": 1}, {"id": 2}]}',
-             "expected": {"nested": {"level1": {"level2": {"value": "deep"}}}, "array_of_objects": [{"id": 1}, {"id": 2}]},
-             "description": "测试深度嵌套的对象和对象数组"
-         },
-         {
-             "name": "长字符串",
-             "arguments": '{"long_text": "' + "A" * 1000 + '"}',
-             "expected": {"long_text": "A" * 1000},
-             "description": "测试长字符串参数"
-         },
-         {
-             "name": "包含引号的字符串",
-             "arguments": '{"quoted": "He said \\"Hello\\" to me", "single_quote": "It\'s working"}',
-             "expected": {"quoted": 'He said "Hello" to me', "single_quote": "It's working"},
-             "description": "测试包含引号的字符串参数"
-         }
-     ]
-
-     print("\n📝 测试各种参数格式")
-     print("=" * 80)
-
-     for i, scenario in enumerate(format_scenarios, 1):
-         print(f"\n测试 {i}: {scenario['name']}")
-         print(f"描述: {scenario['description']}")
-
-         try:
-             handler = SSEToolHandler("test_chat_id", "GLM-4.5")
-
-             # 直接测试参数解析
-             result_args = handler._parse_partial_arguments(scenario["arguments"])
-
-             if result_args == scenario["expected"]:
-                 print(f"  ✅ 参数解析正确")
-                 result.add_pass()
-             else:
-                 error_msg = f"{scenario['name']}: 参数解析错误 - 期望: {scenario['expected']}, 实际: {result_args}"
-                 print(f"  ❌ {error_msg}")
-                 result.add_fail(error_msg)
-
-         except Exception as e:
-             error_msg = f"{scenario['name']}: 解析异常 - {str(e)}"
-             print(f"  ❌ {error_msg}")
-             result.add_fail(error_msg)
-
-     result.print_summary()
-     return result
tests/test_final_verification.py ADDED
@@ -0,0 +1,56 @@
+ """验证 tools.py 修复后的功能"""
+
+ import sys
+ sys.path.append('E:\\GitHub\\z.ai2api_python')
+
+ from app.utils.tools import remove_tool_json_content
+
+ def test_remove_tool_json():
+     print("=" * 60)
+     print("验证 tools.py 中的 remove_tool_json_content 函数")
+     print("=" * 60)
+
+     # 测试案例 1: 纯工具调用 JSON(应该被完全移除)
+     test1 = '{"tool_calls": [{"id": "call_1", "type": "function"}]}'
+     result1 = remove_tool_json_content(test1)
+     print(f"\n测试1 - 纯工具调用:")
+     print(f"输入: {test1}")
+     print(f"输出: '{result1}'")
+     print("[PASS] 通过" if result1 == "" else "[FAIL] 失败")
+
+     # 测试案例 2: 混合内容
+     test2 = '''这是开始文本
+ {"tool_calls": [{"id": "call_2", "type": "function"}]}
+ 这是结束文本'''
+     result2 = remove_tool_json_content(test2)
+     print(f"\n测试2 - 混合内容:")
+     print(f"输入: {repr(test2)}")
+     print(f"输出: {repr(result2)}")
+     expected2 = "这是开始文本\n\n这是结束文本"
+     print("[PASS] 通过" if result2 == expected2 else "[FAIL] 失败")
+
+     # 测试案例 3: 普通 JSON(不应被删除)
+     test3 = '{"data": {"result": "success"}}'
+     result3 = remove_tool_json_content(test3)
+     print(f"\n测试3 - 普通JSON:")
+     print(f"输入: {test3}")
+     print(f"输出: '{result3}'")
+     print("[PASS] 通过" if result3 == test3 else "[FAIL] 失败")
+
+     # 测试案例 4: 代码块中的工具调用
+     test4 = '''正常文本
+ ```json
+ {"tool_calls": [{"id": "call_3"}]}
+ ```
+ 保留文本'''
+     result4 = remove_tool_json_content(test4)
+     print(f"\n测试4 - 代码块中的工具调用:")
+     print(f"输入: {repr(test4)}")
+     print(f"输出: {repr(result4)}")
+     print("[PASS] 通过" if "保留文本" in result4 and "tool_calls" not in result4 else "[FAIL] 失败")
+
+ if __name__ == "__main__":
+     test_remove_tool_json()
+     print("\n" + "=" * 60)
+     print("所有测试完成!正则表达式问题已成功修复。")
+     print("=" * 60)
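The behavior these checks assert can be sketched as a small, self-contained brace-balanced scanner. This is a hypothetical stand-in for `remove_tool_json_content` (the real function lives in `app/utils/tools.py` and also strips fenced ```json blocks); it only shows the core idea: walk the text, balance braces while respecting string escapes, and drop any top-level JSON object whose parsed form contains a `tool_calls` key.

```python
import json

def strip_tool_call_json(text: str) -> str:
    """Remove top-level JSON objects containing a "tool_calls" key.

    Hypothetical sketch of the brace-balanced approach exercised above;
    not the actual app.utils.tools implementation.
    """
    out, i = [], 0
    while i < len(text):
        if text[i] == "{":
            depth, j, in_str, esc = 1, i + 1, False, False
            while j < len(text) and depth:
                c = text[j]
                if esc:
                    esc = False          # previous char was a backslash
                elif c == "\\":
                    esc = True
                elif c == '"':
                    in_str = not in_str  # toggle string context
                elif not in_str:
                    depth += c == "{"
                    depth -= c == "}"
                j += 1
            if depth == 0:
                try:
                    if "tool_calls" in json.loads(text[i:j]):
                        i = j            # skip the whole tool-call object
                        continue
                except json.JSONDecodeError:
                    pass                 # not valid JSON: keep scanning char by char
        out.append(text[i])
        i += 1
    return "".join(out).strip()
```

Non-JSON text and ordinary JSON objects pass through unchanged; only parseable objects that carry `tool_calls` are dropped.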
tests/test_function_call.py ADDED
@@ -0,0 +1,70 @@
+ # -*- coding: utf-8 -*-
+
+ import json
+ import requests
+
+ # API 配置
+ API_BASE = "http://localhost:8080"
+ API_KEY = "sk-your-api-key"
+
+ def test_weather_query():
+     """测试天气查询"""
+     print("=" * 50)
+     print("上海天气查询测试")
+     print("=" * 50)
+
+     # 工具定义
+     tool = {
+         "type": "function",
+         "function": {
+             "name": "get_weather",
+             "description": "查询指定城市的天气信息",
+             "parameters": {
+                 "type": "object",
+                 "properties": {
+                     "city": {"type": "string", "description": "城市名称"},
+                     "date": {"type": "string", "description": "查询日期(可选)"}
+                 },
+                 "required": ["city"]
+             }
+         }
+     }
+
+     # 发送请求
+     headers = {
+         "Content-Type": "application/json",
+         "Authorization": f"Bearer {API_KEY}"
+     }
+
+     data = {
+         "model": "GLM-4.5",
+         "messages": [
+             {"role": "user", "content": "查询上海2025年9月3日的天气"}
+         ],
+         "tools": [tool]
+     }
+
+     print("\n发送请求...")
+     response = requests.post(f"{API_BASE}/v1/chat/completions",
+                              headers=headers,
+                              json=data)
+
+     if response.status_code == 200:
+         result = response.json()
+         message = result["choices"][0]["message"]
+
+         print("\n模型响应:")
+         if message.get("tool_calls"):
+             print("检测到工具调用:")
+             for tc in message["tool_calls"]:
+                 print(f"  - 工具: {tc['function']['name']}")
+                 print(f"  - 参数: {tc['function']['arguments']}")
+         else:
+             print("未检测到工具调用")
+             print(f"内容: {message.get('content', '无内容')[:100]}...")
+     else:
+         print(f"请求失败: {response.status_code}")
+         print(f"错误信息: {response.text}")
+
+ if __name__ == "__main__":
+     test_weather_query()
tests/test_live_server.py DELETED
@@ -1,112 +0,0 @@
- #!/usr/bin/env python
- # -*- coding: utf-8 -*-
-
- """
- 测试当前运行的服务器是否正确处理GLM-4.5-Search模型
- """
-
- import asyncio
- import json
- import httpx
- from app.core.config import settings
-
- async def test_live_server():
-     """测试实际运行的服务器"""
-
-     print("🧪 测试当前运行的服务器...")
-     print(f"服务器地址: http://localhost:{settings.LISTEN_PORT}")
-     print()
-
-     try:
-         async with httpx.AsyncClient() as client:
-             # 测试搜索模型请求
-             search_request = {
-                 "model": "GLM-4.5-Search",
-                 "messages": [
-                     {"role": "user", "content": "请搜索今天北京的天气"}
-                 ],
-                 "stream": True  # 使用流式以便观察日志
-             }
-
-             headers = {
-                 "Content-Type": "application/json",
-                 "Authorization": f"Bearer {settings.AUTH_TOKEN}"
-             }
-
-             print(f"📤 发送GLM-4.5-Search请求...")
-             print(f"请求内容: {json.dumps(search_request, ensure_ascii=False, indent=2)}")
-             print()
-
-             # 发送请求并接收流式响应
-             async with client.stream(
-                 "POST",
-                 f"http://localhost:{settings.LISTEN_PORT}/v1/chat/completions",
-                 json=search_request,
-                 headers=headers,
-                 timeout=30.0
-             ) as response:
-
-                 print(f"📥 响应状态: {response.status_code}")
-
-                 if response.status_code == 200:
-                     print(f"✅ 请求成功,开始接收流式响应...")
-                     print(f"💡 请查看服务器日志以确认是否正确添加了 deep-web-search MCP 服务器")
-                     print()
-
-                     # 读取前几个响应块
-                     chunk_count = 0
-                     async for line in response.aiter_lines():
-                         if line.startswith("data: "):
-                             chunk_count += 1
-                             if chunk_count <= 3:  # 只显示前3个块
-                                 data = line[6:]  # 去掉 "data: " 前缀
-                                 if data.strip() and data.strip() != "[DONE]":
-                                     try:
-                                         chunk_data = json.loads(data)
-                                         content = chunk_data.get("choices", [{}])[0].get("delta", {}).get("content", "")
-                                         if content:
-                                             print(f"📦 响应块 {chunk_count}: {content}")
-                                     except:
-                                         pass
-                             elif chunk_count > 10:  # 读取足够的块后停止
-                                 break
-
-                     print(f"\n✅ 流式响应正常,共接收 {chunk_count} 个数据块")
-                     print(f"🔍 请检查服务器日志中是否包含以下信息:")
-                     print(f"  - '模型特性检测: is_search=True'")
-                     print(f"  - '🔍 检测到搜索模型,添加 deep-web-search MCP 服务器'")
-                     print(f"  - 'MCP服务器列表: [\"deep-web-search\"]'")
-
-                 else:
-                     error_text = await response.aread()
-                     print(f"❌ 请求失败: {response.status_code}")
-                     print(f"错误信息: {error_text.decode('utf-8', errors='ignore')}")
-
-     except httpx.ConnectError:
-         print(f"❌ 无法连接到服务器 localhost:{settings.LISTEN_PORT}")
-         print(f"  请确保服务器正在运行: python main.py")
-     except Exception as e:
-         print(f"❌ 请求异常: {e}")
-
- async def main():
-     """主函数"""
-     print("=" * 60)
-     print("GLM-4.5-Search 实时服务器测试")
-     print("=" * 60)
-     print()
-
-     await test_live_server()
-
-     print()
-     print("=" * 60)
-     print("测试完成")
-     print("=" * 60)
-     print()
-     print("📋 检查清单:")
-     print("1. 服务器是否正常响应 GLM-4.5-Search 请求?")
-     print("2. 日志中是否显示 'is_search=True'?")
-     print("3. 日志中是否显示添加 deep-web-search MCP 服务器?")
-     print("4. 如果以上信息缺失,请重启服务器以加载最新代码")
-
- if __name__ == "__main__":
-     asyncio.run(main())
tests/test_model_comparison.py DELETED
@@ -1,118 +0,0 @@
- #!/usr/bin/env python
- # -*- coding: utf-8 -*-
-
- """
- 对比不同模型的搜索行为
- """
-
- import asyncio
- import json
- import httpx
- from app.core.config import settings
-
- async def test_model(model_name: str, question: str):
-     """测试特定模型的响应"""
-
-     print(f"🧪 测试模型: {model_name}")
-     print(f"问题: {question}")
-     print()
-
-     try:
-         async with httpx.AsyncClient() as client:
-             request_data = {
-                 "model": model_name,
-                 "messages": [
-                     {"role": "user", "content": question}
-                 ],
-                 "stream": False  # 使用非流式以便完整查看响应
-             }
-
-             headers = {
-                 "Content-Type": "application/json",
-                 "Authorization": f"Bearer {settings.AUTH_TOKEN}"
-             }
-
-             response = await client.post(
-                 f"http://localhost:{settings.LISTEN_PORT}/v1/chat/completions",
-                 json=request_data,
-                 headers=headers,
-                 timeout=60.0
-             )
-
-             if response.status_code == 200:
-                 result = response.json()
-                 content = result["choices"][0]["message"]["content"]
-                 print(f"✅ 响应成功:")
-                 print(f"内容: {content[:200]}...")
-                 print()
-
-                 # 检查是否包含搜索相关的内容
-                 search_indicators = [
-                     "搜索", "查询", "实时", "最新", "网络", "互联网",
-                     "search", "query", "real-time", "latest", "web", "internet"
-                 ]
-
-                 has_search_content = any(indicator in content.lower() for indicator in search_indicators)
-                 if has_search_content:
-                     print(f"🔍 检测到搜索相关内容")
-                 else:
-                     print(f"❌ 未检测到搜索相关内容")
-
-                 return content
-             else:
-                 print(f"❌ 请求失败: {response.status_code}")
-                 print(f"错误: {response.text}")
-                 return None
-
-     except Exception as e:
-         print(f"❌ 请求异常: {e}")
-         return None
-
- async def main():
-     """主测试函数"""
-     print("=" * 80)
-     print("GLM模型搜索能力对比测试")
-     print("=" * 80)
-     print()
-
-     # 测试问题
-     search_question = "请搜索今天北京的天气情况"
-     general_question = "你好,请介绍一下自己"
-
-     models_to_test = [
-         "GLM-4.5",
-         "GLM-4.5-Search",
-         "GLM-4.5-Thinking",
-         "GLM-4.5-Air"
-     ]
-
-     print("🔍 测试搜索相关问题:")
-     print(f"问题: {search_question}")
-     print("-" * 80)
-
-     for model in models_to_test:
-         await test_model(model, search_question)
-         print("-" * 40)
-
-     print()
-     print("💬 测试一般问题:")
-     print(f"问题: {general_question}")
-     print("-" * 80)
-
-     for model in models_to_test:
-         await test_model(model, general_question)
-         print("-" * 40)
-
-     print()
-     print("=" * 80)
-     print("测试完成")
-     print("=" * 80)
-     print()
-     print("📋 分析要点:")
-     print("1. GLM-4.5-Search 是否表现出不同的搜索行为?")
-     print("2. 其他模型是否都拒绝搜索请求?")
-     print("3. 模型响应中是否包含实际的搜索结果?")
-     print("4. 检查服务器日志中的MCP服务器配置是否正确")
-
- if __name__ == "__main__":
-     asyncio.run(main())
tests/test_multimodal_quick.py CHANGED
@@ -1,6 +1,3 @@
- #!/usr/bin/env python
- # -*- coding: utf-8 -*-
-
  """
  glm-4.5v 多模态功能测试
  """
@@ -8,7 +5,9 @@ import requests
  import json
 
  # 创建一个1x1像素的红色图片作为测试
- tiny_red_image = "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mP8z8DwHwAFBQIAX8jx0gAAAABJRU5ErkJggg=="
+ tiny_red_image = (
+     "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mP8z8DwHwAFBQIAX8jx0gAAAABJRU5ErkJggg=="
+ )
 
  # API配置
  api_url = "http://localhost:8080/v1/chat/completions"
@@ -21,25 +20,36 @@ request_data = {
          {
              "role": "user",
              "content": [  # content必须是数组
-                 {"type": "text", "text": "这是什么颜色的图片?"},
-                 {"type": "image_url", "image_url": {"url": tiny_red_image}},
-             ],
+                 {
+                     "type": "text",
+                     "text": "这是什么颜色的图片?"
+                 },
+                 {
+                     "type": "image_url",
+                     "image_url": {
+                         "url": tiny_red_image
+                     }
+                 }
+             ]
          }
      ],
-     "stream": False,
+     "stream": False
  }
 
  print("发送的请求:")
  print(json.dumps(request_data, indent=2, ensure_ascii=False))
- print("\n" + "=" * 60)
+ print("\n" + "="*60)
 
  # 发送请求
- headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
+ headers = {
+     "Authorization": f"Bearer {api_key}",
+     "Content-Type": "application/json"
+ }
 
  try:
      response = requests.post(api_url, json=request_data, headers=headers)
      print(f"响应状态码: {response.status_code}")
+
      if response.status_code == 200:
          result = response.json()
          print("\n模型回复:")
@@ -47,6 +57,6 @@ try:
      else:
          print("\n错误响应:")
          print(response.text)
+
  except Exception as e:
      print(f"\n发生错误: {e}")
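The hard-coded `tiny_red_image` above is just a base64 data URL. For reference, such a URL can be built from raw image bytes with the standard library alone (a generic sketch, not code from this repo; the PNG header bytes below are only illustrative input):

```python
import base64

def to_data_url(raw: bytes, mime: str = "image/png") -> str:
    """Encode raw bytes as a data: URL usable in an image_url content part."""
    b64 = base64.b64encode(raw).decode("ascii")
    return f"data:{mime};base64,{b64}"

# Any byte string works; here we use the 8-byte PNG signature as sample input.
url = to_data_url(b"\x89PNG\r\n\x1a\n")
```

Decoding the payload back (`base64.b64decode(url.split(",", 1)[1])`) returns the original bytes, which is a quick sanity check when constructing multimodal requests by hand.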
tests/test_re.py ADDED
@@ -0,0 +1,226 @@
+ """测试和修复正则表达式问题"""
+
+ import json
+ import re
+
+ # 原始的正则表达式(来自 tools.py)
+ TOOL_CALL_FENCE_PATTERN = re.compile(r"```json\s*(\{.*?\})\s*```", re.DOTALL)
+ TOOL_CALL_INLINE_PATTERN_OLD = re.compile(r"(\{[^{}]{0,10000}\"tool_calls\".*?\})", re.DOTALL)
+
+ # 改进的正则表达式
+ # 方案1:更精确的匹配 - 只匹配包含 tool_calls 的完整 JSON 对象
+ TOOL_CALL_INLINE_PATTERN_NEW = re.compile(
+     r'\{(?:[^{}]|\{[^{}]*\})*"tool_calls"\s*:\s*\[[^\]]*\](?:[^{}]|\{[^{}]*\})*\}',
+     re.MULTILINE
+ )
+
+ def remove_tool_json_content_old(text: str) -> str:
+     """原始的移除工具JSON内容函数"""
+
+     def remove_tool_call_block(match: re.Match) -> str:
+         json_content = match.group(1)
+         try:
+             parsed_data = json.loads(json_content)
+             if "tool_calls" in parsed_data:
+                 return ""
+         except (json.JSONDecodeError, AttributeError):
+             pass
+         return match.group(0)
+
+     # Remove fenced tool JSON blocks
+     cleaned_text = TOOL_CALL_FENCE_PATTERN.sub(remove_tool_call_block, text)
+     # Remove inline tool JSON
+     cleaned_text = TOOL_CALL_INLINE_PATTERN_OLD.sub("", cleaned_text)
+     return cleaned_text.strip()
+
+ def remove_tool_json_content_new(text: str) -> str:
+     """改进的移除工具JSON内容函数 - 使用基于括号平衡的方法"""
+
+     def remove_tool_call_block(match: re.Match) -> str:
+         json_content = match.group(1)
+         try:
+             parsed_data = json.loads(json_content)
+             if "tool_calls" in parsed_data:
+                 return ""
+         except (json.JSONDecodeError, AttributeError):
+             pass
+         return match.group(0)
+
+     # Step 1: Remove fenced tool JSON blocks
+     cleaned_text = TOOL_CALL_FENCE_PATTERN.sub(remove_tool_call_block, text)
+
+     # Step 2: Remove inline tool JSON - 使用更智能的方法
+     # 查找所有可能的 JSON 对象
+     result = []
+     i = 0
+     while i < len(cleaned_text):
+         if cleaned_text[i] == '{':
+             # 尝试找到匹配的右括号
+             brace_count = 1
+             j = i + 1
+             in_string = False
+             escape_next = False
+
+             while j < len(cleaned_text) and brace_count > 0:
+                 if escape_next:
+                     escape_next = False
+                 elif cleaned_text[j] == '\\':
+                     escape_next = True
+                 elif cleaned_text[j] == '"' and not escape_next:
+                     in_string = not in_string
+                 elif not in_string:
+                     if cleaned_text[j] == '{':
+                         brace_count += 1
+                     elif cleaned_text[j] == '}':
+                         brace_count -= 1
+                 j += 1
+
+             if brace_count == 0:
+                 # 找到了完整的 JSON 对象
+                 json_str = cleaned_text[i:j]
+                 try:
+                     parsed = json.loads(json_str)
+                     if "tool_calls" in parsed:
+                         # 这是一个工具调用,跳过它
+                         i = j
+                         continue
+                 except:
+                     pass
+
+             # 不是工具调用或无法解析,保留这个字符
+             result.append(cleaned_text[i])
+             i += 1
+         else:
+             result.append(cleaned_text[i])
+             i += 1
+
+     return ''.join(result).strip()
+
+ # 测试用例
+ test_cases = [
+     # 测试案例 1: 只有工具调用JSON,应该被完全删除
+     {
+         "name": "纯工具调用JSON",
+         "input": """{"tool_calls": [{"id": "call_1", "type": "function", "function": {"name": "test", "arguments": "{}"}}]}""",
+         "expected": ""
+     },
+
+     # 测试案例 2: 包含工具调用的 JSON 代码块
+     {
+         "name": "代码块中的工具调用",
+         "input": """这是一些正常的文本内容。
+
+ ```json
+ {
+   "tool_calls": [
+     {
+       "id": "call_123",
+       "type": "function",
+       "function": {
+         "name": "test_function",
+         "arguments": "{\\"param\\": \\"value\\"}"
+       }
+     }
+   ]
+ }
+ ```
+
+ 这部分内容应该被保留。""",
+         "expected": """这是一些正常的文本内容。
+
+
+
+ 这部分内容应该被保留。"""
+     },
+
+     # 测试案例 3: 混合内容
+     {
+         "name": "混合内容",
+         "input": """让我为您执行一个函数调用:
+
+ {"tool_calls": [{"id": "call_789", "type": "function", "function": {"name": "search", "arguments": "{\\"query\\": \\"test\\"}"}}]}
+
+ 函数执行结果如下:
+ - 找到了相关内容
+ - 处理完成
+
+ 这里还有其他重要信息需要保留。""",
+         "expected": """让我为您执行一个函数调用:
+
+
+
+ 函数执行结果如下:
+ - 找到了相关内容
+ - 处理完成
+
+ 这里还有其他重要信息需要保留。"""
+     },
+
+     # 测试案例 4: 不应该被删除的普通 JSON
+     {
+         "name": "普通JSON(应保留)",
+         "input": """这是一个普通的 JSON 示例:
+ {"data": {"result": "success"}}
+
+ 这不是工具调用,应该保留。""",
+         "expected": """这是一个普通的 JSON 示例:
+ {"data": {"result": "success"}}
+
+ 这不是工具调用,应该保留。"""
+     },
+
+     # 测试案例 5: 嵌套的复杂JSON
+     {
+         "name": "嵌套复杂JSON",
+         "input": """开始文本
+ {"tool_calls": [{"id": "call_1", "function": {"name": "test", "arguments": "{\\"nested\\": {\\"deep\\": \\"value\\"}}"}}]}
+ 中间文本
+ {"normal": {"data": "keep this"}}
+ 结束文本""",
+         "expected": """开始文本
+
+ 中间文本
+ {"normal": {"data": "keep this"}}
+ 结束文本"""
+     }
+ ]
+
+ def run_tests():
+     print("=" * 80)
+     print("测试正则表达式处理")
+     print("=" * 80)
+
+     passed = 0
+     failed = 0
+
+     for test_case in test_cases:
+         print(f"\n测试案例: {test_case['name']}")
+         print("-" * 40)
+         print("输入文本:")
+         print(repr(test_case['input']))
+
+         print("\n使用原始函数处理后:")
+         result_old = remove_tool_json_content_old(test_case['input'])
+         print(repr(result_old))
+
+         print("\n使用改进函数处理后:")
+         result_new = remove_tool_json_content_new(test_case['input'])
+         print(repr(result_new))
+
+         print("\n期望结果:")
+         print(repr(test_case['expected']))
+
+         # 检查新函数是否正确
+         if result_new == test_case['expected']:
+             print("[PASS] 新函数通过测试")
+             passed += 1
+         else:
+             print("[FAIL] 新函数测试失败")
+             failed += 1
+
+         print("-" * 40)
+
+     print(f"\n\n总结: {passed} 个通过, {failed} 个失败")
+
+ if __name__ == "__main__":
+     run_tests()
tests/test_search_model.py DELETED
@@ -1,180 +0,0 @@
- #!/usr/bin/env python
- # -*- coding: utf-8 -*-
-
- """
- 测试GLM-4.5-Search模型的deep-web-search MCP服务器功能
- """
-
- import asyncio
- import json
- import httpx
- from app.core.config import settings
- from app.core.zai_transformer import ZAITransformer
- from app.utils.logger import setup_logger
-
- # 设置日志
- logger = setup_logger(log_dir="logs", debug_mode=True)
-
- async def test_search_model_mcp():
-     """测试搜索模型的MCP服务器配置"""
-
-     # 创建转换器实例
-     transformer = ZAITransformer()
-
-     # 模拟OpenAI请求 - 使用GLM-4.5-Search模型
-     openai_request = {
-         "model": "GLM-4.5-Search",
-         "messages": [
-             {"role": "user", "content": "请搜索一下今天的新闻"}
-         ],
-         "stream": True
-     }
-
-     print(f"🧪 测试请求:")
-     print(f"  模型: {openai_request['model']}")
-     print(f"  SEARCH_MODEL配置: {settings.SEARCH_MODEL}")
-     print(f"  模型匹配: {openai_request['model'] == settings.SEARCH_MODEL}")
-     print()
-
-     try:
-         # 转换请求
-         transformed = await transformer.transform_request_in(openai_request)
-
-         print(f"✅ 转换成功!")
-         print(f"  上游模型: {transformed['body']['model']}")
-         print(f"  MCP服务器: {transformed['body']['mcp_servers']}")
-         print(f"  web_search特性: {transformed['body']['features']['web_search']}")
-         print(f"  auto_web_search特性: {transformed['body']['features']['auto_web_search']}")
-         print()
-
-         # 检查是否正确添加了deep-web-search
-         mcp_servers = transformed['body']['mcp_servers']
-         if "deep-web-search" in mcp_servers:
-             print("✅ deep-web-search MCP服务器已正确添加!")
-         else:
-             print("❌ deep-web-search MCP服务器未添加!")
-             print(f"  实际MCP服务器列表: {mcp_servers}")
-
-         return transformed
-
-     except Exception as e:
-         print(f"❌ 转换失败: {e}")
-         return None
-
- async def test_non_search_model():
-     """测试非搜索模型不应该添加MCP服务器"""
-
-     transformer = ZAITransformer()
-
-     # 模拟OpenAI请求 - 使用普通GLM-4.5模型
-     openai_request = {
-         "model": "GLM-4.5",
-         "messages": [
-             {"role": "user", "content": "你好"}
-         ],
-         "stream": True
-     }
-
-     print(f"🧪 测试普通模型:")
-     print(f"  模型: {openai_request['model']}")
-     print()
-
-     try:
-         # 转换请求
-         transformed = await transformer.transform_request_in(openai_request)
-
-         print(f"✅ 转换成功!")
-         print(f"  上游模型: {transformed['body']['model']}")
-         print(f"  MCP服务器: {transformed['body']['mcp_servers']}")
-         print(f"  web_search特性: {transformed['body']['features']['web_search']}")
-         print()
-
-         # 检查MCP服务器列表应该为空
-         mcp_servers = transformed['body']['mcp_servers']
-         if not mcp_servers:
-             print("✅ 普通模型正确地没有添加MCP服务器!")
-         else:
-             print(f"❌ 普通模型意外添加了MCP服务器: {mcp_servers}")
-
-         return transformed
-
-     except Exception as e:
-         print(f"❌ 转换失败: {e}")
-         return None
-
- async def test_actual_request():
-     """测试实际的HTTP请求"""
-
-     print(f"🌐 测试实际HTTP请求到本地服务器...")
-
-     # 检查服务器是否运行
-     try:
-         async with httpx.AsyncClient() as client:
-             # 测试服务器是否可达
-             response = await client.get(f"http://localhost:{settings.LISTEN_PORT}/v1/models", timeout=5.0)
-             if response.status_code != 200:
-                 print(f"❌ 服务器未运行或不可达: {response.status_code}")
-                 return
-
-             print(f"✅ 服务器运行正常")
-
-             # 发送搜索模型请求
-             search_request = {
-                 "model": "GLM-4.5-Search",
-                 "messages": [
-                     {"role": "user", "content": "搜索今天的天气"}
-                 ],
-                 "stream": False
-             }
-
-             headers = {
-                 "Content-Type": "application/json",
-                 "Authorization": f"Bearer {settings.AUTH_TOKEN}"
-             }
-
-             print(f"📤 发送搜索请求...")
-             response = await client.post(
-                 f"http://localhost:{settings.LISTEN_PORT}/v1/chat/completions",
-                 json=search_request,
-                 headers=headers,
-                 timeout=30.0
-             )
-
-             print(f"📥 响应状态: {response.status_code}")
-             if response.status_code == 200:
-                 print(f"✅ 请求成功!")
-                 # 不打印完整响应,只显示状态
-             else:
-                 print(f"❌ 请求失败: {response.text}")
-
-     except httpx.ConnectError:
-         print(f"❌ 无法连接到服务器 localhost:{settings.LISTEN_PORT}")
-         print(f"  请确保服务器正在运行: python main.py")
-     except Exception as e:
154
- print(f"❌ 请求异常: {e}")
155
-
156
- async def main():
157
- """主测试函数"""
158
- print("=" * 60)
159
- print("GLM-4.5-Search MCP服务器测试")
160
- print("=" * 60)
161
- print()
162
-
163
- # 测试1: 搜索模型应该添加MCP服务器
164
- await test_search_model_mcp()
165
- print()
166
-
167
- # 测试2: 普通模型不应该添加MCP服务器
168
- await test_non_search_model()
169
- print()
170
-
171
- # 测试3: 实际HTTP请求(如果服务器运行)
172
- await test_actual_request()
173
- print()
174
-
175
- print("=" * 60)
176
- print("测试完成")
177
- print("=" * 60)
178
-
179
- if __name__ == "__main__":
180
- asyncio.run(main())
tests/test_service_uniqueness.py DELETED
@@ -1,173 +0,0 @@
- #!/usr/bin/env python
- # -*- coding: utf-8 -*-
-
- """
- 测试服务唯一性验证功能
- """
-
- import time
- import subprocess
- import sys
- from pathlib import Path
-
- from app.core.config import settings
- from app.utils.process_manager import ProcessManager, ensure_service_uniqueness
- from app.utils.logger import setup_logger
-
- # 设置日志
- logger = setup_logger(log_dir="logs", debug_mode=True)
-
-
- def test_process_manager():
-     """测试进程管理器功能"""
-     print("=" * 60)
-     print("测试进程管理器功能")
-     print("=" * 60)
-
-     service_name = "test-z-ai2api-server"
-     port = 8081
-
-     # 创建进程管理器
-     manager = ProcessManager(service_name=service_name, port=port)
-
-     print(f"\n1. 测试服务唯一性检查...")
-     print(f"  服务名称: {service_name}")
-     print(f"  端口: {port}")
-
-     # 第一次检查应该通过
-     result1 = manager.check_service_uniqueness()
-     print(f"  第一次检查结果: {'✅ 通过' if result1 else '❌ 失败'}")
-
-     if result1:
-         # 创建 PID 文件
-         manager.create_pid_file()
-         print(f"  已创建 PID 文件: {manager.pid_file}")
-
-         # 第二次检查应该失败(因为 PID 文件存在且进程运行中)
-         manager2 = ProcessManager(service_name=service_name, port=port)
-         result2 = manager2.check_service_uniqueness()
-         print(f"  第二次检查结果: {'✅ 通过' if result2 else '❌ 失败(预期)'}")
-
-         # 清理
-         manager.cleanup_on_exit()
-         print(f"  已清理 PID 文件")
-
-         # 第三次检查应该通过
-         manager3 = ProcessManager(service_name=service_name, port=port)
-         result3 = manager3.check_service_uniqueness()
-         print(f"  第三次检查结果: {'✅ 通过' if result3 else '❌ 失败'}")
-
-
- def test_convenience_function():
-     """测试便捷函数"""
-     print("\n" + "=" * 60)
-     print("测试便捷函数")
-     print("=" * 60)
-
-     service_name = "test-convenience-server"
-     port = 8082
-
-     print(f"\n2. 测试便捷函数...")
-     print(f"  服务名称: {service_name}")
-     print(f"  端口: {port}")
-
-     # 第一次调用应该成功
-     result1 = ensure_service_uniqueness(service_name=service_name, port=port)
-     print(f"  第一次调用结果: {'✅ 成功' if result1 else '❌ 失败'}")
-
-     if result1:
-         # 第二次调用应该失败
-         result2 = ensure_service_uniqueness(service_name=service_name, port=port)
-         print(f"  第二次调用结果: {'✅ 成功' if result2 else '❌ 失败(预期)'}")
-
-         # 手动清理
-         pid_file = Path(f"{service_name}.pid")
-         if pid_file.exists():
-             pid_file.unlink()
-             print(f"  已手动清理 PID 文件")
-
-
- def test_real_service():
-     """测试真实服务场景"""
-     print("\n" + "=" * 60)
-     print("测试真实服务场景")
-     print("=" * 60)
-
-     service_name = settings.SERVICE_NAME
-     port = settings.LISTEN_PORT
-
-     print(f"\n3. 测试真实服务场景...")
-     print(f"  服务名称: {service_name}")
-     print(f"  端口: {port}")
-
-     # 检查当前是否有服务运行
-     manager = ProcessManager(service_name=service_name, port=port)
-     instances = manager.get_running_instances()
-
-     if instances:
-         print(f"  发现 {len(instances)} 个运行中的实例:")
-         for instance in instances:
-             print(f"    PID: {instance['pid']}, 启动时间: {instance['start_time']}")
-     else:
-         print("  未发现运行中的实例")
-
-     # 测试唯一性检查
-     result = manager.check_service_uniqueness()
-     print(f"  唯一性检查结果: {'✅ 可以启动' if result else '❌ 已有实例运行'}")
-
-
- def test_port_conflict():
-     """测试端口冲突检测"""
-     print("\n" + "=" * 60)
-     print("测试端口冲突检测")
-     print("=" * 60)
-
-     print(f"\n4. 测试端口冲突检测...")
-
-     # 尝试检测一些常用端口
-     test_ports = [80, 443, 8080, 3000, 5000]
-
-     for port in test_ports:
-         manager = ProcessManager(service_name="test-port-check", port=port)
-         is_occupied = manager._check_port_usage()
-         print(f"  端口 {port}: {'❌ 被占用' if is_occupied else '✅ 可用'}")
-
-
- def main():
-     """主测试函数"""
-     print("🧪 Z.AI2API 服务唯一性验证测试")
-     print("=" * 60)
-     print("此测试将验证以下功能:")
-     print("1. 进程管理器基本功能")
-     print("2. 便捷函数功能")
-     print("3. 真实服务场景")
-     print("4. 端口冲突检测")
-     print("=" * 60)
-
-     try:
-         # 运行所有测试
-         test_process_manager()
-         test_convenience_function()
-         test_real_service()
-         test_port_conflict()
-
-         print("\n" + "=" * 60)
-         print("✅ 所有测试完成")
-         print("=" * 60)
-
-         print("\n📋 使用说明:")
-         print("1. 启动服务时会自动进行唯一性检查")
-         print("2. 如果检测到已有实例运行,新实例将拒绝启动")
-         print("3. 可以通过环境变量 SERVICE_NAME 自定义服务名称")
-         print("4. PID 文件会在服务正常退出时自动清理")
-         print("5. 异常退出的 PID 文件会在下次启动时自动清理")
-
-     except Exception as e:
-         logger.error(f"❌ 测试过程中发生错误: {e}")
-         import traceback
-         traceback.print_exc()
-         sys.exit(1)
-
-
- if __name__ == "__main__":
-     main()
tests/test_sse_optimization.py DELETED
@@ -1,131 +0,0 @@
- #!/usr/bin/env python
- # -*- coding: utf-8 -*-
-
- """
- 测试 SSE 工具调用处理器的优化效果
- """
-
- import json
- import time
- from app.utils.sse_tool_handler import SSEToolHandler
- from app.utils.logger import get_logger
-
- logger = get_logger()
-
- def test_tool_call_processing():
-     """测试工具调用处理的优化效果"""
-
-     # 创建处理器
-     handler = SSEToolHandler("test_chat_id", "GLM-4.5")
-
-     # 模拟 Z.AI 的原始响应数据(基于文档中的示例)
-     test_data_sequence = [
-         # 第一个数据块 - 工具调用开始
-         {
-             "edit_index": 22,
-             "edit_content": '\n\n<glm_block >{"type": "mcp", "data": {"metadata": {"id": "call_fyh97tn03ow", "name": "playwri-browser_navigate", "arguments": "{\\"url\\":\\"https://www.goo',
-             "phase": "tool_call"
-         },
-         # 第二个数据块 - 参数补全
-         {
-             "edit_index": 176,
-             "edit_content": 'gle.com\\"}", "result": "", "display_result": "", "duration": "...", "status": "completed", "is_error": false, "mcp_server": {"name": "mcp-server"}}, "thought": null, "ppt": null, "browser": null}}</glm_block>',
-             "phase": "tool_call"
-         },
-         # 第三个数据块 - 工具调用结束
-         {
-             "edit_index": 199,
-             "edit_content": 'null, "display_result": "", "duration": "...", "status": "completed", "is_error": false, "mcp_server": {"name": "mcp-server"}}, "thought": null, "ppt": null, "browser": null}}</glm_block>',
-             "phase": "other"
-         }
-     ]
-
-     print("🧪 开始测试 SSE 工具调用处理器优化...")
-
-     # 处理数据序列
-     all_chunks = []
-     for i, data in enumerate(test_data_sequence):
-         print(f"\n📦 处理数据块 {i+1}: phase={data['phase']}, edit_index={data['edit_index']}")
-
-         if data["phase"] == "tool_call":
-             chunks = list(handler.process_tool_call_phase(data, is_stream=True))
-         else:
-             chunks = list(handler.process_other_phase(data, is_stream=True))
-
-         all_chunks.extend(chunks)
-
-         # 打印生成的块
-         for j, chunk in enumerate(chunks):
-             if chunk.strip():
-                 print(f"  📤 输出块 {j+1}: {chunk[:100]}...")
-
-     print(f"\n✅ 测试完成,共生成 {len(all_chunks)} 个输出块")
-
-     # 验证工具调用是否正确解析
-     print(f"🔧 活跃工具数: {len(handler.active_tools)}")
-     print(f"✅ 完成工具数: {len(handler.completed_tools)}")
-
-     # 打印最终的内容缓冲区
-     try:
-         final_content = handler.content_buffer.decode('utf-8', errors='ignore')
-         print(f"\n📝 最终内容缓冲区长度: {len(final_content)}")
-         print(f"📝 内容预览: {final_content[:200]}...")
-     except Exception as e:
-         print(f"❌ 内容缓冲区解析失败: {e}")
-
- def test_partial_arguments_parsing():
-     """测试部分参数解析功能"""
-
-     handler = SSEToolHandler("test_chat_id", "GLM-4.5")
-
-     # 测试各种不完整的参数
-     test_cases = [
-         '{"url":"https://www.goo',  # 不完整的URL
-         '{"city":"北京',  # 缺少引号和括号
-         '{"query":"test", "limit":',  # 不完整的数值
-         '{"name":"test"',  # 缺少结束括号
-         '',  # 空字符串
-         '{',  # 只有开始括号
-     ]
-
-     print("\n🧪 测试部分参数解析...")
-
-     for i, test_arg in enumerate(test_cases):
-         print(f"\n📦 测试用例 {i+1}: {test_arg}")
-         result = handler._parse_partial_arguments(test_arg)
-         print(f"  ✅ 解析结果: {result}")
-
- def test_performance():
-     """测试性能优化效果"""
-
-     print("\n🚀 测试性能优化效果...")
-
-     # 创建大量数据进行性能测试
-     handler = SSEToolHandler("test_chat_id", "GLM-4.5")
-
-     # 模拟大量的编辑操作
-     start_time = time.time()
-
-     for i in range(1000):
-         edit_data = {
-             "edit_index": i * 10,
-             "edit_content": f"test_content_{i}",
-             "phase": "tool_call"
-         }
-         list(handler.process_tool_call_phase(edit_data, is_stream=False))
-
-     end_time = time.time()
-
-     print(f"⏱️ 处理1000次编辑操作耗时: {end_time - start_time:.3f}秒")
-     print(f"📊 平均每次操作耗时: {(end_time - start_time) * 1000 / 1000:.3f}毫秒")
-
- if __name__ == "__main__":
-     try:
-         test_tool_call_processing()
-         test_partial_arguments_parsing()
-         test_performance()
-         print("\n🎉 所有测试完成!")
-     except Exception as e:
-         logger.error(f"❌ 测试失败: {e}")
-         import traceback
-         traceback.print_exc()
tests/test_tool_call.py ADDED
@@ -0,0 +1,145 @@
+ #!/usr/bin/env python
+ # -*- coding: utf-8 -*-
+ """
+ 测试工具调用功能
+ """
+
+ import json
+ import requests
+
+ # 配置
+ BASE_URL = "http://localhost:8080"
+ API_KEY = "your-api-key"  # 替换为实际的 API key
+
+ def test_tool_call():
+     """测试工具调用功能"""
+
+     # 定义一个简单的工具
+     tools = [
+         {
+             "type": "function",
+             "function": {
+                 "name": "get_weather",
+                 "description": "获取指定城市的天气信息",
+                 "parameters": {
+                     "type": "object",
+                     "properties": {
+                         "location": {
+                             "type": "string",
+                             "description": "城市名称,例如:北京、上海"
+                         },
+                         "unit": {
+                             "type": "string",
+                             "description": "温度单位",
+                             "enum": ["celsius", "fahrenheit"]
+                         }
+                     },
+                     "required": ["location"]
+                 }
+             }
+         }
+     ]
+
+     # 构建请求
+     request_data = {
+         "model": "GLM-4.5",
+         "messages": [
+             {
+                 "role": "user",
+                 "content": "北京的天气怎么样?"
+             }
+         ],
+         "tools": tools,
+         "tool_choice": "auto",
+         "stream": False
+     }
+
+     headers = {
+         "Content-Type": "application/json",
+         "Authorization": f"Bearer {API_KEY}"
+     }
+
+     print("=" * 60)
+     print("测试工具调用 (非流式)")
+     print("=" * 60)
+
+     # 发送请求
+     response = requests.post(
+         f"{BASE_URL}/v1/chat/completions",
+         json=request_data,
+         headers=headers
+     )
+
+     print(f"状态码: {response.status_code}")
+
+     if response.status_code == 200:
+         result = response.json()
+         print("\n响应内容:")
+         print(json.dumps(result, ensure_ascii=False, indent=2))
+
+         # 检查是否有工具调用
+         if result.get("choices"):
+             choice = result["choices"][0]
+             if choice.get("message", {}).get("tool_calls"):
+                 print("\n✅ 检测到工具调用!")
+                 for tc in choice["message"]["tool_calls"]:
+                     print(f"  - 函数: {tc.get('function', {}).get('name')}")
+                     print(f"    参数: {tc.get('function', {}).get('arguments')}")
+             else:
+                 print("\n⚠️ 未检测到工具调用")
+                 if choice.get("message", {}).get("content"):
+                     print(f"内容: {choice['message']['content'][:200]}")
+     else:
+         print(f"\n错误响应: {response.text}")
+
+     # 测试流式响应
+     print("\n" + "=" * 60)
+     print("测试工具调用 (流式)")
+     print("=" * 60)
+
+     request_data["stream"] = True
+
+     response = requests.post(
+         f"{BASE_URL}/v1/chat/completions",
+         json=request_data,
+         headers=headers,
+         stream=True
+     )
+
+     print(f"状态码: {response.status_code}")
+
+     if response.status_code == 200:
+         print("\n流式响应:")
+         tool_calls_detected = False
+
+         for line in response.iter_lines():
+             if line:
+                 line_str = line.decode('utf-8')
+                 if line_str.startswith("data: "):
+                     data = line_str[6:]
+                     if data == "[DONE]":
+                         print("流结束")
+                         break
+
+                     try:
+                         chunk = json.loads(data)
+                         if chunk.get("choices"):
+                             delta = chunk["choices"][0].get("delta", {})
+                             if delta.get("tool_calls"):
+                                 tool_calls_detected = True
+                                 print(f"检测到工具调用: {json.dumps(delta['tool_calls'], ensure_ascii=False)}")
+                             elif delta.get("content"):
+                                 print(f"内容: {delta['content']}", end="")
+                     except json.JSONDecodeError:
+                         pass
+
+         if tool_calls_detected:
+             print("\n\n✅ 流式响应中检测到工具调用!")
+         else:
+             print("\n\n⚠️ 流式响应中未检测到工具调用")
+     else:
+         print(f"\n错误响应: {response.text}")
+
+
+ if __name__ == "__main__":
+     test_tool_call()
tests/test_tool_call_fix.py DELETED
@@ -1,133 +0,0 @@
- #!/usr/bin/env python
- # -*- coding: utf-8 -*-
-
- """
- 测试工具调用
- """
-
- import json
- import urllib.request
- import urllib.parse
- from typing import Dict, Any
-
- def test_tool_call():
-     """测试工具调用功能"""
-
-     # 测试请求
-     test_request = {
-         "model": "glm-4.5",
-         "messages": [
-             {
-                 "role": "user",
-                 "content": "请打开Google网站"
-             }
-         ],
-         "tools": [
-             {
-                 "type": "function",
-                 "function": {
-                     "name": "playwri-browser_navigate",
-                     "description": "Navigate to a URL in the browser",
-                     "parameters": {
-                         "type": "object",
-                         "properties": {
-                             "url": {
-                                 "type": "string",
-                                 "description": "The URL to navigate to"
-                             }
-                         },
-                         "required": ["url"]
-                     }
-                 }
-             }
-         ],
-         "stream": True
-     }
-
-     print("🚀 发送工具调用测试请求...")
-     print(f"📦 请求内容: {json.dumps(test_request, ensure_ascii=False, indent=2)}")
-
-     # 准备HTTP请求
-     url = "http://localhost:8080/v1/chat/completions"
-     data = json.dumps(test_request).encode('utf-8')
-
-     req = urllib.request.Request(url, data=data)
-     req.add_header('Content-Type', 'application/json')
-     req.add_header('Authorization', 'Bearer sk-test-key')
-
-     try:
-         with urllib.request.urlopen(req) as response:
-             print(f"📈 响应状态: {response.status}")
-
-             if response.status == 200:
-                 print("✅ 开始接收流式响应...")
-
-                 tool_calls_found = []
-                 chunk_count = 0
-
-                 for line in response:
-                     line = line.decode('utf-8').strip()
-                     if line.startswith('data: '):
-                         chunk_count += 1
-                         data_str = line[6:]  # 去掉 'data: ' 前缀
-
-                         if data_str == '[DONE]':
-                             print("🏁 接收到结束信号")
-                             break
-
-                         try:
-                             chunk = json.loads(data_str)
-
-                             # 检查是否包含工具调用
-                             if 'choices' in chunk and chunk['choices']:
-                                 choice = chunk['choices'][0]
-                                 if 'delta' in choice and 'tool_calls' in choice['delta']:
-                                     tool_calls = choice['delta']['tool_calls']
-                                     if tool_calls:
-                                         for tool_call in tool_calls:
-                                             print(f"🔧 发现工具调用: {json.dumps(tool_call, ensure_ascii=False, indent=2)}")
-                                             tool_calls_found.append(tool_call)
-
-                                 # 检查完成原因
-                                 if choice.get('finish_reason') == 'tool_calls':
-                                     print("✅ 工具调用完成")
-
-                         except json.JSONDecodeError as e:
-                             print(f"❌ JSON解析错误: {e}, 数据: {data_str[:200]}")
-
-                 print(f"📊 总共接收到 {chunk_count} 个数据块")
-                 print(f"🔧 发现 {len(tool_calls_found)} 个工具调用")
-
-                 # 分析工具调用格式
-                 for i, tool_call in enumerate(tool_calls_found):
-                     print(f"\n🔍 工具调用 {i+1} 分析:")
-                     print(f"  ID: {tool_call.get('id', 'N/A')}")
-                     print(f"  类型: {tool_call.get('type', 'N/A')}")
-
-                     if 'function' in tool_call:
-                         func = tool_call['function']
-                         print(f"  函数名: {func.get('name', 'N/A')}")
-
-                         arguments = func.get('arguments', '')
-                         print(f"  参数类型: {type(arguments)}")
-                         print(f"  参数内容: {arguments}")
-
-                         # 尝试解析参数
-                         if isinstance(arguments, str) and arguments:
-                             try:
-                                 parsed_args = json.loads(arguments)
-                                 print(f"  ✅ 参数解析成功: {parsed_args}")
-                             except json.JSONDecodeError as e:
-                                 print(f"  ❌ 参数解析失败: {e}")
-                         elif isinstance(arguments, dict):
-                             print(f"  ⚠️ 参数是对象格式(应该是字符串): {arguments}")
-
-             else:
-                 error_text = response.read().decode('utf-8')
-                 print(f"❌ 请求失败: {error_text}")
-
-     except Exception as e:
-         print(f"❌ 请求异常: {e}")
-
- if __name__ == "__main__":
-     test_tool_call()
tests/test_tool_handler_optimized.py DELETED
@@ -1,492 +0,0 @@
1
- #!/usr/bin/env python
2
- # -*- coding: utf-8 -*-
3
-
4
- """
5
- 测试优化后的SSE工具调用处理器
6
- 基于真实的Z.AI响应格式和日志数据进行全面测试
7
- """
8
-
9
- import json
10
- import time
11
- import traceback
12
- from typing import List, Dict, Any
13
- from app.utils.sse_tool_handler import SSEToolHandler
14
- from app.utils.logger import get_logger
15
-
16
- logger = get_logger()
17
-
18
-
19
- class TestResult:
20
- """测试结果类"""
21
- def __init__(self, name: str):
22
- self.name = name
23
- self.passed = 0
24
- self.failed = 0
25
- self.errors = []
26
-
27
- def add_pass(self):
28
- self.passed += 1
29
-
30
- def add_fail(self, error: str):
31
- self.failed += 1
32
- self.errors.append(error)
33
-
34
- def print_summary(self):
35
- total = self.passed + self.failed
36
- success_rate = (self.passed / total * 100) if total > 0 else 0
37
-
38
- print(f"\n📊 {self.name} 测试结果:")
39
- print(f" ✅ 通过: {self.passed}")
40
- print(f" ❌ 失败: {self.failed}")
41
- print(f" 📈 成功率: {success_rate:.1f}%")
42
-
43
- if self.errors:
44
- print(f" 🔍 错误详情:")
45
- for i, error in enumerate(self.errors, 1):
46
- print(f" {i}. {error}")
47
-
48
-
49
- def parse_openai_chunk(chunk_data: str) -> Dict[str, Any]:
50
- """解析OpenAI格式的chunk数据"""
51
- try:
52
- if chunk_data.startswith("data: "):
53
- chunk_data = chunk_data[6:] # 移除 "data: " 前缀
54
- if chunk_data.strip() == "[DONE]":
55
- return {"type": "done"}
56
- return json.loads(chunk_data)
57
- except json.JSONDecodeError:
58
- return {"type": "invalid", "raw": chunk_data}
59
-
60
-
61
- def extract_tool_calls(chunks: List[str]) -> List[Dict[str, Any]]:
62
- """从chunk列表中提取工具调用信息"""
63
- tools = []
64
- current_tool = None
65
-
66
- for chunk in chunks:
67
- parsed = parse_openai_chunk(chunk)
68
- if parsed.get("type") == "invalid":
69
- continue
70
-
71
- choices = parsed.get("choices", [])
72
- if not choices:
73
- continue
74
-
75
- delta = choices[0].get("delta", {})
76
- tool_calls = delta.get("tool_calls", [])
77
-
78
- for tc in tool_calls:
79
- if tc.get("function", {}).get("name"): # 新工具开始
80
- current_tool = {
81
- "id": tc.get("id"),
82
- "name": tc["function"]["name"],
83
- "arguments": ""
84
- }
85
- tools.append(current_tool)
86
- elif tc.get("function", {}).get("arguments") and current_tool: # 参数累积
87
- current_tool["arguments"] += tc["function"]["arguments"]
88
-
89
- # 解析最终参数
90
- for tool in tools:
91
- try:
92
- tool["parsed_arguments"] = json.loads(tool["arguments"]) if tool["arguments"] else {}
93
- except json.JSONDecodeError:
94
- tool["parsed_arguments"] = {}
95
-
96
- return tools
97
-
98
-
99
- def test_real_world_scenarios():
100
- """测试基于真实Z.AI响应的工具调用处理"""
101
-
102
- result = TestResult("真实场景测试")
103
-
104
- # 基于实际日志的测试数据
105
- test_scenarios = [
106
- {
107
- "name": "浏览器导航工具调用",
108
- "description": "模拟打开Google网站的工具调用",
109
- "expected_tools": [
110
- {
111
- "name": "playwri-browser_navigate",
112
- "id": "call_fyh97tn03ow",
113
- "arguments": {"url": "https://www.google.com"}
114
- }
115
- ],
116
- "data_sequence": [
117
- {
118
- "edit_index": 22,
119
- "edit_content": '\n\n<glm_block >{"type": "mcp", "data": {"metadata": {"id": "call_fyh97tn03ow", "name": "playwri-browser_navigate", "arguments": "{\\"url\\":\\"https://www.goo',
120
- "phase": "tool_call"
121
- },
122
- {
123
- "edit_index": 176,
124
- "edit_content": 'gle.com\\"}", "result": "", "display_result": "", "duration": "...", "status": "completed", "is_error": false, "mcp_server": {"name": "mcp-server"}}, "thought": null, "ppt": null, "browser": null}}</glm_block>',
125
- "phase": "tool_call"
126
- },
127
- {
128
- "edit_index": 199,
129
- "edit_content": 'null, "display_result": "", "duration": "...", "status": "completed", "is_error": false, "mcp_server": {"name": "mcp-server"}}, "thought": null, "ppt": null, "browser": null}}</glm_block>',
130
- "phase": "other"
131
- }
132
- ]
133
- },
134
- {
135
- "name": "天气查询工具调用",
136
- "description": "模拟查询上海天气的工具调用",
137
- "expected_tools": [
138
- {
139
- "name": "search",
140
- "id": "call_qsn2jby8al",
141
- "arguments": {"queries": ["今天上海天气", "上海天气预报 今天"]}
142
- }
143
- ],
144
- "data_sequence": [
145
- {
146
- "edit_index": 16,
147
- "edit_content": '\n\n<glm_block >{"type": "mcp", "data": {"metadata": {"id": "call_qsn2jby8al", "name": "search", "arguments": "{\\"queries\\":[\\"今天上海天气\\", \\"',
148
- "phase": "tool_call"
149
- },
150
- {
151
- "edit_index": 183,
152
- "edit_content": '上海天气预报 今天\\"]}", "result": "", "display_result": "", "duration": "...", "status": "completed", "is_error": false, "mcp_server": {"name": "mcp-server"}}, "thought": null, "ppt": null, "browser": null}}</glm_block>',
153
- "phase": "tool_call"
154
- }
155
- ]
156
- },
157
- {
158
- "name": "多工具调用序列",
159
- "description": "模拟连续的多个工具调用",
160
- "expected_tools": [
161
- {
162
- "name": "search",
163
- "id": "call_001",
164
- "arguments": {"query": "北京天气"}
165
- },
166
- {
167
- "name": "visit_page",
168
- "id": "call_002",
169
- "arguments": {"url": "https://weather.com"}
170
- }
171
- ],
172
- "data_sequence": [
173
- {
174
- "edit_index": 0,
175
- "edit_content": '<glm_block >{"type": "mcp", "data": {"metadata": {"id": "call_001", "name": "search", "arguments": "{\\"query\\":\\"北京天气\\"}", "result": "", "status": "completed"}}, "thought": null}}</glm_block>',
176
- "phase": "tool_call"
177
- },
178
- {
179
- "edit_index": 200,
180
- "edit_content": '\n\n<glm_block >{"type": "mcp", "data": {"metadata": {"id": "call_002", "name": "visit_page", "arguments": "{\\"url\\":\\"https://weather.com\\"}", "result": "", "status": "completed"}}, "thought": null}}</glm_block>',
181
- "phase": "tool_call"
182
- }
183
- ]
184
- }
185
- ]
186
-
187
- print(f"\n🧪 开始执行 {len(test_scenarios)} 个真实场景测试...")
188
-
189
- # 执行每个测试场景
190
- for i, scenario in enumerate(test_scenarios, 1):
191
- print(f"\n{'='*60}")
192
- print(f"测试 {i}: {scenario['name']}")
193
- print(f"描述: {scenario['description']}")
194
- print('='*60)
195
-
196
- try:
197
- # 创建新的处理器实例
198
- handler = SSEToolHandler("test_chat_id", "GLM-4.5")
199
-
200
- # 处理数据序列
201
- all_chunks = []
202
- for j, data in enumerate(scenario["data_sequence"]):
203
- print(f"\n📦 处理数据块 {j+1}: phase={data['phase']}, edit_index={data['edit_index']}")
204
-
205
- if data["phase"] == "tool_call":
206
- chunks = list(handler.process_tool_call_phase(data, is_stream=True))
207
- else:
208
- chunks = list(handler.process_other_phase(data, is_stream=True))
209
-
210
- all_chunks.extend(chunks)
211
-
212
- # 提取工具调用信息
213
- extracted_tools = extract_tool_calls(all_chunks)
214
-
215
- # 验证结果
216
- expected_tools = scenario["expected_tools"]
217
-
218
- print(f"\n📊 验证结果:")
219
- print(f" 期望工具数: {len(expected_tools)}")
220
- print(f" 实际工具数: {len(extracted_tools)}")
221
-
222
- # 详细验证每个工具
223
- for k, expected_tool in enumerate(expected_tools):
224
- if k < len(extracted_tools):
225
- actual_tool = extracted_tools[k]
226
-
227
- # 验证工具名称
228
- name_match = actual_tool["name"] == expected_tool["name"]
229
- # 验证工具ID
230
- id_match = actual_tool["id"] == expected_tool["id"]
231
- # 验证参数
232
- args_match = actual_tool["parsed_arguments"] == expected_tool["arguments"]
233
-
234
- if name_match and id_match and args_match:
235
- print(f" ✅ 工具 {k+1}: {expected_tool['name']} - 验证通过")
236
- result.add_pass()
237
- else:
238
- error_details = []
239
- if not name_match:
240
- error_details.append(f"名称不匹配: 期望'{expected_tool['name']}', 实际'{actual_tool['name']}'")
241
- if not id_match:
242
- error_details.append(f"ID不匹配: 期望'{expected_tool['id']}', 实际'{actual_tool['id']}'")
243
- if not args_match:
244
- error_details.append(f"参数不匹配: 期望{expected_tool['arguments']}, 实际{actual_tool['parsed_arguments']}")
245
-
246
- error_msg = f"工具 {k+1} 验证失败: {'; '.join(error_details)}"
247
- print(f" ❌ {error_msg}")
248
- result.add_fail(error_msg)
249
- else:
250
- error_msg = f"缺少工具 {k+1}: {expected_tool['name']}"
251
- print(f" ❌ {error_msg}")
252
- result.add_fail(error_msg)
253
-
254
- # 显示提取的工具详情
255
- if extracted_tools:
256
- print(f"\n🔍 提取的工具详情:")
257
- for tool in extracted_tools:
258
- print(f" - {tool['name']}(id={tool['id']})")
259
- print(f" 参数: {tool['parsed_arguments']}")
260
-
261
- except Exception as e:
262
- error_msg = f"测试 {scenario['name']} 执行失败: {str(e)}"
263
- print(f"❌ {error_msg}")
264
- result.add_fail(error_msg)
265
- logger.error(f"测试执行异常: {e}")
266
-
267
- result.print_summary()
268
- return result
269
-
270
-
271
- def test_edge_cases():
272
- """测试边界情况和异常处理"""
273
-
274
- result = TestResult("边界情况测试")
275
-
276
- edge_cases = [
277
- {
278
- "name": "空内容处理",
279
- "data": {"edit_index": 0, "edit_content": "", "phase": "tool_call"},
280
- "should_pass": True
281
- },
282
- {
283
- "name": "无效JSON处理",
284
- "data": {"edit_index": 0, "edit_content": '<glm_block >{"invalid": json}}</glm_block>', "phase": "tool_call"},
285
- "should_pass": True # 应该优雅处理,不崩溃
286
- },
287
- {
288
- "name": "不完整的glm_block",
289
- "data": {"edit_index": 0, "edit_content": '<glm_block >{"type": "mcp", "data": {"metadata": {"id": "test"', "phase": "tool_call"},
290
- "should_pass": True
291
- },
292
- {
293
- "name": "超大edit_index",
294
- "data": {"edit_index": 999999, "edit_content": "test", "phase": "tool_call"},
295
- "should_pass": True
296
- },
297
- {
298
- "name": "特殊字符处理",
299
- "data": {"edit_index": 0, "edit_content": '<glm_block >{"type": "mcp", "data": {"metadata": {"id": "test", "name": "test", "arguments": "{\\"text\\":\\"测试\\u4e2d\\u6587\\"}"}}}</glm_block>', "phase": "tool_call"},
300
- "should_pass": True
301
- }
302
- ]
303
-
304
- print(f"\n🧪 开始执行 {len(edge_cases)} 个边界情况测试...")
305
-
306
- for i, case in enumerate(edge_cases, 1):
307
- print(f"\n📦 测试 {i}: {case['name']}")
308
-
309
- try:
310
- handler = SSEToolHandler("test_chat_id", "GLM-4.5")
311
-
312
- # 处理数据
313
- if case["data"]["phase"] == "tool_call":
314
- chunks = list(handler.process_tool_call_phase(case["data"], is_stream=True))
315
- else:
316
- chunks = list(handler.process_other_phase(case["data"], is_stream=True))
317
-
318
- # 检查是否按预期处理
319
- if case["should_pass"]:
320
- print(f" ✅ 成功处理,生成 {len(chunks)} 个输出块")
321
- result.add_pass()
322
- else:
323
- print(f" ❌ 应该失败但成功了")
324
- result.add_fail(f"{case['name']}: 应该失败但成功了")
325
-
326
- except Exception as e:
327
- if case["should_pass"]:
328
- error_msg = f"{case['name']}: 意外异常 - {str(e)}"
329
- print(f" ❌ {error_msg}")
330
- result.add_fail(error_msg)
331
- else:
332
- print(f" ✅ 按预期失败: {str(e)}")
333
- result.add_pass()
334
-
335
- result.print_summary()
336
- return result
337
-
338
-
339
- def test_performance():
340
- """测试性能表现"""
341
-
342
- result = TestResult("性能测试")
343
-
344
- print(f"\n🚀 开始性能测试...")
345
-
346
- # 测试大量小块数据的处理性能
347
- handler = SSEToolHandler("test_chat_id", "GLM-4.5")
348
-
349
- start_time = time.time()
350
-
351
- # 模拟1000次小的编辑操作
352
- for i in range(1000):
353
- data = {
354
- "edit_index": i * 5,
355
- "edit_content": f"chunk_{i}",
356
- "phase": "tool_call"
357
- }
358
- list(handler.process_tool_call_phase(data, is_stream=False))
359
-
360
- end_time = time.time()
361
- duration = end_time - start_time
362
-
363
- print(f"⏱️ 处理1000次编辑操作耗时: {duration:.3f}秒")
364
- print(f"📊 平均每次操作耗时: {duration * 1000 / 1000:.3f}毫秒")
365
-
366
- # 性能基准:每次操作应该在1毫秒以内
367
- if duration < 1.0: # 1秒内完成1000次操作
368
- print("✅ 性能测试通过")
369
- result.add_pass()
370
- else:
371
- error_msg = f"性能测试失败: 耗时{duration:.3f}秒,超过1秒基准"
372
- print(f"❌ {error_msg}")
373
- result.add_fail(error_msg)
374
-
375
- result.print_summary()
376
- return result
377
-
378
-
379
-def test_argument_parsing():
-    """Test argument parsing."""
-
-    result = TestResult("Argument parsing test")
-
-    print(f"\n🧪 Starting argument parsing test...")
-
-    handler = SSEToolHandler("test", "test")
-
-    test_cases = [
-        ('{"city": "北京"}', {"city": "北京"}),
-        ('{"city": "北京', {"city": "北京"}),  # missing closing brace
-        ('{"city": "北京"', {"city": "北京"}),  # missing closing brace, quote closed
-        ('{\\"city\\": \\"北京\\"}', {"city": "北京"}),  # escaped JSON
-        ('{}', {}),  # empty arguments
-        ('null', {}),  # null arguments
-        ('{"array": [1,2,3], "nested": {"key": "value"}}', {"array": [1,2,3], "nested": {"key": "value"}}),  # complex arguments
-        ('{"url":"https://www.goo', {"url": "https://www.goo"}),  # truncated URL
-        ('', {}),  # empty string
-        ('{', {}),  # opening brace only
-    ]
-
-    for i, (input_str, expected) in enumerate(test_cases, 1):
-        try:
-            parsed_result = handler._parse_partial_arguments(input_str)
-            success = parsed_result == expected
-
-            if success:
-                print(f"✅ Test {i}: parsed successfully")
-                result.add_pass()
-            else:
-                error_msg = f"Test {i} failed: input '{input_str[:30]}...', expected {expected}, got {parsed_result}"
-                print(f"❌ {error_msg}")
-                result.add_fail(error_msg)
-
-        except Exception as e:
-            error_msg = f"Test {i} raised: input '{input_str[:30]}...', error: {str(e)}"
-            print(f"❌ {error_msg}")
-            result.add_fail(error_msg)
-
-    result.print_summary()
-    return result
-
-
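`_parse_partial_arguments` is exercised by the deleted test above, but its implementation lies outside this hunk. A minimal sketch consistent with the expectations in `test_cases` (assumed behavior, not the project's actual code; bracket counting deliberately ignores brackets inside string values, which is good enough for this illustration):

```python
import json

def parse_partial_arguments(raw: str) -> dict:
    """Best-effort parse of a possibly truncated JSON object string."""
    if not raw:
        return {}
    # Try the raw text first, then a variant with escaped quotes unescaped.
    for candidate in (raw, raw.replace('\\"', '"')):
        repaired = candidate
        # Close an unterminated string, then any unbalanced brackets/braces.
        if repaired.count('"') % 2 == 1:
            repaired += '"'
        repaired += ']' * (repaired.count('[') - repaired.count(']'))
        repaired += '}' * (repaired.count('{') - repaired.count('}'))
        try:
            parsed = json.loads(repaired)
        except json.JSONDecodeError:
            continue
        return parsed if isinstance(parsed, dict) else {}
    return {}
```

Note that non-object inputs (`'null'`, `''`, a lone `'{'`) all collapse to `{}`, matching the expected values in the deleted test table.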
-def run_all_tests():
-    """Run all tests."""
-
-    print("🧪 SSE tool-call handler optimization test suite")
-    print("="*60)
-
-    all_results = []
-
-    try:
-        # Real-world scenario tests
-        print("\n1️⃣ Real-world scenario tests")
-        all_results.append(test_real_world_scenarios())
-
-        # Edge-case tests
-        print("\n2️⃣ Edge-case tests")
-        all_results.append(test_edge_cases())
-
-        # Argument parsing tests
-        print("\n3️⃣ Argument parsing tests")
-        all_results.append(test_argument_parsing())
-
-        # Performance tests
-        print("\n4️⃣ Performance tests")
-        all_results.append(test_performance())
-
-        # Aggregate results
-        print("\n" + "="*60)
-        print("📊 Test summary")
-        print("="*60)
-
-        total_passed = sum(r.passed for r in all_results)
-        total_failed = sum(r.failed for r in all_results)
-        total_tests = total_passed + total_failed
-
-        print(f"Total tests: {total_tests}")
-        print(f"✅ Passed: {total_passed}")
-        print(f"❌ Failed: {total_failed}")
-
-        if total_tests > 0:
-            success_rate = (total_passed / total_tests) * 100
-            print(f"📈 Overall success rate: {success_rate:.1f}%")
-
-            if success_rate >= 90:
-                print("🎉 Excellent test results!")
-            elif success_rate >= 70:
-                print("👍 Good test results")
-            else:
-                print("⚠️ Needs improvement")
-
-        # Show failing tests
-        failed_tests = []
-        for result in all_results:
-            failed_tests.extend(result.errors)
-
-        if failed_tests:
-            print(f"\n🔍 Failing test details:")
-            for i, error in enumerate(failed_tests, 1):
-                print(f"  {i}. {error}")
-
-        return total_failed == 0
-
-    except Exception as e:
-        print(f"❌ Test suite execution failed: {e}")
-        traceback.print_exc()
-        return False
-
-
-if __name__ == "__main__":
-    success = run_all_tests()
-    exit(0 if success else 1)
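The deleted suite depends on a `TestResult` helper defined earlier in the file, outside this hunk. A minimal sketch of the interface the code above assumes (`passed`, `failed`, `errors`, `add_pass`, `add_fail`, `print_summary`); the actual removed class may differ:

```python
class TestResult:
    """Collects pass/fail counts and error messages for one test group."""

    def __init__(self, name: str):
        self.name = name
        self.passed = 0
        self.failed = 0
        self.errors = []

    def add_pass(self) -> None:
        self.passed += 1

    def add_fail(self, message: str) -> None:
        self.failed += 1
        self.errors.append(message)

    def print_summary(self) -> None:
        total = self.passed + self.failed
        print(f"{self.name}: {self.passed}/{total} passed, {self.failed} failed")
```

This is exactly the surface `run_all_tests` touches: it sums `passed`/`failed` across results and collects `errors` for the failure report.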
tokens.txt.example DELETED
@@ -1,25 +0,0 @@
-# Authentication token configuration file
-#
-# Notes:
-# 1. Two formats are supported: one token per line, or comma-separated tokens
-# 2. Only include authenticated user tokens (role: "user"); do not add anonymous user tokens (role: "guest")
-# 3. The system automatically deduplicates tokens and validates them
-# 4. No restart is needed after editing this file; the system reloads it automatically
-# 5. Spaces, newlines, and empty tokens are skipped automatically
-#
-# Format 1: newline-separated
-# token1
-# token2
-# token3
-
-# Format 2: comma-separated
-# token1,token2,token3
-
-# Format 3: mixed
-# token1,token2
-# token3
-# token4,token5,token6
-# token7
-
-# Add your authenticated user tokens below (using either format):
-
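The deleted example documents three accepted layouts (newline-separated, comma-separated, mixed) plus automatic deduplication and blank-skipping. A loader matching that description could look like the following (hypothetical sketch; `load_tokens` is not the removed implementation):

```python
def load_tokens(text: str) -> list:
    """Parse tokens from mixed newline/comma-separated text.

    Skips comment lines, blanks, and surrounding whitespace, and
    removes duplicates while preserving first-seen order.
    """
    tokens = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        for token in line.split(","):
            token = token.strip()
            if token and token not in tokens:
                tokens.append(token)
    return tokens
```

All three documented formats reduce to the same result here, since each line is split on commas after comments and blanks are dropped.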