feat: unified parameter normalization module + multi-API-format parameter support
## New files
### 1. src/utils/parameterNormalizer.js (new)
Unified parameter normalization module; converts OpenAI, Claude, and Gemini parameters into a single internal format:
- normalizeOpenAIParameters() - normalizes OpenAI-format parameters
- normalizeClaudeParameters() - normalizes Claude-format parameters (handles thinking.budget_tokens)
- normalizeGeminiParameters() - normalizes Gemini-format parameters (handles thinkingConfig.thinkingBudget)
- toGenerationConfig() - converts to the upstream API's generationConfig format
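The intended flow can be sketched as follows. This is a minimal sketch, not the actual module: the internal field names (`maxTokens`, `thinkingBudget`, …) are assumptions; only the function names and the handled input keys come from the commit.

```javascript
// Hypothetical sketch of the normalize -> toGenerationConfig pipeline.
// Each normalizer maps one source format onto a single internal shape,
// so toGenerationConfig() only has to handle one representation.
function normalizeClaudeParameters(raw) {
  return {
    maxTokens: raw.max_tokens,
    temperature: raw.temperature,
    topP: raw.top_p,
    topK: raw.top_k,
    // Claude nests the thinking budget under thinking.budget_tokens
    thinkingBudget: raw.thinking?.type === 'enabled' ? raw.thinking.budget_tokens : 0
  };
}

function toGenerationConfig(params) {
  return {
    maxOutputTokens: params.maxTokens,
    temperature: params.temperature,
    topP: params.topP,
    topK: params.topK,
    thinkingConfig: {
      includeThoughts: params.thinkingBudget > 0,
      thinkingBudget: params.thinkingBudget
    }
  };
}

const genConfig = toGenerationConfig(normalizeClaudeParameters({
  max_tokens: 16000,
  temperature: 0.7,
  thinking: { type: 'enabled', budget_tokens: 10000 }
}));
```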
### 2. src/utils/converters/common.js (new)
Shared converter module; extracts the functions common to all three converters:
- getSignatureContext() - fetches thought and tool signatures
- pushUserMessage() - appends a user message
- findFunctionNameById() - looks up a function name by tool-call ID
- pushFunctionResponse() - appends a function response
- createThoughtPart() - creates a signed thought part
- createFunctionCallPart() - creates a signed function-call part
- processToolName() - handles tool-name mapping
- pushModelMessage() - appends a model message
- buildRequestBody() - builds the Antigravity request body
- mergeSystemInstruction() - merges system instructions
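How a converter might lean on these helpers can be sketched roughly as below. The shapes (`parts`, `systemInstruction`, the `buildRequestBody` envelope) are assumptions for illustration; the real signatures live in common.js.

```javascript
// Hypothetical sketch: a converter walks its own message format and
// delegates the format-independent work to the shared helpers.
function pushUserMessage(contents, text) {
  contents.push({ role: 'user', parts: [{ text }] });
}

function mergeSystemInstruction(body, systemText) {
  // attach the system prompt only when one was supplied
  if (systemText) {
    body.systemInstruction = { parts: [{ text: systemText }] };
  }
  return body;
}

function buildRequestBody(contents, model, generationConfig) {
  return { model, request: { contents, generationConfig } };
}

const contents = [];
pushUserMessage(contents, '你好');
const body = mergeSystemInstruction(
  buildRequestBody(contents, 'gemini-2.0-flash-thinking-exp', { temperature: 0.7 }),
  'You are a helpful assistant.'
);
```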
### 3. src/utils/toolConverter.js (new)
Unified tool-definition conversion module:
- convertOpenAIToolsToAntigravity() - converts OpenAI tool definitions
- convertClaudeToolsToAntigravity() - converts Claude tool definitions
- convertGeminiToolsToAntigravity() - converts Gemini tool definitions
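The gist of these converters can be sketched as follows. The input shapes are the documented OpenAI (`tool.function`) and Claude (`input_schema`) tool formats; the `functionDeclarations` output shape is an assumption about the Antigravity format.

```javascript
// Hypothetical sketch: both source formats collapse into one
// functionDeclarations list for the upstream request.
function convertOpenAIToolsToAntigravity(tools) {
  return [{
    functionDeclarations: tools.map(t => ({
      name: t.function.name,
      description: t.function.description,
      parameters: t.function.parameters   // JSON Schema, kept as-is
    }))
  }];
}

function convertClaudeToolsToAntigravity(tools) {
  return [{
    functionDeclarations: tools.map(t => ({
      name: t.name,
      description: t.description,
      parameters: t.input_schema          // Claude keeps the schema flat
    }))
  }];
}

const decls = convertOpenAIToolsToAntigravity([
  { type: 'function', function: { name: 'get_weather', description: 'look up weather', parameters: { type: 'object' } } }
]);
```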
## Modified files
### 4. src/server/index.js
- Line 6: import normalizeClaudeParameters
- Lines 659-679: the Claude API handler now uses normalizeClaudeParameters() instead of building parameters by hand
- Lines 22-42: with429Retry now tolerates multiple error shapes (error.status, error.statusCode, error.response?.status)
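The status-extraction line is taken verbatim from the committed change; the surrounding retry loop here is a simplified sketch of with429Retry (logging and backoff omitted).

```javascript
// The committed change: tolerate fetch-style (error.status),
// node-style (error.statusCode), and axios-style (error.response.status)
// errors when looking for HTTP 429.
function extractStatus(error) {
  return Number(error.status || error.statusCode || error.response?.status);
}

// Simplified retry wrapper around extractStatus.
async function with429Retry(fn, retries) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn(attempt);
    } catch (error) {
      if (extractStatus(error) === 429 && attempt < retries) continue; // retry on 429
      throw error; // anything else (or retries exhausted) propagates
    }
  }
}

const s1 = extractStatus({ status: 429 });
const s2 = extractStatus({ statusCode: 429 });
const s3 = extractStatus({ response: { status: 429 } });
```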
### 5. src/utils/utils.js
- Line 5: import toGenerationConfig
- Lines 73-108: generateGenerationConfig() refactored to use toGenerationConfig(), simplifying the logic
### 6. src/utils/converters/gemini.js
- Line 6: import normalizeGeminiParameters and toGenerationConfig
- Lines 132-135: uses normalizeGeminiParameters() and toGenerationConfig() instead of manual parameter handling
### 7. src/utils/converters/openai.js
- replaces duplicated code with the shared functions from common.js
### 8. src/utils/converters/claude.js
- replaces duplicated code with the shared functions from common.js
### 9. src/utils/memoryManager.js
- Lines 10-15: added a setThreshold(thresholdMB) method for setting the memory threshold dynamically
- Threshold derivation: LOW = 30%, MEDIUM = 60%, HIGH = 100%
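The derivation can be sketched as below. The percentages (and the 50% TARGET level) come from this commit; the class/field layout around setThreshold is an assumption.

```javascript
// Sketch of setThreshold(thresholdMB): derive per-level byte thresholds
// from the single user-configured value.
class MemoryManager {
  setThreshold(thresholdMB) {
    const bytes = thresholdMB * 1024 * 1024;
    this.thresholds = {
      LOW: Math.floor(bytes * 0.3),    // 30% — normal operation
      MEDIUM: Math.floor(bytes * 0.6), // 60% — light cleanup
      HIGH: bytes,                     // 100% — the configured value
      TARGET: Math.floor(bytes * 0.5)  // 50% — shrink target after cleanup
    };
  }
}

const mm = new MemoryManager();
mm.setThreshold(100); // the new default from config.js
```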
### 10. src/config/config.js
- defaults.memoryThreshold: default changed from 500 to 100 (MB)
### 11. src/constants/index.js
- exports the new REASONING_EFFORT_MAP constant
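The constant presumably looks like the sketch below; the values are taken from the reasoning_effort mapping table added to the README in this commit (the exact literal in src/constants/index.js is not shown in the diff, so this is an assumption).

```javascript
// Sketch of the constant exported from src/constants/index.js:
// reasoning_effort level -> thinking token budget (values per the README table).
const REASONING_EFFORT_MAP = {
  low: 1024,
  medium: 16000,
  high: 32000
};
```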
### 12. config.json
- configuration file format adjusted
### 13. README.md
- added a "Multi API format support" chapter documenting the parameters of all three formats
- added "Parameter normalization module" documentation
- updated the project structure to include the new files
- updated the code architecture chapter
- README.md +154 -21
- config.json +1 -1
- src/config/config.js +1 -1
- src/constants/index.js +6 -13
- src/server/index.js +77 -34
- src/utils/converters/claude.js +39 -126
- src/utils/converters/common.js +211 -0
- src/utils/converters/gemini.js +128 -55
- src/utils/converters/openai.js +40 -119
- src/utils/memoryManager.js +52 -9
- src/utils/parameterNormalizer.js +189 -0
- src/utils/toolConverter.js +128 -0
- src/utils/utils.js +13 -26
````diff
--- a/README.md
+++ b/README.md
@@ -25,6 +25,9 @@
 - ✅ Object-pool reuse (cuts temporary object creation by 50%+, lowering GC frequency)
 - ✅ Signature pass-through control (configurable whether thoughtSignature is forwarded to the client)
 - ✅ Prebuilt binaries (Windows/Linux/Android, no Node.js runtime required)
+- ✅ Multi API format support (OpenAI, Gemini, and Claude)
+- ✅ Converter code reuse (shared module extracted to reduce duplication)
+- ✅ Dynamic memory thresholds (per-level thresholds computed from user config)
 
 ## Requirements
 
@@ -568,14 +571,17 @@ npm run login
 │  │  ├── imageStorage.js          # image storage
 │  │  ├── logger.js                # logging module
 │  │  ├── memoryManager.js         # smart memory management
-│  │  ├──
-│  │  ├── openai_mapping.js        # request body construction
-│  │  ├── openai_messages.js       # message format conversion
-│  │  ├── openai_signatures.js     # signature constants
-│  │  ├── openai_system.js         # system instruction extraction
-│  │  ├── openai_tools.js          # tool format conversion
+│  │  ├── parameterNormalizer.js   # unified parameter normalization
 │  │  ├── paths.js                 # path utilities (pkg packaging support)
+│  │  ├── thoughtSignatureCache.js # signature cache
+│  │  ├── toolConverter.js         # tool definition conversion
+│  │  ├── toolNameCache.js         # tool name cache
 │  │  └── utils.js                 # utility functions (re-exports)
+│  │  └── converters/              # format converters
+│  │      ├── common.js            # shared helpers
+│  │      ├── openai.js            # OpenAI format
+│  │      ├── claude.js            # Claude format
+│  │      └── gemini.js            # Gemini format
 │  └── AntigravityRequester.js     # TLS-fingerprint requester wrapper
 ├── test/
 │  ├── test-request.js             # request tests
@@ -641,38 +647,94 @@ messages = [{role: user, content: 你好}]
 - Makes full use of Antigravity's SystemInstruction capability
 - Preserves the integrity and priority of the system prompt
 
-## …
-
-### …
-
-```json
-{
-  "model": "gemini-2.0-flash-thinking-exp",
-  "reasoning_effort": "high",
-  "messages": [...]
-}
-```
-
-| `…
-| `…
-| `…
-
-### …
-
-```json
-{
-  "model": "…
-  "…
-  "messages": [...]
-}
-```
-
+## Multi API format support
+
+The service supports three API formats, each with full parameter support:
+
+### OpenAI format (`/v1/chat/completions`)
+
+```json
+{
+  "model": "gemini-2.0-flash-thinking-exp",
+  "max_tokens": 16000,
+  "temperature": 0.7,
+  "top_p": 0.9,
+  "top_k": 40,
+  "thinking_budget": 10000,
+  "reasoning_effort": "high",
+  "messages": [...]
+}
+```
+
+| Parameter | Description | Default |
+|-----------|-------------|---------|
+| `max_tokens` | maximum output tokens | 32000 |
+| `temperature` | temperature (0.0-1.0) | 1 |
+| `top_p` | top-p sampling | 1 |
+| `top_k` | top-k sampling | 50 |
+| `thinking_budget` | thinking budget (1024-32000) | 1024 |
+| `reasoning_effort` | thinking effort (`low`/`medium`/`high`) | - |
+
+### Claude format (`/v1/messages`)
+
+```json
+{
+  "model": "claude-sonnet-4-5-thinking",
+  "max_tokens": 16000,
+  "temperature": 0.7,
+  "top_p": 0.9,
+  "top_k": 40,
+  "thinking": {
+    "type": "enabled",
+    "budget_tokens": 10000
+  },
+  "messages": [...]
+}
+```
+
+| Parameter | Description | Default |
+|-----------|-------------|---------|
+| `max_tokens` | maximum output tokens | 32000 |
+| `temperature` | temperature (0.0-1.0) | 1 |
+| `top_p` | top-p sampling | 1 |
+| `top_k` | top-k sampling | 50 |
+| `thinking.type` | thinking switch (`enabled`/`disabled`) | - |
+| `thinking.budget_tokens` | thinking budget (1024-32000) | 1024 |
+
+### Gemini format (`/v1beta/models/:model:generateContent`)
+
+```json
+{
+  "contents": [...],
+  "generationConfig": {
+    "maxOutputTokens": 16000,
+    "temperature": 0.7,
+    "topP": 0.9,
+    "topK": 40,
+    "thinkingConfig": {
+      "includeThoughts": true,
+      "thinkingBudget": 10000
+    }
+  }
+}
+```
+
+| Parameter | Description | Default |
+|-----------|-------------|---------|
+| `maxOutputTokens` | maximum output tokens | 32000 |
+| `temperature` | temperature (0.0-1.0) | 1 |
+| `topP` | top-p sampling | 1 |
+| `topK` | top-k sampling | 50 |
+| `thinkingConfig.includeThoughts` | whether to include thinking content | true |
+| `thinkingConfig.thinkingBudget` | thinking budget (1024-32000) | 1024 |
+
+### Unified parameter handling
+
+Parameters from all three formats are normalized so they behave consistently:
+
+1. **Parameter precedence**: request parameters > config-file defaults
+2. **Thinking-budget precedence**: `thinking_budget`/`budget_tokens`/`thinkingBudget` > `reasoning_effort` > config-file defaults
+3. **Disabling thinking**: set `thinking_budget=0`, `thinking.type="disabled"`, or `thinkingConfig.includeThoughts=false`
 
 ### DeepSeek thinking format compatibility
 
@@ -689,6 +751,14 @@ messages = [{role: user, content: 你好}]
 }
 ```
 
+### reasoning_effort mapping
+
+| Value | Thinking token budget |
+|-------|-----------------------|
+| `low` | 1024 |
+| `medium` | 16000 |
+| `high` | 32000 |
+
 ## Memory optimization
 
 The service is deeply memory-optimized:
@@ -706,9 +776,20 @@ messages = [{role: user, content: 你好}]
 1. **Object-pool reuse**: streaming response objects come from an object pool, cutting temporary object creation by 50%+
 2. **Precompiled constants**: regular expressions, format strings, etc. are precompiled to avoid repeated creation
 3. **LineBuffer optimization**: efficient streaming line splitting that avoids frequent string operations
-4. …
+4. **Automatic memory cleanup**: GC is triggered automatically when heap usage exceeds the threshold
 5. **Process slimming**: unnecessary child processes removed; everything runs in the main process
 
+### Dynamic memory thresholds
+
+Memory-pressure thresholds are derived dynamically from the configured `memoryThreshold` (MB):
+
+| Pressure level | Share of threshold | Default (100MB config) | Behavior |
+|----------------|--------------------|------------------------|----------|
+| LOW | 30% | 30MB | normal operation |
+| MEDIUM | 60% | 60MB | light cleanup |
+| HIGH | 100% | 100MB | aggressive cleanup + GC |
+| CRITICAL | >100% | >100MB | emergency cleanup + forced GC |
+
 ### Configuration
 
 ```json
@@ -719,7 +800,7 @@ messages = [{role: user, content: 你好}]
 }
 ```
 
-- `memoryThreshold…
+- `memoryThreshold`: high-pressure threshold (MB); the other levels are derived from it proportionally
 
 ## Heartbeat
 
@@ -741,6 +822,58 @@ messages = [{role: user, content: 你好}]
 
 - `heartbeatInterval`: heartbeat interval in milliseconds; set 0 to disable
 
+## Code architecture
+
+### Converter modules
+
+The project supports three API formats (OpenAI, Gemini, Claude); the converter code was refactored to extract a shared module:
+
+```
+src/utils/converters/
+├── common.js   # shared helpers (signature handling, message building, request-body construction, …)
+├── openai.js   # OpenAI format converter
+├── claude.js   # Claude format converter
+└── gemini.js   # Gemini format converter
+```
+
+#### Shared helpers
+
+| Function | Description |
+|----------|-------------|
+| `getSignatureContext()` | fetch thought and tool signatures |
+| `pushUserMessage()` | append a user message to the message array |
+| `findFunctionNameById()` | look up a function name by tool-call ID |
+| `pushFunctionResponse()` | append a function response to the message array |
+| `createThoughtPart()` | create a signed thought part |
+| `createFunctionCallPart()` | create a signed function-call part |
+| `processToolName()` | handle tool-name mapping |
+| `pushModelMessage()` | append a model message to the message array |
+| `buildRequestBody()` | build the Antigravity request body |
+| `mergeSystemInstruction()` | merge system instructions |
+
+### Parameter normalization module
+
+```
+src/utils/parameterNormalizer.js   # unified parameter normalization
+```
+
+Converts OpenAI, Claude, and Gemini parameters into a single internal format:
+
+| Function | Description |
+|----------|-------------|
+| `normalizeOpenAIParameters()` | normalize OpenAI-format parameters |
+| `normalizeClaudeParameters()` | normalize Claude-format parameters |
+| `normalizeGeminiParameters()` | normalize Gemini-format parameters |
+| `toGenerationConfig()` | convert to the upstream API format |
+
+### Tool conversion module
+
+```
+src/utils/toolConverter.js   # unified tool-definition conversion
+```
+
+Converts OpenAI, Claude, and Gemini tool definitions to the Antigravity format.
+
 ## Notes
 
 1. On first use, copy `.env.example` to `.env` and configure it
````
|
```diff
--- a/config.json
+++ b/config.json
@@ -4,7 +4,7 @@
   "host": "0.0.0.0",
   "maxRequestSize": "500mb",
   "heartbeatInterval": 15000,
-  "memoryThreshold": …
+  "memoryThreshold": 50
 },
 "rotation": {
   "strategy": "request_count",
```
```diff
--- a/src/config/config.js
+++ b/src/config/config.js
@@ -70,7 +70,7 @@ export function buildConfig(jsonConfig) {
   port: jsonConfig.server?.port || DEFAULT_SERVER_PORT,
   host: jsonConfig.server?.host || DEFAULT_SERVER_HOST,
   heartbeatInterval: jsonConfig.server?.heartbeatInterval || DEFAULT_HEARTBEAT_INTERVAL,
-  memoryThreshold: jsonConfig.server?.memoryThreshold || 500
+  memoryThreshold: jsonConfig.server?.memoryThreshold || 100
 },
 cache: {
   modelListTTL: jsonConfig.cache?.modelListTTL || MODEL_LIST_CACHE_TTL
```
```diff
--- a/src/constants/index.js
+++ b/src/constants/index.js
@@ -37,19 +37,12 @@ export const MODEL_LIST_CACHE_TTL = 60 * 60 * 1000;
 
 // ==================== Memory management constants ====================
 
-…
-…
-…
-…
-…
-…
-  /** Medium pressure threshold - 25MB */
-  MEDIUM: 25 * 1024 * 1024,
-  /** High pressure threshold - 35MB */
-  HIGH: 35 * 1024 * 1024,
-  /** Target memory - 20MB */
-  TARGET: 20 * 1024 * 1024
-};
+// Note: memory-pressure thresholds are now computed by memoryManager from the user-configured memoryThreshold
+// The configured memoryThreshold (MB) is the high-pressure threshold; the rest are derived proportionally:
+// - LOW: 30% of the threshold
+// - MEDIUM: 60% of the threshold
+// - HIGH: 100% of the threshold (the configured value)
+// - TARGET: 50% of the threshold
 
 /**
  * GC cooldown (ms)
```
@@ -3,6 +3,7 @@ import cors from 'cors';
|
|
| 3 |
import path from 'path';
|
| 4 |
import { generateAssistantResponse, generateAssistantResponseNoStream, getAvailableModels, generateImageForSD, closeRequester } from '../api/client.js';
|
| 5 |
import { generateRequestBody, generateGeminiRequestBody, generateClaudeRequestBody, prepareImageRequest } from '../utils/utils.js';
|
|
|
|
| 6 |
import logger from '../utils/logger.js';
|
| 7 |
import config from '../config/config.js';
|
| 8 |
import tokenManager from '../auth/token_manager.js';
|
|
@@ -28,7 +29,8 @@ const with429Retry = async (fn, maxRetries, loggerPrefix = '') => {
|
|
| 28 |
try {
|
| 29 |
return await fn(attempt);
|
| 30 |
} catch (error) {
|
| 31 |
-
|
|
|
|
| 32 |
if (status === 429 && attempt < retries) {
|
| 33 |
const nextAttempt = attempt + 1;
|
| 34 |
logger.warn(`${loggerPrefix}收到 429,正在进行第 ${nextAttempt} 次重试(共 ${retries} 次)`);
|
|
@@ -92,7 +94,8 @@ const releaseChunkObject = (obj) => {
|
|
| 92 |
// 注册内存清理回调(使用统一工具收缩对象池)
|
| 93 |
registerMemoryPoolCleanup(chunkPool, () => memoryManager.getPoolSizes().chunk);
|
| 94 |
|
| 95 |
-
//
|
|
|
|
| 96 |
memoryManager.start(MEMORY_CHECK_INTERVAL);
|
| 97 |
|
| 98 |
const createStreamChunk = (id, created, model, delta, finish_reason = null) => {
|
|
@@ -143,10 +146,14 @@ const buildGeminiErrorPayload = (error, statusCode) => {
|
|
| 143 |
};
|
| 144 |
|
| 145 |
// Gemini 响应构建工具
|
| 146 |
-
const createGeminiResponse = (content, reasoning, toolCalls, finishReason, usage) => {
|
| 147 |
const parts = [];
|
| 148 |
if (reasoning) {
|
| 149 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
| 150 |
}
|
| 151 |
if (content) {
|
| 152 |
parts.push({ text: content });
|
|
@@ -154,12 +161,16 @@ const createGeminiResponse = (content, reasoning, toolCalls, finishReason, usage
|
|
| 154 |
if (toolCalls && toolCalls.length > 0) {
|
| 155 |
toolCalls.forEach(tc => {
|
| 156 |
try {
|
| 157 |
-
|
| 158 |
functionCall: {
|
| 159 |
name: tc.function.name,
|
| 160 |
args: JSON.parse(tc.function.arguments)
|
| 161 |
}
|
| 162 |
-
}
|
|
|
|
|
|
|
|
|
|
|
|
|
| 163 |
} catch (e) {
|
| 164 |
// 忽略解析错误
|
| 165 |
}
|
|
@@ -454,6 +465,9 @@ app.get('/v1beta/models/:model', async (req, res) => {
|
|
| 454 |
});
|
| 455 |
|
| 456 |
const handleGeminiRequest = async (req, res, modelName, isStream) => {
|
|
|
|
|
|
|
|
|
|
| 457 |
try {
|
| 458 |
const token = await tokenManager.getToken();
|
| 459 |
if (!token) {
|
|
@@ -461,8 +475,6 @@ const handleGeminiRequest = async (req, res, modelName, isStream) => {
|
|
| 461 |
}
|
| 462 |
|
| 463 |
const requestBody = generateGeminiRequestBody(req.body, modelName, token);
|
| 464 |
-
const maxRetries = Number(config.retryTimes || 0);
|
| 465 |
-
const safeRetries = maxRetries > 0 ? Math.floor(maxRetries) : 0;
|
| 466 |
|
| 467 |
if (isStream) {
|
| 468 |
setStreamHeaders(res);
|
|
@@ -478,16 +490,16 @@ const handleGeminiRequest = async (req, res, modelName, isStream) => {
|
|
| 478 |
usageData = data.usage;
|
| 479 |
} else if (data.type === 'reasoning') {
|
| 480 |
// Gemini 思考内容
|
| 481 |
-
const chunk = createGeminiResponse(null, data.reasoning_content, null, null, null);
|
| 482 |
writeStreamData(res, chunk);
|
| 483 |
} else if (data.type === 'tool_calls') {
|
| 484 |
hasToolCall = true;
|
| 485 |
// Gemini 工具调用
|
| 486 |
-
const chunk = createGeminiResponse(null, null, data.tool_calls, null, null);
|
| 487 |
writeStreamData(res, chunk);
|
| 488 |
} else {
|
| 489 |
// 普通文本
|
| 490 |
-
const chunk = createGeminiResponse(data.content, null, null, null, null);
|
| 491 |
writeStreamData(res, chunk);
|
| 492 |
}
|
| 493 |
}),
|
|
@@ -497,28 +509,36 @@ const handleGeminiRequest = async (req, res, modelName, isStream) => {
|
|
| 497 |
|
| 498 |
// 发送结束块和 usage
|
| 499 |
const finishReason = hasToolCall ? "STOP" : "STOP"; // Gemini 工具调用也是 STOP
|
| 500 |
-
const finalChunk = createGeminiResponse(null, null, null, finishReason, usageData);
|
| 501 |
writeStreamData(res, finalChunk);
|
| 502 |
|
| 503 |
clearInterval(heartbeatTimer);
|
| 504 |
endStream(res);
|
| 505 |
} catch (error) {
|
| 506 |
clearInterval(heartbeatTimer);
|
| 507 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 508 |
}
|
| 509 |
} else {
|
| 510 |
// 非流式
|
| 511 |
req.setTimeout(0);
|
| 512 |
res.setTimeout(0);
|
| 513 |
|
| 514 |
-
const { content, reasoningContent, toolCalls, usage } = await with429Retry(
|
| 515 |
() => generateAssistantResponseNoStream(requestBody, token),
|
| 516 |
safeRetries,
|
| 517 |
'gemini.no_stream '
|
| 518 |
);
|
| 519 |
|
| 520 |
const finishReason = toolCalls.length > 0 ? "STOP" : "STOP";
|
| 521 |
-
const response = createGeminiResponse(content, reasoningContent, toolCalls, finishReason, usage);
|
| 522 |
res.json(response);
|
| 523 |
}
|
| 524 |
} catch (error) {
|
|
@@ -572,15 +592,19 @@ const createClaudeStreamEvent = (eventType, data) => {
|
|
| 572 |
};
|
| 573 |
|
| 574 |
// Claude 非流式响应构建
|
| 575 |
-
const createClaudeResponse = (id, model, content, reasoning, toolCalls, stopReason, usage) => {
|
| 576 |
const contentBlocks = [];
|
| 577 |
|
| 578 |
// 思维链内容(如果有)- Claude 格式用 thinking 类型
|
| 579 |
if (reasoning) {
|
| 580 |
-
|
| 581 |
type: "thinking",
|
| 582 |
thinking: reasoning
|
| 583 |
-
}
|
|
|
|
|
|
|
|
|
|
|
|
|
| 584 |
}
|
| 585 |
|
| 586 |
// 文本内容
|
|
@@ -595,12 +619,16 @@ const createClaudeResponse = (id, model, content, reasoning, toolCalls, stopReas
|
|
| 595 |
if (toolCalls && toolCalls.length > 0) {
|
| 596 |
for (const tc of toolCalls) {
|
| 597 |
try {
|
| 598 |
-
|
| 599 |
type: "tool_use",
|
| 600 |
id: tc.id,
|
| 601 |
name: tc.function.name,
|
| 602 |
input: JSON.parse(tc.function.arguments)
|
| 603 |
-
}
|
|
|
|
|
|
|
|
|
|
|
|
|
| 604 |
} catch (e) {
|
| 605 |
// 解析失败时传入空对象
|
| 606 |
contentBlocks.push({
|
|
@@ -630,7 +658,7 @@ const createClaudeResponse = (id, model, content, reasoning, toolCalls, stopReas
|
|
| 630 |
|
| 631 |
// Claude API 处理函数
|
| 632 |
const handleClaudeRequest = async (req, res, isStream) => {
|
| 633 |
-
const { messages, model, system, tools,
|
| 634 |
|
| 635 |
try {
|
| 636 |
if (!messages) {
|
|
@@ -642,14 +670,8 @@ const handleClaudeRequest = async (req, res, isStream) => {
|
|
| 642 |
throw new Error('没有可用的token,请运行 npm run login 获取token');
|
| 643 |
}
|
| 644 |
|
| 645 |
-
//
|
| 646 |
-
const parameters =
|
| 647 |
-
max_tokens: max_tokens || config.defaults.max_tokens,
|
| 648 |
-
temperature: temperature ?? config.defaults.temperature,
|
| 649 |
-
top_p: top_p ?? config.defaults.top_p,
|
| 650 |
-
top_k: top_k ?? config.defaults.top_k,
|
| 651 |
-
...otherParams
|
| 652 |
-
};
|
| 653 |
|
| 654 |
const requestBody = generateClaudeRequestBody(messages, model, parameters, tools, system, token);
|
| 655 |
|
|
@@ -691,19 +713,27 @@ const handleClaudeRequest = async (req, res, isStream) => {
|
|
| 691 |
// 思维链内容 - 使用 thinking 类型
|
| 692 |
if (!reasoningSent) {
|
| 693 |
// 开始思维块
|
|
|
|
|
|
|
|
|
|
|
|
|
| 694 |
res.write(createClaudeStreamEvent('content_block_start', {
|
| 695 |
type: "content_block_start",
|
| 696 |
index: contentIndex,
|
| 697 |
-
content_block:
|
| 698 |
}));
|
| 699 |
currentBlockType = 'thinking';
|
| 700 |
reasoningSent = true;
|
| 701 |
}
|
| 702 |
// 发送思维增量
|
|
|
|
|
|
|
|
|
|
|
|
|
| 703 |
res.write(createClaudeStreamEvent('content_block_delta', {
|
| 704 |
type: "content_block_delta",
|
| 705 |
index: contentIndex,
|
| 706 |
-
delta:
|
| 707 |
}));
|
| 708 |
} else if (data.type === 'tool_calls') {
|
| 709 |
hasToolCall = true;
|
|
@@ -719,10 +749,14 @@ const handleClaudeRequest = async (req, res, isStream) => {
|
|
| 719 |
for (const tc of data.tool_calls) {
|
| 720 |
try {
|
| 721 |
const inputObj = JSON.parse(tc.function.arguments);
|
|
|
|
|
|
|
|
|
|
|
|
|
| 722 |
res.write(createClaudeStreamEvent('content_block_start', {
|
| 723 |
type: "content_block_start",
|
| 724 |
index: contentIndex,
|
| 725 |
-
content_block:
|
| 726 |
}));
|
| 727 |
// 发送 input 增量
|
| 728 |
res.write(createClaudeStreamEvent('content_block_delta', {
|
|
@@ -797,14 +831,22 @@ const handleClaudeRequest = async (req, res, isStream) => {
|
|
| 797 |
res.end();
|
| 798 |
} catch (error) {
|
| 799 |
clearInterval(heartbeatTimer);
|
| 800 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 801 |
}
|
| 802 |
} else {
|
| 803 |
// 非流式请求
|
| 804 |
req.setTimeout(0);
|
| 805 |
res.setTimeout(0);
|
| 806 |
|
| 807 |
-
const { content, reasoningContent, toolCalls, usage } = await with429Retry(
|
| 808 |
() => generateAssistantResponseNoStream(requestBody, token),
|
| 809 |
safeRetries,
|
| 810 |
'claude.no_stream '
|
|
@@ -816,6 +858,7 @@ const handleClaudeRequest = async (req, res, isStream) => {
|
|
| 816 |
model,
|
| 817 |
content,
|
| 818 |
reasoningContent,
|
|
|
|
| 819 |
toolCalls,
|
| 820 |
stopReason,
|
| 821 |
usage
|
|
|
|
| 3 |
import path from 'path';
|
| 4 |
import { generateAssistantResponse, generateAssistantResponseNoStream, getAvailableModels, generateImageForSD, closeRequester } from '../api/client.js';
|
| 5 |
import { generateRequestBody, generateGeminiRequestBody, generateClaudeRequestBody, prepareImageRequest } from '../utils/utils.js';
|
| 6 |
+
import { normalizeClaudeParameters } from '../utils/parameterNormalizer.js';
|
| 7 |
import logger from '../utils/logger.js';
|
| 8 |
import config from '../config/config.js';
|
| 9 |
import tokenManager from '../auth/token_manager.js';
|
|
|
|
| 29 |
try {
|
| 30 |
return await fn(attempt);
|
| 31 |
} catch (error) {
|
| 32 |
+
// 兼容多种错误格式:error.status, error.statusCode, error.response?.status
|
| 33 |
+
const status = Number(error.status || error.statusCode || error.response?.status);
|
| 34 |
if (status === 429 && attempt < retries) {
|
| 35 |
const nextAttempt = attempt + 1;
|
| 36 |
logger.warn(`${loggerPrefix}收到 429,正在进行第 ${nextAttempt} 次重试(共 ${retries} 次)`);
|
|
|
|
| 94 |
// 注册内存清理回调(使用统一工具收缩对象池)
|
| 95 |
registerMemoryPoolCleanup(chunkPool, () => memoryManager.getPoolSizes().chunk);
|
| 96 |
|
| 97 |
+
// 设置内存阈值(从配置加载)并启动内存管理器
|
| 98 |
+
memoryManager.setThreshold(config.server.memoryThreshold);
|
| 99 |
memoryManager.start(MEMORY_CHECK_INTERVAL);
|
| 100 |
|
| 101 |
const createStreamChunk = (id, created, model, delta, finish_reason = null) => {
|
|
|
|
| 146 |
};
|
| 147 |
|
| 148 |
// Gemini 响应构建工具
|
| 149 |
+
const createGeminiResponse = (content, reasoning, reasoningSignature, toolCalls, finishReason, usage) => {
|
| 150 |
const parts = [];
|
| 151 |
if (reasoning) {
|
| 152 |
+
const thoughtPart = { text: reasoning, thought: true };
|
| 153 |
+
if (reasoningSignature && config.passSignatureToClient) {
|
| 154 |
+
thoughtPart.thoughtSignature = reasoningSignature;
|
| 155 |
+
}
|
| 156 |
+
parts.push(thoughtPart);
|
| 157 |
}
|
| 158 |
if (content) {
|
| 159 |
parts.push({ text: content });
|
|
|
|
| 161 |
if (toolCalls && toolCalls.length > 0) {
|
| 162 |
toolCalls.forEach(tc => {
|
| 163 |
try {
|
| 164 |
+
const functionCallPart = {
|
| 165 |
functionCall: {
|
| 166 |
name: tc.function.name,
|
| 167 |
args: JSON.parse(tc.function.arguments)
|
| 168 |
}
|
| 169 |
+
};
|
| 170 |
+
if (tc.thoughtSignature && config.passSignatureToClient) {
|
| 171 |
+
functionCallPart.thoughtSignature = tc.thoughtSignature;
|
| 172 |
+
}
|
| 173 |
+
parts.push(functionCallPart);
|
| 174 |
} catch (e) {
|
| 175 |
// 忽略解析错误
|
| 176 |
}
|
|
|
|
| 465 |
});
|
| 466 |
|
| 467 |
const handleGeminiRequest = async (req, res, modelName, isStream) => {
|
| 468 |
+
const maxRetries = Number(config.retryTimes || 0);
|
| 469 |
+
const safeRetries = maxRetries > 0 ? Math.floor(maxRetries) : 0;
|
| 470 |
+
|
| 471 |
try {
|
| 472 |
const token = await tokenManager.getToken();
|
| 473 |
if (!token) {
|
|
|
|
| 475 |
}
|
| 476 |
|
| 477 |
const requestBody = generateGeminiRequestBody(req.body, modelName, token);
|
|
|
|
|
|
|
| 478 |
|
| 479 |
if (isStream) {
|
| 480 |
setStreamHeaders(res);
|
|
|
|
| 490 |
usageData = data.usage;
|
| 491 |
} else if (data.type === 'reasoning') {
|
| 492 |
// Gemini 思考内容
|
| 493 |
+
const chunk = createGeminiResponse(null, data.reasoning_content, data.thoughtSignature, null, null, null);
|
| 494 |
writeStreamData(res, chunk);
|
| 495 |
} else if (data.type === 'tool_calls') {
|
| 496 |
hasToolCall = true;
|
| 497 |
// Gemini 工具调用
|
| 498 |
+
const chunk = createGeminiResponse(null, null, null, data.tool_calls, null, null);
|
| 499 |
writeStreamData(res, chunk);
|
| 500 |
} else {
|
| 501 |
// 普通文本
|
| 502 |
+
const chunk = createGeminiResponse(data.content, null, null, null, null, null);
|
| 503 |
writeStreamData(res, chunk);
|
| 504 |
}
|
| 505 |
}),
|
|
|
|
| 509 |
|
| 510 |
// 发送结束块和 usage
|
| 511 |
const finishReason = hasToolCall ? "STOP" : "STOP"; // Gemini 工具调用也是 STOP
|
| 512 |
+
const finalChunk = createGeminiResponse(null, null, null, null, finishReason, usageData);
|
| 513 |
writeStreamData(res, finalChunk);
|
| 514 |
|
| 515 |
clearInterval(heartbeatTimer);
|
| 516 |
endStream(res);
|
| 517 |
} catch (error) {
|
| 518 |
clearInterval(heartbeatTimer);
|
| 519 |
+
// 流式响应中发送错误
|
| 520 |
+
if (!res.writableEnded) {
|
| 521 |
+
const statusCode = Number(error.status) || 500;
|
| 522 |
+
const errorPayload = buildGeminiErrorPayload(error, statusCode);
|
| 523 |
+
writeStreamData(res, errorPayload);
|
| 524 |
+
endStream(res);
|
| 525 |
+
}
|
| 526 |
+
logger.error('Gemini 流式请求失败:', error.message);
|
| 527 |
+
return;
|
| 528 |
}
|
| 529 |
} else {
|
| 530 |
// 非流式
|
| 531 |
req.setTimeout(0);
|
| 532 |
res.setTimeout(0);
|
| 533 |
|
| 534 |
+
const { content, reasoningContent, reasoningSignature, toolCalls, usage } = await with429Retry(
|
| 535 |
() => generateAssistantResponseNoStream(requestBody, token),
|
| 536 |
safeRetries,
|
| 537 |
'gemini.no_stream '
|
| 538 |
);
|
| 539 |
|
| 540 |
const finishReason = toolCalls.length > 0 ? "STOP" : "STOP";
|
| 541 |
+
const response = createGeminiResponse(content, reasoningContent, reasoningSignature, toolCalls, finishReason, usage);
|
| 542 |
res.json(response);
|
| 543 |
}
|
| 544 |
} catch (error) {
|
|
|
|
| 592 |
};
|
| 593 |
|
| 594 |
// Claude 非流式响应构建
|
| 595 |
+
const createClaudeResponse = (id, model, content, reasoning, reasoningSignature, toolCalls, stopReason, usage) => {
|
| 596 |
const contentBlocks = [];
|
| 597 |
|
| 598 |
// 思维链内容(如果有)- Claude 格式用 thinking 类型
|
| 599 |
if (reasoning) {
|
| 600 |
+
const thinkingBlock = {
|
| 601 |
type: "thinking",
|
| 602 |
thinking: reasoning
|
| 603 |
+
};
|
| 604 |
+
if (reasoningSignature && config.passSignatureToClient) {
|
| 605 |
+
thinkingBlock.signature = reasoningSignature;
|
| 606 |
+
}
|
| 607 |
+
contentBlocks.push(thinkingBlock);
|
| 608 |
}
|
| 609 |
|
| 610 |
// 文本内容
|
|
|
|
| 619 |
if (toolCalls && toolCalls.length > 0) {
|
| 620 |
for (const tc of toolCalls) {
|
| 621 |
try {
|
| 622 |
+
const toolBlock = {
|
| 623 |
type: "tool_use",
|
| 624 |
id: tc.id,
|
| 625 |
name: tc.function.name,
|
| 626 |
input: JSON.parse(tc.function.arguments)
|
| 627 |
+
};
|
| 628 |
+
if (tc.thoughtSignature && config.passSignatureToClient) {
|
| 629 |
+
toolBlock.signature = tc.thoughtSignature;
|
| 630 |
+
}
|
| 631 |
+
contentBlocks.push(toolBlock);
|
| 632 |
} catch (e) {
|
| 633 |
// 解析失败时传入空对象
|
| 634 |
contentBlocks.push({
|
|
|
|
| 658 |
|
| 659 |
// Claude API 处理函数
|
| 660 |
const handleClaudeRequest = async (req, res, isStream) => {
|
| 661 |
+
const { messages, model, system, tools, ...rawParams } = req.body;
|
| 662 |
|
| 663 |
try {
|
| 664 |
if (!messages) {
|
|
|
|
| 670 |
throw new Error('没有可用的token,请运行 npm run login 获取token');
|
| 671 |
}
|
| 672 |
|
| 673 |
+
// 使用统一参数规范化模块处理 Claude 格式参数
|
| 674 |
+
const parameters = normalizeClaudeParameters(rawParams);
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 675 |
|
| 676 |
const requestBody = generateClaudeRequestBody(messages, model, parameters, tools, system, token);
|
| 677 |
|
|
|
|
src/server/index.js — streaming thinking events:

```diff
        // Reasoning content, emitted as a "thinking" block
        if (!reasoningSent) {
          // open the thinking block
+         const contentBlock = { type: "thinking", thinking: "" };
+         if (data.thoughtSignature && config.passSignatureToClient) {
+           contentBlock.signature = data.thoughtSignature;
+         }
          res.write(createClaudeStreamEvent('content_block_start', {
            type: "content_block_start",
            index: contentIndex,
+           content_block: contentBlock
          }));
          currentBlockType = 'thinking';
          reasoningSent = true;
        }
        // emit the thinking delta
+       const delta = { type: "thinking_delta", thinking: data.reasoning_content || '' };
+       if (data.thoughtSignature && config.passSignatureToClient) {
+         delta.signature = data.thoughtSignature;
+       }
        res.write(createClaudeStreamEvent('content_block_delta', {
          type: "content_block_delta",
          index: contentIndex,
+         delta: delta
        }));
      } else if (data.type === 'tool_calls') {
        hasToolCall = true;
```
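The `content_block_start` / `content_block_delta` writes above follow Claude-style SSE framing. A hypothetical `createClaudeStreamEvent` makes the wire format concrete — the repo's real helper may differ in detail:

```javascript
// Hypothetical shape of createClaudeStreamEvent, shown only to illustrate the
// SSE framing: one "event:" line plus one JSON "data:" line per event,
// terminated by a blank line. Not necessarily the repo's exact helper.
function createClaudeStreamEvent(event, payload) {
  return `event: ${event}\ndata: ${JSON.stringify(payload)}\n\n`;
}

const frame = createClaudeStreamEvent('content_block_delta', {
  type: 'content_block_delta',
  index: 0,
  delta: { type: 'thinking_delta', thinking: 'partial thought' }
});
console.log(frame.split('\n')[0]); // event: content_block_delta
```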
src/server/index.js — streaming tool_use events:

```diff
        for (const tc of data.tool_calls) {
          try {
            const inputObj = JSON.parse(tc.function.arguments);
+           const toolContentBlock = { type: "tool_use", id: tc.id, name: tc.function.name, input: {} };
+           if (tc.thoughtSignature && config.passSignatureToClient) {
+             toolContentBlock.signature = tc.thoughtSignature;
+           }
            res.write(createClaudeStreamEvent('content_block_start', {
              type: "content_block_start",
              index: contentIndex,
+             content_block: toolContentBlock
            }));
            // emit the input delta
            res.write(createClaudeStreamEvent('content_block_delta', {
```
src/server/index.js — stream error handling and the non-streaming path:

```diff
        res.end();
      } catch (error) {
        clearInterval(heartbeatTimer);
+       // emit an error event on the stream
+       if (!res.writableEnded) {
+         const statusCode = Number(error.status) || 500;
+         const errorPayload = buildClaudeErrorPayload(error, statusCode);
+         res.write(createClaudeStreamEvent('error', errorPayload));
+         res.end();
+       }
+       logger.error('Claude streaming request failed:', error.message);
+       return;
      }
    } else {
      // non-streaming request
      req.setTimeout(0);
      res.setTimeout(0);

+     const { content, reasoningContent, reasoningSignature, toolCalls, usage } = await with429Retry(
        () => generateAssistantResponseNoStream(requestBody, token),
        safeRetries,
        'claude.no_stream '
```

```diff
        model,
        content,
        reasoningContent,
+       reasoningSignature,
        toolCalls,
        stopReason,
        usage
```
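Per the description, `with429Retry` now tolerates several error shapes (`error.status`, `error.statusCode`, `error.response?.status`). A sketch of that status extraction with a simple retry loop — the helper name and backoff policy here are illustrative, not the repo's exact code:

```javascript
// Normalize a status code out of the several error shapes mentioned above.
// Number() also covers string codes; anything unreadable becomes 0.
function getErrorStatus(error) {
  return Number(error.status ?? error.statusCode ?? error.response?.status) || 0;
}

// Illustrative retry wrapper: retry only on 429, up to `retries` attempts.
async function with429Retry(fn, retries, label = '') {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (getErrorStatus(error) !== 429 || attempt >= retries) throw error;
      // linear backoff before retrying a rate-limited call
      await new Promise(resolve => setTimeout(resolve, 100 * (attempt + 1)));
    }
  }
}

getErrorStatus({ response: { status: 429 } }); // 429
```

Axios-style errors carry `response.status`, while plain HTTP helpers often set `status` or `statusCode` directly; coalescing across all three is what the compatibility change amounts to.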
src/utils/converters/claude.js — removed code (some lines are cut off in the rendered diff):

```diff
@@ -1,9 +1,21 @@
  // Claude format conversion utilities
  import config from '../../config/config.js';
- import {
- import {
@@ -31,16 +43,9 @@ function extractImagesFromClaudeContent(content) {
    return result;
  }

- function handleClaudeUserMessage(extracted, antigravityMessages) {
-   antigravityMessages.push({
-     role: 'user',
-     parts: [{ text: extracted.text }, ...extracted.images]
-   });
- }
-
  function handleClaudeAssistantMessage(message, antigravityMessages, enableThinking, actualModelName, sessionId) {
-   const lastMessage = antigravityMessages[antigravityMessages.length - 1];
    const content = message.content;

    let textContent = '';
    const toolCalls = [];
@@ -52,40 +57,22 @@ function handleClaudeAssistantMessage(message, antigravityMessages, enableThinki
      if (item.type === 'text') {
        textContent += item.text || '';
      } else if (item.type === 'tool_use') {
-       const
-       const
-
-         functionCall: {
-           id: item.id,
-           name: safeName,
-           args: { query: JSON.stringify(item.input || {}) }
-         }
-       };
-       if (sessionId && actualModelName && safeName !== originalName) {
-         setToolNameMapping(sessionId, actualModelName, safeName, originalName);
-       }
-       toolCalls.push(part);
      }
    }
  }

- const hasToolCalls = toolCalls.length > 0;
  const hasContent = textContent && textContent.trim() !== '';
-
- const parts = [];
- if (enableThinking) {
-   const cachedSig = getReasoningSignature(sessionId, actualModelName);
-   const thoughtSignature = cachedSig || getThoughtSignatureForModel(actualModelName);
-   parts.push({ text: ' ', thought: true });
-   parts.push({ text: ' ', thoughtSignature });
- }
- if (hasContent) parts.push({ text: textContent.trimEnd() });
- parts.push(...toolCalls);
- antigravityMessages.push({ role: 'model', parts });
  }

  function handleClaudeToolResult(message, antigravityMessages) {
@@ -96,21 +83,8 @@ function handleClaudeToolResult(message, antigravityMessages) {
    if (item.type !== 'tool_result') continue;

    const toolUseId = item.tool_use_id;
-
-   for (let i = antigravityMessages.length - 1; i >= 0; i--) {
-     if (antigravityMessages[i].role === 'model') {
-       const parts = antigravityMessages[i].parts;
-       for (const part of parts) {
-         if (part.functionCall && part.functionCall.id === toolUseId) {
-           functionName = part.functionCall.name;
-           break;
-         }
-       }
-       if (functionName) break;
-     }
-   }

-   const lastMessage = antigravityMessages[antigravityMessages.length - 1];
    let resultContent = '';
    if (typeof item.content === 'string') {
      resultContent = item.content;
@@ -118,19 +92,7 @@ function handleClaudeToolResult(message, antigravityMessages) {
      resultContent = item.content.filter(c => c.type === 'text').map(c => c.text).join('');
    }

-     functionResponse: {
-       id: toolUseId,
-       name: functionName,
-       response: { output: resultContent }
-     }
-   };
-
-   if (lastMessage?.role === 'user' && lastMessage.parts.some(p => p.functionResponse)) {
-     lastMessage.parts.push(functionResponse);
-   } else {
-     antigravityMessages.push({ role: 'user', parts: [functionResponse] });
-   }
  }
}

@@ -143,7 +105,7 @@ function claudeMessageToAntigravity(claudeMessages, enableThinking, actualModelN
      handleClaudeToolResult(message, antigravityMessages);
    } else {
      const extracted = extractImagesFromClaudeContent(content);
-
    }
  } else if (message.role === 'assistant') {
    handleClaudeAssistantMessage(message, antigravityMessages, enableThinking, actualModelName, sessionId);
@@ -152,65 +114,16 @@ function claudeMessageToAntigravity(claudeMessages, enableThinking, actualModelN
    return antigravityMessages;
  }

- function convertClaudeToolsToAntigravity(claudeTools, sessionId, actualModelName) {
-   if (!claudeTools || claudeTools.length === 0) return [];
-   return claudeTools.map((tool) => {
-     const rawParams = tool.input_schema || {};
-     const cleanedParams = cleanParameters(rawParams) || {};
-     if (cleanedParams.type === undefined) cleanedParams.type = 'object';
-     if (cleanedParams.type === 'object' && cleanedParams.properties === undefined) cleanedParams.properties = {};
-
-     const originalName = tool.name;
-     const safeName = sanitizeToolName(originalName);
-     if (sessionId && actualModelName && safeName !== originalName) {
-       setToolNameMapping(sessionId, actualModelName, safeName, originalName);
-     }
-
-     return {
-       functionDeclarations: [{
-         name: safeName,
-         description: tool.description || '',
-         parameters: cleanedParams
-       }]
-     };
-   });
- }
-
  export function generateClaudeRequestBody(claudeMessages, modelName, parameters, claudeTools, systemPrompt, token) {
    const enableThinking = isEnableThinking(modelName);
    const actualModelName = modelMapping(modelName);
-
-       mergedSystem
-     }
-     mergedSystem = baseSystem;
-   }
-
-   const requestBody = {
-     project: token.projectId,
-     requestId: generateRequestId(),
-     request: {
-       contents: claudeMessageToAntigravity(claudeMessages, enableThinking, actualModelName, token.sessionId),
-       tools: convertClaudeToolsToAntigravity(claudeTools, token.sessionId, actualModelName),
-       toolConfig: { functionCallingConfig: { mode: 'VALIDATED' } },
-       generationConfig: generateGenerationConfig(parameters, enableThinking, actualModelName),
-       sessionId: token.sessionId
-     },
-     model: actualModelName,
-     userAgent: 'antigravity'
-   };
-
-   if (mergedSystem) {
-     requestBody.request.systemInstruction = {
-       role: 'user',
-       parts: [{ text: mergedSystem }]
-     };
-   }
-
-   return requestBody;
  }
```
src/utils/converters/claude.js — updated:

```diff
  // Claude format conversion utilities
  import config from '../../config/config.js';
+ import { convertClaudeToolsToAntigravity } from '../toolConverter.js';
+ import {
+   getSignatureContext,
+   pushUserMessage,
+   findFunctionNameById,
+   pushFunctionResponse,
+   createThoughtPart,
+   createFunctionCallPart,
+   processToolName,
+   pushModelMessage,
+   buildRequestBody,
+   mergeSystemInstruction,
+   modelMapping,
+   isEnableThinking,
+   generateGenerationConfig
+ } from './common.js';

  function extractImagesFromClaudeContent(content) {
    const result = { text: '', images: [] };
    return result;
  }

  function handleClaudeAssistantMessage(message, antigravityMessages, enableThinking, actualModelName, sessionId) {
    const content = message.content;
+   const { reasoningSignature, toolSignature } = getSignatureContext(sessionId, actualModelName);

    let textContent = '';
    const toolCalls = [];

      if (item.type === 'text') {
        textContent += item.text || '';
      } else if (item.type === 'tool_use') {
+       const safeName = processToolName(item.name, sessionId, actualModelName);
+       const signature = enableThinking ? toolSignature : null;
+       toolCalls.push(createFunctionCallPart(item.id, safeName, JSON.stringify(item.input || {}), signature));
      }
    }
  }

  const hasContent = textContent && textContent.trim() !== '';
+ const parts = [];
+
+ if (enableThinking) {
+   parts.push(createThoughtPart(' ', reasoningSignature));
  }
+ if (hasContent) parts.push({ text: textContent.trimEnd() });
+
+ pushModelMessage({ parts, toolCalls, hasContent }, antigravityMessages);
}

function handleClaudeToolResult(message, antigravityMessages) {
    if (item.type !== 'tool_result') continue;

    const toolUseId = item.tool_use_id;
+   const functionName = findFunctionNameById(toolUseId, antigravityMessages);

    let resultContent = '';
    if (typeof item.content === 'string') {
      resultContent = item.content;
      resultContent = item.content.filter(c => c.type === 'text').map(c => c.text).join('');
    }

+   pushFunctionResponse(toolUseId, functionName, resultContent, antigravityMessages);
  }
}

      handleClaudeToolResult(message, antigravityMessages);
    } else {
      const extracted = extractImagesFromClaudeContent(content);
+     pushUserMessage(extracted, antigravityMessages);
    }
  } else if (message.role === 'assistant') {
    handleClaudeAssistantMessage(message, antigravityMessages, enableThinking, actualModelName, sessionId);

  return antigravityMessages;
}

export function generateClaudeRequestBody(claudeMessages, modelName, parameters, claudeTools, systemPrompt, token) {
  const enableThinking = isEnableThinking(modelName);
  const actualModelName = modelMapping(modelName);
+ const mergedSystem = mergeSystemInstruction(config.systemInstruction || '', systemPrompt);
+
+ return buildRequestBody({
+   contents: claudeMessageToAntigravity(claudeMessages, enableThinking, actualModelName, token.sessionId),
+   tools: convertClaudeToolsToAntigravity(claudeTools, token.sessionId, actualModelName),
+   generationConfig: generateGenerationConfig(parameters, enableThinking, actualModelName),
+   sessionId: token.sessionId,
+   systemInstruction: mergedSystem
+ }, token, actualModelName);
}
```
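The `tool_use` branch in `handleClaudeAssistantMessage` now compresses to a single call. In isolation, the mapping from a Claude `tool_use` item to an Antigravity `functionCall` part works like this (logic copied from the diff, minus the repo's imports):

```javascript
// Standalone copy of createFunctionCallPart: string arguments are wrapped as
// { query: ... }, object arguments pass through; a signature is attached
// only when one is provided.
function createFunctionCallPart(id, name, args, signature = null) {
  const part = {
    functionCall: { id, name, args: typeof args === 'string' ? { query: args } : args }
  };
  if (signature) part.thoughtSignature = signature;
  return part;
}

// A typical Claude assistant tool_use item (hypothetical sample values)
const item = { type: 'tool_use', id: 'toolu_01', name: 'get_weather', input: { city: 'Paris' } };
const part = createFunctionCallPart(item.id, item.name, JSON.stringify(item.input || {}));
console.log(part.functionCall.args); // { query: '{"city":"Paris"}' }
```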
New file (+211 lines):
src/utils/converters/common.js:

```javascript
// Shared converter utilities
import config from '../../config/config.js';
import { generateRequestId } from '../idGenerator.js';
import { getReasoningSignature, getToolSignature } from '../thoughtSignatureCache.js';
import { setToolNameMapping } from '../toolNameCache.js';
import { getThoughtSignatureForModel, getToolSignatureForModel, sanitizeToolName, modelMapping, isEnableThinking, generateGenerationConfig } from '../utils.js';

/**
 * Resolve the signature context for a session.
 * @param {string} sessionId - session ID
 * @param {string} actualModelName - actual model name
 * @returns {Object} the reasoning signature and tool signature
 */
export function getSignatureContext(sessionId, actualModelName) {
  const cachedReasoningSig = getReasoningSignature(sessionId, actualModelName);
  const cachedToolSig = getToolSignature(sessionId, actualModelName);

  return {
    reasoningSignature: cachedReasoningSig || getThoughtSignatureForModel(actualModelName),
    toolSignature: cachedToolSig || getToolSignatureForModel(actualModelName)
  };
}

/**
 * Append a user message to antigravityMessages.
 * @param {Object} extracted - extracted content { text, images }
 * @param {Array} antigravityMessages - target message array
 */
export function pushUserMessage(extracted, antigravityMessages) {
  antigravityMessages.push({
    role: 'user',
    parts: [{ text: extracted.text }, ...extracted.images]
  });
}

/**
 * Look up a function name by tool-call ID.
 * @param {string} toolCallId - tool call ID
 * @param {Array} antigravityMessages - message array
 * @returns {string} function name
 */
export function findFunctionNameById(toolCallId, antigravityMessages) {
  for (let i = antigravityMessages.length - 1; i >= 0; i--) {
    if (antigravityMessages[i].role === 'model') {
      const parts = antigravityMessages[i].parts;
      for (const part of parts) {
        if (part.functionCall && part.functionCall.id === toolCallId) {
          return part.functionCall.name;
        }
      }
    }
  }
  return '';
}

/**
 * Append a function response to antigravityMessages.
 * @param {string} toolCallId - tool call ID
 * @param {string} functionName - function name
 * @param {string} resultContent - response content
 * @param {Array} antigravityMessages - target message array
 */
export function pushFunctionResponse(toolCallId, functionName, resultContent, antigravityMessages) {
  const lastMessage = antigravityMessages[antigravityMessages.length - 1];
  const functionResponse = {
    functionResponse: {
      id: toolCallId,
      name: functionName,
      response: { output: resultContent }
    }
  };

  if (lastMessage?.role === 'user' && lastMessage.parts.some(p => p.functionResponse)) {
    lastMessage.parts.push(functionResponse);
  } else {
    antigravityMessages.push({ role: 'user', parts: [functionResponse] });
  }
}

/**
 * Create a thought part carrying a signature.
 * @param {string} text - thought text
 * @param {string} signature - signature
 * @returns {Object} thought part
 */
export function createThoughtPart(text, signature) {
  return { text: text || ' ', thought: true, thoughtSignature: signature };
}

/**
 * Create a function-call part carrying a signature.
 * @param {string} id - call ID
 * @param {string} name - function name (already sanitized)
 * @param {Object|string} args - arguments
 * @param {string} signature - signature (optional)
 * @returns {Object} function-call part
 */
export function createFunctionCallPart(id, name, args, signature = null) {
  const part = {
    functionCall: {
      id,
      name,
      args: typeof args === 'string' ? { query: args } : args
    }
  };
  if (signature) {
    part.thoughtSignature = signature;
  }
  return part;
}

/**
 * Map a tool name to its safe form, recording the mapping.
 * @param {string} originalName - original name
 * @param {string} sessionId - session ID
 * @param {string} actualModelName - actual model name
 * @returns {string} sanitized safe name
 */
export function processToolName(originalName, sessionId, actualModelName) {
  const safeName = sanitizeToolName(originalName);
  if (sessionId && actualModelName && safeName !== originalName) {
    setToolNameMapping(sessionId, actualModelName, safeName, originalName);
  }
  return safeName;
}

/**
 * Append a model message to antigravityMessages.
 * @param {Object} options - options
 * @param {Array} options.parts - message parts
 * @param {Array} options.toolCalls - tool-call parts
 * @param {boolean} options.hasContent - whether there is text content
 * @param {Array} antigravityMessages - target message array
 */
export function pushModelMessage({ parts, toolCalls, hasContent }, antigravityMessages) {
  const lastMessage = antigravityMessages[antigravityMessages.length - 1];
  const hasToolCalls = toolCalls && toolCalls.length > 0;

  if (lastMessage?.role === 'model' && hasToolCalls && !hasContent) {
    lastMessage.parts.push(...toolCalls);
  } else {
    const allParts = [...parts, ...(toolCalls || [])];
    antigravityMessages.push({ role: 'model', parts: allParts });
  }
}

/**
 * Build the base Antigravity request body.
 * @param {Object} options - options
 * @param {Array} options.contents - message contents
 * @param {Array} options.tools - tool list
 * @param {Object} options.generationConfig - generation config
 * @param {string} options.sessionId - session ID
 * @param {string} options.systemInstruction - system instruction
 * @param {Object} token - token object
 * @param {string} actualModelName - actual model name
 * @returns {Object} request body
 */
export function buildRequestBody({ contents, tools, generationConfig, sessionId, systemInstruction }, token, actualModelName) {
  const requestBody = {
    project: token.projectId,
    requestId: generateRequestId(),
    request: {
      contents,
      tools: tools || [],
      toolConfig: { functionCallingConfig: { mode: 'VALIDATED' } },
      generationConfig,
      sessionId
    },
    model: actualModelName,
    userAgent: 'antigravity'
  };

  if (systemInstruction) {
    requestBody.request.systemInstruction = {
      role: 'user',
      parts: [{ text: systemInstruction }]
    };
  }

  return requestBody;
}

/**
 * Merge system instructions.
 * @param {string} baseSystem - base system instruction
 * @param {string} contextSystem - context system instruction
 * @returns {string} merged system instruction
 */
export function mergeSystemInstruction(baseSystem, contextSystem) {
  if (!config.useContextSystemPrompt || !contextSystem) {
    return baseSystem || '';
  }

  const parts = [];
  if (baseSystem && baseSystem.trim()) parts.push(baseSystem.trim());
  if (contextSystem && contextSystem.trim()) parts.push(contextSystem.trim());
  return parts.join('\n\n');
}

// Re-export frequently used helpers
export { sanitizeToolName, modelMapping, isEnableThinking, generateGenerationConfig };

// Re-export the parameter normalizers
export {
  normalizeOpenAIParameters,
  normalizeClaudeParameters,
  normalizeGeminiParameters,
  normalizeParameters,
  toGenerationConfig
} from '../parameterNormalizer.js';
```
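One behavior worth noting: `pushFunctionResponse` merges consecutive tool results into a single user message rather than emitting one message per result. Run standalone (logic copied from the new module, minus its imports):

```javascript
// Standalone copy of pushFunctionResponse's grouping rule: if the last message
// is already a user message carrying functionResponse parts, append to it;
// otherwise start a new user message.
function pushFunctionResponse(toolCallId, functionName, resultContent, antigravityMessages) {
  const lastMessage = antigravityMessages[antigravityMessages.length - 1];
  const functionResponse = {
    functionResponse: { id: toolCallId, name: functionName, response: { output: resultContent } }
  };
  if (lastMessage?.role === 'user' && lastMessage.parts.some(p => p.functionResponse)) {
    lastMessage.parts.push(functionResponse);
  } else {
    antigravityMessages.push({ role: 'user', parts: [functionResponse] });
  }
}

const messages = [];
pushFunctionResponse('call_1', 'get_weather', 'sunny', messages);
pushFunctionResponse('call_2', 'get_time', '12:00', messages);
console.log(messages.length);          // 1 - both responses share one user message
console.log(messages[0].parts.length); // 2
```

This matters for parallel tool calls: the upstream API sees all responses to one model turn grouped in a single user turn.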
@@ -1,79 +1,152 @@
|
|
| 1 |
// Gemini 格式转换工具
|
| 2 |
import config from '../../config/config.js';
|
| 3 |
import { generateRequestId } from '../idGenerator.js';
|
| 4 |
-
import {
|
| 5 |
-
import {
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 6 |
|
| 7 |
export function generateGeminiRequestBody(geminiBody, modelName, token) {
|
| 8 |
const enableThinking = isEnableThinking(modelName);
|
| 9 |
const actualModelName = modelMapping(modelName);
|
| 10 |
-
|
| 11 |
const request = JSON.parse(JSON.stringify(geminiBody));
|
| 12 |
|
| 13 |
if (request.contents && Array.isArray(request.contents)) {
|
| 14 |
-
|
| 15 |
-
request.contents.forEach(content => {
|
| 16 |
-
if (content.role === 'model' && content.parts && Array.isArray(content.parts)) {
|
| 17 |
-
content.parts.forEach(part => {
|
| 18 |
-
if (part.functionCall) {
|
| 19 |
-
if (!part.functionCall.id) {
|
| 20 |
-
part.functionCall.id = `call_${Date.now()}_${Math.random().toString(36).substr(2, 9)}`;
|
| 21 |
-
}
|
| 22 |
-
functionCallIds.push(part.functionCall.id);
|
| 23 |
-
}
|
| 24 |
-
});
|
| 25 |
-
}
|
| 26 |
-
});
|
| 27 |
-
|
| 28 |
-
let responseIndex = 0;
|
| 29 |
-
request.contents.forEach(content => {
|
| 30 |
-
if (content.role === 'user' && content.parts && Array.isArray(content.parts)) {
|
| 31 |
-
content.parts.forEach(part => {
|
| 32 |
-
if (part.functionResponse) {
|
| 33 |
-
if (!part.functionResponse.id && responseIndex < functionCallIds.length) {
|
| 34 |
-
part.functionResponse.id = functionCallIds[responseIndex];
|
| 35 |
-
responseIndex++;
|
| 36 |
-
}
|
| 37 |
-
}
|
| 38 |
-
});
|
| 39 |
-
}
|
| 40 |
-
});
|
| 41 |
|
| 42 |
if (enableThinking) {
|
| 43 |
-
const
|
| 44 |
-
|
| 45 |
-
|
| 46 |
request.contents.forEach(content => {
|
| 47 |
if (content.role === 'model' && content.parts && Array.isArray(content.parts)) {
|
| 48 |
-
|
| 49 |
-
if (!hasThought) {
|
| 50 |
-
content.parts.unshift(
|
| 51 |
-
{ text: ' ', thought: true },
|
| 52 |
-
{ text: ' ', thoughtSignature }
|
| 53 |
-
);
|
| 54 |
-
}
|
| 55 |
}
|
| 56 |
});
|
| 57 |
}
|
| 58 |
}
|
| 59 |
|
| 60 |
-
|
| 61 |
-
|
| 62 |
-
|
| 63 |
-
|
| 64 |
-
|
| 65 |
-
const defaultThinkingBudget = config.defaults.thinking_budget ?? 1024;
|
| 66 |
-
if (!request.generationConfig.thinkingConfig) {
|
| 67 |
-
request.generationConfig.thinkingConfig = {
|
| 68 |
-
includeThoughts: true,
|
| 69 |
-
thinkingBudget: defaultThinkingBudget
|
| 70 |
-
};
|
| 71 |
-
}
|
| 72 |
-
}
|
| 73 |
-
|
| 74 |
-
request.generationConfig.candidateCount = 1;
|
| 75 |
request.sessionId = token.sessionId;
|
| 76 |
delete request.safetySettings;
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 77 |
|
| 78 |
const existingText = request.systemInstruction?.parts?.[0]?.text || '';
|
| 79 |
const mergedText = existingText ? `${config.systemInstruction}\n\n${existingText}` : config.systemInstruction ?? "";
|
|
|
|
| 1 |
// Gemini 格式转换工具
|
| 2 |
import config from '../../config/config.js';
|
| 3 |
import { generateRequestId } from '../idGenerator.js';
|
| 4 |
+
import { convertGeminiToolsToAntigravity } from '../toolConverter.js';
|
| 5 |
+
import { getSignatureContext, createThoughtPart, modelMapping, isEnableThinking } from './common.js';
|
| 6 |
+
import { normalizeGeminiParameters, toGenerationConfig } from '../parameterNormalizer.js';
|
| 7 |
+
|
| 8 |
+
/**
|
| 9 |
+
* 为 functionCall 生成唯一 ID
|
| 10 |
+
*/
|
| 11 |
+
function generateFunctionCallId() {
|
| 12 |
+
return `call_${Date.now()}_${Math.random().toString(36).substr(2, 9)}`;
|
| 13 |
+
}
|
| 14 |
+
|
| 15 |
+
/**
|
| 16 |
+
* 处理 functionCall 和 functionResponse 的 ID 匹配
|
| 17 |
+
*/
|
| 18 |
+
function processFunctionCallIds(contents) {
|
| 19 |
+
const functionCallIds = [];
|
| 20 |
+
|
| 21 |
+
// 收集所有 functionCall 的 ID
|
| 22 |
+
contents.forEach(content => {
|
| 23 |
+
if (content.role === 'model' && content.parts && Array.isArray(content.parts)) {
|
| 24 |
+
content.parts.forEach(part => {
|
| 25 |
+
if (part.functionCall) {
|
| 26 |
+
if (!part.functionCall.id) {
|
| 27 |
+
part.functionCall.id = generateFunctionCallId();
|
| 28 |
+
}
|
| 29 |
+
functionCallIds.push(part.functionCall.id);
|
| 30 |
+
}
|
| 31 |
+
});
|
| 32 |
+
}
|
| 33 |
+
});
|
| 34 |
+
|
| 35 |
+
// 为 functionResponse 分配对应的 ID
|
| 36 |
+
let responseIndex = 0;
|
| 37 |
+
contents.forEach(content => {
|
| 38 |
+
if (content.role === 'user' && content.parts && Array.isArray(content.parts)) {
|
| 39 |
+
content.parts.forEach(part => {
|
| 40 |
+
if (part.functionResponse) {
|
| 41 |
+
if (!part.functionResponse.id && responseIndex < functionCallIds.length) {
|
| 42 |
+
part.functionResponse.id = functionCallIds[responseIndex];
|
| 43 |
+
responseIndex++;
|
| 44 |
+
}
|
| 45 |
+
}
|
| 46 |
+
});
|
| 47 |
+
}
|
| 48 |
+
});
|
| 49 |
+
}
|
| 50 |
+
|
| 51 |
+
/**
|
| 52 |
+
* 处理 model 消息中的 thought 和签名
|
| 53 |
+
*/
|
| 54 |
+
function processModelThoughts(content, reasoningSignature, toolSignature) {
|
| 55 |
+
const parts = content.parts;
|
| 56 |
+
|
| 57 |
+
// 查找 thought 和独立 thoughtSignature 的位置
|
| 58 |
+
let thoughtIndex = -1;
|
| 59 |
+
let signatureIndex = -1;
|
| 60 |
+
let signatureValue = null;
|
| 61 |
+
|
| 62 |
+
for (let i = 0; i < parts.length; i++) {
|
| 63 |
+
const part = parts[i];
|
| 64 |
+
if (part.thought === true && !part.thoughtSignature) {
|
| 65 |
+
thoughtIndex = i;
|
| 66 |
+
}
|
| 67 |
+
if (part.thoughtSignature && !part.thought) {
|
| 68 |
+
signatureIndex = i;
|
| 69 |
+
signatureValue = part.thoughtSignature;
|
| 70 |
+
}
|
| 71 |
+
}
|
| 72 |
+
|
| 73 |
+
// 合并或添加 thought 和签名
|
| 74 |
+
if (thoughtIndex !== -1 && signatureIndex !== -1) {
|
| 75 |
+
parts[thoughtIndex].thoughtSignature = signatureValue;
|
| 76 |
+
parts.splice(signatureIndex, 1);
|
| 77 |
+
} else if (thoughtIndex !== -1 && signatureIndex === -1) {
|
| 78 |
+
parts[thoughtIndex].thoughtSignature = reasoningSignature;
|
| 79 |
+
} else if (thoughtIndex === -1) {
|
| 80 |
+
parts.unshift(createThoughtPart(' ', reasoningSignature));
|
| 81 |
+
}
|
| 82 |
+
|
| 83 |
+
// 收集独立的签名 parts(用于 functionCall)
|
| 84 |
+
const standaloneSignatures = [];
|
| 85 |
+
for (let i = parts.length - 1; i >= 0; i--) {
|
| 86 |
+
const part = parts[i];
|
| 87 |
+
**src/utils/converters/gemini.js**

```diff
+    if (part.thoughtSignature && !part.thought && !part.functionCall && !part.text) {
+      standaloneSignatures.unshift({ index: i, signature: part.thoughtSignature });
+    }
+  }
+
+  // Assign signatures to functionCall parts
+  let sigIndex = 0;
+  for (let i = 0; i < parts.length; i++) {
+    const part = parts[i];
+    if (part.functionCall && !part.thoughtSignature) {
+      if (sigIndex < standaloneSignatures.length) {
+        part.thoughtSignature = standaloneSignatures[sigIndex].signature;
+        sigIndex++;
+      } else {
+        part.thoughtSignature = toolSignature;
+      }
+    }
+  }
+
+  // Remove the standalone signature parts that were consumed
+  for (let i = standaloneSignatures.length - 1; i >= 0; i--) {
+    if (i < sigIndex) {
+      parts.splice(standaloneSignatures[i].index, 1);
+    }
+  }
+}
 
 export function generateGeminiRequestBody(geminiBody, modelName, token) {
   const enableThinking = isEnableThinking(modelName);
   const actualModelName = modelMapping(modelName);
   const request = JSON.parse(JSON.stringify(geminiBody));
 
   if (request.contents && Array.isArray(request.contents)) {
+    processFunctionCallIds(request.contents);
 ...
 
     if (enableThinking) {
+      const { reasoningSignature, toolSignature } = getSignatureContext(token.sessionId, actualModelName);
+
       request.contents.forEach(content => {
         if (content.role === 'model' && content.parts && Array.isArray(content.parts)) {
+          processModelThoughts(content, reasoningSignature, toolSignature);
 ...
         }
       });
     }
   }
 
+  // Normalize Gemini-format parameters through the unified parameter module
+  const normalizedParams = normalizeGeminiParameters(request.generationConfig || {});
+
+  // Convert to the upstream generationConfig format
+  request.generationConfig = toGenerationConfig(normalizedParams, enableThinking, actualModelName);
 ...
   request.sessionId = token.sessionId;
   delete request.safetySettings;
+
+  // Convert tool definitions
+  if (request.tools && Array.isArray(request.tools)) {
+    request.tools = convertGeminiToolsToAntigravity(request.tools, token.sessionId, actualModelName);
+  }
+
+  // Add tool config
+  if (request.tools && request.tools.length > 0 && !request.toolConfig) {
+    request.toolConfig = { functionCallingConfig: { mode: 'VALIDATED' } };
+  }
 
   const existingText = request.systemInstruction?.parts?.[0]?.text || '';
   const mergedText = existingText ? `${config.systemInstruction}\n\n${existingText}` : config.systemInstruction ?? "";
```
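The signature back-fill pass above can be exercised in isolation. The sketch below is a hypothetical stand-alone version of that pass, not the project's `processModelThoughts` (which also handles thought parts and caching); it removes consumed parts highest-index-first so earlier splices cannot shift later ones:

```javascript
// Assign thought signatures to unsigned functionCall parts: consume any
// standalone signature-only parts first, then fall back to toolSignature.
function backfillSignatures(parts, toolSignature) {
  // Collect signature-only parts (no thought/functionCall/text).
  const standalone = [];
  for (let i = 0; i < parts.length; i++) {
    const p = parts[i];
    if (p.thoughtSignature && !p.thought && !p.functionCall && !p.text) {
      standalone.unshift({ index: i, signature: p.thoughtSignature });
    }
  }

  // Hand the collected signatures to unsigned functionCall parts in order.
  let sigIndex = 0;
  for (const p of parts) {
    if (p.functionCall && !p.thoughtSignature) {
      p.thoughtSignature = sigIndex < standalone.length
        ? standalone[sigIndex++].signature
        : toolSignature;
    }
  }

  // Remove consumed standalone parts, highest index first, so an earlier
  // splice never shifts the position of a part still to be removed.
  standalone
    .slice(0, sigIndex)
    .sort((a, b) => b.index - a.index)
    .forEach(({ index }) => parts.splice(index, 1));
  return parts;
}

const parts = [
  { thoughtSignature: 'sigA' },
  { functionCall: { name: 'f1' } },
  { functionCall: { name: 'f2' } }
];
backfillSignatures(parts, 'fallback');
// parts is now [f1 carrying 'sigA', f2 carrying 'fallback']
```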
**src/utils/converters/openai.js**

```diff
@@ -1,9 +1,21 @@
 // OpenAI format conversion utilities
 import config from '../../config/config.js';
-import {
-import {
-import {
-
+import { extractSystemInstruction } from '../utils.js';
+import { convertOpenAIToolsToAntigravity } from '../toolConverter.js';
+import {
+  getSignatureContext,
+  pushUserMessage,
+  findFunctionNameById,
+  pushFunctionResponse,
+  createThoughtPart,
+  createFunctionCallPart,
+  processToolName,
+  pushModelMessage,
+  buildRequestBody,
+  modelMapping,
+  isEnableThinking,
+  generateGenerationConfig
+} from './common.js';
 
 function extractImagesFromContent(content) {
   const result = { text: '', images: [] };
@@ -32,86 +44,33 @@ function extractImagesFromContent(content) {
   return result;
 }
 
-function handleUserMessage(extracted, antigravityMessages) {
-  antigravityMessages.push({
-    role: 'user',
-    parts: [{ text: extracted.text }, ...extracted.images]
-  });
-}
-
 function handleAssistantMessage(message, antigravityMessages, enableThinking, actualModelName, sessionId) {
-  const lastMessage = antigravityMessages[antigravityMessages.length - 1];
   const hasToolCalls = message.tool_calls && message.tool_calls.length > 0;
   const hasContent = message.content && message.content.trim() !== '';
+  const { reasoningSignature, toolSignature } = getSignatureContext(sessionId, actualModelName);
 
-  const
+  const toolCalls = hasToolCalls
     ? message.tool_calls.map(toolCall => {
-      const
-      const
-
-        functionCall: {
-          id: toolCall.id,
-          name: safeName,
-          args: { query: toolCall.function.arguments }
-        }
-      };
-      if (sessionId && actualModelName && safeName !== originalName) {
-        setToolNameMapping(sessionId, actualModelName, safeName, originalName);
-      }
-      if (enableThinking) {
-        const cachedToolSig = getToolSignature(sessionId, actualModelName);
-        part.thoughtSignature = toolCall.thoughtSignature || cachedToolSig || getToolSignatureForModel(actualModelName);
-      }
-      return part;
+        const safeName = processToolName(toolCall.function.name, sessionId, actualModelName);
+        const signature = enableThinking ? (toolCall.thoughtSignature || toolSignature) : null;
+        return createFunctionCallPart(toolCall.id, safeName, toolCall.function.arguments, signature);
       })
     : [];
 
-
-
-
-
-
-  const cachedSig = getReasoningSignature(sessionId, actualModelName);
-  const thoughtSignature = message.thoughtSignature || cachedSig || getThoughtSignatureForModel(actualModelName);
-  const reasoningText = (typeof message.reasoning_content === 'string' && message.reasoning_content.length > 0) ? message.reasoning_content : ' ';
-  parts.push({ text: reasoningText, thought: true });
-  parts.push({ text: ' ', thoughtSignature });
-  }
-  if (hasContent) parts.push({ text: message.content.trimEnd() });
-  parts.push(...antigravityTools);
-  antigravityMessages.push({ role: 'model', parts });
+  const parts = [];
+  if (enableThinking) {
+    const reasoningText = (typeof message.reasoning_content === 'string' && message.reasoning_content.length > 0)
+      ? message.reasoning_content : ' ';
+    parts.push(createThoughtPart(reasoningText, message.thoughtSignature || reasoningSignature));
+  }
+  if (hasContent) parts.push({ text: message.content.trimEnd() });
+
+  pushModelMessage({ parts, toolCalls, hasContent }, antigravityMessages);
 }
 
 function handleToolCall(message, antigravityMessages) {
-
-
-    if (antigravityMessages[i].role === 'model') {
-      const parts = antigravityMessages[i].parts;
-      for (const part of parts) {
-        if (part.functionCall && part.functionCall.id === message.tool_call_id) {
-          functionName = part.functionCall.name;
-          break;
-        }
-      }
-      if (functionName) break;
-    }
-  }
-
-  const lastMessage = antigravityMessages[antigravityMessages.length - 1];
-  const functionResponse = {
-    functionResponse: {
-      id: message.tool_call_id,
-      name: functionName,
-      response: { output: message.content }
-    }
-  };
-
-  if (lastMessage?.role === 'user' && lastMessage.parts.some(p => p.functionResponse)) {
-    lastMessage.parts.push(functionResponse);
-  } else {
-    antigravityMessages.push({ role: 'user', parts: [functionResponse] });
-  }
+  const functionName = findFunctionNameById(message.tool_call_id, antigravityMessages);
+  pushFunctionResponse(message.tool_call_id, functionName, message.content, antigravityMessages);
 }
@@ -119,7 +78,7 @@ function openaiMessageToAntigravity(openaiMessages, enableThinking, actualModelN
   for (const message of openaiMessages) {
     if (message.role === 'user' || message.role === 'system') {
       const extracted = extractImagesFromContent(message.content);
-
+      pushUserMessage(extracted, antigravityMessages);
     } else if (message.role === 'assistant') {
       handleAssistantMessage(message, antigravityMessages, enableThinking, actualModelName, sessionId);
     } else if (message.role === 'tool') {
@@ -129,34 +88,11 @@ function openaiMessageToAntigravity(openaiMessages, enableThinking, actualModelN
   return antigravityMessages;
 }
 
-function convertOpenAIToolsToAntigravity(openaiTools, sessionId, actualModelName) {
-  if (!openaiTools || openaiTools.length === 0) return [];
-  return openaiTools.map((tool) => {
-    const rawParams = tool.function?.parameters || {};
-    const cleanedParams = cleanParameters(rawParams) || {};
-    if (cleanedParams.type === undefined) cleanedParams.type = 'object';
-    if (cleanedParams.type === 'object' && cleanedParams.properties === undefined) cleanedParams.properties = {};
-
-    const originalName = tool.function?.name;
-    const safeName = sanitizeToolName(originalName);
-    if (sessionId && actualModelName && safeName !== originalName) {
-      setToolNameMapping(sessionId, actualModelName, safeName, originalName);
-    }
-
-    return {
-      functionDeclarations: [{
-        name: safeName,
-        description: tool.function.description,
-        parameters: cleanedParams
-      }]
-    };
-  });
-}
-
 export function generateRequestBody(openaiMessages, modelName, parameters, openaiTools, token) {
   const enableThinking = isEnableThinking(modelName);
   const actualModelName = modelMapping(modelName);
   const mergedSystemInstruction = extractSystemInstruction(openaiMessages);
 
   let filteredMessages = openaiMessages;
   let startIndex = 0;
   if (config.useContextSystemPrompt) {
@@ -170,26 +106,11 @@ export function generateRequestBody(openaiMessages, modelName, parameters, opena
     }
   }
 
-
-
-
-
-
-
-
-      generationConfig: generateGenerationConfig(parameters, enableThinking, actualModelName),
-      sessionId: token.sessionId
-    },
-    model: actualModelName,
-    userAgent: 'antigravity'
-  };
-
-  if (mergedSystemInstruction) {
-    requestBody.request.systemInstruction = {
-      role: 'user',
-      parts: [{ text: mergedSystemInstruction }]
-    };
-  }
-
-  return requestBody;
+  return buildRequestBody({
+    contents: openaiMessageToAntigravity(filteredMessages, enableThinking, actualModelName, token.sessionId),
+    tools: convertOpenAIToolsToAntigravity(openaiTools, token.sessionId, actualModelName),
+    generationConfig: generateGenerationConfig(parameters, enableThinking, actualModelName),
+    sessionId: token.sessionId,
+    systemInstruction: mergedSystemInstruction
+  }, token, actualModelName);
 }
```
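One behavior worth calling out in the tool-result path: consecutive `tool` messages are merged into a single `user` message instead of one message per result. The stand-alone sketch below mirrors the grouping logic visible in the old `handleToolCall` (and now centralized as `pushFunctionResponse` in common.js); it is an illustration, not the project's actual common.js export:

```javascript
// Append a functionResponse part; consecutive tool results extend the same
// trailing user message rather than creating a new message each time.
function pushFunctionResponse(toolCallId, functionName, output, messages) {
  const functionResponse = {
    functionResponse: {
      id: toolCallId,
      name: functionName,
      response: { output }
    }
  };
  const last = messages[messages.length - 1];
  if (last?.role === 'user' && last.parts.some(p => p.functionResponse)) {
    last.parts.push(functionResponse); // extend the existing tool-result turn
  } else {
    messages.push({ role: 'user', parts: [functionResponse] });
  }
}

const messages = [];
pushFunctionResponse('id1', 'f1', 'out1', messages);
pushFunctionResponse('id2', 'f2', 'out2', messages);
// messages holds one user message whose parts carry both responses
```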
**src/utils/memoryManager.js**

```diff
@@ -1,26 +1,41 @@
 /**
  * Smart memory manager
  * Uses a tiered strategy to adjust caches and object pools based on memory pressure
- *
+ * Thresholds are computed dynamically from the user-configured memoryThreshold (MB)
  * @module utils/memoryManager
  */
 
 import logger from './logger.js';
-import {
+import { GC_COOLDOWN } from '../constants/index.js';
 
 /**
  * Memory pressure levels
  * @enum {string}
 */
 const MemoryPressure = {
-  LOW: 'low', // <
-  MEDIUM: 'medium', //
-  HIGH: 'high', //
-  CRITICAL: 'critical' // >
+  LOW: 'low', // < 30% of threshold - normal operation
+  MEDIUM: 'medium', // 30%-60% of threshold - light cleanup
+  HIGH: 'high', // 60%-100% of threshold - aggressive cleanup
+  CRITICAL: 'critical' // > 100% of threshold - emergency cleanup
 };
 
-
-
+/**
+ * Compute per-level thresholds from the user-configured memory threshold
+ * @param {number} thresholdMB - configured threshold (MB), i.e. the HIGH level
+ * @returns {Object} per-level thresholds (bytes)
+ */
+function calculateThresholds(thresholdMB) {
+  const highBytes = thresholdMB * 1024 * 1024;
+  return {
+    LOW: Math.floor(highBytes * 0.3), // 30% = low-pressure threshold
+    MEDIUM: Math.floor(highBytes * 0.6), // 60% = medium-pressure threshold
+    HIGH: highBytes, // 100% = high-pressure threshold (configured value)
+    TARGET: Math.floor(highBytes * 0.5) // 50% = target memory
+  };
+}
+
+// Default thresholds (100MB); overwritten from config during initialization
+let THRESHOLDS = calculateThresholds(100);
 
 // Maximum object pool sizes (adjusted by pressure level)
 const POOL_SIZES = {
@@ -45,6 +60,8 @@ class MemoryManager {
     this.gcCooldown = GC_COOLDOWN;
     this.checkInterval = null;
     this.isShuttingDown = false;
+    /** @type {number} user-configured memory threshold (MB) */
+    this.configuredThresholdMB = 100;
 
     // Statistics
     this.stats = {
@@ -54,6 +71,31 @@ class MemoryManager {
     };
   }
 
+  /**
+   * Set the memory threshold (loaded from config)
+   * @param {number} thresholdMB - memory threshold (MB)
+   */
+  setThreshold(thresholdMB) {
+    if (thresholdMB && thresholdMB > 0) {
+      this.configuredThresholdMB = thresholdMB;
+      THRESHOLDS = calculateThresholds(thresholdMB);
+      logger.info(`Memory threshold set: ${thresholdMB}MB (LOW: ${Math.floor(THRESHOLDS.LOW/1024/1024)}MB, MEDIUM: ${Math.floor(THRESHOLDS.MEDIUM/1024/1024)}MB, HIGH: ${Math.floor(THRESHOLDS.HIGH/1024/1024)}MB)`);
+    }
+  }
+
+  /**
+   * Get the current threshold configuration
+   */
+  getThresholds() {
+    return {
+      configuredMB: this.configuredThresholdMB,
+      lowMB: Math.floor(THRESHOLDS.LOW / 1024 / 1024),
+      mediumMB: Math.floor(THRESHOLDS.MEDIUM / 1024 / 1024),
+      highMB: Math.floor(THRESHOLDS.HIGH / 1024 / 1024),
+      targetMB: Math.floor(THRESHOLDS.TARGET / 1024 / 1024)
+    };
+  }
+
   /**
    * Start memory monitoring
    * @param {number} interval - check interval (ms)
@@ -264,7 +306,8 @@ class MemoryManager {
       currentPressure: this.currentPressure,
       currentHeapMB: memory.heapUsedMB,
       peakMemoryMB: Math.round(this.stats.peakMemory / 1024 / 1024 * 10) / 10,
-      poolSizes: this.getPoolSizes()
+      poolSizes: this.getPoolSizes(),
+      thresholds: this.getThresholds()
     };
   }
 }
```
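The tier boundaries are pure arithmetic on the configured limit, so the head's claim (LOW=30%, MEDIUM=60%, HIGH=100%) is easy to check against the new 100 MB default. Same math as `calculateThresholds` in memoryManager.js:

```javascript
// Compute tiered memory thresholds (bytes) from a configured limit in MB.
function calculateThresholds(thresholdMB) {
  const highBytes = thresholdMB * 1024 * 1024;
  return {
    LOW: Math.floor(highBytes * 0.3),
    MEDIUM: Math.floor(highBytes * 0.6),
    HIGH: highBytes,
    TARGET: Math.floor(highBytes * 0.5)
  };
}

// With the new 100 MB default: LOW = 30 MB, MEDIUM = 60 MB, HIGH = 100 MB.
const t = calculateThresholds(100);
console.log(t.LOW / 1024 / 1024, t.MEDIUM / 1024 / 1024, t.HIGH / 1024 / 1024); // 30 60 100
```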
**src/utils/parameterNormalizer.js** (new file)

```javascript
// Unified parameter handling module
// Normalizes OpenAI, Claude and Gemini parameter formats into one internal format

import config from '../config/config.js';
import { REASONING_EFFORT_MAP } from '../constants/index.js';

/**
 * Internal unified parameter format
 * @typedef {Object} NormalizedParameters
 * @property {number} max_tokens - maximum output tokens
 * @property {number} temperature - temperature
 * @property {number} top_p - top-P sampling
 * @property {number} top_k - top-K sampling
 * @property {number|undefined} thinking_budget - thinking budget (undefined means use the default)
 */

/**
 * Extract parameters from the OpenAI format
 * OpenAI-format parameters:
 * - max_tokens: number
 * - temperature: number
 * - top_p: number
 * - top_k: number (non-standard, but supported)
 * - thinking_budget: number (extension)
 * - reasoning_effort: 'low' | 'medium' | 'high' (extension)
 *
 * @param {Object} params - OpenAI-format parameter object
 * @returns {NormalizedParameters}
 */
export function normalizeOpenAIParameters(params = {}) {
  const normalized = {
    max_tokens: params.max_tokens ?? config.defaults.max_tokens,
    temperature: params.temperature ?? config.defaults.temperature,
    top_p: params.top_p ?? config.defaults.top_p,
    top_k: params.top_k ?? config.defaults.top_k,
  };

  // Handle the thinking budget
  if (params.thinking_budget !== undefined) {
    normalized.thinking_budget = params.thinking_budget;
  } else if (params.reasoning_effort !== undefined) {
    normalized.thinking_budget = REASONING_EFFORT_MAP[params.reasoning_effort];
  }

  return normalized;
}

/**
 * Extract parameters from the Claude format
 * Claude-format parameters:
 * - max_tokens: number
 * - temperature: number
 * - top_p: number
 * - top_k: number
 * - thinking: { type: 'enabled' | 'disabled', budget_tokens?: number }
 *
 * @param {Object} params - Claude-format parameter object
 * @returns {NormalizedParameters}
 */
export function normalizeClaudeParameters(params = {}) {
  const { max_tokens, temperature, top_p, top_k, thinking, ...rest } = params;

  const normalized = {
    max_tokens: max_tokens ?? config.defaults.max_tokens,
    temperature: temperature ?? config.defaults.temperature,
    top_p: top_p ?? config.defaults.top_p,
    top_k: top_k ?? config.defaults.top_k,
  };

  // Handle Claude's thinking parameter
  // Shape: { "type": "enabled", "budget_tokens": 10000 } or { "type": "disabled" }
  if (thinking && typeof thinking === 'object') {
    if (thinking.type === 'enabled' && thinking.budget_tokens !== undefined) {
      normalized.thinking_budget = thinking.budget_tokens;
    } else if (thinking.type === 'disabled') {
      // Thinking explicitly disabled
      normalized.thinking_budget = 0;
    }
  }

  // Preserve any remaining parameters
  Object.assign(normalized, rest);

  return normalized;
}

/**
 * Extract parameters from the Gemini format
 * Gemini-format parameters (inside generationConfig):
 * - temperature: number
 * - topP: number
 * - topK: number
 * - maxOutputTokens: number
 * - thinkingConfig: { includeThoughts: boolean, thinkingBudget?: number }
 *
 * @param {Object} generationConfig - Gemini-format generationConfig object
 * @returns {NormalizedParameters}
 */
export function normalizeGeminiParameters(generationConfig = {}) {
  const normalized = {
    max_tokens: generationConfig.maxOutputTokens ?? config.defaults.max_tokens,
    temperature: generationConfig.temperature ?? config.defaults.temperature,
    top_p: generationConfig.topP ?? config.defaults.top_p,
    top_k: generationConfig.topK ?? config.defaults.top_k,
  };

  // Handle Gemini's thinkingConfig parameter
  if (generationConfig.thinkingConfig && typeof generationConfig.thinkingConfig === 'object') {
    if (generationConfig.thinkingConfig.includeThoughts === false) {
      // Thinking explicitly disabled
      normalized.thinking_budget = 0;
    } else if (generationConfig.thinkingConfig.thinkingBudget !== undefined) {
      normalized.thinking_budget = generationConfig.thinkingConfig.thinkingBudget;
    }
  }

  return normalized;
}

/**
 * Normalize parameters according to the declared API format
 * @param {Object} params - raw parameter object
 * @param {'openai' | 'claude' | 'gemini'} format - API format
 * @returns {NormalizedParameters}
 */
export function normalizeParameters(params, format) {
  switch (format) {
    case 'openai':
      return normalizeOpenAIParameters(params);
    case 'claude':
      return normalizeClaudeParameters(params);
    case 'gemini':
      return normalizeGeminiParameters(params);
    default:
      return normalizeOpenAIParameters(params);
  }
}

/**
 * Convert normalized parameters into the upstream Gemini generationConfig format
 * @param {NormalizedParameters} normalized - normalized parameters
 * @param {boolean} enableThinking - whether thinking is enabled
 * @param {string} actualModelName - actual model name
 * @returns {Object} Gemini generationConfig
 */
export function toGenerationConfig(normalized, enableThinking, actualModelName) {
  const defaultThinkingBudget = config.defaults.thinking_budget ?? 1024;
  let thinkingBudget = 0;
  let actualEnableThinking = enableThinking;

  if (enableThinking) {
    if (normalized.thinking_budget !== undefined) {
      thinkingBudget = normalized.thinking_budget;
      // An explicit thinking_budget of 0 disables thinking
      if (thinkingBudget === 0) {
        actualEnableThinking = false;
      }
    } else {
      thinkingBudget = defaultThinkingBudget;
    }
  }

  const generationConfig = {
    topP: normalized.top_p,
    topK: normalized.top_k,
    temperature: normalized.temperature,
    candidateCount: 1,
    maxOutputTokens: normalized.max_tokens,
    thinkingConfig: {
      includeThoughts: actualEnableThinking,
      thinkingBudget: thinkingBudget
    }
  };

  // Claude models do not support topP while thinking is enabled
  if (actualEnableThinking && actualModelName && actualModelName.includes('claude')) {
    delete generationConfig.topP;
  }

  return generationConfig;
}

export default {
  normalizeOpenAIParameters,
  normalizeClaudeParameters,
  normalizeGeminiParameters,
  normalizeParameters,
  toGenerationConfig
};
```
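The point of the normalizer is that equivalent requests in different API dialects converge on one internal shape. The demo below is a self-contained simplification: the `DEFAULTS` values are placeholders standing in for `config.defaults`, and the two functions are trimmed copies of the module's logic, not imports from it:

```javascript
// Placeholder defaults; the real module reads these from config.defaults.
const DEFAULTS = { max_tokens: 8192, temperature: 1, top_p: 0.95, top_k: 64 };

// Claude dialect: thinking: { type, budget_tokens }
function normalizeClaude(params = {}) {
  const out = {
    max_tokens: params.max_tokens ?? DEFAULTS.max_tokens,
    temperature: params.temperature ?? DEFAULTS.temperature,
    top_p: params.top_p ?? DEFAULTS.top_p,
    top_k: params.top_k ?? DEFAULTS.top_k
  };
  if (params.thinking?.type === 'enabled' && params.thinking.budget_tokens !== undefined) {
    out.thinking_budget = params.thinking.budget_tokens;
  } else if (params.thinking?.type === 'disabled') {
    out.thinking_budget = 0;
  }
  return out;
}

// Gemini dialect: generationConfig with thinkingConfig.thinkingBudget
function normalizeGemini(genCfg = {}) {
  const out = {
    max_tokens: genCfg.maxOutputTokens ?? DEFAULTS.max_tokens,
    temperature: genCfg.temperature ?? DEFAULTS.temperature,
    top_p: genCfg.topP ?? DEFAULTS.top_p,
    top_k: genCfg.topK ?? DEFAULTS.top_k
  };
  if (genCfg.thinkingConfig?.includeThoughts === false) {
    out.thinking_budget = 0;
  } else if (genCfg.thinkingConfig?.thinkingBudget !== undefined) {
    out.thinking_budget = genCfg.thinkingConfig.thinkingBudget;
  }
  return out;
}

// Two requests that mean the same thing normalize identically:
const fromClaude = normalizeClaude({ max_tokens: 4096, thinking: { type: 'enabled', budget_tokens: 10000 } });
const fromGemini = normalizeGemini({ maxOutputTokens: 4096, thinkingConfig: { includeThoughts: true, thinkingBudget: 10000 } });
```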
**src/utils/toolConverter.js** (new file)

```javascript
// Shared tool-conversion module

import { sanitizeToolName, cleanParameters } from './utils.js';
import { setToolNameMapping } from './toolNameCache.js';

/**
 * Convert a single tool definition into an Antigravity-format functionDeclaration
 * @param {string} name - tool name
 * @param {string} description - tool description
 * @param {Object} parameters - tool parameter schema
 * @param {string} sessionId - session ID
 * @param {string} actualModelName - actual model name
 * @returns {Object} functionDeclaration object
 */
function convertSingleTool(name, description, parameters, sessionId, actualModelName) {
  const originalName = name;
  const safeName = sanitizeToolName(originalName);

  if (sessionId && actualModelName && safeName !== originalName) {
    setToolNameMapping(sessionId, actualModelName, safeName, originalName);
  }

  const rawParams = parameters || {};
  const cleanedParams = cleanParameters(rawParams) || {};
  if (cleanedParams.type === undefined) cleanedParams.type = 'object';
  if (cleanedParams.type === 'object' && cleanedParams.properties === undefined) cleanedParams.properties = {};

  return {
    name: safeName,
    description: description || '',
    parameters: cleanedParams
  };
}

/**
 * Convert an OpenAI-format tool list to the Antigravity format
 * OpenAI shape: [{ type: 'function', function: { name, description, parameters } }]
 * @param {Array} openaiTools - OpenAI-format tool list
 * @param {string} sessionId - session ID
 * @param {string} actualModelName - actual model name
 * @returns {Array} Antigravity-format tool list
 */
export function convertOpenAIToolsToAntigravity(openaiTools, sessionId, actualModelName) {
  if (!openaiTools || openaiTools.length === 0) return [];

  return openaiTools.map((tool) => {
    const func = tool.function || {};
    const declaration = convertSingleTool(
      func.name,
      func.description,
      func.parameters,
      sessionId,
      actualModelName
    );

    return {
      functionDeclarations: [declaration]
    };
  });
}

/**
 * Convert a Claude-format tool list to the Antigravity format
 * Claude shape: [{ name, description, input_schema }]
 * @param {Array} claudeTools - Claude-format tool list
 * @param {string} sessionId - session ID
 * @param {string} actualModelName - actual model name
 * @returns {Array} Antigravity-format tool list
 */
export function convertClaudeToolsToAntigravity(claudeTools, sessionId, actualModelName) {
  if (!claudeTools || claudeTools.length === 0) return [];

  return claudeTools.map((tool) => {
    const declaration = convertSingleTool(
      tool.name,
      tool.description,
      tool.input_schema,
      sessionId,
      actualModelName
    );

    return {
      functionDeclarations: [declaration]
    };
  });
}

/**
 * Convert a Gemini-format tool list to the Antigravity format
 * Gemini tools may arrive as either:
 * 1. [{ functionDeclarations: [{ name, description, parameters }] }]
 * 2. [{ name, description, parameters }]
 * @param {Array} geminiTools - Gemini-format tool list
 * @param {string} sessionId - session ID
 * @param {string} actualModelName - actual model name
 * @returns {Array} Antigravity-format tool list
 */
export function convertGeminiToolsToAntigravity(geminiTools, sessionId, actualModelName) {
  if (!geminiTools || geminiTools.length === 0) return [];

  return geminiTools.map((tool) => {
    // Shape 1: already wrapped in functionDeclarations
    if (tool.functionDeclarations) {
      return {
        functionDeclarations: tool.functionDeclarations.map(fd =>
          convertSingleTool(fd.name, fd.description, fd.parameters, sessionId, actualModelName)
        )
      };
    }

    // Shape 2: bare tool definition
    if (tool.name) {
      const declaration = convertSingleTool(
        tool.name,
        tool.description,
        tool.parameters || tool.input_schema,
        sessionId,
        actualModelName
      );

      return {
        functionDeclarations: [declaration]
      };
    }

    // Unknown shape; pass through unchanged
    return tool;
  });
}
```
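At shape level, every converter above funnels into the same `functionDeclarations` wrapper. The sketch below illustrates that funnel for an OpenAI-style tool; `sanitize` here is a trivial stand-in for the project's `sanitizeToolName`/`cleanParameters` pair, and the regex is an assumption, not the project's actual rule:

```javascript
// Placeholder sanitizer: replace characters outside a conservative safe set.
const sanitize = (name) => name.replace(/[^a-zA-Z0-9_-]/g, '_');

// Normalize one tool into a functionDeclaration, filling schema defaults
// the same way convertSingleTool does (type 'object', empty properties).
function toFunctionDeclaration(name, description, parameters) {
  const params = { ...(parameters || {}) };
  if (params.type === undefined) params.type = 'object';
  if (params.type === 'object' && params.properties === undefined) params.properties = {};
  return { name: sanitize(name), description: description || '', parameters: params };
}

// OpenAI shape: { type: 'function', function: { name, description, parameters } }
const openaiTool = { type: 'function', function: { name: 'get.weather', description: 'Weather lookup' } };
const converted = {
  functionDeclarations: [toFunctionDeclaration(
    openaiTool.function.name,
    openaiTool.function.description,
    openaiTool.function.parameters
  )]
};
// converted.functionDeclarations[0].name is 'get_weather'
```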
**src/utils/utils.js**

```diff
@@ -2,6 +2,7 @@
 import config from '../config/config.js';
 import os from 'os';
 import { REASONING_EFFORT_MAP, DEFAULT_STOP_SEQUENCES } from '../constants/index.js';
+import { toGenerationConfig } from './parameterNormalizer.js';
 
 // ==================== Signature constants ====================
 const CLAUDE_THOUGHT_SIGNATURE = 'RXFRRENrZ0lDaEFDR0FJcVFKV1Bvcy9GV20wSmtMV2FmWkFEbGF1ZTZzQTdRcFlTc1NvbklmemtSNFo4c1dqeitIRHBOYW9hS2NYTE1TeTF3bjh2T1RHdE1KVjVuYUNQclZ5cm9DMFNETHk4M0hOSWsrTG1aRUhNZ3hvTTl0ZEpXUDl6UUMzOExxc2ZJakI0UkkxWE1mdWJ1VDQrZnY0Znp0VEoyTlhtMjZKL2daYi9HL1gwcmR4b2x0VE54empLemtLcEp0ZXRia2plb3NBcWlRSWlXUHloMGhVVTk1dHNha1dyNDVWNUo3MTJjZDNxdHQ5Z0dkbjdFaFk4dUllUC9CcThVY2VZZC9YbFpYbDc2bHpEbmdzL2lDZXlNY3NuZXdQMjZBTDRaQzJReXdibVQzbXlSZmpld3ZSaUxxOWR1TVNidHIxYXRtYTJ0U1JIRjI0Z0JwUnpadE1RTmoyMjR4bTZVNUdRNXlOSWVzUXNFNmJzRGNSV0RTMGFVOEZERExybmhVQWZQT2JYMG5lTGR1QnU1VGZOWW9NZglRbTgyUHVqVE1xaTlmN0t2QmJEUUdCeXdyVXR2eUNnTEFHNHNqeWluZDRCOEg3N2ZJamt5blI3Q3ZpQzlIOTVxSENVTCt3K3JzMmsvV0sxNlVsbGlTK0pET3UxWXpPMWRPOUp3V3hEMHd5ZVU0a0Y5MjIxaUE5Z2lUd2djZXhSU2c4TWJVMm1NSjJlaGdlY3g0YjJ3QloxR0FFPQ==';
@@ -71,33 +72,19 @@ export function isEnableThinking(modelName) {
 
 // ==================== Generation config ====================
 export function generateGenerationConfig(parameters, enableThinking, actualModelName) {
-
-
-  if (
-
-
-  } else if (parameters.reasoning_effort !== undefined) {
-    thinkingBudget = REASONING_EFFORT_MAP[parameters.reasoning_effort] ?? defaultThinkingBudget;
-  } else {
-    thinkingBudget = defaultThinkingBudget;
-  }
-  }
-
-  const generationConfig = {
-    topP: parameters.top_p ?? config.defaults.top_p,
-    topK: parameters.top_k ?? config.defaults.top_k,
-    temperature: parameters.temperature ?? config.defaults.temperature,
-    candidateCount: 1,
-    maxOutputTokens: parameters.max_tokens ?? config.defaults.max_tokens,
-    stopSequences: DEFAULT_STOP_SEQUENCES,
-    thinkingConfig: {
-      includeThoughts: enableThinking,
-      thinkingBudget: thinkingBudget
-    }
-  };
-  if (enableThinking && actualModelName.includes('claude')) {
-    delete generationConfig.topP;
   }
 ...
   return generationConfig;
 }
```
|
|
|
|
| 72 |
|
| 73 |
// ==================== 生成配置 ====================
|
| 74 |
export function generateGenerationConfig(parameters, enableThinking, actualModelName) {
|
| 75 |
+
// 处理 reasoning_effort 到 thinking_budget 的转换
|
| 76 |
+
const normalizedParams = { ...parameters };
|
| 77 |
+
if (normalizedParams.thinking_budget === undefined && normalizedParams.reasoning_effort !== undefined) {
|
| 78 |
+
const defaultThinkingBudget = config.defaults.thinking_budget ?? 1024;
|
| 79 |
+
normalizedParams.thinking_budget = REASONING_EFFORT_MAP[normalizedParams.reasoning_effort] ?? defaultThinkingBudget;
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 80 |
}
|
| 81 |
+
|
| 82 |
+
// 使用统一的参数转换函数
|
| 83 |
+
const generationConfig = toGenerationConfig(normalizedParams, enableThinking, actualModelName);
|
| 84 |
+
|
| 85 |
+
// 添加 stopSequences
|
| 86 |
+
generationConfig.stopSequences = DEFAULT_STOP_SEQUENCES;
|
| 87 |
+
|
| 88 |
return generationConfig;
|
| 89 |
}
|
| 90 |
|
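The `reasoning_effort` → `thinking_budget` fallback chain above can be tested in isolation. The map values below are illustrative only (the real `REASONING_EFFORT_MAP` is exported from `src/constants/index.js` and may use different budgets), and `normalizeThinkingBudget` is a hypothetical name for the normalization step, not a function in the codebase:

```javascript
// Illustrative budgets -- the real REASONING_EFFORT_MAP lives in
// src/constants/index.js and its values may differ.
const REASONING_EFFORT_MAP = { low: 1024, medium: 8192, high: 24576 };
const defaults = { thinking_budget: 1024 }; // stand-in for config.defaults

// Mirrors the normalization step inside generateGenerationConfig:
// an explicit thinking_budget wins; otherwise reasoning_effort is mapped;
// an unrecognized effort level falls back to the configured default.
function normalizeThinkingBudget(parameters) {
  const p = { ...parameters };
  if (p.thinking_budget === undefined && p.reasoning_effort !== undefined) {
    const fallback = defaults.thinking_budget ?? 1024;
    p.thinking_budget = REASONING_EFFORT_MAP[p.reasoning_effort] ?? fallback;
  }
  return p;
}

const fromEffort = normalizeThinkingBudget({ reasoning_effort: 'high' });
const explicit = normalizeThinkingBudget({ thinking_budget: 512, reasoning_effort: 'high' });
const unknown = normalizeThinkingBudget({ reasoning_effort: 'extreme' });
```

Note the precedence: even when both parameters are supplied, the explicit `thinking_budget` is kept untouched, so clients that set it directly are never overridden by an effort level.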