lcolok committed
Commit dca8a47 · 1 Parent(s): ad3c7d6

Refactor Docker Launch Method for MindSearch (#185)


* Build a mindsearch docker launcher with Python

* chore: Update Docker Compose configuration and dependencies for mindsearch docker launcher

* chore: Add copy_frontend_dockerfile method to copy the frontend Dockerfile

* chore: Refactor Dockerfile copying logic and update Docker Compose configuration

* chore: Update Docker Compose configuration to use internlm_server as the default model format

* Refactor Docker Compose configuration and Dockerfile copying logic

* chore: Update Docker Compose configuration and Dockerfile copying logic

* chore: Update Docker Compose configuration and Dockerfile copying logic

* Refactor Docker Compose configuration and Dockerfile copying logic

* Refactor Docker Compose configuration and Dockerfile copying logic

* Refactor Docker Compose configuration and remove commented environment variables

* Refactor Docker Compose configuration and add backend language choice

* Refactor Docker Compose configuration and update i18n imports

* Refactor Docker Compose configuration and update i18n imports

* Refactor Docker Compose configuration and update i18n imports

* Refactor Docker Compose configuration and update i18n imports

* Refactor Docker Compose configuration and Dockerfile copying logic

* Refactor Docker Compose configuration and Dockerfile copying logic

* Update README

.gitignore CHANGED
@@ -160,3 +160,4 @@ cython_debug/
  # and can be added to the global gitignore or merged into this file. For a more nuclear
  # option (not recommended) you can uncomment the following to ignore the entire idea folder.
  #.idea/
+ temp
docker/README.md CHANGED
@@ -1,128 +1,125 @@
- # MindSearch Docker Compose User Guide
 
  English | [简体中文](README_zh-CN.md)
 
- ## 🚀 Quick Start with Docker Compose
- 
- MindSearch now supports quick deployment and startup using Docker Compose. This method simplifies the environment configuration process, allowing you to easily run the entire system.
- 
- ### Prerequisites
- 
- - Docker (Docker Compose V2 is integrated into Docker)
- - NVIDIA GPU and NVIDIA Container Toolkit (required for NVIDIA GPU support)
- 
- Note: Newer versions of Docker have integrated Docker Compose V2, so you can use the `docker compose` command directly without a separate installation of docker-compose.
- 
- ### Usage Instructions
- 
- All commands should be executed in the `mindsearch/docker` directory.
- 
- #### First-time Startup
- 
- ```bash
- docker compose up --build -d
- ```
- 
- #### Daily Use
- 
- Start services:
- 
- ```bash
- docker compose up -d
- ```
- 
- View running services:
- 
- ```bash
- docker ps
- ```
- 
- Stop services:
- 
- ```bash
- docker compose down
- ```
- 
- #### Major Version Updates
- 
- Rebuild images after a major update:
- 
- ```bash
- docker compose build --no-cache
- docker compose up -d
- ```
- 
- ### Configuration Details
- 
- 1. **Environment Variables**:
-    The system automatically reads the following variables from your environment:
- 
-    - `OPENAI_API_KEY`: Your OpenAI API key
-    - `OPENAI_API_BASE`: OpenAI API base URL (default: https://api.openai.com/v1)
-    - `LANG`: Language setting ('en' or 'cn')
-    - `MODEL_FORMAT`: Model format ('gpt4' or 'internlm_server')
- 
-    Example setup:
- 
-    Using the local internlm2.5-7b-chat model:
- 
-    ```bash
-    export LANG=cn
-    export MODEL_FORMAT=internlm_server
-    docker compose up -d
-    ```
- 
-    Using OpenAI's API:
- 
-    ```bash
-    export OPENAI_API_KEY=your_api_key_here
-    export LANG=en
-    export MODEL_FORMAT=gpt4
-    docker compose up -d
-    ```
- 
-    Using SiliconFlow's cloud LLM service:
- 
-    ```bash
-    export SILICON_API_KEY=your_api_key_here
-    export LANG=en
-    export MODEL_FORMAT=internlm_silicon
-    docker compose up -d
-    ```
- 
- 2. **Model Cache**:
-    The container maps the `/root/.cache:/root/.cache` path to store model files.
- 
- 3. **GPU Support**:
-    The default configuration uses NVIDIA GPUs. For other GPU types, please refer to the comments in docker-compose.yaml.
- 
- 4. **Service Access**:
-    In the Docker Compose environment, the frontend container can directly access the backend service via `http://backend:8002`.
- 
- 5. **Backend Server Address Configuration**:
-    Currently, the method for changing the backend server address is temporary. We use a sed command in the Dockerfile to modify the vite.config.ts file and replace the server proxy address. This method is effective in the development environment but not suitable for production.
- 
- ### Important Notes
- 
- - The first run may take some time to download necessary model files, depending on your chosen model and network conditions.
- - Ensure you have sufficient disk space to store model files and Docker images.
- - If you encounter permission issues, you may need to use sudo to run Docker commands.
- 
- ### Cross-Origin Access Note
- 
- In the current version, we temporarily solve the cross-origin access issue by using Vite's development mode in the frontend Docker container:
- 
- 1. The frontend Dockerfile uses the `npm start` command to start the Vite development server.
- 2. In the `vite.config.ts` file, we configure proxy settings to forward requests for the `/solve` path to the backend service.
- 
- Please note:
- 
- - This method is effective in the development environment but not suitable for production use.
- - We plan to implement a more robust cross-origin solution suitable for production environments in future versions.
- - If you plan to deploy this project in a production environment, you may need to consider other cross-origin handling methods, such as configuring backend CORS policies or using a reverse proxy server.
- 
- ### Conclusion
- 
- We appreciate your understanding and patience. MindSearch is still in its early stages, and we are working hard to improve various aspects of the system. Your feedback is very important to us as it helps us continuously refine the project. If you encounter any issues or have any suggestions during use, please feel free to provide feedback.
- 
- By using Docker Compose, you can quickly deploy MindSearch without worrying about complex environment configurations. This method is particularly suitable for rapid testing and development environment deployment. If you encounter any problems during deployment, please refer to our troubleshooting guide or seek community support.
+ # MSDL (MindSearch Docker Launcher) User Guide
+ 
+ ## Introduction
+ 
+ MSDL (MindSearch Docker Launcher) is a command-line tool designed to simplify the deployment of MindSearch. It helps users configure and launch the Docker environment for MindSearch through an interactive interface, reducing the complexity of deployment. MSDL primarily serves as a scaffold for deploying containers and does not involve optimization of MindSearch's core logic.
+ 
+ ## Prerequisites
+ 
+ - Python 3.7 or higher
+ - Docker (Docker Compose included; most newer Docker versions have it integrated)
+ - Git (for cloning the repository)
+ - Stable internet connection
+ - Sufficient disk space (required space varies depending on the selected deployment option)
+ 
+ ## Installation Steps
+ 
+ 1. Clone the MindSearch repository:
+    ```bash
+    git clone https://github.com/InternLM/MindSearch.git # If you have already cloned the repository, you can skip this step.
+    cd MindSearch/docker
+    ```
+ 
+ 2. Install MSDL:
+    ```bash
+    pip install -e .
+    ```
+ 
+ ## Usage
+ 
+ After installation, you can run the msdl command from any directory:
+ 
+ ```bash
+ msdl
+ ```
+ 
+ Follow the interactive prompts to configure:
+ - The language used by the Agent (Chinese or English; this only affects the language of the prompts).
+ - The model deployment type (local model or cloud model).
+ - The model format:
+   - For cloud models, currently only `internlm_silicon` works properly.
+   - For local models, only `internlm_server` has passed tests and runs correctly.
+ - Any necessary API keys (e.g., SILICON_API_KEY).
+ 
+ MSDL then automatically:
+ - Copies and configures the necessary Dockerfile and docker-compose.yaml files.
+ - Builds the Docker images.
+ - Launches the Docker containers.
+ 
+ ## Deployment Options Comparison
+ 
+ ### Cloud Model Deployment (Recommended)
+ 
+ **Advantages**:
+ - Lightweight deployment with minimal disk usage (frontend around 510 MB, backend around 839 MB).
+ - No need for high-performance hardware.
+ - Easy to deploy and maintain.
+ - Free use of the `internlm/internlm2_5-7b-chat` model via SiliconCloud.
+ - High concurrency and fast inference.
+ 
+ **Instructions**:
+ - Select the "Cloud Model" option.
+ - Choose "internlm_silicon" as the model format.
+ - Enter the SiliconCloud API key (register at https://cloud.siliconflow.cn/ to obtain one).
+ 
+ **Important Notes**:
+ - The `internlm/internlm2_5-7b-chat` model is freely accessible on SiliconCloud.
+ - MindSearch has no financial relationship with SiliconCloud; the service is recommended solely because it provides valuable resources to the open-source community.
+ 
+ ### Local Model Deployment
+ 
+ **Features**:
+ - Uses the `openmmlab/lmdeploy` image.
+ - Based on the PyTorch environment.
+ - Requires significant disk space (backend container 15 GB+, model 15 GB+, 30 GB+ in total).
+ - Requires a powerful GPU (12 GB or more of VRAM recommended).
+ 
+ **Instructions**:
+ - Select the "Local Model" option.
+ - Choose "internlm_server" as the model format.
+ 
+ **Relevant Links**:
+ - lmdeploy image: https://hub.docker.com/r/openmmlab/lmdeploy/tags
+ - InternLM2.5 project: https://huggingface.co/internlm/internlm2_5-7b-chat
+ 
+ ## Notes
+ 
+ - Currently, only the `internlm_silicon` format works properly for cloud models, and only the `internlm_server` format has passed tests for local models.
+ - The language selection only affects the language of the Agent's prompts; it does not change the language of the React frontend.
+ - The first run may take a long time to download the necessary model files and Docker images.
+ - When using cloud models, ensure a stable network connection.
+ 
+ ## Troubleshooting
+ 
+ 1. Ensure the Docker service is running.
+ 2. Check that there is sufficient disk space.
+ 3. Ensure all necessary environment variables are set correctly.
+ 4. Check that the network connection is stable.
+ 5. Verify the validity of API keys (e.g., for cloud models).
+ 
+ If problems persist, check the Issues section of the MindSearch GitHub repository or submit a new issue.
+ 
+ ## Privacy and Security
+ 
+ MSDL is a locally executed tool and does not transmit any API keys or other sensitive information. All configuration is stored in the `msdl/temp/.env` file and is used only to simplify the deployment process.
+ 
+ ## Updating MSDL
+ 
+ To update MSDL to the latest version:
+ 
+ 1. Navigate to the MindSearch directory.
+ 2. Pull the latest code:
+    ```bash
+    git pull origin main
+    ```
+ 3. Reinstall MSDL:
+    ```bash
+    cd docker
+    pip install -e .
+    ```
+ 
+ ## Conclusion
+ 
+ If you have any questions or suggestions, feel free to submit an issue on GitHub or contact us directly. Thank you for using MindSearch and MSDL!
docker/README_zh-CN.md CHANGED
@@ -1,128 +1,125 @@
- # MindSearch Docker Compose 使用指南
 
  [English](README.md) | 简体中文
 
- ## 🚀 使用 Docker Compose 快速启动
- 
- MindSearch 支持使用 Docker Compose 进行快速部署和启动,简化了环境配置过程,让您能轻松运行整个系统。
- 
- ### 前提条件
- 
- - Docker(已集成 Docker Compose V2)
- - NVIDIA GPU 和 NVIDIA Container Toolkit(如需 NVIDIA GPU 支持)
- 
- 注意:较新版本的 Docker 已整合 Docker Compose V2,可直接使用 `docker compose` 命令。
- 
- ### 使用说明
- 
- 所有命令都应在 `mindsearch/docker` 目录下执行。
- 
- #### 首次启动
- 
- ```bash
- docker compose up --build -d
- ```
- 
- #### 日常使用
- 
- 启动服务:
- ```bash
- docker compose up -d
- ```
- 
- 查看运行中的服务:
- ```bash
- docker ps
- ```
- 
- 停止服务:
- ```bash
- docker compose down
- ```
- 
- #### 大版本更新
- 
- 更新后重新构建镜像:
- ```bash
- docker compose build --no-cache
- docker compose up -d
- ```
- 
- ### 配置说明
- 
- 1. **环境变量设置**:
-    系统自动读取以下环境变量:
- 
-    - `OPENAI_API_KEY`:OpenAI API 密钥
-    - `OPENAI_API_BASE`:OpenAI API 基础 URL(默认:https://api.openai.com/v1)
-    - `LANG`:语言设置('en' 或 'cn')
-    - `MODEL_FORMAT`:模型格式('gpt4' 或 'internlm_server')
- 
-    设置示例:
- 
-    使用本地的 internlm2.5-7b-chat 模型:
-    ```bash
-    export LANG=cn
-    export MODEL_FORMAT=internlm_server
-    docker compose up -d
-    ```
- 
-    使用 OpenAI 的 LLM 服务:
-    ```bash
-    export OPENAI_API_KEY=your_api_key_here
-    export LANG=cn
-    export MODEL_FORMAT=gpt4
-    docker compose up -d
-    ```
- 
-    使用 SiliconFlow 云端 LLM 服务:
-    ```bash
-    export SILICON_API_KEY=your_api_key_here
-    export LANG=cn
-    export MODEL_FORMAT=internlm_silicon
-    docker compose up -d
-    ```
- 
- 2. **模型缓存**:
-    容器映射 `/root/.cache:/root/.cache` 路径存储模型文件。
- 
- 3. **GPU 支持**:
-    默认配置使用 NVIDIA GPU。其他 GPU 类型请参考 docker-compose.yaml 中的注释。
- 
- 4. **服务访问**:
-    在 Docker Compose 环境中,前端容器可以通过 `http://backend:8002` 直接访问后端服务。
- 
- 5. **后端服务器地址配置**:
-    目前,更改后端服务器地址的方法是临时的。我们在 Dockerfile 中使用 sed 命令来修改 vite.config.ts 文件,以替换服务器代理地址。这种方法在开发环境中有效,但不适合生产环境。
- 
- ### 注意事项
- 
- - 首次运行可能需要时间下载模型文件
- - 确保有足够磁盘空间存储模型和 Docker 镜像
- - 如遇权限问题,可能需使用 sudo 运行 Docker 命令
- 
- ### 跨域访问说明
- 
- 当前版本通过 Vite 开发模式临时解决跨域问题:
- 
- 1. 前端 Dockerfile 使用 `npm start` 启动 Vite 开发服务器。
- 2. 在 `vite.config.ts` 中配置代理,将 `/solve` 路径请求代理到后端。
- 
- 注意:
- 
- - 此方法适用于开发环境,不适合生产环境。
- - 未来版本将实现更适合生产环境的跨域解决方案。
- - 生产环境部署可能需要考虑其他跨域处理方法。
- 
- ### 结语
- 
- 感谢您的支持。MindSearch 正在不断改进,您的反馈对我们至关重要。如有任何问题或建议,请随时与我们联系。
- 
- Docker Compose 方法简化了 MindSearch 的部署流程,特别适合快速测试和开发环境。如遇部署问题,请参考故障排除指南或寻求社区支持。
+ # MSDL (MindSearch Docker Launcher) 使用指南
+ 
+ ## 简介
+ 
+ MSDL (MindSearch Docker Launcher) 是一个专为简化 MindSearch 部署过程而设计的命令行工具。它通过交互式界面帮助用户轻松配置和启动 MindSearch 的 Docker 环境,降低了部署的复杂性。MSDL 主要作为部署容器的脚手架,不涉及 MindSearch 核心逻辑的优化。
+ 
+ ## 环境要求
+ 
+ - Python 3.7 或更高版本
+ - Docker(包含 Docker Compose,新版本的 Docker 通常已集成)
+ - Git(用于克隆仓库)
+ - 稳定的网络连接
+ - 充足的磁盘空间(根据选择的部署方案,所需空间有所不同)
+ 
+ ## 安装步骤
+ 
+ 1. 克隆 MindSearch 仓库:
+    ```bash
+    git clone https://github.com/InternLM/MindSearch.git # 已经克隆过的,可以忽略此步骤
+    cd MindSearch/docker
+    ```
+ 
+ 2. 安装 MSDL:
+    ```bash
+    pip install -e .
+    ```
+ 
+ ## 使用方法
+ 
+ 安装完成后,您可以在任意目录下运行 msdl 命令:
+ 
+ ```bash
+ msdl
+ ```
+ 
+ 按照交互式提示进行配置:
+ - 选择 Agent 使用的语言(中文或英文,仅影响 Agent 的提示词语言)
+ - 选择模型部署类型(本地模型或云端模型)
+ - 选择模型格式:
+   - 云端模型目前只有 internlm_silicon 能够正常运行
+   - 本地模型目前只有 internlm_server 通过测试,能正常运行
+ - 输入必要的 API 密钥(如 SILICON_API_KEY)
+ 
+ MSDL 将自动执行以下操作:
+ - 复制并配置必要的 Dockerfile 和 docker-compose.yaml 文件
+ - 构建 Docker 镜像
+ - 启动 Docker 容器
+ 
+ ## 部署方案比较
+ 
+ ### 云端模型部署(推荐)
+ 
+ **优势**:
+ - 轻量级部署,磁盘占用小(前端约 510MB,后端约 839MB)
+ - 无需高性能硬件
+ - 部署和维护简单
+ - 使用 SiliconCloud 可免费调用 internlm/internlm2_5-7b-chat 模型
+ - 高并发量,推理速度快
+ 
+ **使用说明**:
+ - 选择“云端模型”选项
+ - 选择 "internlm_silicon" 作为模型格式
+ - 输入 SiliconCloud API Key(需在 https://cloud.siliconflow.cn/ 注册获取)
+ 
+ **重要说明**:
+ - internlm/internlm2_5-7b-chat 模型在 SiliconCloud 上可以免费调用,但 API Key 仍需妥善保管。
+ - MindSearch 项目与 SiliconCloud 并无利益关系,只是使用它能更好地体验 MindSearch 的效果,感谢 SiliconCloud 为开源社区所做的贡献。
+ 
+ ### 本地模型部署
+ 
+ **特点**:
+ - 使用 openmmlab/lmdeploy 镜像
+ - 基于 PyTorch 环境
+ - 需要大量磁盘空间(后端容器 15GB+,模型 15GB+,总计 30GB 以上)
+ - 需要强大的 GPU(建议 12GB 或以上显存)
+ 
+ **使用说明**:
+ - 选择“本地模型”选项
+ - 选择 "internlm_server" 作为模型格式
+ 
+ **相关链接**:
+ - lmdeploy 镜像: https://hub.docker.com/r/openmmlab/lmdeploy/tags
+ - InternLM2.5 项目: https://huggingface.co/internlm/internlm2_5-7b-chat
+ 
+ ## 注意事项
+ 
+ - 云端模型目前只有 internlm_silicon 格式能够正常运行,本地模型只有 internlm_server 格式通过测试能正常运行。
+ - 选择语言只会影响 Agent 的提示词语言,不会改变 React 前端的界面语言。
+ - 首次运行可能需要较长时间来下载必要的模型文件和 Docker 镜像。
+ - 使用云端模型时,请确保网络连接稳定。
+ 
+ ## 故障排除
+ 
+ 1. 确保 Docker 服务正在运行。
+ 2. 检查是否有足够的磁盘空间。
+ 3. 确保所有必要的环境变量已正确设置。
+ 4. 检查网络连接是否正常。
+ 5. 验证 API Key 是否有效(如使用云端模型)。
+ 
+ 如果问题持续,请查看 MindSearch GitHub 仓库中的 Issues 部分,或提交新的 Issue。
+ 
+ ## 隐私和安全
+ 
+ MSDL 是纯本地执行的工具,不会上报任何 API Key 或其他敏感信息。所有配置信息存储在 `msdl/temp/.env` 文件中,仅用于简化部署过程。
+ 
+ ## 更新 MSDL
+ 
+ 要更新 MSDL 到最新版本,请执行以下步骤:
+ 
+ 1. 进入 MindSearch 目录
+ 2. 拉取最新的代码:
+    ```bash
+    git pull origin main
+    ```
+ 3. 重新安装 MSDL:
+    ```bash
+    cd docker
+    pip install -e .
+    ```
+ 
+ ## 结语
+ 
+ 如有任何问题或建议,欢迎在 GitHub 上提交 Issue 或直接联系我们。感谢您使用 MindSearch 和 MSDL!
File without changes
docker/msdl/__main__.py ADDED
@@ -0,0 +1,198 @@
+ # msdl/__main__.py
+ import signal
+ import sys
+ from pathlib import Path
+ 
+ from InquirerPy import inquirer
+ from msdl.config import (
+     BACKEND_DOCKERFILE_DIR,
+     CLOUD_LLM_DOCKERFILE,
+     FRONTEND_DOCKERFILE_DIR,
+     LOCAL_LLM_DOCKERFILE,
+     PACKAGE_DIR,
+     PROJECT_ROOT,
+     REACT_DOCKERFILE,
+     TEMP_DIR,
+     TEMPLATE_FILES,
+ )
+ from msdl.docker_manager import (
+     check_docker_install,
+     run_docker_compose,
+     stop_and_remove_containers,
+     update_docker_compose_paths,
+ )
+ from msdl.i18n import setup_i18n, t
+ from msdl.utils import (
+     clean_api_key,
+     copy_templates_to_temp,
+     get_existing_api_key,
+     get_model_formats,
+     modify_docker_compose,
+     save_api_key_to_env,
+     validate_api_key,
+ )
+ 
+ 
+ def signal_handler(signum, frame):
+     print(t("TERMINATION_SIGNAL"))
+     stop_and_remove_containers()
+     sys.exit(0)
+ 
+ 
+ def copy_backend_dockerfile(choice):
+     source_file = Path(BACKEND_DOCKERFILE_DIR) / choice
+     dest_file = "backend.dockerfile"
+     source_path = PACKAGE_DIR / "templates" / source_file
+     dest_path = TEMP_DIR / dest_file
+ 
+     if not source_path.exists():
+         raise FileNotFoundError(t("FILE_NOT_FOUND", file=source_file))
+ 
+     dest_path.parent.mkdir(parents=True, exist_ok=True)
+     dest_path.write_text(source_path.read_text())
+     print(
+         t(
+             "BACKEND_DOCKERFILE_COPIED",
+             source_path=str(source_path),
+             dest_path=str(dest_path),
+         )
+     )
+ 
+ 
+ def copy_frontend_dockerfile():
+     source_file = Path(FRONTEND_DOCKERFILE_DIR) / REACT_DOCKERFILE
+     dest_file = "frontend.dockerfile"
+     source_path = PACKAGE_DIR / "templates" / source_file
+     dest_path = TEMP_DIR / dest_file
+ 
+     if not source_path.exists():
+         raise FileNotFoundError(t("FILE_NOT_FOUND", file=source_file))
+ 
+     dest_path.parent.mkdir(parents=True, exist_ok=True)
+     dest_path.write_text(source_path.read_text())
+     print(
+         t(
+             "FRONTEND_DOCKERFILE_COPIED",
+             source_path=str(source_path),
+             dest_path=str(dest_path),
+         )
+     )
+ 
+ 
+ def get_user_choices():
+     backend_language_choices = [
+         {"name": t("CHINESE"), "value": "cn"},
+         {"name": t("ENGLISH"), "value": "en"},
+     ]
+ 
+     model_deployment_type = [
+         {"name": t("CLOUD_MODEL"), "value": CLOUD_LLM_DOCKERFILE},
+         {"name": t("LOCAL_MODEL"), "value": LOCAL_LLM_DOCKERFILE},
+     ]
+ 
+     backend_language = inquirer.select(
+         message=t("BACKEND_LANGUAGE_CHOICE"),
+         choices=backend_language_choices,
+     ).execute()
+ 
+     model = inquirer.select(
+         message=t("MODEL_DEPLOYMENT_TYPE"),
+         choices=model_deployment_type,
+     ).execute()
+ 
+     model_formats = get_model_formats(model)
+     model_format = inquirer.select(
+         message=t("MODEL_FORMAT_CHOICE"),
+         choices=[{"name": format, "value": format} for format in model_formats],
+     ).execute()
+ 
+     # If the model is cloud_llm, ask for the API key
+     if model == CLOUD_LLM_DOCKERFILE:
+         env_var_name = {
+             "internlm_silicon": "SILICON_API_KEY",
+             "gpt4": "OPENAI_API_KEY",
+             "qwen": "QWEN_API_KEY",
+         }.get(model_format)
+ 
+         existing_api_key = get_existing_api_key(env_var_name)
+ 
+         if existing_api_key:
+             use_existing = inquirer.confirm(
+                 message=t("CONFIRM_USE_EXISTING_API_KEY", ENV_VAR_NAME=env_var_name),
+                 default=True,
+             ).execute()
+ 
+             if use_existing:
+                 return backend_language, model, model_format
+             else:
+                 print(
+                     t("CONFIRM_OVERWRITE_EXISTING_API_KEY", ENV_VAR_NAME=env_var_name)
+                 )
+         else:
+             print(t("PLEASE_INPUT_NEW_API_KEY", ENV_VAR_NAME=env_var_name))
+ 
+         while True:
+             api_key = inquirer.secret(
+                 message=t(
+                     "PLEASE_INPUT_NEW_API_KEY_FROM_ZERO", ENV_VAR_NAME=env_var_name
+                 )
+             ).execute()
+             cleaned_api_key = clean_api_key(api_key)
+ 
+             if validate_api_key(cleaned_api_key, env_var_name, t):
+                 save_api_key_to_env(model_format, cleaned_api_key, t)
+                 break
+             else:
+                 print(t("INVALID_API_KEY_FORMAT"))
+                 retry = inquirer.confirm(
+                     message=t("RETRY_API_KEY_INPUT"), default=True
+                 ).execute()
+                 if not retry:
+                     print(t("API_KEY_INPUT_CANCELLED"))
+                     sys.exit(1)
+ 
+     return backend_language, model, model_format
+ 
+ 
+ def main():
+     # Set the display language of the msdl launcher (based on the system language)
+     setup_i18n()
+ 
+     signal.signal(signal.SIGINT, signal_handler)
+     signal.signal(signal.SIGTERM, signal_handler)
+ 
+     try:
+         check_docker_install()
+ 
+         # Get user choices
+         backend_language, model_choice, model_format = get_user_choices()
+ 
+         # Copy backend Dockerfile to temp directory
+         copy_backend_dockerfile(model_choice)
+ 
+         # Copy frontend Dockerfile to temp directory
+         copy_frontend_dockerfile()
+ 
+         # Copy templates to temp directory
+         copy_templates_to_temp(TEMPLATE_FILES)
+ 
+         # Modify docker-compose.yaml
+         modify_docker_compose(model_choice, backend_language, model_format)
+ 
+         update_docker_compose_paths()
+         stop_and_remove_containers()
+         run_docker_compose()
+ 
+         print(t("DOCKER_LAUNCHER_COMPLETE"))
+     except KeyboardInterrupt:
+         print(t("KEYBOARD_INTERRUPT"))
+         stop_and_remove_containers()
+         sys.exit(0)
+     except Exception as e:
+         print(t("UNEXPECTED_ERROR", error=str(e)))
+         stop_and_remove_containers()
+         sys.exit(1)
+ 
+ 
+ if __name__ == "__main__":
+     main()
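The `get_user_choices` flow above leans on helpers from `msdl.utils` that are not part of this diff. As a rough illustration of the contract those calls imply, here is a hypothetical sketch of `clean_api_key` and `validate_api_key` (these are assumptions about behavior, not the actual implementations):

```python
# Hypothetical sketches of the msdl.utils helpers referenced above; the real
# implementations are not shown in this diff.

def clean_api_key(raw: str) -> str:
    # Strip surrounding whitespace and any accidental quoting from a pasted key.
    return raw.strip().strip("'\"")


def validate_api_key(key: str, env_var_name: str, t=None) -> bool:
    # Accept only non-empty, single-token keys; reject embedded whitespace.
    return bool(key) and not any(ch.isspace() for ch in key)
```

Whatever the real checks are, the retry loop in `get_user_choices` only needs a boolean verdict plus a normalized key string, which is all this sketch provides.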
docker/msdl/config.py ADDED
@@ -0,0 +1,34 @@
+ # msdl/config.py
+ 
+ from pathlib import Path
+ 
+ # Get the directory where the script is located
+ PACKAGE_DIR = Path(__file__).resolve().parent
+ 
+ # Get the root directory of the MindSearch project
+ PROJECT_ROOT = PACKAGE_DIR.parent.parent
+ 
+ # Get the temp directory path, which is actually the working directory for executing the docker compose up command
+ TEMP_DIR = PACKAGE_DIR / "temp"
+ 
+ # Configuration file name list
+ TEMPLATE_FILES = ["docker-compose.yaml"]
+ 
+ # Backend Dockerfile directory
+ BACKEND_DOCKERFILE_DIR = "backend"
+ 
+ # Backend Dockerfile names
+ CLOUD_LLM_DOCKERFILE = "cloud_llm.dockerfile"
+ LOCAL_LLM_DOCKERFILE = "local_llm.dockerfile"
+ 
+ # Frontend Dockerfile directory
+ FRONTEND_DOCKERFILE_DIR = "frontend"
+ 
+ # Frontend Dockerfile name
+ REACT_DOCKERFILE = "react.dockerfile"
+ 
+ # i18n translations directory
+ TRANSLATIONS_DIR = PACKAGE_DIR / "translations"
+ 
+ # Path of the .env file
+ ENV_FILE_PATH = TEMP_DIR / ".env"
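The path arithmetic in config.py can be checked against a hypothetical install location (the actual paths depend on where the package lives; `/opt/MindSearch` here is purely illustrative):

```python
from pathlib import PurePosixPath

# Hypothetical location of msdl/ inside a MindSearch checkout.
package_dir = PurePosixPath("/opt/MindSearch/docker/msdl")

project_root = package_dir.parent.parent  # two levels up -> the repo root
temp_dir = package_dir / "temp"           # working dir for docker compose
env_file = temp_dir / ".env"

print(project_root)  # /opt/MindSearch
print(env_file)      # /opt/MindSearch/docker/msdl/temp/.env
```

This is why `pip install -e .` must be run from `MindSearch/docker`: `PROJECT_ROOT` is derived purely from the package's position two directories below the repo root.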
docker/msdl/docker_manager.py ADDED
@@ -0,0 +1,135 @@
+ # msdl/docker_manager.py
+ 
+ import os
+ import subprocess
+ import sys
+ from functools import lru_cache
+ 
+ import yaml
+ from msdl.config import PROJECT_ROOT, TEMP_DIR
+ from msdl.i18n import t
+ 
+ 
+ @lru_cache(maxsize=1)
+ def get_docker_command():
+     try:
+         subprocess.run(
+             ["docker", "compose", "version"], check=True, capture_output=True
+         )
+         return ["docker", "compose"]
+     except subprocess.CalledProcessError:
+         try:
+             subprocess.run(
+                 ["docker-compose", "--version"], check=True, capture_output=True
+             )
+             return ["docker-compose"]
+         except subprocess.CalledProcessError:
+             print(t("DOCKER_COMPOSE_NOT_FOUND"))
+             sys.exit(1)
+ 
+ 
+ @lru_cache(maxsize=1)
+ def check_docker_install():
+     try:
+         subprocess.run(["docker", "--version"], check=True, capture_output=True)
+         docker_compose_cmd = get_docker_command()
+         subprocess.run(
+             docker_compose_cmd + ["version"], check=True, capture_output=True
+         )
+         print(t("DOCKER_INSTALLED"))
+         return True
+     except subprocess.CalledProcessError as e:
+         print(t("DOCKER_INSTALL_ERROR", error=str(e)))
+         return False
+     except FileNotFoundError:
+         print(t("DOCKER_NOT_FOUND"))
+         return False
+ 
+ 
+ def stop_and_remove_containers():
+     docker_compose_cmd = get_docker_command()
+     try:
+         subprocess.run(
+             docker_compose_cmd
+             + [
+                 "-f",
+                 os.path.join(TEMP_DIR, "docker-compose.yaml"),
+                 "down",
+                 "-v",
+             ],
+             check=True,
+         )
+         print(t("CONTAINERS_STOPPED"))
+     except subprocess.CalledProcessError as e:
+         print(t("CONTAINER_STOP_ERROR", error=str(e)))
+ 
+ 
+ def run_docker_compose():
+     docker_compose_cmd = get_docker_command()
+     try:
+         print(t("BUILDING_IMAGES"))
+         subprocess.run(
+             docker_compose_cmd
+             + [
+                 "-f",
+                 os.path.join(TEMP_DIR, "docker-compose.yaml"),
+                 "--env-file",
+                 os.path.join(TEMP_DIR, ".env"),
+                 "build",
+             ],
+             check=True,
+         )
+         print(t("IMAGES_BUILT"))
+         print(t("STARTING_CONTAINERS"))
+         subprocess.run(
+             docker_compose_cmd
+             + [
+                 "-f",
+                 os.path.join(TEMP_DIR, "docker-compose.yaml"),
+                 "--env-file",
+                 os.path.join(TEMP_DIR, ".env"),
+                 "up",
+                 "-d",
+             ],
+             check=True,
+         )
+         print(t("CONTAINERS_STARTED"))
+     except subprocess.CalledProcessError as e:
+         print(t("DOCKER_ERROR", error=str(e)))
+         print(t("DOCKER_OUTPUT"))
+         print(e.output.decode() if e.output else "No output")
+         stop_and_remove_containers()
+         sys.exit(1)
+ 
+ 
+ def update_docker_compose_paths():
+     docker_compose_path = os.path.join(TEMP_DIR, "docker-compose.yaml")
+     with open(docker_compose_path, "r") as file:
+         compose_data = yaml.safe_load(file)
+     for service in compose_data["services"].values():
+         if "build" in service:
+             if "context" in service["build"]:
+                 if service["build"]["context"] == "..":
+                     service["build"]["context"] = PROJECT_ROOT
+                 else:
+                     service["build"]["context"] = os.path.join(
+                         PROJECT_ROOT, service["build"]["context"]
+                     )
+             if "dockerfile" in service["build"]:
+                 dockerfile_name = os.path.basename(service["build"]["dockerfile"])
+                 service["build"]["dockerfile"] = os.path.join(TEMP_DIR, dockerfile_name)
+     with open(docker_compose_path, "w") as file:
+         yaml.dump(compose_data, file)
+     print(t("PATHS_UPDATED"))
+ 
+ 
+ def main():
+     if check_docker_install():
+         update_docker_compose_paths()
+         run_docker_compose()
+     else:
+         sys.exit(1)
+ 
+ 
+ if __name__ == "__main__":
+     main()
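The rewrite rule in `update_docker_compose_paths` (a context of `..` becomes the project root, any other relative context is joined to it, and each dockerfile is redirected into the temp directory by basename) can be exercised on a plain dict, without YAML. Paths here are hypothetical; `posixpath` is used so the sketch behaves identically on any OS:

```python
import posixpath  # posix-style joins so the example is platform-independent

PROJECT_ROOT = "/opt/MindSearch"               # hypothetical checkout location
TEMP_DIR = "/opt/MindSearch/docker/msdl/temp"  # where msdl stages its files


def rewrite_build(build: dict) -> dict:
    # Mirrors update_docker_compose_paths for one service's build section.
    if build.get("context") == "..":
        build["context"] = PROJECT_ROOT
    elif "context" in build:
        build["context"] = posixpath.join(PROJECT_ROOT, build["context"])
    if "dockerfile" in build:
        name = posixpath.basename(build["dockerfile"])
        build["dockerfile"] = posixpath.join(TEMP_DIR, name)
    return build


print(rewrite_build({"context": "..", "dockerfile": "docker/backend.dockerfile"}))
```

This is the step that lets the staged compose file in `msdl/temp` build against the original repository sources even though it no longer lives next to them.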
docker/msdl/i18n.py ADDED
@@ -0,0 +1,38 @@
+ # msdl/translations/i18n_setup.py
+ 
+ import os
+ import i18n
+ import locale
+ from msdl.config import TRANSLATIONS_DIR, ENV_FILE_PATH
+ 
+ 
+ def get_env_variable(var_name, default=None):
+     if os.path.exists(ENV_FILE_PATH):
+         with open(ENV_FILE_PATH, "r") as env_file:
+             for line in env_file:
+                 if line.startswith(f"{var_name}="):
+                     return line.strip().split("=", 1)[1]
+     return os.getenv(var_name, default)
+ 
+ 
+ def get_system_language():
+     try:
+         return locale.getlocale()[0].split("_")[0]
+     except:
+         return "en"
+ 
+ 
+ def setup_i18n():
+     i18n.load_path.append(TRANSLATIONS_DIR)
+     i18n.set("filename_format", "{locale}.{format}")
+     i18n.set("file_format", "yaml")
+ 
+     env_language = get_env_variable("LAUNCHER_INTERACTION_LANGUAGE")
+     if env_language:
+         i18n.set("locale", env_language)
+     else:
+         i18n.set("locale", get_system_language())
+ 
+ 
+ def t(key, **kwargs):
+     return i18n.t(key, **kwargs)
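The lookup in `get_env_variable` gives the `.env` file precedence over the process environment, falling back to a default. A self-contained check of that precedence (using a throwaway temp file rather than the real `msdl/temp/.env`, and a standalone copy of the function that takes the path as a parameter):

```python
import os
import tempfile


def get_env_variable(env_path, var_name, default=None):
    # Same precedence as msdl.i18n.get_env_variable: the .env file wins,
    # then the process environment, then the supplied default.
    if os.path.exists(env_path):
        with open(env_path) as env_file:
            for line in env_file:
                if line.startswith(f"{var_name}="):
                    return line.strip().split("=", 1)[1]
    return os.getenv(var_name, default)


with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("LAUNCHER_INTERACTION_LANGUAGE=zh\n")
    path = f.name

print(get_env_variable(path, "LAUNCHER_INTERACTION_LANGUAGE"))  # zh
```

Note the `split("=", 1)`: only the first `=` separates name from value, so keys containing `=` survive intact.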
docker/msdl/templates/backend/cloud_llm.dockerfile ADDED
@@ -0,0 +1,26 @@
+ # Use Python 3.11.9 as the base image
+ FROM python:3.11.9-slim
+ 
+ # Set the working directory
+ WORKDIR /root
+ 
+ # Install Git
+ RUN apt-get update && apt-get install -y git && apt-get clean && rm -rf /var/lib/apt/lists/*
+ 
+ # Install specified dependency packages
+ # Note: lmdeploy dependency is already included in the base image, no need to reinstall
+ RUN pip install --no-cache-dir git+https://github.com/InternLM/lagent.git
+ 
+ RUN pip install --no-cache-dir \
+     duckduckgo_search==5.3.1b1 \
+     einops \
+     fastapi \
+     janus \
+     pyvis \
+     sse-starlette \
+     termcolor \
+     uvicorn \
+     griffe==0.48.0
+ 
+ # Copy the mindsearch folder to the /root directory of the container
+ COPY mindsearch /root/mindsearch
docker/{backend.dockerfile → msdl/templates/backend/local_llm.dockerfile} RENAMED
File without changes
docker/{docker-compose.yaml → msdl/templates/docker-compose.yaml} RENAMED
@@ -2,8 +2,8 @@ services:
   backend:
     container_name: mindsearch-backend
     build:
-      context: ..
-      dockerfile: docker/backend.dockerfile
+      context: .
+      dockerfile: backend.dockerfile
     image: mindsearch/backend:latest
     restart: unless-stopped
     # Uncomment the following line to force using local build
@@ -12,9 +12,10 @@ services:
       - "8002:8002"
     environment:
       - PYTHONUNBUFFERED=1
-      - OPENAI_API_KEY=${OPENAI_API_KEY:-}
+      # - OPENAI_API_KEY=${OPENAI_API_KEY:-}
       - OPENAI_API_BASE=${OPENAI_API_BASE:-https://api.openai.com/v1}
-      - SILICON_API_KEY=${SILICON_API_KEY:-}
+      # - QWEN_API_KEY=${QWEN_API_KEY:-}
+      # - SILICON_API_KEY=${SILICON_API_KEY:-}
     command: python -m mindsearch.app --lang ${LANG:-cn} --model_format ${MODEL_FORMAT:-internlm_server}
     volumes:
       - /root/.cache:/root/.cache
@@ -49,8 +50,8 @@ services:
   frontend:
     container_name: mindsearch-frontend
     build:
-      context: ..
-      dockerfile: docker/frontend.dockerfile
+      context: .
+      dockerfile: frontend.dockerfile
     image: mindsearch/frontend:latest
     restart: unless-stopped
     # Uncomment the following line to force using local build
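For context, once the launcher's `modify_docker_compose` (added in `docker/msdl/utils.py` in this PR) rewrites the copied compose file for a local model, the backend service ends up with an `env_file` entry and a GPU reservation roughly like this sketch (field values taken from `utils.py`; surrounding keys abbreviated):

```yaml
services:
  backend:
    env_file:
      - .env          # API keys now come from .env rather than inline environment entries
    deploy:           # injected only for the local-model Dockerfile
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: ["gpu"]
```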
docker/{frontend.dockerfile → msdl/templates/frontend/react.dockerfile} RENAMED
File without changes
docker/msdl/translations/en.yaml ADDED
@@ -0,0 +1,50 @@
+ en:
+   SCRIPT_DIR: "Script directory: %{dir}"
+   PROJECT_ROOT: "Project root directory: %{dir}"
+   TEMP_DIR: "Temporary directory: %{dir}"
+   DOCKER_LAUNCHER_START: "Starting Docker launcher process"
+   DOCKER_LAUNCHER_COMPLETE: "Docker launcher process completed"
+   DIR_CREATED: "Directory created: %{dir}"
+   FILE_COPIED: "Copied %{file} to the temp directory"
+   FILE_NOT_FOUND: "Error: %{file} not found in the templates directory"
+   CONTAINERS_STOPPED: "Existing containers and volumes stopped and removed"
+   CONTAINER_STOP_ERROR: "Error stopping and removing containers (this may be normal if there were no running containers): %{error}"
+   BUILDING_IMAGES: "Starting to build Docker images..."
+   IMAGES_BUILT: "Docker images built successfully"
+   STARTING_CONTAINERS: "Starting Docker containers..."
+   CONTAINERS_STARTED: "Docker containers started successfully"
+   DOCKER_ERROR: "Error while building or starting Docker containers: %{error}"
+   DOCKER_OUTPUT: "Docker command output:"
+   DOCKER_INSTALLED: "Docker and Docker Compose installed correctly"
+   DOCKER_INSTALL_ERROR: "Error: Docker or Docker Compose may not be installed correctly: %{error}"
+   DOCKER_NOT_FOUND: "Error: Docker or Docker Compose command not found. Please ensure they are correctly installed and added to the PATH."
+   DOCKER_COMPOSE_NOT_FOUND: "Error: Docker Compose command not found. Please ensure it is correctly installed and added to the PATH."
+   PATHS_UPDATED: "Paths updated in docker-compose.yaml"
+   COMPOSE_FILE_CONTENT: "docker-compose.yaml file content:"
+   COMPOSE_FILE_NOT_FOUND: "Error: %{file} file not found"
+   COMPOSE_FILE_READ_ERROR: "Error reading docker-compose.yaml file: %{error}"
+   TERMINATION_SIGNAL: "Termination signal caught. Exiting gracefully..."
+   KEYBOARD_INTERRUPT: "Keyboard interrupt caught. Exiting gracefully..."
+   UNEXPECTED_ERROR: "An unexpected error occurred: %{error}"
+   BACKEND_LANGUAGE_CHOICE: "Select backend language (default is cn):"
+   CHINESE: "中文(cn)"
+   ENGLISH: "English(en)"
+   MODEL_DEPLOYMENT_TYPE: "Select model deployment type:"
+   CLOUD_MODEL: "Cloud model"
+   LOCAL_MODEL: "Local model"
+   MODEL_FORMAT_CHOICE: "Select model format:"
+   CONFIRM_USE_EXISTING_API_KEY: "Do you want to use the existing %{ENV_VAR_NAME} API key?"
+   CONFIRM_OVERWRITE_EXISTING_API_KEY: "Do you want to overwrite the existing %{ENV_VAR_NAME} API key?"
+   PLEASE_INPUT_NEW_API_KEY: "Please enter a new %{ENV_VAR_NAME} API key:"
+   PLEASE_INPUT_NEW_API_KEY_FROM_ZERO: "Please enter a new %{ENV_VAR_NAME} API key:"
+   INVALID_API_KEY_FORMAT: "Invalid API key format"
+   RETRY_API_KEY_INPUT: "Retry API key input"
+   API_KEY_INPUT_CANCELLED: "API key input cancelled"
+   UNKNOWN_API_KEY_TYPE: "Unknown API key type: %{KEY_TYPE}"
+   UNKNOWN_MODEL_FORMAT: "Unknown model format: %{MODEL_FORMAT}"
+   INVALID_API_KEY: "Invalid API key: %{KEY_TYPE}"
+   API_KEY_SAVED: "API key for %{ENV_VAR_NAME} saved"
+   UNKNOWN_DOCKERFILE: "Unknown Dockerfile: %{dockerfile}"
+   UNKNOWN_MODEL_TYPE: "Unknown model type: %{model_type}"
+   BACKEND_DOCKERFILE_COPIED: "Backend Dockerfile copied from %{source_path} to %{dest_path}"
+   FRONTEND_DOCKERFILE_COPIED: "Frontend Dockerfile copied from %{source_path} to %{dest_path}"
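The `%{name}` placeholders in these messages are filled in by `python-i18n` at lookup time. A stdlib-only sketch of that substitution (illustrative only — not the python-i18n implementation, which also handles locale files and fallbacks):

```python
import re

# One message lifted verbatim from en.yaml above.
MESSAGES = {"FILE_COPIED": "Copied %{file} to the temp directory"}

def t(key, **kwargs):
    # Replace each %{name} placeholder with the matching keyword argument.
    template = MESSAGES[key]
    return re.sub(r"%\{(\w+)\}", lambda m: str(kwargs[m.group(1)]), template)

print(t("FILE_COPIED", file="docker-compose.yaml"))
# → Copied docker-compose.yaml to the temp directory
```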
docker/msdl/translations/zh_CN.yaml ADDED
@@ -0,0 +1,50 @@
+ zh_CN:
+   SCRIPT_DIR: "脚本目录:%{dir}"
+   PROJECT_ROOT: "项目根目录:%{dir}"
+   TEMP_DIR: "临时目录:%{dir}"
+   DOCKER_LAUNCHER_START: "开始 Docker 启动器流程"
+   DOCKER_LAUNCHER_COMPLETE: "Docker 启动器流程完成"
+   DIR_CREATED: "创建目录:%{dir}"
+   FILE_COPIED: "已复制 %{file} 到 temp 目录"
+   FILE_NOT_FOUND: "错误:%{file} 在 templates 目录中不存在"
+   CONTAINERS_STOPPED: "已停止并删除现有容器和卷"
+   CONTAINER_STOP_ERROR: "停止和删除容器时出错(这可能是正常的,如果没有正在运行的容器):%{error}"
+   BUILDING_IMAGES: "开始构建Docker镜像..."
+   IMAGES_BUILT: "Docker镜像构建成功"
+   STARTING_CONTAINERS: "开始启动Docker容器..."
+   CONTAINERS_STARTED: "Docker 容器已成功启动"
+   DOCKER_ERROR: "构建或启动 Docker 容器时出错:%{error}"
+   DOCKER_OUTPUT: "Docker 命令输出:"
+   DOCKER_INSTALLED: "Docker 和 Docker Compose 安装正确"
+   DOCKER_INSTALL_ERROR: "错误:Docker 或 Docker Compose 可能没有正确安装:%{error}"
+   DOCKER_NOT_FOUND: "错误:Docker 或 Docker Compose 命令未找到。请确保它们已正确安装并添加到PATH中。"
+   DOCKER_COMPOSE_NOT_FOUND: "错误:Docker Compose 命令未找到。请确保它已正确安装并添加到PATH中。"
+   PATHS_UPDATED: "已更新 docker-compose.yaml 中的路径"
+   COMPOSE_FILE_CONTENT: "docker-compose.yaml 文件内容:"
+   COMPOSE_FILE_NOT_FOUND: "错误:%{file} 文件不存在"
+   COMPOSE_FILE_READ_ERROR: "读取 docker-compose.yaml 文件时出错:%{error}"
+   TERMINATION_SIGNAL: "捕获到终止信号。正在优雅地退出..."
+   KEYBOARD_INTERRUPT: "捕获到键盘中断。正在优雅地退出..."
+   UNEXPECTED_ERROR: "发生未预期的错误:%{error}"
+   BACKEND_LANGUAGE_CHOICE: "选择后端语言(默认为 cn):"
+   CHINESE: "中文(cn)"
+   ENGLISH: "English(en)"
+   MODEL_DEPLOYMENT_TYPE: "选择模型部署类型:"
+   CLOUD_MODEL: "云端模型"
+   LOCAL_MODEL: "本地模型"
+   MODEL_FORMAT_CHOICE: "选择模型格式:"
+   CONFIRM_USE_EXISTING_API_KEY: "是否使用现有的 %{ENV_VAR_NAME} API 密钥?"
+   CONFIRM_OVERWRITE_EXISTING_API_KEY: "是否覆盖现有的 %{ENV_VAR_NAME} API 密钥?"
+   PLEASE_INPUT_NEW_API_KEY: "请输入新的 %{ENV_VAR_NAME} API 密钥:"
+   PLEASE_INPUT_NEW_API_KEY_FROM_ZERO: "请输入新的 %{ENV_VAR_NAME} API 密钥:"
+   INVALID_API_KEY_FORMAT: "无效的 API 密钥格式"
+   RETRY_API_KEY_INPUT: "重试 API 密钥输入"
+   API_KEY_INPUT_CANCELLED: "API 密钥输入已取消"
+   UNKNOWN_API_KEY_TYPE: "未知的 API 密钥类型:%{KEY_TYPE}"
+   UNKNOWN_MODEL_FORMAT: "未知的模型格式:%{MODEL_FORMAT}"
+   INVALID_API_KEY: "无效的 API 密钥:%{KEY_TYPE}"
+   API_KEY_SAVED: "%{ENV_VAR_NAME} 的 API 密钥已保存"
+   UNKNOWN_DOCKERFILE: "未知的 Dockerfile:%{dockerfile}"
+   UNKNOWN_MODEL_TYPE: "未知的模型类型:%{model_type}"
+   BACKEND_DOCKERFILE_COPIED: "后端 Dockerfile 已经从 %{source_path} 复制为 %{dest_path}"
+   FRONTEND_DOCKERFILE_COPIED: "前端 Dockerfile 已经从 %{source_path} 复制为 %{dest_path}"
docker/msdl/utils.py ADDED
@@ -0,0 +1,172 @@
+ # msdl/utils.py
+
+ import os
+ import re
+ import shutil
+ import sys
+ import yaml
+ from functools import lru_cache
+ from pathlib import Path
+ from msdl.config import (
+     CLOUD_LLM_DOCKERFILE,
+     LOCAL_LLM_DOCKERFILE,
+     PACKAGE_DIR,
+     TEMP_DIR,
+     ENV_FILE_PATH,
+ )
+ from msdl.i18n import t
+
+
+ @lru_cache(maxsize=None)
+ def get_env_variable(var_name, default=None):
+     if ENV_FILE_PATH.exists():
+         with ENV_FILE_PATH.open("r") as env_file:
+             for line in env_file:
+                 if line.startswith(f"{var_name}="):
+                     return line.strip().split("=", 1)[1]
+     return os.getenv(var_name, default)
+
+
+ @lru_cache(maxsize=None)
+ def get_existing_api_key(env_var_name):
+     env_vars = read_env_file()
+     return env_vars.get(env_var_name)
+
+
+ @lru_cache(maxsize=None)
+ def read_env_file():
+     env_vars = {}
+     if ENV_FILE_PATH.exists():
+         with ENV_FILE_PATH.open("r") as env_file:
+             for line in env_file:
+                 if "=" in line and not line.strip().startswith("#"):
+                     key, value = line.strip().split("=", 1)
+                     env_vars[key] = value.strip('"').strip("'")
+     return env_vars
+
+
+ def clean_api_key(api_key):
+     cleaned_key = api_key.strip()
+     cleaned_key = re.sub(r"\s+", "", cleaned_key)
+     return cleaned_key
+
+
+ @lru_cache(maxsize=None)
+ def validate_api_key(api_key, key_type, t):
+     basic_pattern = r"^sk-[A-Za-z0-9]+$"
+
+     validation_rules = {
+         "SILICON_API_KEY": basic_pattern,
+         "OPENAI_API_KEY": basic_pattern,
+         "QWEN_API_KEY": basic_pattern,
+     }
+
+     if key_type not in validation_rules:
+         raise ValueError(t("UNKNOWN_API_KEY_TYPE", KEY_TYPE=key_type))
+
+     pattern = validation_rules[key_type]
+     return re.match(pattern, api_key) is not None
+
+
+ def ensure_directory(path):
+     path = Path(path)
+     if not path.exists():
+         path.mkdir(parents=True, exist_ok=True)
+         print(t("DIR_CREATED", dir=path))
+
+
+ def copy_templates_to_temp(template_files):
+     template_dir = PACKAGE_DIR / "templates"
+
+     ensure_directory(TEMP_DIR)
+
+     for filename in template_files:
+         src = template_dir / filename
+         dst = TEMP_DIR / filename
+         if src.exists():
+             shutil.copy2(src, dst)
+             print(t("FILE_COPIED", file=filename))
+         else:
+             print(t("FILE_NOT_FOUND", file=filename))
+             sys.exit(1)
+
+
+ def save_api_key_to_env(model_format, api_key, t):
+     env_var_name = {
+         "internlm_silicon": "SILICON_API_KEY",
+         "gpt4": "OPENAI_API_KEY",
+         "qwen": "QWEN_API_KEY",
+     }.get(model_format)
+
+     if not env_var_name:
+         raise ValueError(t("UNKNOWN_MODEL_FORMAT", MODEL_FORMAT=model_format))
+
+     if not validate_api_key(api_key, env_var_name, t):
+         raise ValueError(t("INVALID_API_KEY", KEY_TYPE=env_var_name))
+
+     env_vars = read_env_file()
+     env_vars[env_var_name] = api_key
+
+     with ENV_FILE_PATH.open("w") as env_file:
+         for key, value in env_vars.items():
+             env_file.write(f"{key}={value}\n")
+
+     print(t("API_KEY_SAVED", ENV_VAR_NAME=env_var_name))
+
+
+ def modify_docker_compose(selected_dockerfile, backend_language, model_format):
+     docker_compose_path = TEMP_DIR / "docker-compose.yaml"
+
+     with docker_compose_path.open("r") as file:
+         compose_data = yaml.safe_load(file)
+
+     backend_service = compose_data["services"]["backend"]
+
+     if "env_file" not in backend_service:
+         backend_service["env_file"] = [".env"]
+     elif ".env" not in backend_service["env_file"]:
+         backend_service["env_file"].append(".env")
+
+     command = f"python -m mindsearch.app --lang {backend_language} --model_format {model_format}"
+     if selected_dockerfile == CLOUD_LLM_DOCKERFILE:
+         if "deploy" in backend_service:
+             del backend_service["deploy"]
+         backend_service["command"] = command
+     elif selected_dockerfile == LOCAL_LLM_DOCKERFILE:
+         if "deploy" not in backend_service:
+             backend_service["deploy"] = {
+                 "resources": {
+                     "reservations": {
+                         "devices": [
+                             {"driver": "nvidia", "count": 1, "capabilities": ["gpu"]}
+                         ]
+                     }
+                 }
+             }
+         backend_service["command"] = command
+     else:
+         raise ValueError(t("UNKNOWN_DOCKERFILE", dockerfile=selected_dockerfile))
+
+     with docker_compose_path.open("w") as file:
+         yaml.dump(compose_data, file)
+
+     print(
+         t(
+             "docker_compose_updated",
+             mode=(
+                 t("CLOUD")
+                 if selected_dockerfile == CLOUD_LLM_DOCKERFILE
+                 else t("LOCAL")
+             ),
+             format=model_format,
+         )
+     )
+
+
+ def get_model_formats(model_type):
+     if model_type == CLOUD_LLM_DOCKERFILE:
+         return ["internlm_silicon", "qwen", "gpt4"]
+     elif model_type == LOCAL_LLM_DOCKERFILE:
+         return ["internlm_server", "internlm_client", "internlm_hf"]
+     else:
+         raise ValueError(t("UNKNOWN_MODEL_TYPE", model_type=model_type))
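The key-handling helpers above compose as follows. This standalone sketch duplicates `clean_api_key` and the `^sk-[A-Za-z0-9]+$` pattern from `validate_api_key` so it runs without the `msdl` package:

```python
import re

# Pattern shared by SILICON_API_KEY, OPENAI_API_KEY, and QWEN_API_KEY above.
BASIC_PATTERN = r"^sk-[A-Za-z0-9]+$"

def clean_api_key(api_key):
    # Same normalisation as msdl.utils.clean_api_key: strip the ends,
    # then drop any remaining internal whitespace (e.g. a sloppy paste).
    return re.sub(r"\s+", "", api_key.strip())

def is_valid_key(api_key):
    return re.match(BASIC_PATTERN, api_key) is not None

print(is_valid_key(clean_api_key("  sk-abc 123\n")))  # → True (whitespace removed first)
print(is_valid_key("pk-abc123"))                      # → False (wrong prefix)
```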
docker/setup.py ADDED
@@ -0,0 +1,23 @@
+ from setuptools import setup, find_packages
+
+ setup(
+     name="msdl",
+     version="0.1.0",
+     description="MindSearch Docker Launcher",
+     packages=find_packages(),
+     python_requires=">=3.7",
+     install_requires=[
+         "pyyaml>=6.0",
+         "python-i18n>=0.3.9",
+         "inquirerpy>=0.3.4",
+     ],
+     entry_points={
+         "console_scripts": [
+             "msdl=msdl.__main__:main",
+         ],
+     },
+     include_package_data=True,
+     package_data={
+         "msdl": ["translations/*.yaml", "templates/*"],
+     },
+ )