hins111 committed on
Commit 9019371 · verified · 1 Parent(s): 6066aef

Upload 6 files

Files changed (6)
  1. .env +2 -0
  2. .gitignore +6 -0
  3. Dockerfile +16 -0
  4. LICENSE +21 -0
  5. README.md +100 -11
  6. main.py +366 -0
.env ADDED
@@ -0,0 +1,2 @@
+ SECURE_1PSID=g.a000wggDrdz_LVKBo6Z8CvdWxP12JoNF4C4dEYJPGL7uc0KJVfZELwQKSVUPOi_4IHa7olel8AACgYKAWUSARUSFQHGX2MiPxpB3JHYmE2q0wDguZNRHRoVAUF8yKpa6mhrstdOMGpvDIUa_5zp0076
+ SECURE_1PSIDTS=sidts-CjEBjplskE4IkRFwPaDjNxMhcCG6ew5OVVhzBdyp_C1yHqZry3rSlMmWf-2HZ_uWklWzEAA
.gitignore ADDED
@@ -0,0 +1,6 @@
+ .python-version
+ .idea
+ .venv
+ uv.lock
+ .env
+ __pycache__
Dockerfile ADDED
@@ -0,0 +1,16 @@
+ FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim
+
+ WORKDIR /app
+
+ # Install dependencies
+ COPY pyproject.toml .
+ RUN uv sync
+
+ # Copy application code
+ COPY main.py .
+
+ # Expose the port the app runs on
+ EXPOSE 8000
+
+ # Command to run the application
+ CMD ["uv", "run", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 RrOrange
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
README.md CHANGED
@@ -1,11 +1,100 @@
- ---
- title: Gemini2api
- emoji: 🔥
- colorFrom: pink
- colorTo: green
- sdk: docker
- pinned: false
- license: apache-2.0
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Gemi2Api-Server
+ A simple server-side implementation of [HanaokaYuzu / Gemini-API](https://github.com/HanaokaYuzu/Gemini-API)
+
+ [![pE79pPf.png](https://s21.ax1x.com/2025/04/28/pE79pPf.png)](https://imgse.com/i/pE79pPf)
+
+ ## Quick Deploy
+
+ ### Render
+
+ [![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/zhiyu1998/Gemi2Api-Server)
+
+ ## Running Directly
+
+ 0. Fill in `SECURE_1PSID` and `SECURE_1PSIDTS` (log in to Gemini and look up the cookies in your browser's developer tools):
+ ```properties
+ SECURE_1PSID = "COOKIE VALUE HERE"
+ SECURE_1PSIDTS = "COOKIE VALUE HERE"
+ ```
+ 1. Install the dependencies with `uv`:
+ > uv init
+ >
+ > uv add fastapi uvicorn gemini-webapi
+
+ > [!NOTE]
+ > If a `pyproject.toml` already exists, use the following command instead:
+ > uv sync
+
+ Or use `pip`:
+
+ > pip install fastapi uvicorn gemini-webapi
+
+ 2. Activate the environment:
+ > source .venv/bin/activate
+
+ 3. Start the server:
+ > uvicorn main:app --reload --host 127.0.0.1 --port 8000
+
+ > [!WARNING]
+ > Tip: no API key is required; the API can be called directly.
+
+ ## Running with Docker (Recommended)
+
+ ### Quick Start
+
+ 1. Clone this repository:
+ ```bash
+ git clone https://github.com/zhiyu1998/Gemi2Api-Server.git
+ ```
+
+ 2. Create a `.env` file and fill in your Gemini cookie credentials:
+ ```bash
+ cp .env.example .env
+ # Open the .env file in an editor and fill in your cookie values
+ ```
+
+ 3. Start the service:
+ ```bash
+ docker-compose up -d
+ ```
+
+ 4. The service will be available at http://0.0.0.0:8000
+
+ ### Other Docker Commands
+
+ ```bash
+ # View logs
+ docker-compose logs
+
+ # Restart the service
+ docker-compose restart
+
+ # Stop the service
+ docker-compose down
+
+ # Rebuild and start
+ docker-compose up -d --build
+ ```
+
+ ## API Endpoints
+
+ - `GET /`: service status check
+ - `GET /v1/models`: list the available models
+ - `POST /v1/chat/completions`: chat with a model (OpenAI-compatible)
+
+ ## FAQ
+
+ ### Resolving HTTP 500 errors
+
+ A 500 error usually means your IP is not accepted, or you are sending requests too frequently (for the latter, wait a while, or log in again in a fresh incognito tab and supply new Secure_1PSID and Secure_1PSIDTS values). See these issues:
+ - [__Secure-1PSIDTS · Issue #6 · HanaokaYuzu/Gemini-API](https://github.com/HanaokaYuzu/Gemini-API/issues/6)
+ - [Failed to initialize client. SECURE_1PSIDTS could get expired frequently · Issue #72 · HanaokaYuzu/Gemini-API](https://github.com/HanaokaYuzu/Gemini-API/issues/72)
+
+ Steps to fix:
+ 1. Open [Google Gemini](https://gemini.google.com/) in an incognito tab and log in
+ 2. Open the browser developer tools (F12)
+ 3. Switch to the "Application" tab
+ 4. On the left, find "Cookies" > "gemini.google.com"
+ 5. Copy the values of `__Secure-1PSID` and `__Secure-1PSIDTS`
+ 6. Update the `.env` file
+ 7. Rebuild and start: `docker-compose up -d --build`
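Note that the quick start runs `docker-compose up -d`, but no compose file is part of this commit. A minimal `docker-compose.yml` consistent with the Dockerfile might look like the sketch below (the service name and restart policy are assumptions):

```yaml
services:
  gemi2api-server:
    build: .
    ports:
      - "8000:8000"   # matches the EXPOSE/CMD port in the Dockerfile
    env_file:
      - .env          # supplies SECURE_1PSID / SECURE_1PSIDTS
    restart: unless-stopped
```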
main.py ADDED
@@ -0,0 +1,366 @@
+ import asyncio
+ import json
+ from datetime import datetime, timezone
+ import os
+ import base64
+ import tempfile
+
+ from fastapi import FastAPI, HTTPException, Request
+ from fastapi.middleware.cors import CORSMiddleware
+ from fastapi.responses import JSONResponse
+ from fastapi.responses import StreamingResponse
+ from pydantic import BaseModel
+ from typing import List, Optional, Dict, Any, Union
+ import time
+ import uuid
+ import logging
+
+ from gemini_webapi import GeminiClient, set_log_level
+ from gemini_webapi.constants import Model
+
+ # Configure logging
+ logging.basicConfig(level=logging.INFO)
+ logger = logging.getLogger(__name__)
+ set_log_level("INFO")
+
+ app = FastAPI(title="Gemini API FastAPI Server")
+
+ # Add CORS middleware
+ app.add_middleware(
+     CORSMiddleware,
+     allow_origins=["*"],
+     allow_credentials=True,
+     allow_methods=["*"],
+     allow_headers=["*"],
+ )
+
+ # Global client
+ gemini_client = None
+
+ # Authentication credentials
+ SECURE_1PSID = os.environ.get("SECURE_1PSID", "")
+ SECURE_1PSIDTS = os.environ.get("SECURE_1PSIDTS", "")
+
+ # Print debug info at startup
+ if not SECURE_1PSID or not SECURE_1PSIDTS:
+     logger.warning("⚠️ Gemini API credentials are not set or empty! Please check your environment variables.")
+     logger.warning("Make sure SECURE_1PSID and SECURE_1PSIDTS are correctly set in your .env file or environment.")
+     logger.warning("If using Docker, ensure the .env file is correctly mounted and formatted.")
+     logger.warning("Example format in .env file (no quotes):")
+     logger.warning("SECURE_1PSID=your_secure_1psid_value_here")
+     logger.warning("SECURE_1PSIDTS=your_secure_1psidts_value_here")
+ else:
+     # Only log the first few characters for security
+     logger.info(f"Credentials found. SECURE_1PSID starts with: {SECURE_1PSID[:5]}...")
+     logger.info(f"Credentials found. SECURE_1PSIDTS starts with: {SECURE_1PSIDTS[:5]}...")
+
+
+ # Pydantic models for API requests and responses
+ class ContentItem(BaseModel):
+     type: str
+     text: Optional[str] = None
+     image_url: Optional[Dict[str, str]] = None
+
+
+ class Message(BaseModel):
+     role: str
+     content: Union[str, List[ContentItem]]
+     name: Optional[str] = None
+
+
+ class ChatCompletionRequest(BaseModel):
+     model: str
+     messages: List[Message]
+     temperature: Optional[float] = 0.7
+     top_p: Optional[float] = 1.0
+     n: Optional[int] = 1
+     stream: Optional[bool] = False
+     max_tokens: Optional[int] = None
+     presence_penalty: Optional[float] = 0
+     frequency_penalty: Optional[float] = 0
+     user: Optional[str] = None
+
+
+ class Choice(BaseModel):
+     index: int
+     message: Message
+     finish_reason: str
+
+
+ class Usage(BaseModel):
+     prompt_tokens: int
+     completion_tokens: int
+     total_tokens: int
+
+
+ class ChatCompletionResponse(BaseModel):
+     id: str
+     object: str = "chat.completion"
+     created: int
+     model: str
+     choices: List[Choice]
+     usage: Usage
+
+
+ class ModelData(BaseModel):
+     id: str
+     object: str = "model"
+     created: int
+     owned_by: str = "google"
+
+
+ class ModelList(BaseModel):
+     object: str = "list"
+     data: List[ModelData]
+
+
+ # Simple error handler middleware
+ @app.middleware("http")
+ async def error_handling(request: Request, call_next):
+     try:
+         return await call_next(request)
+     except Exception as e:
+         logger.error(f"Request failed: {str(e)}")
+         return JSONResponse(status_code=500, content={"error": {"message": str(e), "type": "internal_server_error"}})
+
+
+ # Get list of available models
+ @app.get("/v1/models")
+ async def list_models():
+     """Return the list of models declared by gemini_webapi"""
+     now = int(datetime.now(tz=timezone.utc).timestamp())
+     data = [
+         {
+             "id": m.model_name,  # e.g. "gemini-2.0-flash"
+             "object": "model",
+             "created": now,
+             "owned_by": "google-gemini-web",
+         }
+         for m in Model
+     ]
+     logger.info(f"Model list: {data}")
+     return {"object": "list", "data": data}
+
+
+ # Helper to convert between Gemini and OpenAI model names
+ def map_model_name(openai_model_name: str) -> Model:
+     """Find the Model enum value matching the given model-name string"""
+     # Log all available models to aid debugging
+     all_models = [m.model_name if hasattr(m, "model_name") else str(m) for m in Model]
+     logger.info(f"Available models: {all_models}")
+
+     # First try to find a directly matching model name
+     for m in Model:
+         model_name = m.model_name if hasattr(m, "model_name") else str(m)
+         if openai_model_name.lower() in model_name.lower():
+             return m
+
+     # If no direct match was found, fall back to a default keyword mapping
+     model_keywords = {
+         "gemini-pro": ["pro", "2.0"],
+         "gemini-pro-vision": ["vision", "pro"],
+         "gemini-flash": ["flash", "2.0"],
+         "gemini-1.5-pro": ["1.5", "pro"],
+         "gemini-1.5-flash": ["1.5", "flash"],
+     }
+
+     # Match by keywords
+     keywords = model_keywords.get(openai_model_name, ["pro"])  # default to a pro model
+
+     for m in Model:
+         model_name = m.model_name if hasattr(m, "model_name") else str(m)
+         if all(kw.lower() in model_name.lower() for kw in keywords):
+             return m
+
+     # If still no match, return the first model
+     return next(iter(Model))
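The two-stage lookup in `map_model_name` can be sketched standalone, with plain strings standing in for the `gemini_webapi` `Model` enum (the model names below are illustrative, not taken from the library):

```python
def pick_model(requested: str, available: list[str]) -> str:
    # Stage 1: direct substring match on the requested name
    for name in available:
        if requested.lower() in name.lower():
            return name
    # Stage 2: keyword fallback, defaulting to a "pro" model
    keyword_map = {"gemini-flash": ["flash", "2.0"]}
    keywords = keyword_map.get(requested, ["pro"])
    for name in available:
        if all(kw.lower() in name.lower() for kw in keywords):
            return name
    # Stage 3: give up and return the first model
    return available[0]

models = ["gemini-2.0-flash", "gemini-2.0-pro-exp"]
print(pick_model("2.0-pro", models))        # stage 1 -> gemini-2.0-pro-exp
print(pick_model("gemini-flash", models))   # stage 2 -> gemini-2.0-flash
```

Because stage 2 defaults to `["pro"]`, any unrecognized model name silently resolves to a pro variant rather than raising an error.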
+
+
+ # Prepare conversation history from OpenAI messages format
+ def prepare_conversation(messages: List[Message]) -> tuple:
+     conversation = ""
+     temp_files = []
+
+     for msg in messages:
+         if isinstance(msg.content, str):
+             # String content handling
+             if msg.role == "system":
+                 conversation += f"System: {msg.content}\n\n"
+             elif msg.role == "user":
+                 conversation += f"Human: {msg.content}\n\n"
+             elif msg.role == "assistant":
+                 conversation += f"Assistant: {msg.content}\n\n"
+         else:
+             # Mixed content handling
+             if msg.role == "user":
+                 conversation += "Human: "
+             elif msg.role == "system":
+                 conversation += "System: "
+             elif msg.role == "assistant":
+                 conversation += "Assistant: "
+
+             for item in msg.content:
+                 if item.type == "text":
+                     conversation += item.text or ""
+                 elif item.type == "image_url" and item.image_url:
+                     # Handle image
+                     image_url = item.image_url.get("url", "")
+                     if image_url.startswith("data:image/"):
+                         # Process base64 encoded image
+                         try:
+                             # Extract the base64 part
+                             base64_data = image_url.split(",")[1]
+                             image_data = base64.b64decode(base64_data)
+
+                             # Create temporary file to hold the image
+                             with tempfile.NamedTemporaryFile(delete=False, suffix=".png") as tmp:
+                                 tmp.write(image_data)
+                                 temp_files.append(tmp.name)
+                         except Exception as e:
+                             logger.error(f"Error processing base64 image: {str(e)}")
+
+             conversation += "\n\n"
+
+     # Add a final prompt for the assistant to respond to
+     conversation += "Assistant: "
+
+     return conversation, temp_files
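For the plain-string case, the role-prefix flattening that `prepare_conversation` performs reduces to a small sketch (a hypothetical `flatten` helper, taking `(role, content)` pairs instead of `Message` objects):

```python
def flatten(messages: list[tuple[str, str]]) -> str:
    # Same role -> prefix mapping used by the server
    prefix = {"system": "System", "user": "Human", "assistant": "Assistant"}
    conversation = ""
    for role, content in messages:
        conversation += f"{prefix[role]}: {content}\n\n"
    # Final prompt for the assistant to respond to
    conversation += "Assistant: "
    return conversation

print(flatten([("system", "Be brief."), ("user", "Hi")]))
# System: Be brief.
#
# Human: Hi
#
# Assistant:
```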
+
+
+ # Dependency to get the initialized Gemini client
+ async def get_gemini_client():
+     global gemini_client
+     if gemini_client is None:
+         try:
+             gemini_client = GeminiClient(SECURE_1PSID, SECURE_1PSIDTS)
+             await gemini_client.init(timeout=300)
+         except Exception as e:
+             logger.error(f"Failed to initialize Gemini client: {str(e)}")
+             raise HTTPException(status_code=500, detail=f"Failed to initialize Gemini client: {str(e)}")
+     return gemini_client
+
+
+ @app.post("/v1/chat/completions")
+ async def create_chat_completion(request: ChatCompletionRequest):
+     try:
+         # Make sure the client is initialized
+         global gemini_client
+         if gemini_client is None:
+             gemini_client = GeminiClient(SECURE_1PSID, SECURE_1PSIDTS)
+             await gemini_client.init(timeout=300)
+             logger.info("Gemini client initialized successfully")
+
+         # Convert the messages into a flattened conversation
+         conversation, temp_files = prepare_conversation(request.messages)
+         logger.info(f"Prepared conversation: {conversation}")
+         logger.info(f"Temp files: {temp_files}")
+
+         # Pick the appropriate model
+         model = map_model_name(request.model)
+         logger.info(f"Using model: {model}")
+
+         # Generate the response
+         logger.info("Sending request to Gemini...")
+         if temp_files:
+             # With files
+             response = await gemini_client.generate_content(conversation, files=temp_files, model=model)
+         else:
+             # Text only
+             response = await gemini_client.generate_content(conversation, model=model)
+
+         # Clean up temporary files
+         for temp_file in temp_files:
+             try:
+                 os.unlink(temp_file)
+             except Exception as e:
+                 logger.warning(f"Failed to delete temp file {temp_file}: {str(e)}")
+
+         # Extract the text reply
+         reply_text = ""
+         if hasattr(response, "text"):
+             reply_text = response.text
+         else:
+             reply_text = str(response)
+
+         logger.info(f"Response: {reply_text}")
+
+         if not reply_text or reply_text.strip() == "":
+             logger.warning("Empty response received from Gemini")
+             reply_text = "The server returned an empty response. Please check that your Gemini API credentials are valid."
+
+         # Build the response object
+         completion_id = f"chatcmpl-{uuid.uuid4()}"
+         created_time = int(time.time())
+
+         # Check whether the client requested a streaming response
+         if request.stream:
+             # Implement the streaming response
+             async def generate_stream():
+                 # Produce an SSE-formatted stream
+                 # First send the opening event
+                 data = {
+                     "id": completion_id,
+                     "object": "chat.completion.chunk",
+                     "created": created_time,
+                     "model": request.model,
+                     "choices": [{"index": 0, "delta": {"role": "assistant"}, "finish_reason": None}],
+                 }
+                 yield f"data: {json.dumps(data)}\n\n"
+
+                 # Simulated streaming: send the text one character at a time
+                 for char in reply_text:
+                     data = {
+                         "id": completion_id,
+                         "object": "chat.completion.chunk",
+                         "created": created_time,
+                         "model": request.model,
+                         "choices": [{"index": 0, "delta": {"content": char}, "finish_reason": None}],
+                     }
+                     yield f"data: {json.dumps(data)}\n\n"
+                     # Optional: short delay to mimic real streaming output
+                     await asyncio.sleep(0.01)
+
+                 # Send the closing event
+                 data = {
+                     "id": completion_id,
+                     "object": "chat.completion.chunk",
+                     "created": created_time,
+                     "model": request.model,
+                     "choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}],
+                 }
+                 yield f"data: {json.dumps(data)}\n\n"
+                 yield "data: [DONE]\n\n"
+
+             return StreamingResponse(generate_stream(), media_type="text/event-stream")
+         else:
+             # Non-streaming response (original logic)
+             result = {
+                 "id": completion_id,
+                 "object": "chat.completion",
+                 "created": created_time,
+                 "model": request.model,
+                 "choices": [{"index": 0, "message": {"role": "assistant", "content": reply_text}, "finish_reason": "stop"}],
+                 "usage": {
+                     "prompt_tokens": len(conversation.split()),
+                     "completion_tokens": len(reply_text.split()),
+                     "total_tokens": len(conversation.split()) + len(reply_text.split()),
+                 },
+             }
+
+             logger.info(f"Returning response: {result}")
+             return result
+
+     except Exception as e:
+         logger.error(f"Error generating completion: {str(e)}", exc_info=True)
+         raise HTTPException(status_code=500, detail=f"Error generating completion: {str(e)}")
+
+
+ @app.get("/")
+ async def root():
+     return {"status": "online", "message": "Gemini API FastAPI Server is running"}
+
+
+ if __name__ == "__main__":
+     import uvicorn
+
+     uvicorn.run("main:app", host="0.0.0.0", port=8000, log_level="info")
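A client consuming the streaming branch sees one `data: {...}` SSE line per character of content, then a `[DONE]` sentinel. Reassembling the text can be sketched like this (`collect_stream` and the sample lines are illustrative, not part of the server):

```python
import json

def collect_stream(sse_lines):
    """Reassemble reply text from chat.completion.chunk SSE lines."""
    text = ""
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":
            break
        chunk = json.loads(payload)
        # Each chunk carries at most one character in its delta
        text += chunk["choices"][0]["delta"].get("content", "")
    return text

sample = [
    'data: {"choices": [{"index": 0, "delta": {"role": "assistant"}, "finish_reason": null}]}',
    'data: {"choices": [{"index": 0, "delta": {"content": "H"}, "finish_reason": null}]}',
    'data: {"choices": [{"index": 0, "delta": {"content": "i"}, "finish_reason": null}]}',
    'data: {"choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]}',
    "data: [DONE]",
]
print(collect_stream(sample))  # Hi
```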