bibibi12345 committed on
Commit b687da5 · 0 Parent(s)

first commit

Files changed (8)
  1. .env.example +2 -0
  2. .gitignore +7 -0
  3. Dockerfile +26 -0
  4. README.md +58 -0
  5. docker-compose.yml +9 -0
  6. main.py +256 -0
  7. models.json +19 -0
  8. requirements.txt +7 -0
.env.example ADDED
@@ -0,0 +1,2 @@
+ FLOWITH_AUTH_TOKEN=YOUR_TOKEN_HERE
+ API_KEY=123456
.gitignore ADDED
@@ -0,0 +1,7 @@
+ __pycache__/
+ *.pyc
+ .env
+ venv/
+ *.log
+ .idea/
+ .vscode/
Dockerfile ADDED
@@ -0,0 +1,26 @@
+ # Use an official Python runtime as a parent image
+ FROM python:3.10-slim
+
+ # Set the working directory in the container
+ WORKDIR /app
+
+ # Copy the requirements file into the container at /app
+ COPY requirements.txt .
+
+ # Install any needed packages specified in requirements.txt
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Copy the rest of the application code into the container at /app
+ COPY . .
+
+ # Ensure models.json is copied
+ COPY models.json .
+
+ # Make port 7860 available to the world outside this container
+ EXPOSE 7860
+
+ # Define environment variable
+ ENV NAME=World
+
+ # Run main.py when the container launches
+ CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]
README.md ADDED
@@ -0,0 +1,58 @@
+ ---
+ title: OpenAI to Flowith Converter
+ emoji: 🔄
+ colorFrom: blue
+ colorTo: green
+ sdk: docker
+ app_port: 7860
+ env:
+ - FLOWITH_AUTH_TOKEN
+ - API_KEY
+ secrets:
+ - FLOWITH_AUTH_TOKEN
+ - API_KEY
+ ---
+
+ # OpenAI to Flowith Converter
+
+ This project provides a simple API endpoint that accepts OpenAI-compatible chat completion requests and forwards them to the Flowith API, translating the request and response formats as needed. It is designed to run easily with Docker, both locally and on Hugging Face Spaces.
+
+ ## Setup
+
+ 1. **Environment Variables:** Create a `.env` file in the project root by copying the example: `cp .env.example .env` (or create it manually).
+ 2. **Flowith Token:** Open the `.env` file and replace the placeholder with your actual Flowith authorization token:
+ ```
+ FLOWITH_AUTH_TOKEN=your_actual_token_here
+ ```
+ 3. **Model Mappings (Optional):** If you need to use different Flowith models or map OpenAI model names differently, update the [`models.json`](models.json) file.
+ 4. **API Key (Optional):** By default, the API uses the key `123456`. To use a different key, set the `API_KEY` environment variable in your `.env` file (for local runs) or as a secret named `API_KEY` on Hugging Face Spaces.
+
+ ## Running Locally (Docker)
+
+ To build and run the service locally using Docker Compose:
+
+ ```bash
+ docker-compose up --build
+ ```
+
+ The API will then be accessible at [`http://localhost:8099/v1/chat/completions`](http://localhost:8099/v1/chat/completions).
+
+ ## Running on Hugging Face Spaces
+
+ This repository is configured for deployment on Hugging Face Spaces using Docker.
+
+ 1. Create a new Space on Hugging Face, selecting "Docker" as the SDK.
+ 2. Link this repository to your Space.
+ 3. Navigate to your Space's "Settings" page.
+ 4. Go to the "Secrets" section.
+ 5. Add a secret named `FLOWITH_AUTH_TOKEN` with your actual Flowith authorization token as the value (and, if you changed the default key, an `API_KEY` secret). The application reads these secrets automatically.
+
+ The Space will build the Docker image and start the service. The API endpoint will be available at your Space's URL (e.g., `https://your-username-your-space-name.hf.space/v1/chat/completions`).
+
+ ## API Endpoint
+
+ * **URL:** `/v1/chat/completions`
+ * **Method:** `POST`
+ * **Request Body:** Send a JSON payload conforming to the OpenAI Chat Completions API schema (e.g., specifying `model`, `messages`, `stream`, etc.). The `model` field should correspond to a key in [`models.json`](models.json).
+ * **Authentication:** Requests must include an `Authorization` header with your API key, in the format `Bearer your_api_key`. With the default key, the header is `Authorization: Bearer 123456`.
+ * **Response:** The API returns either a standard JSON response or a server-sent event stream, mimicking OpenAI API behavior based on the `stream` parameter in the request.
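The endpoint above can be exercised with a small Python client. This is a sketch, assuming the service is running locally via Docker Compose on port 8099 with the default API key; `build_chat_request` is a hypothetical helper for illustration, not part of the repository:

```python
BASE_URL = "http://localhost:8099"  # docker-compose maps host 8099 -> container 7860
API_KEY = "123456"                  # default key; override via the API_KEY env var

def build_chat_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Build an OpenAI-style chat completion payload for the proxy."""
    return {
        "model": model,  # must be a key in models.json
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

payload = build_chat_request("gpt-4.1-mini", "Say hello in one sentence.")
headers = {"Authorization": f"Bearer {API_KEY}"}

# With the service running, httpx (already in requirements.txt) can send it:
# import httpx
# r = httpx.post(f"{BASE_URL}/v1/chat/completions", json=payload, headers=headers, timeout=300)
# print(r.status_code, r.text[:200])
```

Set `"stream": True` in the payload to receive a server-sent event stream instead of a single JSON body.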
docker-compose.yml ADDED
@@ -0,0 +1,9 @@
+ version: '3.8'
+
+ services:
+   api:
+     build: .
+     ports:
+       - "8099:7860"
+     env_file:
+       - .env
main.py ADDED
@@ -0,0 +1,224 @@
+ import hmac
+ import json
+ import os
+ import uuid
+ from typing import List, Optional, Literal, AsyncGenerator
+
+ import httpx
+ import dotenv
+ from fastapi import FastAPI, HTTPException, Request, Depends, status
+ from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
+ from fastapi.responses import StreamingResponse, JSONResponse, PlainTextResponse
+ from pydantic import BaseModel
+
+ # Load environment variables from .env file
+ dotenv.load_dotenv()
+
+ # --- Configuration ---
+ FLOWITH_AUTH_TOKEN = os.getenv("FLOWITH_AUTH_TOKEN")
+ FLOWITH_API_URL = "https://edge.flo.ing/completion?mode=general"
+ MODELS_FILE_PATH = "models.json"
+ API_KEY = os.getenv("API_KEY", "123456")  # Key clients must present
+
+ # --- Security Scheme ---
+ security = HTTPBearer()
+
+ if not FLOWITH_AUTH_TOKEN:
+     # The app can still start, but requests will fail with a 500 until
+     # the token is provided; warn at startup so the problem is visible.
+     print("Warning: FLOWITH_AUTH_TOKEN environment variable not set.")
+
+ # --- Load Model Mappings ---
+ try:
+     with open(MODELS_FILE_PATH, 'r') as f:
+         model_mappings = json.load(f)
+ except FileNotFoundError:
+     print(f"Error: Models file not found at {MODELS_FILE_PATH}")
+     model_mappings = {}
+ except json.JSONDecodeError:
+     print(f"Error: Invalid JSON in models file: {MODELS_FILE_PATH}")
+     model_mappings = {}
+
+ # --- Pydantic Models ---
+ class OpenAIMessage(BaseModel):
+     role: Literal["system", "user", "assistant"]
+     content: str
+
+ class OpenAIRequest(BaseModel):
+     model: str
+     messages: List[OpenAIMessage]
+     stream: Optional[bool] = False
+     # Add other OpenAI fields here if needed, e.g.:
+     # temperature: Optional[float] = None
+     # max_tokens: Optional[int] = None
+
+ class FlowithMessage(BaseModel):
+     role: Literal["user", "assistant"]  # Flowith only accepts these roles
+     content: str
+
+ class FlowithRequest(BaseModel):
+     model: str
+     messages: List[FlowithMessage]
+     stream: bool
+     nodeId: str  # UUID expected by Flowith
+
+ # --- FastAPI App ---
+ app = FastAPI(
+     title="OpenAI to Flowith Proxy",
+     description="Translates OpenAI-compatible chat completion requests to Flowith's format.",
+ )
+
+ # --- Security Dependency ---
+ async def verify_api_key(credentials: HTTPAuthorizationCredentials = Depends(security)) -> str:
+     """Verify the provided API key against the configured key."""
+     # Constant-time comparison to prevent timing attacks.
+     is_valid = hmac.compare_digest(credentials.credentials, API_KEY)
+     if credentials.scheme != "Bearer" or not is_valid:
+         raise HTTPException(
+             status_code=status.HTTP_401_UNAUTHORIZED,
+             detail="Invalid or missing API Key",
+             headers={"WWW-Authenticate": "Bearer"},
+         )
+     return credentials.credentials
+
+ # --- API Endpoint ---
+ @app.post("/v1/chat/completions")
+ async def chat_completions(
+     request: OpenAIRequest,
+     http_request: Request,
+     api_key: str = Depends(verify_api_key),
+ ):
+     """
+     Accepts OpenAI-like chat completion requests and forwards them to Flowith.
+     """
+     if not FLOWITH_AUTH_TOKEN:
+         raise HTTPException(status_code=500, detail="Server configuration error: Flowith auth token not set.")
+
+     # 1. Map the model
+     flowith_model_name = model_mappings.get(request.model)
+     if not flowith_model_name:
+         raise HTTPException(
+             status_code=400,
+             detail=f"Model '{request.model}' not found in mappings. Available: {list(model_mappings.keys())}"
+         )
+
+     # 2. Generate nodeId
+     node_id = str(uuid.uuid4())
+
+     # 3. Process messages. Flowith only accepts 'user' and 'assistant'
+     # roles, so system messages are converted to 'user'. For Claude/Gemini
+     # targets this is the intended behavior; for other models it may change
+     # semantics (alternatives: drop the message, or prepend it to the next
+     # user turn), so a warning is logged.
+     is_claude_or_gemini = "claude" in flowith_model_name.lower() or "gemini" in flowith_model_name.lower()
+     processed_messages: List[FlowithMessage] = []
+     for msg in request.messages:
+         role = msg.role
+         if role == "system":
+             role = "user"
+             if not is_claude_or_gemini:
+                 print(f"Warning: Converting system message to 'user' for model {flowith_model_name}")
+         processed_messages.append(FlowithMessage(role=role, content=msg.content))
+
+     # 4. Construct the Flowith request payload
+     flowith_payload = FlowithRequest(
+         model=flowith_model_name,
+         messages=processed_messages,
+         stream=True,  # Always stream from Flowith
+         nodeId=node_id,
+     )
+
+     # 5. Prepare headers for the Flowith request
+     # These mirror the browser request that Flowith expects.
+     headers = {
+         'accept': '*/*',
+         'accept-language': 'en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7,zh-TW;q=0.6,ja;q=0.5',
+         'authorization': f'Bearer {FLOWITH_AUTH_TOKEN}',
+         'content-type': 'application/json',
+         'origin': 'https://flowith.net',
+         'priority': 'u=1, i',
+         'referer': 'https://flowith.net/',
+         'responsetype': 'stream',
+         'sec-ch-ua': '"Google Chrome";v="135", "Not-A.Brand";v="8", "Chromium";v="135"',
+         'sec-ch-ua-mobile': '?0',
+         'sec-ch-ua-platform': '"Windows"',
+         'sec-fetch-dest': 'empty',
+         'sec-fetch-mode': 'cors',
+         'sec-fetch-site': 'cross-site',
+         'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/135.0.0.0 Safari/537.36'
+     }
+
+     # 6. Make the asynchronous request to Flowith. The client and response
+     # are created without 'async with' because, when the caller wants a
+     # streaming response, they must outlive this function; they are closed
+     # explicitly on every code path instead.
+     payload_bytes = json.dumps(flowith_payload.dict()).encode('utf-8')
+     client = httpx.AsyncClient(timeout=300.0)  # Generous timeout for long completions
+     try:
+         flowith_request = client.build_request(
+             "POST",
+             FLOWITH_API_URL,
+             content=payload_bytes,
+             headers=headers,
+         )
+         response = await client.send(flowith_request, stream=True)
+     except httpx.RequestError as exc:
+         await client.aclose()
+         print(f"Error requesting Flowith: {exc}")
+         raise HTTPException(status_code=503, detail=f"Error connecting to Flowith service: {exc}")
+
+     # Check the status code *before* attempting to read the stream body.
+     if response.status_code != 200:
+         try:
+             error_detail = await response.aread()
+             detail_msg = f"Flowith API Error ({response.status_code}): {error_detail.decode()}"
+         except Exception:
+             detail_msg = f"Flowith API Error ({response.status_code})"
+         finally:
+             await response.aclose()
+             await client.aclose()
+         raise HTTPException(status_code=response.status_code, detail=detail_msg)
+
+     # 7. Handle the Flowith response based on the *client's* stream preference.
+     if request.stream:
+         # Client wants streaming: forward chunks as they arrive, closing
+         # the upstream response and client when the stream ends.
+         async def stream_flowith_response() -> AsyncGenerator[bytes, None]:
+             try:
+                 async for chunk in response.aiter_bytes():
+                     yield chunk
+             finally:
+                 await response.aclose()
+                 await client.aclose()
+
+         return StreamingResponse(stream_flowith_response(), media_type="text/event-stream")
+
+     # Client wants non-streaming: accumulate the full response.
+     try:
+         full_response_bytes = await response.aread()
+     except Exception as e:
+         print(f"Error reading stream from Flowith: {e}")
+         raise HTTPException(status_code=502, detail=f"Error reading stream from Flowith: {e}")
+     finally:
+         await response.aclose()
+         await client.aclose()
+
+     full_response_text = full_response_bytes.decode('utf-8')
+
+     # Try to parse as JSON; fall back to plain text.
+     try:
+         response_data = json.loads(full_response_text)
+         return JSONResponse(content=response_data)
+     except json.JSONDecodeError:
+         print(f"Warning: Flowith response was not valid JSON. Returning as plain text. Content: {full_response_text[:200]}...")
+         return PlainTextResponse(content=full_response_text)
+
+ # --- Optional: root endpoint for health checks ---
+ @app.get("/")
+ async def root():
+     return {"message": "OpenAI to Flowith Proxy is running"}
+
+ # --- To run locally (for development) ---
+ # if __name__ == "__main__":
+ #     import uvicorn
+ #     uvicorn.run(app, host="0.0.0.0", port=8000)
models.json ADDED
@@ -0,0 +1,19 @@
+ {
+   "deepseek-chat": "deepseek-chat",
+   "deepseek-reasoner": "deepseek-reasoner",
+   "claude-4-opus": "claude-4-opus",
+   "claude-4-sonnet": "claude-4-sonnet",
+   "gemini-2.5-flash": "gemini-2.5-flash",
+   "gemini-2.5-pro-exp": "gemini-2.5-pro-exp",
+   "claude-3.5-haiku": "claude-3.5-haiku",
+   "claude-3.5-sonnet": "claude-3.5-sonnet",
+   "claude-3.7-sonnet": "claude-3.7-sonnet",
+   "gpt-4.1": "gpt-4.1",
+   "gpt-4.1-mini": "gpt-4.1-mini",
+   "o3": "o3",
+   "o4-mini": "o4-mini",
+   "grok-3-mini": "grok-3-mini",
+   "grok-3": "grok-3",
+   "glm-4-plus": "glm-4-plus",
+   "gemini-2.0-flash": "gemini-2.0-flash"
+ }
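Each key in `models.json` is the OpenAI-facing model name and each value is the name forwarded to Flowith (currently all identity mappings). As a sketch of the lookup `main.py` performs (`resolve_model` is a hypothetical helper; the real code does this inline and returns HTTP 400 on a miss):

```python
import json

# A subset of models.json for illustration; main.py loads the whole file once at startup.
model_mappings = json.loads('{"gpt-4.1-mini": "gpt-4.1-mini", "claude-3.5-sonnet": "claude-3.5-sonnet"}')

def resolve_model(requested: str) -> str:
    """Map an OpenAI-style model name to the Flowith model name."""
    flowith_name = model_mappings.get(requested)
    if flowith_name is None:
        # main.py raises HTTPException(400) listing the valid keys here
        raise ValueError(f"Model '{requested}' not found in mappings. Available: {list(model_mappings)}")
    return flowith_name
```

Renaming a key lets clients use a different OpenAI-style alias while the value sent to Flowith stays unchanged.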
requirements.txt ADDED
@@ -0,0 +1,7 @@
+ fastapi
+ uvicorn[standard]
+ pydantic
+ requests
+ python-dotenv
+ aiohttp
+ httpx