Matrix Agent committed on
Commit 8910367 · 1 Parent(s): c5b1f8b

Add frontend dashboard, comprehensive docs, and enhanced logging v3.1

Files changed (4)
  1. Dockerfile +2 -1
  2. README.md +244 -51
  3. app.py +39 -2
  4. static/index.html +323 -0
Dockerfile CHANGED
@@ -20,8 +20,9 @@ RUN mkdir -p /app/models && \
     curl -L -o /app/models/qwen2.5-coder-7b-instruct-q4_k_m.gguf \
    "https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/qwen2.5-coder-7b-instruct-q4_k_m.gguf"
 
-# Copy application code
+# Copy application code and static files
 COPY app.py .
+COPY static/ ./static/
 
 # Expose port
 EXPOSE 7860
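
The image now bundles the dashboard alongside the application code. A minimal local build-and-run sketch (the `anthropic-compat` tag is an arbitrary choice; port 7860 matches the `EXPOSE` line above, and the GGUF model is fetched at build time by the `RUN` step):

```bash
# Build the image; the model download happens during the build
docker build -t anthropic-compat .

# Run it and map the exposed port to localhost
docker run -p 7860:7860 anthropic-compat

# The dashboard should then be reachable at http://localhost:7860/
```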
README.md CHANGED
@@ -10,100 +10,293 @@ license: apache-2.0
 
 # Anthropic-Compatible API
 
-A lightweight, CPU-based API endpoint that provides **Anthropic Messages API compatibility** using the SmolLM2-135M model.
+A **production-ready, self-hosted API** that provides full **Anthropic Messages API compatibility** using the Qwen2.5-Coder-7B model with llama.cpp backend.
+
+> **Live Dashboard**: [https://likhonsheikh-anthropic-compatible-api.hf.space](https://likhonsheikh-anthropic-compatible-api.hf.space)
 
 ## Features
 
-✅ Full Anthropic Messages API compatibility
-✅ Streaming support (SSE)
-✅ Token counting endpoint
-✅ Ultra-lightweight (135M parameters)
-✅ CPU-optimized
-✅ No GPU required
+| Feature | Description |
+|---------|-------------|
+| **Full Anthropic API** | Complete Messages API compatibility |
+| **OpenAI API** | Dual compatibility with OpenAI Chat API |
+| **Streaming (SSE)** | Real-time token streaming |
+| **Tool Use** | Function calling / tool use support |
+| **Extended Thinking** | `<thinking>` block support for reasoning |
+| **Request Queue** | Concurrency control with priority |
+| **Prompt Caching** | LRU cache for system prompts |
+| **Multi-Model** | Hot-swap between models |
+| **Live Dashboard** | Built-in web UI with playground |
+| **Logs Viewer** | Real-time API logs |
 
-## API Endpoints
+---
 
-### Create Message
-```bash
-POST /v1/messages
-```
+## Quick Start
+
+### 1. Claude Code CLI
+
+The easiest way to use this API with Claude Code:
 
-### Example Request
 ```bash
-curl -X POST "https://YOUR_SPACE.hf.space/anthropic/v1/messages" \
-  -H "Content-Type: application/json" \
-  -H "x-api-key: your-api-key" \
-  -H "anthropic-version: 2023-06-01" \
-  -d '{
-    "model": "qwen2.5-coder-7b",
-    "max_tokens": 256,
-    "messages": [
-      {"role": "user", "content": "Hello, how are you?"}
-    ]
-  }'
+# Set environment variables
+export ANTHROPIC_API_KEY="any-key"
+export ANTHROPIC_BASE_URL="https://likhonsheikh-anthropic-compatible-api.hf.space/anthropic"
+
+# Run Claude Code
+claude "Write a Python script that reads a CSV file"
+
+# Or with explicit model
+claude --model qwen2.5-coder-7b "Explain this code"
 ```
 
-### Streaming Example
+**Persistent Configuration** (add to `~/.bashrc` or `~/.zshrc`):
+
 ```bash
-curl -X POST "https://YOUR_SPACE.hf.space/anthropic/v1/messages" \
-  -H "Content-Type: application/json" \
-  -d '{
-    "model": "qwen2.5-coder-7b",
-    "max_tokens": 256,
-    "stream": true,
-    "messages": [
-      {"role": "user", "content": "Tell me a short story"}
-    ]
-  }'
+# Anthropic-Compatible API Configuration
+export ANTHROPIC_API_KEY="any-key"
+export ANTHROPIC_BASE_URL="https://likhonsheikh-anthropic-compatible-api.hf.space/anthropic"
 ```
 
-## SDK Compatibility
+### 2. Python SDK
 
-### Python
 ```python
 import anthropic
 
 client = anthropic.Anthropic(
     api_key="any-key",
-    base_url="https://YOUR_SPACE.hf.space/anthropic"
+    base_url="https://likhonsheikh-anthropic-compatible-api.hf.space/anthropic"
 )
 
+# Basic message
 message = client.messages.create(
     model="qwen2.5-coder-7b",
-    max_tokens=256,
-    messages=[{"role": "user", "content": "Hello!"}]
+    max_tokens=1024,
+    messages=[{"role": "user", "content": "Hello! Write a hello world in Python."}]
 )
 print(message.content[0].text)
+
+# With system prompt
+message = client.messages.create(
+    model="qwen2.5-coder-7b",
+    max_tokens=1024,
+    system="You are a helpful coding assistant. Always include comments in your code.",
+    messages=[{"role": "user", "content": "Write a function to calculate factorial"}]
+)
+print(message.content[0].text)
+```
+
+### 3. Streaming Response
+
+```python
+import anthropic
+
+client = anthropic.Anthropic(
+    api_key="any-key",
+    base_url="https://likhonsheikh-anthropic-compatible-api.hf.space/anthropic"
+)
+
+with client.messages.stream(
+    model="qwen2.5-coder-7b",
+    max_tokens=1024,
+    messages=[{"role": "user", "content": "Write a detailed explanation of recursion"}]
+) as stream:
+    for text in stream.text_stream:
+        print(text, end="", flush=True)
+```
+
+### 4. Tool Use / Function Calling
+
+```python
+import anthropic
+import json
+
+client = anthropic.Anthropic(
+    api_key="any-key",
+    base_url="https://likhonsheikh-anthropic-compatible-api.hf.space/anthropic"
+)
+
+tools = [
+    {
+        "name": "get_weather",
+        "description": "Get the current weather for a location",
+        "input_schema": {
+            "type": "object",
+            "properties": {
+                "location": {"type": "string", "description": "City name"},
+                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
+            },
+            "required": ["location"]
+        }
+    }
+]
+
+message = client.messages.create(
+    model="qwen2.5-coder-7b",
+    max_tokens=1024,
+    tools=tools,
+    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}]
+)
+
+if message.stop_reason == "tool_use":
+    for block in message.content:
+        if block.type == "tool_use":
+            print(f"Tool: {block.name}")
+            print(f"Input: {json.dumps(block.input, indent=2)}")
+```
+
+### 5. Extended Thinking
+
+```python
+import anthropic
+
+client = anthropic.Anthropic(
+    api_key="any-key",
+    base_url="https://likhonsheikh-anthropic-compatible-api.hf.space/anthropic"
+)
+
+message = client.messages.create(
+    model="qwen2.5-coder-7b",
+    max_tokens=2048,
+    thinking={"type": "enabled", "budget_tokens": 1024},
+    messages=[{"role": "user", "content": "Solve step by step: What is 15% of 240?"}]
+)
+
+for block in message.content:
+    if block.type == "thinking":
+        print("=== THINKING ===")
+        print(block.thinking)
+    elif block.type == "text":
+        print("=== ANSWER ===")
+        print(block.text)
 ```
 
-### TypeScript/JavaScript
+### 6. TypeScript/JavaScript
+
 ```typescript
 import Anthropic from '@anthropic-ai/sdk';
 
 const client = new Anthropic({
   apiKey: 'any-key',
-  baseURL: 'https://YOUR_SPACE.hf.space/anthropic'
+  baseURL: 'https://likhonsheikh-anthropic-compatible-api.hf.space/anthropic'
 });
 
 const message = await client.messages.create({
   model: 'qwen2.5-coder-7b',
-  max_tokens: 256,
+  max_tokens: 1024,
   messages: [{ role: 'user', content: 'Hello!' }]
 });
+
 console.log(message.content[0].text);
 ```
 
+### 7. cURL
+
+```bash
+curl -X POST "https://likhonsheikh-anthropic-compatible-api.hf.space/anthropic/v1/messages" \
+  -H "Content-Type: application/json" \
+  -H "x-api-key: any-key" \
+  -H "anthropic-version: 2023-06-01" \
+  -d '{
+    "model": "qwen2.5-coder-7b",
+    "max_tokens": 256,
+    "messages": [{"role": "user", "content": "Hello!"}]
+  }'
+```
+
+### 8. OpenAI SDK (Alternative)
+
+```python
+from openai import OpenAI
+
+client = OpenAI(
+    api_key="any-key",
+    base_url="https://likhonsheikh-anthropic-compatible-api.hf.space/v1"
+)
+
+response = client.chat.completions.create(
+    model="qwen2.5-coder-7b",
+    messages=[{"role": "user", "content": "Hello!"}],
+    max_tokens=1024
+)
+print(response.choices[0].message.content)
+```
+
+---
+
+## API Reference
+
+### Endpoints
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `GET` | `/` | Dashboard with status & playground |
+| `GET` | `/health` | Health check with queue/cache stats |
+| `GET` | `/logs?lines=100` | View API logs |
+| `GET` | `/queue/status` | Request queue statistics |
+| `GET` | `/models/status` | Loaded models information |
+| `POST` | `/models/{id}/load` | Manually load a model |
+| `POST` | `/models/{id}/unload` | Unload a model |
+| `GET` | `/anthropic/v1/models` | List models (Anthropic format) |
+| `POST` | `/anthropic/v1/messages` | Create message (Anthropic API) |
+| `POST` | `/anthropic/v1/messages/count_tokens` | Count tokens |
+| `GET` | `/v1/models` | List models (OpenAI format) |
+| `POST` | `/v1/chat/completions` | Chat completion (OpenAI API) |
+
+### Request Format
+
+```json
+{
+  "model": "qwen2.5-coder-7b",
+  "max_tokens": 1024,
+  "messages": [{"role": "user", "content": "Hello!"}],
+  "system": "You are a helpful assistant.",
+  "temperature": 0.7,
+  "stream": false,
+  "tools": [...],
+  "thinking": {"type": "enabled", "budget_tokens": 1024}
+}
+```
+
+### Response Format
+
+```json
+{
+  "id": "msg_abc123",
+  "type": "message",
+  "role": "assistant",
+  "content": [{"type": "text", "text": "Hello!"}],
+  "model": "qwen2.5-coder-7b",
+  "stop_reason": "end_turn",
+  "usage": {"input_tokens": 10, "output_tokens": 25}
+}
+```
+
+---
+
 ## Model Info
 
-**Model**: Qwen2.5-Coder-7B-Instruct (Q4_K_M GGUF)
-**Parameters**: 7 Billion (quantized)
-**Backend**: llama.cpp
-**Optimized for**: Code, Tool reasoning, Agent workflows
-**Context Length**: 8K tokens
+| Property | Value |
+|----------|-------|
+| **Model** | Qwen2.5-Coder-7B-Instruct |
+| **Format** | GGUF (Q4_K_M quantization) |
+| **Parameters** | 7 Billion |
+| **Context Length** | 8,192 tokens |
+| **Backend** | llama.cpp |
+| **Optimized For** | Code, tool use, agent workflows |
+
+---
 
-## Rate Limits
+## Troubleshooting
 
-This is a free CPU-based endpoint. Please be mindful of usage.
+| Issue | Solution |
+|-------|----------|
+| Connection Timeout | Space may be sleeping. First request wakes it (~30s) |
+| 503 Queue Full | Too many requests. Retry in a few seconds |
+| Slow Response | CPU-based, expect ~10-30 tokens/second |
+| Tool Use Issues | Ensure valid JSON schema |
 
 ---
-Built with ❤️ by Matrix Agent
+
+## License
+
+Apache 2.0 | Built with llama.cpp + FastAPI by Matrix Agent
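
The Troubleshooting table above says to retry on `503 Queue Full`; one minimal way to do that is curl's built-in retry handling, which treats HTTP 503 as a transient error once `--retry` is given:

```bash
# Retry up to 5 times (with backoff), giving up after 120 seconds total
curl --retry 5 --retry-max-time 120 \
  -X POST "https://likhonsheikh-anthropic-compatible-api.hf.space/anthropic/v1/messages" \
  -H "Content-Type: application/json" \
  -H "x-api-key: any-key" \
  -H "anthropic-version: 2023-06-01" \
  -d '{"model": "qwen2.5-coder-7b", "max_tokens": 256, "messages": [{"role": "user", "content": "Hello!"}]}'
```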
app.py CHANGED
@@ -23,8 +23,9 @@ from collections import OrderedDict
 from dataclasses import dataclass, field
 
 from fastapi import FastAPI, HTTPException, Header, Request, BackgroundTasks
-from fastapi.responses import StreamingResponse, JSONResponse
+from fastapi.responses import StreamingResponse, JSONResponse, HTMLResponse, FileResponse
 from fastapi.middleware.cors import CORSMiddleware
+from fastapi.staticfiles import StaticFiles
 from pydantic import BaseModel, Field
 from llama_cpp import Llama
 
@@ -752,10 +753,46 @@ def parse_tool_use(text: str) -> Optional[Dict[str, Any]]:
 def generate_id(prefix: str = "msg") -> str:
     return f"{prefix}_{uuid.uuid4().hex[:24]}"
 
+# ============== STATIC FILES ==============
+STATIC_DIR = os.path.join(os.path.dirname(__file__), "static")
+if os.path.exists(STATIC_DIR):
+    app.mount("/static", StaticFiles(directory=STATIC_DIR), name="static")
+    logger.info(f"Static files mounted from {STATIC_DIR}")
+
 # ============== ROOT ENDPOINTS ==============
 
-@app.get("/")
+@app.get("/", response_class=HTMLResponse)
 async def root():
+    """Serve the dashboard or API status"""
+    static_file = os.path.join(STATIC_DIR, "index.html")
+    if os.path.exists(static_file):
+        return FileResponse(static_file, media_type="text/html")
+    # Fallback to JSON if no static file
+    return JSONResponse({
+        "status": "healthy",
+        "version": "3.0.0",
+        "backend": "llama.cpp",
+        "features": [
+            "request-queue",
+            "prompt-caching",
+            "multi-model",
+            "extended-thinking",
+            "streaming",
+            "tool-use",
+            "dual-compatibility"
+        ],
+        "endpoints": {
+            "openai": "/v1/chat/completions",
+            "anthropic": "/anthropic/v1/messages"
+        },
+        "models": model_manager.list_models(),
+        "queue": request_queue.get_status(),
+        "cache": prompt_cache.get_stats()
+    })
+
+@app.get("/api/status")
+async def api_status():
+    """API status as JSON (for dashboard AJAX calls)"""
    return {
        "status": "healthy",
        "version": "3.0.0",
static/index.html ADDED
@@ -0,0 +1,323 @@
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <title>Anthropic-Compatible API Dashboard</title>
+    <script src="https://cdn.tailwindcss.com"></script>
+    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/github-dark.min.css">
+    <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js"></script>
+    <style>
+        .gradient-bg { background: linear-gradient(135deg, #667eea 0%, #764ba2 100%); }
+        .card { background: rgba(255,255,255,0.95); backdrop-filter: blur(10px); }
+        pre code { font-size: 0.85rem !important; }
+        .status-dot { animation: pulse 2s infinite; }
+        @keyframes pulse { 0%, 100% { opacity: 1; } 50% { opacity: 0.5; } }
+    </style>
+</head>
+<body class="min-h-screen gradient-bg">
+    <div class="container mx-auto px-4 py-8">
+        <!-- Header -->
+        <div class="text-center mb-8">
+            <h1 class="text-4xl font-bold text-white mb-2">Anthropic-Compatible API</h1>
+            <p class="text-purple-100 text-lg">Self-hosted Claude-compatible endpoint powered by Qwen2.5-Coder-7B</p>
+        </div>
+
+        <!-- Status Cards -->
+        <div class="grid md:grid-cols-3 gap-6 mb-8">
+            <div class="card rounded-xl p-6 shadow-lg">
+                <div class="flex items-center justify-between mb-4">
+                    <h3 class="text-lg font-semibold text-gray-800">API Status</h3>
+                    <span id="status-indicator" class="status-dot w-3 h-3 bg-yellow-400 rounded-full"></span>
+                </div>
+                <div id="api-status" class="text-sm text-gray-600">Checking...</div>
+            </div>
+            <div class="card rounded-xl p-6 shadow-lg">
+                <div class="flex items-center justify-between mb-4">
+                    <h3 class="text-lg font-semibold text-gray-800">Queue Status</h3>
+                    <svg class="w-5 h-5 text-purple-500" fill="none" stroke="currentColor" viewBox="0 0 24 24">
+                        <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M4 6h16M4 12h16M4 18h16"/>
+                    </svg>
+                </div>
+                <div id="queue-status" class="text-sm text-gray-600">Loading...</div>
+            </div>
+            <div class="card rounded-xl p-6 shadow-lg">
+                <div class="flex items-center justify-between mb-4">
+                    <h3 class="text-lg font-semibold text-gray-800">Cache Stats</h3>
+                    <svg class="w-5 h-5 text-purple-500" fill="none" stroke="currentColor" viewBox="0 0 24 24">
+                        <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M9 12l2 2 4-4m5.618-4.016A11.955 11.955 0 0112 2.944a11.955 11.955 0 01-8.618 3.04A12.02 12.02 0 003 9c0 5.591 3.824 10.29 9 11.622 5.176-1.332 9-6.03 9-11.622 0-1.042-.133-2.052-.382-3.016z"/>
+                    </svg>
+                </div>
+                <div id="cache-status" class="text-sm text-gray-600">Loading...</div>
+            </div>
+        </div>
+
+        <!-- Models Section -->
+        <div class="card rounded-xl p-6 shadow-lg mb-8">
+            <h3 class="text-xl font-bold text-gray-800 mb-4">Available Models</h3>
+            <div id="models-list" class="space-y-3">Loading models...</div>
+        </div>
+
+        <!-- Quick Start Guide -->
+        <div class="card rounded-xl p-6 shadow-lg mb-8">
+            <h3 class="text-xl font-bold text-gray-800 mb-4">Quick Start</h3>
+
+            <div class="space-y-6">
+                <!-- Claude Code -->
+                <div>
+                    <h4 class="font-semibold text-purple-700 mb-2 flex items-center gap-2">
+                        <svg class="w-5 h-5" fill="currentColor" viewBox="0 0 24 24"><path d="M12 2L2 7l10 5 10-5-10-5zM2 17l10 5 10-5M2 12l10 5 10-5"/></svg>
+                        Claude Code CLI
+                    </h4>
+                    <pre><code class="language-bash"># Set environment variables
+export ANTHROPIC_API_KEY="any-key"
+export ANTHROPIC_BASE_URL="https://likhonsheikh-anthropic-compatible-api.hf.space/anthropic"
+
+# Run Claude Code with custom model
+claude --model qwen2.5-coder-7b "Write a hello world in Python"</code></pre>
+                </div>
+
+                <!-- Python SDK -->
+                <div>
+                    <h4 class="font-semibold text-purple-700 mb-2 flex items-center gap-2">
+                        <svg class="w-5 h-5" fill="currentColor" viewBox="0 0 24 24"><path d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2z"/></svg>
+                        Python SDK
+                    </h4>
+                    <pre><code class="language-python">import anthropic
+
+client = anthropic.Anthropic(
+    api_key="any-key",
+    base_url="https://likhonsheikh-anthropic-compatible-api.hf.space/anthropic"
+)
+
+message = client.messages.create(
+    model="qwen2.5-coder-7b",
+    max_tokens=1024,
+    messages=[{"role": "user", "content": "Hello!"}]
+)
+print(message.content[0].text)</code></pre>
+                </div>
+
+                <!-- cURL -->
+                <div>
+                    <h4 class="font-semibold text-purple-700 mb-2 flex items-center gap-2">
+                        <svg class="w-5 h-5" fill="currentColor" viewBox="0 0 24 24"><path d="M8 5v14l11-7z"/></svg>
+                        cURL
+                    </h4>
+                    <pre><code class="language-bash">curl -X POST "https://likhonsheikh-anthropic-compatible-api.hf.space/anthropic/v1/messages" \
+  -H "Content-Type: application/json" \
+  -H "x-api-key: any-key" \
+  -H "anthropic-version: 2023-06-01" \
+  -d '{
+    "model": "qwen2.5-coder-7b",
+    "max_tokens": 256,
+    "messages": [{"role": "user", "content": "Hello!"}]
+  }'</code></pre>
+                </div>
+            </div>
+        </div>
+
+        <!-- Interactive Playground -->
+        <div class="card rounded-xl p-6 shadow-lg mb-8">
+            <h3 class="text-xl font-bold text-gray-800 mb-4">Try it Now</h3>
+            <div class="space-y-4">
+                <textarea id="prompt-input" class="w-full p-4 border border-gray-200 rounded-lg focus:ring-2 focus:ring-purple-500 focus:border-transparent" rows="3" placeholder="Enter your prompt here...">Hello! Can you write a simple Python function that calculates factorial?</textarea>
+                <div class="flex gap-4">
+                    <button onclick="sendMessage()" class="bg-purple-600 hover:bg-purple-700 text-white font-semibold py-2 px-6 rounded-lg transition">
+                        Send Message
+                    </button>
+                    <button onclick="sendStreamingMessage()" class="bg-indigo-600 hover:bg-indigo-700 text-white font-semibold py-2 px-6 rounded-lg transition">
+                        Stream Response
+                    </button>
+                </div>
+                <div id="response-output" class="bg-gray-50 rounded-lg p-4 min-h-[100px] text-sm font-mono whitespace-pre-wrap hidden"></div>
+            </div>
+        </div>
+
+        <!-- Logs Viewer -->
+        <div class="card rounded-xl p-6 shadow-lg mb-8">
+            <div class="flex items-center justify-between mb-4">
+                <h3 class="text-xl font-bold text-gray-800">Live Logs</h3>
+                <button onclick="fetchLogs()" class="text-purple-600 hover:text-purple-800 text-sm font-medium">Refresh</button>
+            </div>
+            <div id="logs-output" class="bg-gray-900 text-green-400 rounded-lg p-4 h-64 overflow-y-auto text-xs font-mono">
+                Click "Refresh" to load logs...
+            </div>
+        </div>
+
+        <!-- API Endpoints Reference -->
+        <div class="card rounded-xl p-6 shadow-lg">
+            <h3 class="text-xl font-bold text-gray-800 mb-4">API Endpoints</h3>
+            <div class="overflow-x-auto">
+                <table class="w-full text-sm">
+                    <thead class="bg-gray-50">
+                        <tr>
+                            <th class="px-4 py-3 text-left font-semibold text-gray-600">Method</th>
+                            <th class="px-4 py-3 text-left font-semibold text-gray-600">Endpoint</th>
+                            <th class="px-4 py-3 text-left font-semibold text-gray-600">Description</th>
+                        </tr>
+                    </thead>
+                    <tbody class="divide-y divide-gray-100">
+                        <tr><td class="px-4 py-3"><span class="bg-green-100 text-green-700 px-2 py-1 rounded text-xs font-semibold">GET</span></td><td class="px-4 py-3 font-mono text-purple-600">/</td><td class="px-4 py-3 text-gray-600">Health check with full status</td></tr>
+                        <tr><td class="px-4 py-3"><span class="bg-green-100 text-green-700 px-2 py-1 rounded text-xs font-semibold">GET</span></td><td class="px-4 py-3 font-mono text-purple-600">/health</td><td class="px-4 py-3 text-gray-600">Simple health check</td></tr>
+                        <tr><td class="px-4 py-3"><span class="bg-green-100 text-green-700 px-2 py-1 rounded text-xs font-semibold">GET</span></td><td class="px-4 py-3 font-mono text-purple-600">/logs</td><td class="px-4 py-3 text-gray-600">View API logs</td></tr>
+                        <tr><td class="px-4 py-3"><span class="bg-green-100 text-green-700 px-2 py-1 rounded text-xs font-semibold">GET</span></td><td class="px-4 py-3 font-mono text-purple-600">/queue/status</td><td class="px-4 py-3 text-gray-600">Request queue statistics</td></tr>
+                        <tr><td class="px-4 py-3"><span class="bg-green-100 text-green-700 px-2 py-1 rounded text-xs font-semibold">GET</span></td><td class="px-4 py-3 font-mono text-purple-600">/models/status</td><td class="px-4 py-3 text-gray-600">Loaded models info</td></tr>
+                        <tr><td class="px-4 py-3"><span class="bg-blue-100 text-blue-700 px-2 py-1 rounded text-xs font-semibold">POST</span></td><td class="px-4 py-3 font-mono text-purple-600">/anthropic/v1/messages</td><td class="px-4 py-3 text-gray-600">Anthropic Messages API</td></tr>
+                        <tr><td class="px-4 py-3"><span class="bg-blue-100 text-blue-700 px-2 py-1 rounded text-xs font-semibold">POST</span></td><td class="px-4 py-3 font-mono text-purple-600">/v1/chat/completions</td><td class="px-4 py-3 text-gray-600">OpenAI Chat API</td></tr>
+                        <tr><td class="px-4 py-3"><span class="bg-green-100 text-green-700 px-2 py-1 rounded text-xs font-semibold">GET</span></td><td class="px-4 py-3 font-mono text-purple-600">/anthropic/v1/models</td><td class="px-4 py-3 text-gray-600">List available models</td></tr>
+                    </tbody>
+                </table>
+            </div>
+        </div>
+
+        <!-- Footer -->
+        <div class="text-center mt-8 text-purple-100">
+            <p>Built with llama.cpp + FastAPI | Model: Qwen2.5-Coder-7B-Instruct (Q4_K_M)</p>
+            <p class="mt-2 text-sm opacity-75">Open source and self-hostable</p>
+        </div>
+    </div>
+
+    <script>
+        hljs.highlightAll();
+
+        const BASE_URL = window.location.origin;
+
+        async function fetchStatus() {
+            try {
+                const res = await fetch(BASE_URL + '/api/status');
+                const data = await res.json();
+
+                document.getElementById('status-indicator').className = 'status-dot w-3 h-3 bg-green-400 rounded-full';
+                document.getElementById('api-status').innerHTML = `
+                    <div class="font-semibold text-green-600">Online</div>
+                    <div class="text-xs text-gray-500 mt-1">Version: ${data.version || 'N/A'}</div>
+                `;
+
+                if (data.queue) {
+                    document.getElementById('queue-status').innerHTML = `
+                        <div>Active: <span class="font-semibold">${data.queue.active_requests || 0}</span></div>
+                        <div>Queue: <span class="font-semibold">${data.queue.queue_length || 0}</span></div>
+                        <div class="text-xs text-gray-500 mt-1">Total: ${data.queue.stats?.total_requests || 0}</div>
+                    `;
+                }
+
+                if (data.cache) {
+                    document.getElementById('cache-status').innerHTML = `
+                        <div>Hit Rate: <span class="font-semibold">${data.cache.hit_rate || '0%'}</span></div>
+                        <div>Size: <span class="font-semibold">${data.cache.size || 0}/${data.cache.max_size || 10}</span></div>
+                    `;
+                }
+
+                if (data.models) {
+                    const modelsHtml = data.models.map(m => `
+                        <div class="flex items-center justify-between p-3 bg-gray-50 rounded-lg">
+                            <div>
+                                <span class="font-semibold">${m.id}</span>
+                                <span class="text-sm text-gray-500 ml-2">${m.size} (${m.quantization})</span>
+                            </div>
+                            <div class="flex items-center gap-2">
+                                ${m.loaded ? '<span class="bg-green-100 text-green-700 px-2 py-1 rounded text-xs">Loaded</span>' : '<span class="bg-gray-100 text-gray-600 px-2 py-1 rounded text-xs">Available</span>'}
+                                ${m.default ? '<span class="bg-purple-100 text-purple-700 px-2 py-1 rounded text-xs">Default</span>' : ''}
+                            </div>
+                        </div>
+                    `).join('');
+                    document.getElementById('models-list').innerHTML = modelsHtml;
+                }
+            } catch (e) {
+                document.getElementById('status-indicator').className = 'status-dot w-3 h-3 bg-red-400 rounded-full';
+                document.getElementById('api-status').innerHTML = '<span class="text-red-600">Offline or Building</span>';
+            }
+        }
+
+        async function sendMessage() {
+            const prompt = document.getElementById('prompt-input').value;
+            const output = document.getElementById('response-output');
+            output.classList.remove('hidden');
+            output.textContent = 'Sending...';
+
+            try {
+                const res = await fetch(BASE_URL + '/anthropic/v1/messages', {
+                    method: 'POST',
+                    headers: {
+                        'Content-Type': 'application/json',
+                        'x-api-key': 'test-key',
+                        'anthropic-version': '2023-06-01'
+                    },
+                    body: JSON.stringify({
+                        model: 'qwen2.5-coder-7b',
+                        max_tokens: 1024,
+                        messages: [{ role: 'user', content: prompt }]
+                    })
+                });
+                const data = await res.json();
+                output.textContent = data.content?.[0]?.text || JSON.stringify(data, null, 2);
+            } catch (e) {
+                output.textContent = 'Error: ' + e.message;
+            }
+        }
+
+        async function sendStreamingMessage() {
+            const prompt = document.getElementById('prompt-input').value;
+            const output = document.getElementById('response-output');
+            output.classList.remove('hidden');
+            output.textContent = '';
+
+            try {
+                const res = await fetch(BASE_URL + '/anthropic/v1/messages', {
+                    method: 'POST',
+                    headers: {
+                        'Content-Type': 'application/json',
+                        'x-api-key': 'test-key',
+                        'anthropic-version': '2023-06-01'
+                    },
+                    body: JSON.stringify({
+                        model: 'qwen2.5-coder-7b',
+                        max_tokens: 1024,
+                        stream: true,
+                        messages: [{ role: 'user', content: prompt }]
+                    })
+                });
+
+                const reader = res.body.getReader();
+                const decoder = new TextDecoder();
+
+                while (true) {
+                    const { done, value } = await reader.read();
+                    if (done) break;
+
+                    const chunk = decoder.decode(value);
+                    const lines = chunk.split('\n');
+
+                    for (const line of lines) {
+                        if (line.startsWith('data: ')) {
+                            try {
+                                const data = JSON.parse(line.slice(6));
+                                if (data.delta?.text) {
+                                    output.textContent += data.delta.text;
+                                }
+                            } catch {}
+                        }
+                    }
+                }
+            } catch (e) {
+                output.textContent = 'Error: ' + e.message;
+            }
+        }
+
+        async function fetchLogs() {
+            try {
+                const res = await fetch(BASE_URL + '/logs?lines=50');
+                const data = await res.json();
+                document.getElementById('logs-output').textContent = data.logs || 'No logs available';
+            } catch (e) {
+                document.getElementById('logs-output').textContent = 'Error loading logs: ' + e.message;
+            }
+        }
+
+        // Initial fetch
+        fetchStatus();
+        setInterval(fetchStatus, 30000);
+    </script>
+</body>
+</html>
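
The dashboard's `sendStreamingMessage` parses `data:` lines from the SSE stream by hand, so the same stream can be inspected from a terminal. A quick sketch (`-N` disables curl's output buffering so events print as they arrive):

```bash
curl -N -X POST "https://likhonsheikh-anthropic-compatible-api.hf.space/anthropic/v1/messages" \
  -H "Content-Type: application/json" \
  -H "x-api-key: any-key" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "qwen2.5-coder-7b",
    "max_tokens": 256,
    "stream": true,
    "messages": [{"role": "user", "content": "Count to five"}]
  }'
```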