amirkabiri committed on
Commit b0c801d · 1 Parent(s): 690837a
Files changed (1):
  1. README.md +107 -442
README.md CHANGED
@@ -1,527 +1,192 @@
- # Duck.ai OpenAI-Compatible Server
-
- A high-performance HTTP server built with Bun that provides OpenAI-compatible API endpoints using Duck.ai as the backend. This allows you to use any OpenAI SDK or tool with Duck.ai's free AI models.
-
- ## 🚀 Features
-
- - **OpenAI API Compatible**: Drop-in replacement for OpenAI API
- - **Multiple Models**: Support for GPT-4o-mini, Claude-3-Haiku, Llama, Mistral, and more
- - **Function Calling/Tools**: Full support for OpenAI-compatible function calling
- - **Streaming Support**: Real-time streaming responses (including with tools)
- - **Built with Bun**: Ultra-fast TypeScript runtime
- - **CORS Enabled**: Ready for web applications
- - **Comprehensive Testing**: Full test suite ensuring compatibility
-
- ## 📋 Available Models
-
- - `gpt-4o-mini`
  - `o3-mini`
  - `claude-3-haiku-20240307`
  - `meta-llama/Llama-3.3-70B-Instruct-Turbo`
- - `mistralai/Mistral-Small-24B-Instruct-2501`
-
- ## 🛠️ Installation
-
- 1. **Install Bun** (if not already installed):
- ```bash
- curl -fsSL https://bun.sh/install | bash
- ```
-
- 2. **Clone and setup the project**:
- ```bash
- git clone <your-repo>
- cd duckai-openai-server
- bun install
- ```
-
- 3. **Start the server**:
- ```bash
- bun run dev
- ```
-
- The server will start on `http://localhost:3000` by default.
-
- ## 🔧 Usage
-
- ### Basic cURL Example
-
  ```bash
- curl -X POST http://localhost:3000/v1/chat/completions \
-   -H "Content-Type: application/json" \
-   -d '{
-     "model": "gpt-4o-mini",
-     "messages": [
-       {"role": "user", "content": "Hello, how are you?"}
-     ]
-   }'
  ```
-
- ### Streaming Example
-
  ```bash
- curl -X POST http://localhost:3000/v1/chat/completions \
-   -H "Content-Type: application/json" \
-   -d '{
-     "model": "gpt-4o-mini",
-     "messages": [
-       {"role": "user", "content": "Count from 1 to 10"}
-     ],
-     "stream": true
-   }'
  ```
-
- ### Using with OpenAI SDK (Python)
-
- ```python
- from openai import OpenAI
-
- client = OpenAI(
-     base_url="http://localhost:3000/v1",
-     api_key="dummy-key"  # Not required, but SDK expects it
- )
-
- # Non-streaming
- response = client.chat.completions.create(
-     model="gpt-4o-mini",
-     messages=[
-         {"role": "user", "content": "Hello!"}
-     ]
- )
- print(response.choices[0].message.content)
-
- # Streaming
- stream = client.chat.completions.create(
-     model="gpt-4o-mini",
-     messages=[
-         {"role": "user", "content": "Tell me a story"}
-     ],
-     stream=True
- )
-
- for chunk in stream:
-     if chunk.choices[0].delta.content is not None:
-         print(chunk.choices[0].delta.content, end="")
- ```
-
- ### Using with OpenAI SDK (Node.js)
-
  ```javascript
- import OpenAI from 'openai';
-
  const openai = new OpenAI({
-   baseURL: 'http://localhost:3000/v1',
-   apiKey: 'dummy-key', // Not required, but SDK expects it
  });

- // Non-streaming
  const completion = await openai.chat.completions.create({
-   model: 'gpt-4o-mini',
    messages: [
-     { role: 'user', content: 'Hello!' }
    ],
  });

  console.log(completion.choices[0].message.content);
-
- // Streaming
- const stream = await openai.chat.completions.create({
-   model: 'gpt-4o-mini',
-   messages: [
-     { role: 'user', content: 'Tell me a story' }
-   ],
-   stream: true,
- });
-
- for await (const chunk of stream) {
-   process.stdout.write(chunk.choices[0]?.delta?.content || '');
- }
  ```
-
- ### Function Calling (Tools)
-
  ```javascript
- import OpenAI from 'openai';
-
- const openai = new OpenAI({
-   baseURL: 'http://localhost:3000/v1',
-   apiKey: 'dummy-key',
- });

  const completion = await openai.chat.completions.create({
-   model: 'gpt-4o-mini',
    messages: [
-     { role: 'user', content: 'What time is it?' }
    ],
-   tools: [
-     {
-       type: 'function',
-       function: {
-         name: 'get_current_time',
-         description: 'Get the current time'
-       }
-     }
-   ]
  });

- // Handle function calls
- if (completion.choices[0].finish_reason === 'tool_calls') {
-   const toolCalls = completion.choices[0].message.tool_calls;
-   console.log('Function calls:', toolCalls);
- }
- ```
-
- ## 🌐 API Endpoints
-
- ### `GET /health`
- Health check endpoint.
-
- **Response:**
- ```json
- {"status": "ok"}
- ```
-
- ### `GET /v1/models`
- List available models.
-
- **Response:**
- ```json
- {
-   "object": "list",
-   "data": [
-     {
-       "id": "gpt-4o-mini",
-       "object": "model",
-       "created": 1640995200,
-       "owned_by": "duckai"
-     }
-   ]
- }
  ```

- ### `POST /v1/chat/completions`
- Create chat completions (OpenAI compatible).
-
- **Request Body:**
- ```json
- {
-   "model": "gpt-4o-mini",
-   "messages": [
-     {"role": "user", "content": "Hello!"}
-   ],
-   "stream": false,
-   "temperature": 0.7,
-   "max_tokens": 150,
-   "tools": [
-     {
-       "type": "function",
-       "function": {
-         "name": "get_weather",
-         "description": "Get weather for a location",
-         "parameters": {
-           "type": "object",
-           "properties": {
-             "location": {
-               "type": "string",
-               "description": "City name"
-             }
-           },
-           "required": ["location"]
-         }
-       }
-     }
-   ],
-   "tool_choice": "auto"
- }
- ```

- **Response (Non-streaming):**
- ```json
- {
-   "id": "chatcmpl-abc123",
-   "object": "chat.completion",
-   "created": 1640995200,
-   "model": "gpt-4o-mini",
-   "choices": [
-     {
-       "index": 0,
-       "message": {
-         "role": "assistant",
-         "content": "Hello! How can I help you today?"
-       },
-       "finish_reason": "stop"
-     }
    ],
-   "usage": {
-     "prompt_tokens": 10,
-     "completion_tokens": 20,
-     "total_tokens": 30
-   }
- }
- ```
-
- **Response (Streaming):**
- ```
- data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk",...}
-
- data: [DONE]
- ```

- **Response (Function Call):**
- ```json
- {
-   "id": "chatcmpl-abc123",
-   "object": "chat.completion",
-   "created": 1640995200,
-   "model": "gpt-4o-mini",
-   "choices": [
-     {
-       "index": 0,
-       "message": {
-         "role": "assistant",
-         "content": null,
-         "tool_calls": [
-           {
-             "id": "call_1",
-             "type": "function",
-             "function": {
-               "name": "get_weather",
-               "arguments": "{\"location\": \"San Francisco\"}"
-             }
-           }
-         ]
-       },
-       "finish_reason": "tool_calls"
-     }
-   ],
-   "usage": {
-     "prompt_tokens": 25,
-     "completion_tokens": 15,
-     "total_tokens": 40
    }
  }
  ```

- ## 🛠️ Function Calling (Tools)
-
- This server implements OpenAI-compatible function calling using a "prompt engineering trick" since Duck.ai doesn't natively support tools. The system works by:
-
- 1. **Converting tools to system prompts**: Tool definitions are converted into detailed instructions for the AI
- 2. **Parsing AI responses**: The AI is instructed to respond with JSON when it needs to call functions
- 3. **Executing functions**: Built-in and custom functions can be executed
- 4. **Returning results**: Function results are formatted as OpenAI-compatible responses
-
- ### Built-in Functions
-
- The server comes with several built-in functions:
-
- - `get_current_time`: Returns the current timestamp
- - `calculate`: Performs mathematical calculations
- - `get_weather`: Returns mock weather data (for demonstration)
-
- ### Custom Functions
-
- You can register custom functions programmatically:
-
- ```javascript
- // In your server setup
- import { OpenAIService } from './src/openai-service';
-
- const service = new OpenAIService();
- service.registerFunction('my_function', (args) => {
-   return `Hello ${args.name}!`;
- });
- ```
-
- ### Tool Choice Options
-
- - `"auto"` (default): AI decides whether to call functions
- - `"none"`: AI will not call any functions
- - `"required"`: AI must call at least one function
- - `{"type": "function", "function": {"name": "specific_function"}}`: AI must call the specified function
-
- ### Example: Multi-turn Conversation with Tools

  ```bash
  curl -X POST http://localhost:3000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "gpt-4o-mini",
      "messages": [
-       {"role": "user", "content": "What time is it?"},
-       {
-         "role": "assistant",
-         "content": null,
-         "tool_calls": [
-           {
-             "id": "call_1",
-             "type": "function",
-             "function": {
-               "name": "get_current_time",
-               "arguments": "{}"
-             }
-           }
-         ]
-       },
-       {
-         "role": "tool",
-         "content": "2024-01-15T10:30:00Z",
-         "tool_call_id": "call_1"
-       },
-       {"role": "user", "content": "Thanks! Now calculate 15 + 27"}
-     ],
-     "tools": [
-       {
-         "type": "function",
-         "function": {
-           "name": "get_current_time",
-           "description": "Get the current time"
-         }
-       },
-       {
-         "type": "function",
-         "function": {
-           "name": "calculate",
-           "description": "Perform mathematical calculations",
-           "parameters": {
-             "type": "object",
-             "properties": {
-               "expression": {
-                 "type": "string",
-                 "description": "Mathematical expression to evaluate"
-               }
-             },
-             "required": ["expression"]
-           }
-         }
-       }
      ]
    }'
  ```

- ## 🧪 Testing
-
- ### Run All Tests
- ```bash
- bun test
- ```
-
- ### Run Specific Test Suites
- ```bash
- # Run server API tests
- bun test tests/server.test.ts
-
- # Run OpenAI JavaScript library compatibility tests
- bun run test:openai
-
- # Run comprehensive OpenAI library tests (more extensive)
- bun run test:openai-full
-
- # Run all core tests together
- bun run test:all
- ```
-
- ### Run Manual OpenAI SDK Compatibility Test
- ```bash
- # Start the server first
- bun run dev
-
- # In another terminal, run the manual compatibility test
- bun run tests/openai-sdk-test.ts
- ```
-
- ### Manual Testing with Different Tools
-
- **Test with HTTPie:**
- ```bash
- http POST localhost:3000/v1/chat/completions \
-   model=gpt-4o-mini \
-   messages:='[{"role":"user","content":"Hello!"}]'
- ```
-
- **Test with Postman:**
- - URL: `http://localhost:3000/v1/chat/completions`
- - Method: POST
- - Headers: `Content-Type: application/json`
- - Body: Raw JSON with the request format above

- ## 🔧 Configuration

  ### Environment Variables

- - `PORT`: Server port (default: 3000)
-
- ### Custom Model Selection
-
- You can specify any of the available models in your requests:
-
- ```json
- {
-   "model": "claude-3-haiku-20240307",
-   "messages": [{"role": "user", "content": "Hello!"}]
- }
- ```
-
- ## 🏗️ Development
-
- ### Project Structure
- ```
- src/
- ├── types.ts            # TypeScript type definitions
- ├── duckai.ts           # Duck.ai integration
- ├── openai-service.ts   # OpenAI compatibility layer
- └── server.ts           # Main HTTP server
-
- tests/
- ├── server.test.ts          # Server API unit tests
- ├── openai-simple.test.ts   # OpenAI library compatibility tests
- ├── openai-library.test.ts  # Comprehensive OpenAI library tests
- └── openai-sdk-test.ts      # Manual SDK compatibility demo
- ```
-
- ### Adding New Features
-
- 1. **Add new types** in `src/types.ts`
- 2. **Extend Duck.ai integration** in `src/duckai.ts`
- 3. **Update OpenAI service** in `src/openai-service.ts`
- 4. **Add tests** in `tests/`
-
- ### Building for Production

  ```bash
- bun run build
- bun run start
  ```

- ## 🐛 Troubleshooting
-
- ### Common Issues

- 1. **"fetch is not defined"**: Make sure you're using Bun, not Node.js
- 2. **CORS errors**: The server includes CORS headers, but check your client configuration
- 3. **Model not found**: Use one of the supported models listed above
- 4. **Streaming not working**: Ensure your client properly handles Server-Sent Events
-
- ### Debug Mode
-
- Set environment variable for verbose logging:
  ```bash
- DEBUG=1 bun run dev
  ```

- ## 📝 License
-
- MIT License - see LICENSE file for details.
-
- ## 🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
- 3. Add tests for new functionality
- 4. Ensure all tests pass
- 5. Submit a pull request

- ## 🙏 Acknowledgments

- - Built on top of the reverse-engineered Duck.ai API
- - Compatible with OpenAI API specification
- - Powered by Bun runtime
+ # DuckAI OpenAI Server
+
+ A high-performance OpenAI-compatible HTTP server that uses DuckDuckGo's AI backend, providing free access to multiple AI models through the familiar OpenAI API interface.
+
+ ## Introduction
+
+ DuckAI OpenAI Server bridges the gap between DuckDuckGo's free AI chat service and the widely adopted OpenAI API format. This allows you to:
+
+ - **Use multiple AI models for free** - Access GPT-4o-mini, Claude-3-Haiku, Llama-3.3-70B, and more
+ - **Drop-in OpenAI replacement** - Compatible with existing OpenAI client libraries
+ - **Tool calling support** - Full function calling capabilities
+ - **Streaming responses** - Real-time response streaming
+ - **Rate limiting** - Built-in intelligent rate limiting to respect DuckDuckGo's limits
+
+ ### Supported Models
+
+ - `gpt-4o-mini` (default)
  - `o3-mini`
  - `claude-3-haiku-20240307`
  - `meta-llama/Llama-3.3-70B-Instruct-Turbo`
+ - `mistralai/Mixtral-8x7B-Instruct-v0.1`
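The server treats `gpt-4o-mini` as the default. A minimal client-side sketch of how a caller might guard a requested model id against the list above and fall back to the default — this is an illustrative helper, not the server's actual validation code:

```javascript
// Supported model ids from the list above; `gpt-4o-mini` is the default.
const SUPPORTED_MODELS = [
  "gpt-4o-mini",
  "o3-mini",
  "claude-3-haiku-20240307",
  "meta-llama/Llama-3.3-70B-Instruct-Turbo",
  "mistralai/Mixtral-8x7B-Instruct-v0.1",
];

// Hypothetical guard: return the requested id if supported, else the default.
function resolveModel(requested) {
  return SUPPORTED_MODELS.includes(requested) ? requested : "gpt-4o-mini";
}

console.log(resolveModel("o3-mini"));   // → "o3-mini"
console.log(resolveModel("gpt-5"));     // → "gpt-4o-mini" (unknown id falls back)
```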
+ ### Features
+
+ - ✅ Chat completions
+ - ✅ Streaming responses
+ - ✅ Function/tool calling
+ - ✅ Multiple model support
+ - ✅ Rate limiting with intelligent backoff
+ - ✅ OpenAI-compatible error handling
+ - ✅ CORS support
+ - ✅ Health check endpoint
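The README does not specify the backoff policy; the sketch below shows the kind of capped exponential backoff a client could pair with the server's rate limiting, assuming rate-limited requests surface as thrown errors. The function names are hypothetical:

```javascript
// Illustrative exponential backoff: the delay doubles each retry, capped at maxMs.
// attempt 0 → 500 ms, 1 → 1000 ms, 2 → 2000 ms, … up to 8000 ms.
function backoffDelay(attempt, baseMs = 500, maxMs = 8000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Retry a request-producing function, waiting between attempts.
async function withRetries(fn, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err; // out of retries
      await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
    }
  }
}
```

For example, wrap a completion call as `withRetries(() => openai.chat.completions.create({...}))`.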
+ ## Usage
+
+ ### Prerequisites
+
+ - [Bun](https://bun.sh/) runtime (recommended) or Node.js 18+
+
+ ### Installation
+
+ 1. Clone the repository:
  ```bash
+ git clone git@github.com:amirkabiri/duckai.git
+ cd duckai
  ```
+
+ 2. Install dependencies:
  ```bash
+ bun install
  ```
+
+ 3. Start the server:
+ ```bash
+ bun run dev
+ ```
+
+ The server will start on `http://localhost:3000` by default.
+
+ ### Basic Usage
+
+ #### Using with OpenAI JavaScript Library
+
  ```javascript
+ import OpenAI from "openai";

  const openai = new OpenAI({
+   baseURL: "http://localhost:3000/v1",
+   apiKey: "dummy-key", // Any string works
  });

+ // Basic chat completion
  const completion = await openai.chat.completions.create({
+   model: "gpt-4o-mini",
    messages: [
+     { role: "user", content: "Hello! How are you?" }
    ],
  });

  console.log(completion.choices[0].message.content);
  ```

+ #### Tool Calling Example

  ```javascript
+ const tools = [
+   {
+     type: "function",
+     function: {
+       name: "calculate",
+       description: "Perform mathematical calculations",
+       parameters: {
+         type: "object",
+         properties: {
+           expression: {
+             type: "string",
+             description: "Mathematical expression to evaluate"
+           }
+         },
+         required: ["expression"]
+       }
+     }
+   }
+ ];

  const completion = await openai.chat.completions.create({
+   model: "gpt-4o-mini",
    messages: [
+     { role: "user", content: "What is 15 * 8?" }
    ],
+   tools: tools,
+   tool_choice: "auto"
  });

+ // The AI will call the calculate function
+ console.log(completion.choices[0].message.tool_calls);
  ```
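To complete the round trip after the model returns `tool_calls`, a client executes the tool locally and sends the result back as a `tool` role message (the standard OpenAI shape, also used in this README's examples). A sketch under that assumption — the local `calculate` implementation here is a toy stand-in, not the server's:

```javascript
// Toy local implementation of the `calculate` tool for "a op b" expressions.
const implementations = {
  calculate: ({ expression }) => {
    const [a, op, b] = expression.split(" ");
    const x = Number(a), y = Number(b);
    return String(op === "*" ? x * y : op === "+" ? x + y : NaN);
  },
};

// Given the assistant message containing tool_calls, run each call and build
// the messages to append: the assistant turn, then one `tool` message per call.
function toolResultMessages(assistantMessage) {
  const results = (assistantMessage.tool_calls ?? []).map((call) => ({
    role: "tool",
    tool_call_id: call.id,
    content: implementations[call.function.name](JSON.parse(call.function.arguments)),
  }));
  return [assistantMessage, ...results];
}
```

Append the returned messages to your `messages` array and call `openai.chat.completions.create` again to get the model's final answer.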
+ #### Streaming Responses
+
+ ```javascript
+ const stream = await openai.chat.completions.create({
+   model: "gpt-4o-mini",
+   messages: [
+     { role: "user", content: "Tell me a story" }
    ],
+   stream: true
+ });

+ for await (const chunk of stream) {
+   const content = chunk.choices[0]?.delta?.content;
+   if (content) {
+     process.stdout.write(content);
    }
  }
  ```
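The SDK handles the wire format for you; under the hood the endpoint emits Server-Sent Events of the form `data: {chunk JSON}` terminated by `data: [DONE]`, mirroring OpenAI's streaming format. For clients without the SDK, a small parser sketch (assuming that chunk shape) that extracts the text deltas:

```javascript
// Parse raw SSE text into the concatenated assistant reply.
// Each event line looks like: data: {"choices":[{"delta":{"content":"..."}}]}
// and the stream ends with:   data: [DONE]
function extractDeltas(sseText) {
  const deltas = [];
  for (const line of sseText.split("\n")) {
    if (!line.startsWith("data: ")) continue;      // ignore blanks/comments
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") break;               // end-of-stream sentinel
    const content = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (content) deltas.push(content);
  }
  return deltas.join("");
}
```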
+ #### Using with curl

  ```bash
  curl -X POST http://localhost:3000/v1/chat/completions \
    -H "Content-Type: application/json" \
+   -H "Authorization: Bearer dummy-key" \
    -d '{
      "model": "gpt-4o-mini",
      "messages": [
+       {"role": "user", "content": "Hello!"}
      ]
    }'
  ```

+ ### API Endpoints

+ - `POST /v1/chat/completions` - Chat completions (compatible with OpenAI)
+ - `GET /v1/models` - List available models
+ - `GET /health` - Health check endpoint
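For clients that call the chat endpoint with plain `fetch` instead of the SDK, the request body follows the OpenAI schema used throughout this README. A hypothetical helper that assembles it:

```javascript
// Build a /v1/chat/completions request body (OpenAI schema). Illustrative
// helper only; `buildChatRequest` is not part of the server's API.
function buildChatRequest(userText, { model = "gpt-4o-mini", stream = false } = {}) {
  return {
    model,
    messages: [{ role: "user", content: userText }],
    stream,
  };
}

const body = buildChatRequest("Hello!", { stream: true });
// Send it with, e.g.:
// fetch("http://localhost:3000/v1/chat/completions", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });
```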
  ### Environment Variables

+ - `PORT` - Server port (default: 3000)
+ - `HOST` - Server host (default: 0.0.0.0)

+ ## Usage with Docker
+
+ ### Building the Docker Image

  ```bash
+ docker build -t duckai .
  ```

+ ### Running with Docker

  ```bash
+ docker run -p 3000:3000 duckai
  ```

+ ## Contributing

  1. Fork the repository
  2. Create a feature branch
+ 3. Make your changes
+ 4. Add tests for new functionality
+ 5. Run the test suite
+ 6. Submit a pull request
+
+ ## License
+
+ MIT License - see LICENSE file for details.

+ ## Disclaimer

+ This project is not affiliated with DuckDuckGo or OpenAI. It's an unofficial bridge service for educational and development purposes. Please respect DuckDuckGo's terms of service and rate limits.