AJ STUDIOZ committed on
Commit 6207efe · 1 Parent(s): dfa2ec3

Updated huggingface-hub version + Added simple test examples

Files changed (3):
  1. TEST-EXAMPLES.md +257 -0
  2. app.py +55 -71
  3. requirements.txt +1 -1
TEST-EXAMPLES.md ADDED
@@ -0,0 +1,257 @@
+ # 🧪 Simple Test Examples for AJ STUDIOZ API
+
+ ## ✅ Super Simple Tests (No Auth Required)
+
+ ### 1. Basic Chat Test
+ ```bash
+ curl https://kamesh14151-aj-studioz-api.hf.space/chat \
+   -H "Content-Type: application/json" \
+   -d '{"message": "Hello AJ!"}'
+ ```
+
+ ### 2. Code Generation Test
+ ```bash
+ curl https://kamesh14151-aj-studioz-api.hf.space/chat \
+   -H "Content-Type: application/json" \
+   -d '{"message": "Write a Python function to reverse a string"}'
+ ```
+
+ ### 3. Direct Generation
+ ```bash
+ curl https://kamesh14151-aj-studioz-api.hf.space/api/generate \
+   -H "Content-Type: application/json" \
+   -d '{"prompt": "Explain what is an API", "max_tokens": 500}'
+ ```
+
+ ### 4. Health Check
+ ```bash
+ curl https://kamesh14151-aj-studioz-api.hf.space/health
+ ```
+
+ ### 5. Service Info
+ ```bash
+ curl https://kamesh14151-aj-studioz-api.hf.space/
+ ```
+
+ ## 🔐 With Authentication
+
+ ### OpenAI Format
+ ```bash
+ curl https://kamesh14151-aj-studioz-api.hf.space/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -H "Authorization: Bearer aj_test123456" \
+   -d '{
+     "model": "aj-mini",
+     "messages": [{"role": "user", "content": "Hello"}]
+   }'
+ ```
+
+ ### Anthropic Claude Format
+ ```bash
+ curl https://kamesh14151-aj-studioz-api.hf.space/v1/messages \
+   -H "x-api-key: sk-ant-test123456" \
+   -H "anthropic-version: 2023-06-01" \
+   -H "content-type: application/json" \
+   -d '{
+     "model": "claude-sonnet-4-20250514",
+     "max_tokens": 1024,
+     "messages": [{"role": "user", "content": "Hello world"}]
+   }'
+ ```
+
+ ## 💻 PowerShell Examples (Windows)
+
+ ### Simple Chat
+ ```powershell
+ curl.exe https://kamesh14151-aj-studioz-api.hf.space/chat `
+   -H "Content-Type: application/json" `
+   -d '{\"message\": \"Hello AJ!\"}'
+ ```
+
+ ### Code Generation
+ ```powershell
+ $body = @{
+     message = "Write a JavaScript function to check if a number is prime"
+ } | ConvertTo-Json
+
+ Invoke-RestMethod -Uri "https://kamesh14151-aj-studioz-api.hf.space/chat" `
+   -Method Post `
+   -ContentType "application/json" `
+   -Body $body
+ ```
+
+ ## 🐍 Python Examples
+
+ ### Simple Chat
+ ```python
+ import requests
+
+ response = requests.post(
+     "https://kamesh14151-aj-studioz-api.hf.space/chat",
+     json={"message": "Write a function to calculate factorial"}
+ )
+
+ print(response.json()["reply"])
+ ```
+
+ ### With OpenAI SDK
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI(
+     base_url="https://kamesh14151-aj-studioz-api.hf.space/v1",
+     api_key="aj_test123456"
+ )
+
+ response = client.chat.completions.create(
+     model="aj-mini",
+     messages=[
+         {"role": "user", "content": "Explain Python decorators"}
+     ]
+ )
+
+ print(response.choices[0].message.content)
+ ```
+
+ ### With Anthropic SDK
+ ```python
+ import anthropic
+
+ client = anthropic.Anthropic(
+     api_key="sk-ant-test123",
+     base_url="https://kamesh14151-aj-studioz-api.hf.space"
+ )
+
+ message = client.messages.create(
+     model="claude-sonnet-4-20250514",
+     max_tokens=1024,
+     messages=[
+         {"role": "user", "content": "Explain async/await"}
+     ]
+ )
+
+ print(message.content[0].text)
+ ```
+
+ ## 🌐 JavaScript/Node.js Examples
+
+ ### Fetch API
+ ```javascript
+ const response = await fetch('https://kamesh14151-aj-studioz-api.hf.space/chat', {
+   method: 'POST',
+   headers: { 'Content-Type': 'application/json' },
+   body: JSON.stringify({
+     message: 'Write a sorting algorithm in JavaScript'
+   })
+ });
+
+ const data = await response.json();
+ console.log(data.reply);
+ ```
+
+ ### Axios
+ ```javascript
+ const axios = require('axios');
+
+ const response = await axios.post(
+   'https://kamesh14151-aj-studioz-api.hf.space/chat',
+   { message: 'Explain promises in JavaScript' }
+ );
+
+ console.log(response.data.reply);
+ ```
+
+ ## 📊 Expected Response Formats
+
+ ### Chat Endpoint Response
+ ```json
+ {
+   "reply": "Hello! I'm AJ, your AI assistant...",
+   "model": "AJ-Mini v1.0",
+   "provider": "AJ STUDIOZ"
+ }
+ ```
+
+ ### Health Check Response
+ ```json
+ {
+   "status": "healthy",
+   "model": "aj-mini",
+   "provider": "huggingface"
+ }
+ ```
+
+ ### OpenAI Format Response
+ ```json
+ {
+   "id": "chatcmpl-abc123...",
+   "object": "chat.completion",
+   "created": 1704067200,
+   "model": "aj-mini",
+   "choices": [
+     {
+       "index": 0,
+       "message": {
+         "role": "assistant",
+         "content": "Response text here..."
+       },
+       "finish_reason": "stop"
+     }
+   ],
+   "usage": {
+     "prompt_tokens": 10,
+     "completion_tokens": 50,
+     "total_tokens": 60
+   }
+ }
+ ```
+
+ ### Anthropic Format Response
+ ```json
+ {
+   "id": "msg_abc123...",
+   "type": "message",
+   "role": "assistant",
+   "content": [
+     {
+       "type": "text",
+       "text": "Response text here..."
+     }
+   ],
+   "model": "claude-sonnet-4-20250514",
+   "stop_reason": "end_turn",
+   "usage": {
+     "input_tokens": 10,
+     "output_tokens": 50
+   }
+ }
+ ```
+
+ ---
+
+ ## 🎯 Quick Start
+
+ 1. **Test if it's working:**
+    ```bash
+    curl https://kamesh14151-aj-studioz-api.hf.space/health
+    ```
+
+ 2. **Send your first message:**
+    ```bash
+    curl https://kamesh14151-aj-studioz-api.hf.space/chat \
+      -H "Content-Type: application/json" \
+      -d '{"message": "Hello!"}'
+    ```
+
+ 3. **Ask for code:**
+    ```bash
+    curl https://kamesh14151-aj-studioz-api.hf.space/chat \
+      -H "Content-Type: application/json" \
+      -d '{"message": "Write a Python web scraper"}'
+    ```
+
+ ---
+
+ **Your API is ready!** 🚀
+ **URL**: https://kamesh14151-aj-studioz-api.hf.space
+ **Status**: FREE FOREVER • UNLIMITED USAGE • NO RATE LIMITS
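The Anthropic-format response documented above nests the assistant text inside a list of content blocks. A minimal parsing sketch (no network; `extract_text` is an illustrative helper, not part of the API or app.py):

```python
# Extract assistant text from an Anthropic-style response dict.
# extract_text is an illustrative helper, not part of app.py.
def extract_text(response: dict) -> str:
    """Concatenate all text blocks in the response's content list."""
    return "".join(
        block["text"] for block in response["content"] if block.get("type") == "text"
    )

# Sample shaped like the "Anthropic Format Response" documented above.
sample = {
    "id": "msg_abc123",
    "type": "message",
    "role": "assistant",
    "content": [{"type": "text", "text": "Response text here..."}],
    "model": "claude-sonnet-4-20250514",
    "stop_reason": "end_turn",
    "usage": {"input_tokens": 10, "output_tokens": 50},
}

print(extract_text(sample))  # Response text here...
```

The same accessor works on real `/v1/messages` responses, since tool-use or other non-text blocks are simply skipped.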
app.py CHANGED
@@ -151,8 +151,9 @@ async def anthropic_messages(
         }
     )
 
-    # Convert to HF format and call API
-    hf_messages = []
+    # Convert to prompt format for text_generation
+    prompt_parts = ["You are AJ, a powerful AI assistant created by AJ STUDIOZ with advanced coding and problem-solving abilities.\n"]
+
     for msg in messages:
         role = msg.get("role")
         content = msg.get("content")
@@ -160,24 +161,24 @@ async def anthropic_messages(
             # Handle complex content (text, images, etc.)
             text_parts = [c.get("text", "") for c in content if c.get("type") == "text"]
             content = " ".join(text_parts)
-        hf_messages.append({"role": role, "content": content})
+
+        if role == "user":
+            prompt_parts.append(f"User: {content}")
+        elif role == "assistant":
+            prompt_parts.append(f"Assistant: {content}")
+        elif role == "system":
+            prompt_parts.insert(0, content)
 
-    # Add system message for AJ branding
-    if not any(msg["role"] == "system" for msg in hf_messages):
-        hf_messages.insert(0, {
-            "role": "system",
-            "content": "You are AJ, a powerful AI assistant created by AJ STUDIOZ with advanced coding and problem-solving abilities."
-        })
+    prompt_parts.append("Assistant:")
+    full_prompt = "\n\n".join(prompt_parts)
 
-    response = client.chat_completion(
-        messages=hf_messages,
+    assistant_message = client.text_generation(
+        full_prompt,
         model=MODEL_NAME,
-        max_tokens=max_tokens,
+        max_new_tokens=max_tokens,
         temperature=temperature
     )
 
-    assistant_message = response.choices[0].message.content
-
     # Return Anthropic-compatible response
     return {
         "id": f"msg_{secrets.token_hex(12)}",
@@ -246,23 +247,20 @@ async def list_models(authorization: Optional[str] = Header(None)):
 async def stream_chat_response(prompt: str, model: str, temperature: float, max_tokens: int, completion_id: str):
     """Generator for streaming responses using Hugging Face Inference API"""
     try:
-        # Format as chat messages
-        messages = [
-            {"role": "system", "content": "You are AJ, a professional AI assistant created by AJ STUDIOZ. You provide thoughtful, accurate, and helpful responses with enterprise-grade reliability."},
-            {"role": "user", "content": prompt}
-        ]
+        # Format as a single prompt for text generation
+        full_prompt = f"""You are AJ, a professional AI assistant created by AJ STUDIOZ with advanced coding and problem-solving abilities.
+
+User: {prompt}
+Assistant:"""
 
-        stream = client.chat_completion(
-            messages=messages,
+        for token in client.text_generation(
+            full_prompt,
             model=MODEL_NAME,
-            max_tokens=max_tokens,
+            max_new_tokens=max_tokens,
             temperature=temperature,
             stream=True
-        )
-
-        for chunk in stream:
-            if chunk.choices and chunk.choices[0].delta.content:
-                content = chunk.choices[0].delta.content
+        ):
+            if token:
                 # OpenAI-compatible streaming format
                 stream_chunk = {
                     "id": completion_id,
@@ -271,7 +269,7 @@ async def stream_chat_response(prompt: str, model: str, temperature: float, max_
                     "model": model,
                     "choices": [{
                         "index": 0,
-                        "delta": {"content": content},
+                        "delta": {"content": token},
                         "finish_reason": None
                     }]
                 }
@@ -350,20 +348,17 @@ async def chat_completions(request: Request, authorization: Optional[str] = Head
     )
 
     # Non-streaming response using Hugging Face Inference API
-    messages = [
-        {"role": "system", "content": "You are AJ, a professional AI assistant created by AJ STUDIOZ. You provide thoughtful, accurate, and helpful responses with enterprise-grade reliability."},
-        {"role": "user", "content": prompt}
-    ]
+    full_prompt = f"""You are AJ, a professional AI assistant created by AJ STUDIOZ with advanced coding and problem-solving abilities.
+
+{prompt}"""
 
-    response = client.chat_completion(
-        messages=messages,
+    assistant_message = client.text_generation(
+        full_prompt,
         model=MODEL_NAME,
-        max_tokens=max_tokens,
+        max_new_tokens=max_tokens,
         temperature=temperature
     )
 
-    assistant_message = response.choices[0].message.content
-
     # OpenAI-compatible response
     return {
         "id": completion_id,
@@ -411,20 +406,18 @@ async def completions(request: Request, authorization: Optional[str] = Header(No
         raise HTTPException(status_code=400, detail="Prompt is required")
 
     # Call Hugging Face Inference API
-    messages = [
-        {"role": "system", "content": "You are AJ, a professional AI assistant created by AJ STUDIOZ. You provide thoughtful, accurate, and helpful responses."},
-        {"role": "user", "content": prompt}
-    ]
+    full_prompt = f"""You are AJ, a professional AI assistant created by AJ STUDIOZ with advanced coding abilities.
+
+User: {prompt}
+Assistant:"""
 
-    response = client.chat_completion(
-        messages=messages,
+    completion_text = client.text_generation(
+        full_prompt,
         model=MODEL_NAME,
-        max_tokens=max_tokens,
+        max_new_tokens=max_tokens,
         temperature=temperature
     )
 
-    completion_text = response.choices[0].message.content
-
     return {
         "id": f"cmpl-{secrets.token_hex(12)}",
         "object": "text_completion",
@@ -460,20 +453,18 @@ async def chat(request: Request):
         return JSONResponse({"error": "Message is required"}, status_code=400)
 
     # Call Hugging Face Inference API
-    messages = [
-        {"role": "system", "content": "You are AJ, a professional AI assistant created by AJ STUDIOZ."},
-        {"role": "user", "content": message}
-    ]
+    full_message = f"""You are AJ, a professional AI assistant created by AJ STUDIOZ with advanced coding abilities.
+
+User: {message}
+Assistant:"""
 
-    response = client.chat_completion(
-        messages=messages,
+    reply = client.text_generation(
+        full_message,
         model=MODEL_NAME,
-        max_tokens=1000,
-        temperature=0.3
+        max_new_tokens=1000,
+        temperature=0.7
     )
 
-    reply = response.choices[0].message.content
-
     return JSONResponse({
         "reply": reply,
         "model": "AJ-Mini v1.0",
@@ -493,24 +484,18 @@ async def generate(request: Request):
     data = await request.json()
     prompt = data.get("prompt", "")
     max_tokens = data.get("max_tokens", 1000)
-    temperature = data.get("temperature", 0.3)
+    temperature = data.get("temperature", 0.7)
 
     if not prompt:
         return JSONResponse({"error": "Prompt is required"}, status_code=400)
 
-    messages = [
-        {"role": "user", "content": prompt}
-    ]
-
-    response = client.chat_completion(
-        messages=messages,
+    response_text = client.text_generation(
+        prompt,
         model=MODEL_NAME,
-        max_tokens=max_tokens,
+        max_new_tokens=max_tokens,
         temperature=temperature
     )
 
-    response_text = response.choices[0].message.content
-
     return JSONResponse({
         "response": response_text,
        "model": "AJ-Mini v1.0",
@@ -545,12 +530,11 @@ async def tags():
 async def health():
     """Health check endpoint"""
     try:
-        # Test HF API with a simple message
-        test_messages = [{"role": "user", "content": "Hi"}]
-        test = client.chat_completion(
-            messages=test_messages,
+        # Test HF API with a simple text generation
+        test = client.text_generation(
+            "Hello",
             model=MODEL_NAME,
-            max_tokens=5
+            max_new_tokens=5
         )
         return {"status": "healthy", "model": "aj-mini", "provider": "huggingface"}
     except Exception as e:
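The core change in app.py swaps `chat_completion` for `text_generation`, so the Anthropic-style message list has to be flattened into one prompt string. The conversion introduced in `anthropic_messages` can be sketched standalone like this (a sketch mirroring the diff's logic; `build_prompt` and the default system string are illustrative names, not part of app.py):

```python
# Flatten a chat message list into a single prompt string, mirroring the
# role-by-role conversion the diff introduces in anthropic_messages.
def build_prompt(messages: list[dict], system: str = "You are AJ.") -> str:
    prompt_parts = [system]
    for msg in messages:
        role, content = msg.get("role"), msg.get("content")
        if isinstance(content, list):
            # Complex content: keep only the text blocks, as the diff does.
            content = " ".join(
                c.get("text", "") for c in content if c.get("type") == "text"
            )
        if role == "user":
            prompt_parts.append(f"User: {content}")
        elif role == "assistant":
            prompt_parts.append(f"Assistant: {content}")
        elif role == "system":
            # A caller-supplied system message takes the front position.
            prompt_parts.insert(0, content)
    prompt_parts.append("Assistant:")  # cue the model to answer
    return "\n\n".join(prompt_parts)

print(build_prompt([{"role": "user", "content": "Hello world"}]))
```

The trailing `Assistant:` line is what makes a plain text-generation model continue in the assistant's voice instead of echoing the conversation.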
requirements.txt CHANGED
@@ -1,5 +1,5 @@
 fastapi==0.104.1
 uvicorn[standard]==0.24.0
-huggingface-hub==0.19.4
+huggingface-hub>=0.20.0
 requests==2.31.0
 python-multipart==0.0.6
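The requirements.txt change relaxes huggingface-hub from the exact pin `==0.19.4` to the floor `>=0.20.0`. If a deployment wants to fail fast when an older version sneaks in, a check like the following could run at startup (a sketch; `meets_minimum` is an illustrative helper, and its naive parsing ignores pre-release suffixes like `rc1`):

```python
# Naive floor check matching the new huggingface-hub>=0.20.0 constraint.
# meets_minimum is an illustrative helper; it ignores pre-release suffixes.
def meets_minimum(installed: str, required: str = "0.20.0") -> bool:
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(installed) >= to_tuple(required)

print(meets_minimum("0.19.4"))  # False: the previously pinned version
print(meets_minimum("0.20.0"))  # True: the new floor
```

For real deployments, comparing with `packaging.version.Version` handles pre-releases correctly; the tuple compare above is only the minimal idea.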