# Developer Guide
**API**: `https://Rox-Turbo-API.hf.space`
## Overview
Rox AI provides eight AI models through a single REST API. The endpoints are OpenAI-compatible, so the official OpenAI SDKs work without modification.
## Quick Start
```bash
curl -X POST https://Rox-Turbo-API.hf.space/chat \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```
Response:
```json
{
  "content": "Hello! I'm Rox Core..."
}
```
## Basic Usage
### Python
```python
import requests

def ask_rox(message, model='chat'):
    response = requests.post(
        f'https://Rox-Turbo-API.hf.space/{model}',
        json={'messages': [{'role': 'user', 'content': message}]}
    )
    return response.json()['content']

answer = ask_rox('What is AI?')
print(answer)
```
### JavaScript
```javascript
async function askRox(message, model = 'chat') {
  const response = await fetch(`https://Rox-Turbo-API.hf.space/${model}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      messages: [{ role: 'user', content: message }]
    })
  });
  return (await response.json()).content;
}

const answer = await askRox('What is AI?');
```
## System Prompts
Use a system message to customize the model's behavior:
```python
def ask_with_prompt(message, system_prompt, model='chat'):
    response = requests.post(
        f'https://Rox-Turbo-API.hf.space/{model}',
        json={
            'messages': [
                {'role': 'system', 'content': system_prompt},
                {'role': 'user', 'content': message}
            ]
        }
    )
    return response.json()['content']

answer = ask_with_prompt(
    'Tell me about AI',
    'You are a pirate. Talk like a pirate.',
    'chat'
)
```
## Parameters
### Temperature
Controls randomness (0.0 = focused, 2.0 = creative):
```python
response = requests.post(
    'https://Rox-Turbo-API.hf.space/chat',
    json={
        'messages': [{'role': 'user', 'content': 'Write a poem'}],
        'temperature': 1.5
    }
)
```
### Top P
Controls diversity (0.0 = narrow, 1.0 = diverse):
```python
response = requests.post(
    'https://Rox-Turbo-API.hf.space/chat',
    json={
        'messages': [{'role': 'user', 'content': 'What is 2+2?'}],
        'temperature': 0.3,
        'top_p': 0.7
    }
)
```
### Max Tokens
Limits response length:
```python
response = requests.post(
    'https://Rox-Turbo-API.hf.space/chat',
    json={
        'messages': [{'role': 'user', 'content': 'Brief summary'}],
        'max_tokens': 100
    }
)
```
## OpenAI SDK
Use the official OpenAI SDK:
### Python
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://Rox-Turbo-API.hf.space",
    api_key="not-needed"
)

response = client.chat.completions.create(
    model="chat",
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)
```
### JavaScript
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://Rox-Turbo-API.hf.space',
  apiKey: 'not-needed'
});

const response = await client.chat.completions.create({
  model: 'chat',
  messages: [{ role: 'user', content: 'Hello' }]
});
console.log(response.choices[0].message.content);
```
## Model Selection
```python
# General conversation
ask_rox('Tell me about AI', model='chat')
# Fast response
ask_rox('What is 2+2?', model='turbo')
# Code generation
ask_rox('Write a Python function', model='coder')
# Advanced reasoning
ask_rox('Explain quantum physics', model='turbo45')
# Complex tasks
ask_rox('Design a system', model='ultra')
# Long documents
ask_rox('Analyze this document...', model='dyno')
# Advanced coding
ask_rox('Build an algorithm', model='coder7')
# Visual tasks
ask_rox('Describe this image', model='vision')
```
## Conversation History
Maintain context:
```python
conversation = [
    {'role': 'user', 'content': 'My name is Alice'},
    {'role': 'assistant', 'content': 'Nice to meet you, Alice!'},
    {'role': 'user', 'content': 'What is my name?'}
]

response = requests.post(
    'https://Rox-Turbo-API.hf.space/chat',
    json={'messages': conversation}
)
print(response.json()['content'])  # "Your name is Alice"
```
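Histories grow with every turn, and the API's context limit is not documented here. As a precaution you can cap what you send; this helper (an illustration, not part of the API) keeps the first system message plus the most recent turns:

```python
def trim_history(messages, max_messages=20):
    """Keep the first system message (if any) plus the most recent messages."""
    system = [m for m in messages if m['role'] == 'system'][:1]
    rest = [m for m in messages if m['role'] != 'system']
    return system + rest[-max_messages:]
```

Call it on the conversation list before each request, e.g. `json={'messages': trim_history(conversation)}`.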
## Chatbot Example
### Python
```python
class RoxChatbot:
    def __init__(self, model='chat'):
        self.model = model
        self.conversation = []
        self.base_url = 'https://Rox-Turbo-API.hf.space'

    def chat(self, message):
        self.conversation.append({'role': 'user', 'content': message})
        response = requests.post(
            f'{self.base_url}/{self.model}',
            json={'messages': self.conversation}
        )
        reply = response.json()['content']
        self.conversation.append({'role': 'assistant', 'content': reply})
        return reply

    def clear(self):
        self.conversation = []

bot = RoxChatbot()
print(bot.chat('Hello'))
print(bot.chat('What is AI?'))
print(bot.chat('Tell me more'))
```
### JavaScript
```javascript
class RoxChatbot {
  constructor(model = 'chat') {
    this.model = model;
    this.conversation = [];
    this.baseUrl = 'https://Rox-Turbo-API.hf.space';
  }

  async chat(message) {
    this.conversation.push({ role: 'user', content: message });
    const response = await fetch(`${this.baseUrl}/${this.model}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: this.conversation })
    });
    const data = await response.json();
    const reply = data.content;
    this.conversation.push({ role: 'assistant', content: reply });
    return reply;
  }

  clear() {
    this.conversation = [];
  }
}

const bot = new RoxChatbot();
console.log(await bot.chat('Hello'));
console.log(await bot.chat('What is AI?'));
```
## Error Handling
```python
def safe_ask(message, model='chat'):
    try:
        response = requests.post(
            f'https://Rox-Turbo-API.hf.space/{model}',
            json={'messages': [{'role': 'user', 'content': message}]},
            timeout=30
        )
        response.raise_for_status()
        return response.json()['content']
    except requests.exceptions.Timeout:
        return "Request timed out"
    except requests.exceptions.RequestException as e:
        return f"Error: {str(e)}"
```
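Transient failures (timeouts, 5xx responses) often succeed on retry. A generic retry wrapper with exponential backoff, sketched here as an illustration (not part of the API), can wrap any of the request functions above:

```python
import time

def with_retries(func, attempts=3, base_delay=1.0):
    """Call func(); on failure, wait base_delay * 2**attempt and retry."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: propagate the last error
            time.sleep(base_delay * (2 ** attempt))

# Example: reply = with_retries(lambda: ask_rox('Hello'))
```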
## Rate Limiting
```python
import time

class RateLimiter:
    def __init__(self, max_requests=10, time_window=60):
        self.max_requests = max_requests
        self.time_window = time_window
        self.requests = []

    def can_request(self):
        now = time.time()
        self.requests = [r for r in self.requests if now - r < self.time_window]
        return len(self.requests) < self.max_requests

    def record_request(self):
        self.requests.append(time.time())

limiter = RateLimiter(10, 60)

def ask_with_limit(message):
    if not limiter.can_request():
        return "Rate limit exceeded"
    limiter.record_request()
    return ask_rox(message)
```
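The limiter above refuses requests outright. A variant that reports how long to wait (again a sketch, not part of the API) lets callers sleep until a slot frees instead of failing:

```python
import time

class WaitingRateLimiter:
    """Sliding-window limiter that reports the wait time instead of refusing."""
    def __init__(self, max_requests=10, time_window=60):
        self.max_requests = max_requests
        self.time_window = time_window
        self.requests = []

    def seconds_until_slot(self, now=None):
        """Return 0.0 if a request can go now, else seconds until the oldest entry expires."""
        now = time.time() if now is None else now
        self.requests = [t for t in self.requests if now - t < self.time_window]
        if len(self.requests) < self.max_requests:
            return 0.0
        return self.time_window - (now - self.requests[0])
```

A caller can `time.sleep(limiter.seconds_until_slot())` before recording and sending the request.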
## Caching
```python
from functools import lru_cache

@lru_cache(maxsize=100)
def cached_ask(message, model='chat'):
    response = requests.post(
        f'https://Rox-Turbo-API.hf.space/{model}',
        json={'messages': [{'role': 'user', 'content': message}]}
    )
    return response.json()['content']

answer1 = cached_ask('What is AI?')  # API call
answer2 = cached_ask('What is AI?')  # Cached
```
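`lru_cache` never expires entries, so cached answers can go stale. A time-based variant (a sketch; the decorator and its parameter names are illustrative, not part of the API) expires entries after a fixed lifetime:

```python
import time
from functools import wraps

def ttl_cache(seconds=300, maxsize=100):
    """Like lru_cache, but entries expire `seconds` after being stored."""
    def decorator(func):
        cache = {}  # args -> (value, stored_at)

        @wraps(func)
        def wrapper(*args):
            now = time.time()
            if args in cache and now - cache[args][1] < seconds:
                return cache[args][0]  # fresh hit
            value = func(*args)
            cache[args] = (value, now)
            while len(cache) > maxsize:  # evict the oldest entries
                oldest = min(cache, key=lambda k: cache[k][1])
                del cache[oldest]
            return value
        return wrapper
    return decorator
```

Use it in place of `@lru_cache(maxsize=100)` on `cached_ask` to cap how long an answer is reused.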
## Reference
```python
# Basic request
requests.post('https://Rox-Turbo-API.hf.space/chat',
              json={'messages': [{'role': 'user', 'content': 'Hello'}]})

# With parameters
requests.post('https://Rox-Turbo-API.hf.space/chat',
              json={'messages': [...], 'temperature': 0.7, 'max_tokens': 500})

# With system prompt
requests.post('https://Rox-Turbo-API.hf.space/chat',
              json={'messages': [
                  {'role': 'system', 'content': 'You are helpful'},
                  {'role': 'user', 'content': 'Hello'}
              ]})
```
## Endpoints
- `/chat` - Rox Core
- `/turbo` - Rox 2.1 Turbo
- `/coder` - Rox 3.5 Coder
- `/turbo45` - Rox 4.5 Turbo
- `/ultra` - Rox 5 Ultra
- `/dyno` - Rox 6 Dyno
- `/coder7` - Rox 7 Coder
- `/vision` - Rox Vision Max
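To pick an endpoint empirically, you can send the same prompt to several models and compare the replies. A sketch (the helper is an illustration; it takes the request function as an argument so it works with any of the wrappers above):

```python
def compare_models(prompt, ask, models=('chat', 'turbo', 'coder')):
    """Send the same prompt to each endpoint and collect the replies by model."""
    return {model: ask(prompt, model) for model in models}

# Example with ask_rox from Basic Usage:
# replies = compare_models('What is AI?', lambda msg, m: ask_rox(msg, model=m))
```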
---
Built by Mohammad Faiz