# API Reference

Technical reference for the Rox AI API endpoints.

## Base URL

```
https://Rox-Turbo-API.hf.space
```

---

## Endpoints

| Endpoint | Model |
|----------|-------|
| `POST /chat` | Rox Core |
| `POST /turbo` | Rox 2.1 Turbo |
| `POST /coder` | Rox 3.5 Coder |
| `POST /turbo45` | Rox 4.5 Turbo |
| `POST /ultra` | Rox 5 Ultra |
| `POST /dyno` | Rox 6 Dyno |
| `POST /coder7` | Rox 7 Coder |
| `POST /vision` | Rox Vision Max |
| `POST /hf/generate` | Hugging Face compatible |

All model endpoints share the same request/response format, documented below for `/chat`. The Hugging Face-compatible `/hf/generate` endpoint uses its own format and is documented separately.

### POST /chat

#### Request

**URL**: `/chat`
**Method**: `POST`
**Content-Type**: `application/json`

**Body Parameters**:

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `messages` | Array | Yes | - | Array of conversation messages |
| `temperature` | Float | No | 1.0 | Controls randomness (0.0 - 2.0) |
| `top_p` | Float | No | 1.0 | Nucleus sampling parameter (0.0 - 1.0) |
| `max_tokens` | Integer | No | 4096 | Maximum tokens in response |

**Message Object**:

```typescript
{
  role: "user" | "assistant",
  content: string
}
```

**Example Request**:

```json
{
  "messages": [
    {
      "role": "user",
      "content": "What is artificial intelligence?"
    }
  ],
  "temperature": 1.0,
  "top_p": 0.95,
  "max_tokens": 512
}
```

#### Response

**Success Response** (200 OK):

```json
{
  "content": "Artificial intelligence (AI) refers to..."
}
```

**Response Fields**:

| Field | Type | Description |
|-------|------|-------------|
| `content` | String | The generated response from Rox Core |

**Error Responses**:

**500 Internal Server Error**:

```json
{
  "detail": "Internal server error while calling Rox Core."
}
```

**502 Bad Gateway**:

```json
{
  "detail": "Bad response from upstream model provider."
}
```

**422 Unprocessable Entity**:

```json
{
  "detail": [
    {
      "loc": ["body", "messages"],
      "msg": "field required",
      "type": "value_error.missing"
    }
  ]
}
```

#### Example Usage

**cURL**:

```bash
curl -X POST https://Rox-Turbo-API.hf.space/chat \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Hello!"}
    ],
    "temperature": 1.0,
    "max_tokens": 512
  }'
```

**JavaScript**:

```javascript
const response = await fetch('https://Rox-Turbo-API.hf.space/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    messages: [{ role: 'user', content: 'Hello!' }],
    temperature: 1.0,
    max_tokens: 512
  })
});

const data = await response.json();
console.log(data.content);
```

**Python**:

```python
import requests

response = requests.post('https://Rox-Turbo-API.hf.space/chat', json={
    'messages': [{'role': 'user', 'content': 'Hello!'}],
    'temperature': 1.0,
    'max_tokens': 512
})
print(response.json()['content'])
```

---

### POST /hf/generate

Hugging Face compatible text generation endpoint for single-turn interactions.
#### Request

**URL**: `/hf/generate`
**Method**: `POST`
**Content-Type**: `application/json`

**Body Parameters**:

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `inputs` | String | Yes | - | The input text/prompt |
| `parameters` | Object | No | `{}` | Generation parameters |

**Parameters Object**:

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `temperature` | Float | No | 1.0 | Controls randomness (0.0 - 2.0) |
| `top_p` | Float | No | 0.95 | Nucleus sampling (0.0 - 1.0) |
| `max_new_tokens` | Integer | No | 8192 | Maximum tokens to generate |

**Example Request**:

```json
{
  "inputs": "Write a haiku about technology",
  "parameters": {
    "temperature": 0.7,
    "top_p": 0.95,
    "max_new_tokens": 256
  }
}
```

#### Response

**Success Response** (200 OK):

```json
[
  {
    "generated_text": "Silicon dreams awake\nCircuits pulse with electric life\nFuture in our hands"
  }
]
```

**Response Format**: Returns an array with a single object containing the generated text.

| Field | Type | Description |
|-------|------|-------------|
| `generated_text` | String | The generated response |

**Error Responses**: Same as the `/chat` endpoint (500, 502, 422).
#### Example Usage

**cURL**:

```bash
curl -X POST https://Rox-Turbo-API.hf.space/hf/generate \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": "Explain quantum computing",
    "parameters": {
      "temperature": 0.7,
      "max_new_tokens": 256
    }
  }'
```

**JavaScript**:

```javascript
const response = await fetch('https://Rox-Turbo-API.hf.space/hf/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    inputs: 'Explain quantum computing',
    parameters: {
      temperature: 0.7,
      max_new_tokens: 256
    }
  })
});

const data = await response.json();
console.log(data[0].generated_text);
```

**Python**:

```python
import requests

response = requests.post('https://Rox-Turbo-API.hf.space/hf/generate', json={
    'inputs': 'Explain quantum computing',
    'parameters': {
        'temperature': 0.7,
        'max_new_tokens': 256
    }
})
print(response.json()[0]['generated_text'])
```

---

## Parameters

### temperature

Controls output randomness.

- **Range**: 0.0 to 2.0
- **Default**: 1.0
- **Lower** (0.1-0.5): Focused, deterministic
- **Medium** (0.6-1.0): Balanced
- **Higher** (1.1-2.0): Creative, varied

Recommended values:

- `0.3`: Math problems, factual questions, code generation
- `0.7`: General conversation, explanations
- `1.2`: Creative writing, brainstorming, storytelling

**Example**:

```json
{
  "messages": [{"role": "user", "content": "What is 2+2?"}],
  "temperature": 0.2
}
```

### top_p

Nucleus sampling parameter.

- **Range**: 0.0 to 1.0
- **Default**: 0.95 (`/hf/generate`), 1.0 (`/chat`)
- **Lower**: More focused
- **Higher**: More diverse

**Example**:

```json
{
  "messages": [{"role": "user", "content": "Tell me a story"}],
  "top_p": 0.9
}
```

### max_tokens / max_new_tokens

Maximum number of tokens in the response.
- **Range**: 1 to 8192
- **Default**: 4096 (`/chat`), 8192 (`/hf/generate`)

Token estimation:

- ~1 token ≈ 4 characters
- ~1 token ≈ 0.75 words

**Example**:

```json
{
  "messages": [{"role": "user", "content": "Brief summary of AI"}],
  "max_tokens": 150
}
```

---

## Error Handling

### Status Codes

| Code | Meaning | Description |
|------|---------|-------------|
| 200 | OK | Request successful |
| 422 | Unprocessable Entity | Invalid request format |
| 500 | Internal Server Error | Server-side error |
| 502 | Bad Gateway | Upstream model error |

### Error Response Format

```json
{
  "detail": "Error message here"
}
```

### Common Errors

Missing field:

```json
{
  "detail": [
    {
      "loc": ["body", "messages"],
      "msg": "field required",
      "type": "value_error.missing"
    }
  ]
}
```

Invalid type:

```json
{
  "detail": [
    {
      "loc": ["body", "temperature"],
      "msg": "value is not a valid float",
      "type": "type_error.float"
    }
  ]
}
```

Example error handler:

```javascript
async function safeRequest(endpoint, body) {
  try {
    const response = await fetch(endpoint, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body)
    });

    if (!response.ok) {
      const error = await response.json();
      throw new Error(error.detail || `HTTP ${response.status}`);
    }

    return await response.json();
  } catch (error) {
    console.error('API Error:', error);
    throw error;
  }
}
```

---

## Rate Limiting

No rate limits are enforced server-side. Implement client-side limiting as needed.
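Since no limits are enforced server-side, a simple client-side pacer is usually enough. A minimal sketch in Python — the `RateLimiter` class and its fixed-interval approach are illustrative assumptions, not part of the Rox API:

```python
import time

class RateLimiter:
    """Client-side pacer: spaces calls at least `per / rate` seconds apart.

    Hypothetical helper, not part of the Rox API. Call `wait()` before
    each request; it sleeps just long enough that no more than `rate`
    calls are started per `per`-second window.
    """

    def __init__(self, rate: int, per: float = 1.0):
        self.interval = per / rate  # minimum gap between calls, in seconds
        self._last = 0.0            # monotonic timestamp of the last call

    def wait(self):
        """Block until it is safe to issue the next request."""
        now = time.monotonic()
        delay = self._last + self.interval - now
        if delay > 0:
            time.sleep(delay)
        self._last = time.monotonic()
```

Call `limiter.wait()` immediately before each `requests.post(...)`; the first call returns at once, and later calls are spaced out automatically.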
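The 500 and 502 responses listed under Error Handling are typically transient upstream failures, so a client may retry them with exponential backoff. A minimal sketch in Python — `call_with_retry` and its `(status, body)` callable convention are illustrative, not part of the Rox API:

```python
import time

RETRYABLE = {500, 502}  # transient server/upstream errors worth retrying

def call_with_retry(send, retries=3, backoff=0.5):
    """Invoke `send()` -> (status_code, body) and retry 500/502 responses
    with exponential backoff. Hypothetical helper, not part of the API.

    Returns the body on a 200 response; raises RuntimeError on any other
    status once retries are exhausted (non-retryable codes fail at once).
    """
    for attempt in range(retries + 1):
        status, body = send()
        if status == 200:
            return body
        if status in RETRYABLE and attempt < retries:
            time.sleep(backoff * (2 ** attempt))  # 0.5s, 1s, 2s, ...
            continue
        raise RuntimeError(f"HTTP {status}: {body}")
```

With `requests`, wrap the HTTP call in a zero-argument function that returns `(response.status_code, response.text)` and pass it to `call_with_retry`.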
---

## Client Wrapper Example

```javascript
class RoxAI {
  constructor(baseURL = 'https://Rox-Turbo-API.hf.space') {
    this.baseURL = baseURL;
  }

  async chat(messages, options = {}) {
    const response = await fetch(`${this.baseURL}/chat`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        messages,
        // `??` (not `||`) so explicit falsy values like temperature: 0 survive
        temperature: options.temperature ?? 1.0,
        top_p: options.top_p ?? 0.95,
        max_tokens: options.max_tokens ?? 512
      })
    });

    if (!response.ok) {
      throw new Error(`HTTP ${response.status}`);
    }

    const data = await response.json();
    return data.content;
  }

  async generate(text, options = {}) {
    const response = await fetch(`${this.baseURL}/hf/generate`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ inputs: text, parameters: options })
    });

    if (!response.ok) {
      throw new Error(`HTTP ${response.status}`);
    }

    const data = await response.json();
    return data[0].generated_text;
  }
}

// Usage
const rox = new RoxAI();
const response = await rox.chat([
  { role: 'user', content: 'Hello!' }
]);
```

---

Built by Mohammad Faiz