# Loomyloo Personal Coder AI - API Reference

This document details all available endpoints and parameters for your custom AI API hosted on Hugging Face Spaces.
## Base URL

https://loomisgitarrist-personal-coder-ai.hf.space
## Endpoints

### 1. Chat Completion (`/ask`)
Generates a response from the Qwen 2.5 Coder 1.5B model.
- Type: Coding Assistant
- Status: Active (Stable)
- Context: 20-message memory
- Method: GET
- URL: `/ask`
#### Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | The user's input message, question, or code task. |
#### Example Request

```bash
curl "https://loomisgitarrist-personal-coder-ai.hf.space/ask?prompt=Write%20a%20Python%20Hello%20World"
```
#### Example Response (JSON)

```json
{
  "generated_text": "Here is the Python code:\n\n```python\nprint('Hello, World!')\n```"
}
```
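When building the request URL by hand, the prompt must be percent-encoded, as in the `curl` example. A minimal sketch using only Python's standard library — the helper name `build_ask_url` is illustrative, not part of the API:

```python
from urllib.parse import quote

API_URL = "https://loomisgitarrist-personal-coder-ai.hf.space"

def build_ask_url(prompt: str) -> str:
    # Percent-encode the prompt so spaces and special characters
    # survive the query string (" " becomes "%20", etc.)
    return f"{API_URL}/ask?prompt={quote(prompt)}"

print(build_ask_url("Write a Python Hello World"))
```

HTTP client libraries such as `requests` or `fetch` with `encodeURIComponent` handle this encoding for you; manual encoding only matters for hand-built URLs.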
### 2. Reset Memory (`/reset`)
Clears the conversation history stored on the server. Use this when starting a completely new task or topic to prevent the AI from getting confused by previous context.
- Method: GET
- URL: `/reset`

#### Parameters

None
#### Example Request

```bash
curl "https://loomisgitarrist-personal-coder-ai.hf.space/reset"
```
#### Example Response (JSON)

```json
{
  "status": "Memory reset",
  "message": "Conversation history cleared."
}
```
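In a script, the reset call can be wrapped in a small helper. This is a sketch using only the standard library, not part of any published client; the injectable `opener` parameter is an assumption added purely so the function can be exercised without the live server:

```python
import json
from urllib.request import urlopen

API_URL = "https://loomisgitarrist-personal-coder-ai.hf.space"

def reset_memory(opener=urlopen):
    # GET /reset clears the server-side conversation history.
    # `opener` defaults to urllib's urlopen and is injectable for offline tests.
    with opener(f"{API_URL}/reset", timeout=10) as response:
        return json.loads(response.read())["status"]
```

Call `reset_memory()` before switching to an unrelated task so stale context does not leak into the next conversation.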
### 3. Visual Chat UI (`/`)
A graphical web interface to chat with the model in your browser.
- Method: GET
- URL: `/`
- Access: Open https://loomisgitarrist-personal-coder-ai.hf.space in your browser.
## Code Integration Examples

### Python Client
```python
import requests

API_URL = "https://loomisgitarrist-personal-coder-ai.hf.space"

def ask_ai(prompt):
    # 1. Send request (requests URL-encodes the prompt automatically)
    response = requests.get(f"{API_URL}/ask", params={"prompt": prompt})
    # 2. Parse JSON
    if response.status_code == 200:
        return response.json()["generated_text"]
    else:
        return f"Error: {response.status_code}"

# Usage
print(ask_ai("Write a binary search function in Go"))
```
### JavaScript / Node.js Client
```javascript
const API_URL = "https://loomisgitarrist-personal-coder-ai.hf.space";

async function askAI(prompt) {
  try {
    const response = await fetch(`${API_URL}/ask?prompt=${encodeURIComponent(prompt)}`);
    const data = await response.json();
    console.log("AI Response:", data.generated_text);
  } catch (error) {
    console.error("Error:", error);
  }
}

// Usage
askAI("Explain how to use Docker with FastAPI");
```
## Model Configuration (Server-Side)
These settings are hardcoded on the server for stability on the free tier.
- Model: Qwen2.5-Coder-1.5B-Instruct
- Max Response Length: 512 tokens
- Temperature: 0.7 (Creativity balance)
- Top P: 0.9
- Memory Limit: Last 20 messages
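The server-side implementation is not published here, but the 20-message memory limit behaves like a rolling window: once the history is full, each new message evicts the oldest one. A sketch of that behavior using Python's `collections.deque` (the variable names are illustrative, not taken from the server code):

```python
from collections import deque

MEMORY_LIMIT = 20  # matches the documented "last 20 messages" window

history = deque(maxlen=MEMORY_LIMIT)

# Simulate 25 messages: the oldest 5 are silently dropped
for i in range(25):
    history.append(f"message {i}")

print(len(history))   # 20
print(history[0])     # oldest surviving message: "message 5"
```

This is why `/reset` exists: the window drops old messages automatically, but only a reset removes recent, no-longer-relevant context.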