---
title: Personalized Chatbot API
emoji: πŸ€–
colorFrom: blue
colorTo: green
sdk: docker
pinned: false
license: mit
---
# Personalized Chatbot Backend
FastAPI backend for a personalized chatbot with Human-in-the-Loop (HITL) feedback.
## Features
- πŸš€ Fast inference with Llama 3.2
- πŸ’Ύ Feedback collection for continuous learning
- πŸ“Š Statistics tracking
- πŸ”§ LoRA adapter support for finetuned models
## API Endpoints
### POST /chat
Generates a chatbot response from the message and conversation history.
**Request:**
```json
{
  "message": "Hello, how are you?",
  "history": [],
  "max_length": 200,
  "temperature": 0.7
}
```
**Response:**
```json
{
  "reply": "I'm doing well, thank you!",
  "timestamp": 1234567890.123
}
```
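A minimal Python client for this endpoint can be sketched with the standard library only (the `API_URL`, helper names, and default generation parameters here are illustrative, not part of the API itself):

```python
import json
import urllib.request

API_URL = "https://YOUR-USERNAME-chatbot-api.hf.space"  # replace with your Space URL


def build_chat_payload(message, history=None, max_length=200, temperature=0.7):
    """Assemble the JSON body expected by POST /chat."""
    return {
        "message": message,
        "history": history or [],
        "max_length": max_length,
        "temperature": temperature,
    }


def chat(message, history=None, **gen_kwargs):
    """Send one turn to the /chat endpoint and return the reply text."""
    body = json.dumps(build_chat_payload(message, history, **gen_kwargs)).encode()
    req = urllib.request.Request(
        f"{API_URL}/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["reply"]
```

To hold a multi-turn conversation, append each exchange to `history` before the next call.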
### POST /feedback
Submits a correction for a model response.
**Request:**
```json
{
  "user_input": "What is the capital of France?",
  "model_reply": "The capital is Berlin",
  "user_correction": "The capital is Paris",
  "reason": "incorrect_answer"
}
```
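Submitting a correction looks much the same; a stdlib sketch (the URL and helper name are placeholders, and the payload mirrors the request schema above):

```python
import json
import urllib.request

API_URL = "https://YOUR-USERNAME-chatbot-api.hf.space"  # replace with your Space URL

# Example correction payload, matching the request schema above.
feedback = {
    "user_input": "What is the capital of France?",
    "model_reply": "The capital is Berlin",
    "user_correction": "The capital is Paris",
    "reason": "incorrect_answer",
}


def submit_feedback(payload, api_url=API_URL):
    """POST a correction to /feedback and return the decoded response."""
    req = urllib.request.Request(
        f"{api_url}/feedback",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```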
### GET /stats
Returns aggregate feedback statistics.
**Response:**
```json
{
  "total_interactions": 100,
  "corrections": 15,
  "accepted": 85,
  "correction_rate": 0.15
}
```
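The sample response is internally consistent: every interaction is either accepted or corrected, and `correction_rate` is corrections over total interactions. In Python:

```python
# Sample /stats response from above.
stats = {
    "total_interactions": 100,
    "corrections": 15,
    "accepted": 85,
    "correction_rate": 0.15,
}

# Every interaction is counted as either accepted or corrected.
assert stats["corrections"] + stats["accepted"] == stats["total_interactions"]

# correction_rate = corrections / total_interactions.
assert stats["corrections"] / stats["total_interactions"] == stats["correction_rate"]
```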
### GET /health
Health check endpoint
## Configuration
The model is configured in the `startup_event()` function:
```python
model_manager.initialize(
    model_name="meta-llama/Llama-3.2-1B-Instruct",
    adapter_path=None,  # path to a LoRA adapter, if finetuned
    use_4bit=True,      # load with 4-bit quantization
)
```
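To serve a finetuned variant, point `adapter_path` at a LoRA adapter directory (the path below is purely illustrative, not a file shipped with this Space):

```python
model_manager.initialize(
    model_name="meta-llama/Llama-3.2-1B-Instruct",
    adapter_path="adapters/feedback-lora-v1",  # hypothetical adapter directory
    use_4bit=True,
)
```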
## Usage
1. Fork this Space
2. Modify `model_name` and `adapter_path` in `app.py` if needed
3. The API will be available at: `https://YOUR-USERNAME-chatbot-api.hf.space`
## Local Development
```bash
pip install -r requirements.txt
python app.py
```
The API will be available at `http://localhost:7860`.
## License
MIT