# FastAPI RAG Chatbot Backend
This project implements a FastAPI backend for a Retrieval-Augmented Generation (RAG) chatbot. It integrates with OpenAI for chat completions and embeddings, Neon Serverless Postgres for persistent data, and Qdrant Cloud for vector storage.
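At a high level, the RAG flow embeds the user's query, retrieves the most relevant content chunks from Qdrant, and passes them to OpenAI as context for the completion. A minimal sketch of the prompt-assembly step is shown below; the function and variable names are illustrative, not the actual `rag_service` API:

```python
def build_rag_prompt(query: str, retrieved_chunks: list[str]) -> str:
    """Combine retrieved context chunks and the user query into a single prompt."""
    # Number each chunk so the model can reference its sources.
    context = "\n\n".join(
        f"[{i + 1}] {chunk}" for i, chunk in enumerate(retrieved_chunks)
    )
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )
```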
## Project Structure
```
backend/
├── app/
│   ├── __init__.py
│   ├── main.py
│   ├── config.py
│   ├── database.py
│   ├── qdrant_client.py
│   ├── models/
│   │   ├── __init__.py
│   │   ├── user.py
│   │   └── chat.py
│   ├── schemas/
│   │   ├── __init__.py
│   │   └── chat.py
│   ├── routes/
│   │   ├── __init__.py
│   │   ├── chat.py
│   │   └── health.py
│   └── services/
│       ├── __init__.py
│       ├── rag_service.py
│       ├── embeddings_service.py
│       └── openai_service.py
├── scripts/
│   └── ingest_content.py
├── .env.example
├── requirements.txt
├── README.md
└── .gitignore
```
## Setup Instructions
1. **Clone the repository**:
```bash
git clone <repository-url>
cd robotic
```
2. **Navigate to the backend directory**:
```bash
cd backend
```
3. **Set up the Python virtual environment and install dependencies**:
On Windows, run:
```bash
.\setup.bat
```
On Linux/macOS, run:
```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
4. **Configure Environment Variables**:
Create a `.env` file in the `backend/` directory by copying `.env.example` and filling in your credentials:
```bash
copy .env.example .env
# or for Linux/macOS
cp .env.example .env
```
Edit the `.env` file with your actual API keys and database URLs:
```
OPENAI_API_KEY=your_openai_api_key_here
NEON_DATABASE_URL=your_neon_postgres_connection_string_here
QDRANT_URL=your_qdrant_cluster_url_here
QDRANT_API_KEY=your_qdrant_api_key_here
```
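A minimal sketch of how `app/config.py` might load these variables and fail fast when one is missing (illustrative only; the actual module may use a different approach, such as Pydantic settings):

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    openai_api_key: str
    neon_database_url: str
    qdrant_url: str
    qdrant_api_key: str


def load_settings() -> Settings:
    """Read required settings from the environment, raising if any are missing."""
    def require(name: str) -> str:
        value = os.getenv(name)
        if not value:
            raise RuntimeError(f"Missing required environment variable: {name}")
        return value

    return Settings(
        openai_api_key=require("OPENAI_API_KEY"),
        neon_database_url=require("NEON_DATABASE_URL"),
        qdrant_url=require("QDRANT_URL"),
        qdrant_api_key=require("QDRANT_API_KEY"),
    )
```

Failing at startup with a clear message is easier to debug than a connection error deep inside a request handler.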
## Running the Application
1. **Activate your virtual environment**:
On Windows:
```bash
.\venv\Scripts\activate
```
On Linux/macOS:
```bash
source venv/bin/activate
```
2. **Start the FastAPI server**:
On Windows, run:
```bash
.\run.bat
```
On Linux/macOS, run:
```bash
uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
```
The API will be accessible at `http://localhost:8000`.
## API Endpoints
*(Note: Detailed API documentation will be available at `/docs` once the server is running)*
### Health Check
- **GET `/health`**
- Returns the health status of the backend and its integrated services.
### Chatbot Interaction
- **POST `/chat/`**
- **Description**: Sends a user query to the RAG chatbot and receives a generated response.
- **Request Body Example**:
```json
{
"query": "What is the main topic of the book?"
}
```
- **Response Body Example**:
```json
{
"response": "The main topic of the book is ..."
}
```
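Once the server is running, the endpoint can be exercised from Python using only the standard library. This is a hedged sketch (the helper names are illustrative); the payload shape matches the request example above:

```python
import json
import urllib.request


def make_chat_payload(query: str) -> bytes:
    """Serialize the chat request body in the shape the endpoint expects."""
    return json.dumps({"query": query}).encode("utf-8")


def ask_chatbot(query: str, base_url: str = "http://localhost:8000") -> str:
    """POST a query to /chat/ and return the generated response text."""
    request = urllib.request.Request(
        f"{base_url}/chat/",
        data=make_chat_payload(query),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as resp:
        return json.loads(resp.read())["response"]
```

For example, `ask_chatbot("What is the main topic of the book?")` requires the server from the previous section to be running on port 8000.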
## Content Ingestion Script
- **`scripts/ingest_content.py`**
- This script is responsible for reading MDX files from `../physical-ai-humanoid-robotics/docs/`, chunking the text, generating OpenAI embeddings, and storing them in Qdrant.
- **Usage**:
```bash
python scripts/ingest_content.py
```
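The chunking step can be pictured as a sliding window over the document text, with some overlap so that sentences spanning a chunk boundary are not lost. A minimal sketch (the script's actual chunk size, overlap, and splitting strategy may differ):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    step = chunk_size - overlap  # advance by less than chunk_size to overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

Each resulting chunk would then be embedded (e.g., via the OpenAI embeddings API) and upserted into a Qdrant collection together with metadata such as its source file.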