# FastAPI RAG Chatbot Backend

This project implements a FastAPI backend for a Retrieval-Augmented Generation (RAG) chatbot. It integrates with OpenAI for chat completions and embeddings, Neon Serverless Postgres for persistent data, and Qdrant Cloud for vector storage.
## Project Structure

```
backend/
├── app/
│   ├── __init__.py
│   ├── main.py
│   ├── config.py
│   ├── database.py
│   ├── qdrant_client.py
│   ├── models/
│   │   ├── __init__.py
│   │   ├── user.py
│   │   └── chat.py
│   ├── schemas/
│   │   ├── __init__.py
│   │   └── chat.py
│   ├── routes/
│   │   ├── __init__.py
│   │   ├── chat.py
│   │   └── health.py
│   └── services/
│       ├── __init__.py
│       ├── rag_service.py
│       ├── embeddings_service.py
│       └── openai_service.py
├── scripts/
│   └── ingest_content.py
├── .env.example
├── requirements.txt
├── README.md
└── .gitignore
```
## Setup Instructions

1. **Clone the repository**:
   ```bash
   git clone <repository-url>
   cd robotic
   ```
2. **Navigate to the backend directory**:
   ```bash
   cd backend
   ```
3. **Set up the Python virtual environment and install dependencies**:

   On Windows, run:
   ```bash
   .\setup.bat
   ```
   On Linux/macOS, run:
   ```bash
   python3 -m venv venv
   source venv/bin/activate
   pip install -r requirements.txt
   ```
4. **Configure environment variables**:

   Create a `.env` file in the `backend/` directory by copying `.env.example` and filling in your credentials:
   ```bash
   copy .env.example .env   # Windows
   cp .env.example .env     # Linux/macOS
   ```
   Edit the `.env` file with your actual API keys and database URLs:
   ```
   OPENAI_API_KEY=your_openai_api_key_here
   NEON_DATABASE_URL=your_neon_postgres_connection_string_here
   QDRANT_URL=your_qdrant_cluster_url_here
   QDRANT_API_KEY=your_qdrant_api_key_here
   ```
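The application presumably loads these variables at startup, e.g. in `app/config.py`. The following is a minimal sketch using only the standard library; the actual module may instead use a library such as `pydantic-settings`, and the `Settings` and `load_settings` names here are illustrative, not taken from the project:

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    """Settings whose field names mirror the keys in .env.example."""
    openai_api_key: str
    neon_database_url: str
    qdrant_url: str
    qdrant_api_key: str


def load_settings(env=os.environ) -> Settings:
    """Read required settings from the environment, failing fast if any is missing."""
    def require(name: str) -> str:
        value = env.get(name)
        if not value:
            raise RuntimeError(f"Missing required environment variable: {name}")
        return value

    return Settings(
        openai_api_key=require("OPENAI_API_KEY"),
        neon_database_url=require("NEON_DATABASE_URL"),
        qdrant_url=require("QDRANT_URL"),
        qdrant_api_key=require("QDRANT_API_KEY"),
    )
```

Failing fast on a missing variable surfaces configuration mistakes at startup rather than as an opaque error on the first API call.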
## Running the Application

1. **Activate your virtual environment**:

   On Windows:
   ```bash
   .\venv\Scripts\activate
   ```
   On Linux/macOS:
   ```bash
   source venv/bin/activate
   ```
2. **Start the FastAPI server**:

   On Windows, run:
   ```bash
   .\run.bat
   ```
   On Linux/macOS, run:
   ```bash
   uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
   ```

The API will be accessible at `http://localhost:8000`.
## API Endpoints

*(Note: Detailed API documentation will be available at `/docs` once the server is running.)*

### Health Check

- **GET `/health`**
  - Returns the health status of the backend and its integrated services.

### Chatbot Interaction

- **POST `/chat/`**
  - **Description**: Sends a user query to the RAG chatbot and receives a generated response.
  - **Request Body Example**:
    ```json
    {
      "query": "What is the main topic of the book?"
    }
    ```
  - **Response Body Example**:
    ```json
    {
      "response": "The main topic of the book is ..."
    }
    ```
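As an illustration, the endpoint can be called from Python with the standard library alone. The `build_chat_payload`, `parse_chat_response`, and `chat` helpers below are a hypothetical client, not part of the project:

```python
import json
import urllib.request


def build_chat_payload(query: str) -> bytes:
    """Serialize the request body documented above."""
    return json.dumps({"query": query}).encode("utf-8")


def parse_chat_response(raw: bytes) -> str:
    """Extract the generated answer from the documented response body."""
    return json.loads(raw)["response"]


def chat(query: str, base_url: str = "http://localhost:8000") -> str:
    """POST a query to /chat/ on a running server and return the answer text."""
    req = urllib.request.Request(
        f"{base_url}/chat/",
        data=build_chat_payload(query),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return parse_chat_response(resp.read())
```

With the server running, `chat("What is the main topic of the book?")` returns the generated response string.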
## Content Ingestion Script

- **`scripts/ingest_content.py`**
  - This script reads MDX files from `../physical-ai-humanoid-robotics/docs/`, chunks the text, generates OpenAI embeddings, and stores them in Qdrant.
  - **Usage**:
    ```bash
    python scripts/ingest_content.py
    ```
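The chunking step could look roughly like the fixed-size, overlapping splitter below. This is a sketch only; the actual script's strategy may differ, and the `max_chars` and `overlap` defaults are illustrative:

```python
def chunk_text(text: str, max_chars: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks that overlap by `overlap` characters.

    Overlap keeps sentences that straddle a chunk boundary retrievable
    from at least one chunk during vector search.
    """
    if overlap >= max_chars:
        raise ValueError("overlap must be smaller than max_chars")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks
```

Each resulting chunk would then be embedded via the OpenAI embeddings API and upserted into a Qdrant collection along with metadata such as the source file path.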