Spaces: Arif committed · ca592ac
Parent(s): 68d0867
Updated readme

README.md CHANGED
# RAG Portfolio Project

A state-of-the-art Retrieval-Augmented Generation (RAG) system leveraging modern generative AI and vector search technologies. This project demonstrates how to build a production-grade system that enables advanced question answering, document search, and contextual generation on your own infrastructure: private, scalable, and fast.

---
## Table of Contents

- Project Overview
- Features
- Tech Stack
- Getting Started
- Architecture
- API Endpoints
- Usage Examples
- Testing
- Project Structure
- Troubleshooting
- Contributing
- License

---
## Project Overview

This project showcases how to combine large language models (LLMs), local vector databases, and a modern Python web API for secure, high-performance knowledge and document retrieval. All LLM operations run locally; no data leaves your machine.

Ideal for applications in internal research, enterprise QA, knowledge management, or compliance-sensitive AI tasks.

---
## Features

- **Local LLM Inference:** Runs entirely on your machine using Ollama and open-source models (e.g., Llama 3.1).
- **Vector Database Search:** Uses Qdrant for fast, scalable semantic retrieval.
- **Flexible Document Ingestion:** Upload PDF, DOCX, or TXT files for indexing and search.
- **FastAPI Back End:** High-concurrency, type-safe REST API with automatic documentation.
- **Modern Python Package Management:** Built with `uv` for blazing-fast dependency resolution.
- **Modular, Extensible Codebase:** Clean architecture, easy to extend and maintain.
- **Privacy and Security:** No cloud calls; ideal for regulated sectors.
- **Fully Containerizable:** Easily deploy with Docker.

---
## Tech Stack

- **LLM:** Ollama (local inference engine), Llama 3.1
- **Vector DB:** Qdrant
- **Embeddings:** Sentence Transformers
- **API:** FastAPI + Uvicorn
- **Package Manager:** uv
- **Code Editor:** Cursor (recommended)
- **Testing & Quality:** Pytest, Black, Ruff
- **DevOps:** Docker-ready

---
## Getting Started

### 1. Prerequisites

- Python 3.10+
- `uv` package manager
- Ollama installed locally
- Qdrant (Docker recommended)

### 2. Setup

```bash
# Clone the repository
git clone https://github.com/YOUR_USERNAME/rag-portfolio-project.git
cd rag-portfolio-project

# Install dependencies
uv sync

# Copy and configure environment variables
cp .env.example .env
# (Update .env if needed)
```
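The `.env` file typically holds service endpoints and model settings. A hypothetical layout for orientation (these variable names are illustrative assumptions; `.env.example` in the repo is authoritative):

```ini
# Illustrative variable names only - see .env.example for the real ones
QDRANT_URL=http://localhost:6333
OLLAMA_MODEL=llama3.1
```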
### 3. Start Qdrant (Vector DB)

```bash
docker run -p 6333:6333 qdrant/qdrant
```

### 4. Pull Ollama LLM Model

```bash
ollama pull llama3.1
```

### 5. Run the FastAPI Application

```bash
uv run uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```
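Since the project is Docker-ready, Qdrant and the API can also be started together. A hypothetical `docker-compose.yml` sketch (the service names, the Dockerfile, and the `QDRANT_URL` variable are assumptions, not taken from this repo):

```yaml
services:
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"
  api:
    build: .                            # assumes a Dockerfile for the FastAPI app
    ports:
      - "8000:8000"
    environment:
      - QDRANT_URL=http://qdrant:6333   # hypothetical variable name
    depends_on:
      - qdrant
```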
### 6. Open API Documentation

Access at: [http://localhost:8000/docs](http://localhost:8000/docs)

---
## Architecture

```text
┌──────────────┐
│     User     │
└──────┬───────┘
       │
┌──────▼───────┐
│ FastAPI REST │
│   Backend    │
└──────┬───────┘
  ┌────┴──────────────┐
  │                   │
┌─▼────────┐  ┌───────▼────────┐
│ Document │  │ Query, RAG     │
│ Ingestion│  │ Chain & Gen.   │
└───┬──────┘  └────────────────┘
    │
┌───▼────────┐
│ Embedding  │
│ Generation │
└───┬────────┘
    │
┌───▼────────┐
│  Qdrant    │
│ Vector DB  │
└───┬────────┘
    │
┌───▼────────┐
│ Ollama LLM │
└────────────┘
```

**Workflow:**

- Documents are split into semantic chunks and indexed as vectors.
- Sentence Transformers generate embeddings.
- Qdrant retrieves the most relevant contexts.
- Ollama answers using retrieved context (true RAG).

---
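The workflow above can be sketched end to end in plain Python. This is an illustrative toy, not code from this repo: a word-count vector stands in for Sentence Transformer embeddings, and a brute-force cosine search stands in for Qdrant.

```python
# Toy sketch of the chunk -> embed -> retrieve steps. Nothing here is from
# this repo: Counter word counts stand in for real embeddings, and a
# brute-force cosine search stands in for Qdrant.
import math
from collections import Counter

def chunk_text(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into overlapping character chunks (naive chunking)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str) -> Counter:
    """Toy embedding: a sparse word-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k chunks most similar to the question."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]

chunks = chunk_text(
    "Qdrant stores vectors. Ollama runs the model locally. FastAPI serves requests."
)
print(retrieve("which database stores vectors", chunks, top_k=1))
```

In the real system the retrieved chunks are then passed to the LLM as context for generation; this sketch stops at retrieval.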
## API Endpoints

| Method | Path           | Description                     |
|--------|----------------|---------------------------------|
| GET    | `/`            | Root endpoint                   |
| GET    | `/health`      | Check system status             |
| POST   | `/ingest/file` | Upload and index document       |
| POST   | `/query`       | Query system for answer         |
| DELETE | `/reset`       | Reset vector database (danger!) |

Docs available at [http://localhost:8000/docs](http://localhost:8000/docs)

---
## Usage Examples

**1. Upload a Document (.pdf/.docx/.txt)**

```bash
curl -X POST "http://localhost:8000/ingest/file" \
  -H "accept: application/json" \
  -F "file=@your_document.pdf"
```

**2. Query the System**

```bash
curl -X POST "http://localhost:8000/query" \
  -H "Content-Type: application/json" \
  -d '{"question": "What is the key insight in the uploaded document?", "top_k": 5}'
```

**3. Reset Collection**

```bash
curl -X DELETE "http://localhost:8000/reset"
```

---
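The same `/query` call can be built from Python using only the standard library; the endpoint path and payload fields mirror the curl example above.

```python
# Build the POST /query request from the curl example, standard library only.
import json
import urllib.request

def build_query_request(question: str, top_k: int = 5) -> urllib.request.Request:
    payload = json.dumps({"question": question, "top_k": top_k}).encode()
    return urllib.request.Request(
        "http://localhost:8000/query",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_query_request("What is the key insight in the uploaded document?")
# urllib.request.urlopen(req) would send it once the API is running
print(req.method, req.full_url)
```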
## Testing

- Unit tests in `/tests` using Pytest.
- Run all tests:

```bash
uv run pytest
```

- Ensure formatting and linting:

```bash
uv run black app/ tests/
uv run ruff check app/ tests/
```

---
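A minimal illustration of the Pytest style used in `/tests`; the module name and `normalize_scores` helper are hypothetical, not taken from this repo.

```python
# Hypothetical test module (e.g. tests/test_scores.py); normalize_scores
# is illustrative, not an actual function from this repo.
def normalize_scores(scores: list[float]) -> list[float]:
    """Scale retrieval scores into [0, 1]."""
    if not scores:
        return []
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def test_normalize_bounds():
    out = normalize_scores([0.2, 0.5, 0.9])
    assert min(out) == 0.0 and max(out) == 1.0

def test_constant_scores():
    assert normalize_scores([0.3, 0.3]) == [1.0, 1.0]
```

Pytest discovers any `test_*` function automatically, so `uv run pytest` picks these up without extra configuration.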
## Project Structure

```text
rag-portfolio-project/
├── .env
├── pyproject.toml
├── README.md
├── app/
│   ├── main.py
│   ├── config.py
│   ├── models/
│   ├── core/
│   ├── services/
│   └── api/
├── data/
│   ├── documents/
│   └── processed/
├── tests/
│   └── test_rag.py
└── scripts/
    ├── setup_qdrant.py
    └── ingest_documents.py
```

---
## Troubleshooting

- **Missing modules?** Run `uv add <module-name>` for any missing Python packages.
- **Ollama model not found?** Check with `ollama list` or update `.env`.
- **Qdrant not running?** Ensure the Docker container is up (`docker ps`).
- **File upload errors?** Install `python-multipart`.

---
## Contributing

Contributions are welcome! Fork the repo, open issues, or submit pull requests for enhancements or bug fixes.

---
## License

Open-source under the MIT License.

---

## Questions?

Contact the repository owner or open an issue; happy to help!