---
title: Multi-Rag
emoji: 🤖
colorFrom: blue
colorTo: green
sdk: docker
app_file: main.py
pinned: false
short_description: This is the Multi-Rag Agent
---
<div align="center">
<h1>🚀 Multi-RAG AI Pipeline</h1>
<p><strong>Advanced Multi-Agent RAG Orchestration powered by LangGraph, AWS Bedrock, and FAISS</strong></p>
[Python](https://www.python.org/) ·
[LangGraph](https://github.com/langchain-ai/langgraph) ·
[FastAPI](https://fastapi.tiangolo.com/) ·
[FAISS](https://github.com/facebookresearch/faiss)
</div>
---
## 📖 Overview
**Multi-RAG AI** is a state-of-the-art, multi-agent RAG (Retrieval-Augmented Generation) pipeline designed for high-performance document intelligence. It leverages **LangGraph** for sophisticated orchestration, allowing an autonomous "Orchestrator" agent to decide which specialized workers (PDF, DOCX, TXT, Images, Web Search) are needed to answer complex user queries.
### Why Multi-RAG?
- **Intelligent Fan-out**: The orchestrator can trigger multiple workers in parallel to gather information from different sources.
- **Dynamic Routing**: Automatically detects file types and routes tasks to specialized loaders.
- **OCR Integration**: Built-in support for image processing and optical character recognition.
- **Web Search Fallback**: If local documents are insufficient, the agents can autonomously search the live web.
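The dynamic-routing idea can be sketched in a few lines of plain Python. Note this is an illustration only: the worker names and extension map below are hypothetical, not the project's actual identifiers, and the real routing is wired as LangGraph nodes.

```python
from pathlib import Path

# Hypothetical worker names -- the real project registers these as graph nodes.
WORKERS = {
    ".pdf": "pdf_worker",
    ".docx": "docx_worker",
    ".txt": "txt_worker",
    ".png": "image_ocr_worker",
    ".jpg": "image_ocr_worker",
    ".jpeg": "image_ocr_worker",
}

def route_files(paths):
    """Fan-out plan: group uploaded files by the worker that should handle them.

    Unknown extensions fall through to the web-search worker, mirroring the
    web-search fallback described above.
    """
    plan = {}
    for p in paths:
        worker = WORKERS.get(Path(p).suffix.lower(), "web_search_worker")
        plan.setdefault(worker, []).append(p)
    return plan

print(route_files(["report.pdf", "notes.txt", "scan.PNG"]))
# -> {'pdf_worker': ['report.pdf'], 'txt_worker': ['notes.txt'], 'image_ocr_worker': ['scan.PNG']}
```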
---
## 🏗️ Architecture
The system is built as a nested graph structure, providing a clean separation between high-level orchestration and low-level specialized tasks.
### 1. Main Orchestration Graph
The main graph handles the interaction between the user, the orchestrator, and the final chat response.

### 2. Worker Sub-Graph
The worker sub-graph is responsible for specialized information retrieval from various file formats.

---
## ✨ Key Features
- **📄 Multi-Format Support**:
  - **PDF**: Deep document parsing.
  - **DOCX**: Microsoft Word document integration.
  - **TXT**: Plain text analysis.
  - **Images (OCR)**: Text extraction from PNG/JPG using specialized loaders.
- **🤖 Autonomous Orchestration**: Uses a Llama 3.3 70B model on **AWS Bedrock** with a manual JSON-parsing fallback, so structured output is recovered reliably even when the model wraps it in prose.
- **🔍 Hybrid Retrieval**: Combines local FAISS vector stores with real-time Google Search integration.
- **🧠 Persistence & Memory**: Full multi-turn conversation support with LangGraph checkpointers.
- **⚡ Modern Tech Stack**: Built with `uv` for lightning-fast dependency management and `FastAPI` for a high-performance backend.
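A minimal sketch of what such a JSON fallback can look like (the project's actual implementation may differ): try strict parsing first, then fall back to extracting the first brace-delimited object from the raw reply.

```python
import json
import re

def parse_structured(reply: str) -> dict:
    """Parse a model reply as JSON, falling back to the first {...} span.

    LLMs often wrap structured output in prose or code fences; the regex
    fallback recovers the JSON object in those cases.
    """
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", reply, re.DOTALL)
        if match:
            return json.loads(match.group(0))
        raise ValueError("no JSON object found in model reply")

print(parse_structured('Sure! Here is the plan:\n{"workers": ["pdf", "web"]}'))
# -> {'workers': ['pdf', 'web']}
```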
---
## 🛠️ Tech Stack
- **Core**: [Python 3.12](https://www.python.org/)
- **Orchestration**: [LangGraph](https://github.com/langchain-ai/langgraph) & [LangChain](https://github.com/langchain-ai/langchain)
- **Large Language Models**: [AWS Bedrock](https://aws.amazon.com/bedrock/) (Llama 3.3 70B)
- **Vector Storage**: [FAISS](https://github.com/facebookresearch/faiss)
- **Embeddings**: [HuggingFace](https://huggingface.co/) (all-MiniLM-L6-v2)
- **Backend API**: [FastAPI](https://fastapi.tiangolo.com/)
- **Package Management**: [uv](https://github.com/astral-sh/uv)
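Under the hood, vector retrieval boils down to nearest-neighbor search over embedding vectors. Here is a brute-force NumPy sketch of the cosine-similarity search that a FAISS flat index performs (FAISS adds the optimized index structures for scale; this is purely conceptual):

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=2):
    """Brute-force cosine-similarity search over document embeddings."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                    # cosine similarity per document
    order = np.argsort(-scores)[:k]   # highest similarity first
    return order.tolist(), scores[order].tolist()

docs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
idx, scores = top_k(np.array([1.0, 0.0]), docs)
print(idx)  # most similar document indices first
```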
---
## 🚀 Getting Started
### Prerequisites
- Python 3.12+
- `uv` installed (`pip install uv`)
- AWS Credentials (for Bedrock access)
### 1. Installation
```bash
# Clone the repository
git clone https://github.com/VashuTheGreat/Multi-Rag.git
cd Multi-Rag
# Install dependencies
uv sync
```
### 2. Environment Setup
Create a `.env` file in the root directory:
```env
# AWS Bedrock Config
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION_NAME=us-east-1
# Tooling (e.g., Search API keys if applicable)
# ...
```
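A fail-fast startup check for these credentials might look like the sketch below. This is an assumption for illustration, not the project's actual code; the app may instead load the file via `python-dotenv`.

```python
import os

# Variables the Bedrock client needs -- these match the .env keys above.
REQUIRED = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_REGION_NAME")

def missing_env(environ=os.environ):
    """Return which required variables are unset, so startup can fail fast."""
    return [name for name in REQUIRED if not environ.get(name)]

print(missing_env({"AWS_REGION_NAME": "us-east-1"}))
# -> ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY']
```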
### 3. Run the Application
```bash
# Start the FastAPI server
uv run main.py
```
Navigate to `http://127.0.0.1:8000` to start chatting with your documents!
---
## 📂 Project Structure
```text
Multi-Rag/
├── api/              # FastAPI Endpoints & Controllers
├── src/
│   └── MultiRag/
│       ├── components/   # Core graph runners & embedders
│       ├── graph/        # LangGraph definitions (Main & Worker)
│       ├── models/       # Pydantic state & output schemas
│       ├── nodes/        # Individual graph node implementations
│       ├── prompts/      # LLM system prompts
│       └── utils/        # Ingestion & document processing utilities
├── static/           # Frontend assets (CSS, JS)
├── templates/        # Jinja2 HTML templates
└── db/               # Local FAISS index persistence
```
---
<div align="center">
<p>Built with ❤️ for the future of Agentic RAG.</p>
</div>