---
title: Multi-Rag
emoji: 🤖
colorFrom: blue
colorTo: green
sdk: docker
app_file: main.py
pinned: false
short_description: This is the Multi-Rag Agent
---
# 🚀 Multi-RAG AI Pipeline

**Advanced Multi-Agent RAG Orchestration powered by LangGraph, AWS Bedrock, and FAISS**
## 📖 Overview
Multi-RAG AI is a state-of-the-art, multi-agent RAG (Retrieval-Augmented Generation) pipeline designed for high-performance document intelligence. It leverages LangGraph for sophisticated orchestration, allowing an autonomous "Orchestrator" agent to decide which specialized workers (PDF, DOCX, TXT, Images, Web Search) are needed to answer complex user queries.
### Why Multi-RAG?
- Intelligent Fan-out: The orchestrator can trigger multiple workers in parallel to gather information from different sources.
- Dynamic Routing: Automatically detects file types and routes tasks to specialized loaders.
- OCR Integration: Built-in support for image processing and optical character recognition.
- Web Search Fallback: If local documents are insufficient, the agents can autonomously search the live web.
## 🏗️ Architecture
The system is built as a nested graph structure, providing a clean separation between high-level orchestration and low-level specialized tasks.
### 1. Main Orchestration Graph
The main graph handles the interaction between the user, the orchestrator, and the final chat response.
### 2. Worker Sub-Graph
The worker sub-graph is responsible for specialized information retrieval from various file formats.
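As a rough illustration of this layout, the sketch below wires a compiled worker sub-graph into a main orchestration graph with LangGraph. The node names, state fields, and routing are assumptions for demonstration only, not the repository's actual implementation.

```python
# Minimal sketch of the nested-graph idea, assuming langgraph is installed.
from typing import List, TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver


class State(TypedDict):
    query: str
    documents: List[str]
    answer: str


def load_pdf(state: State) -> dict:
    # Placeholder for a specialized PDF worker node.
    return {"documents": state["documents"] + ["<pdf text>"]}


def web_search(state: State) -> dict:
    # Placeholder for the web-search fallback worker.
    return {"documents": state["documents"] + ["<search results>"]}


# Worker sub-graph: specialized retrieval nodes.
worker = StateGraph(State)
worker.add_node("pdf_loader", load_pdf)
worker.add_node("web_search", web_search)
worker.add_edge(START, "pdf_loader")
worker.add_edge("pdf_loader", "web_search")
worker.add_edge("web_search", END)
worker_graph = worker.compile()


def orchestrator(state: State) -> dict:
    # In the real pipeline an LLM decides which workers to dispatch;
    # this stub simply passes the state through.
    return {}


def chat(state: State) -> dict:
    # Final response node: would normally call the LLM with retrieved context.
    return {"answer": f"Answer based on {len(state['documents'])} documents."}


# Main orchestration graph embeds the compiled worker sub-graph as one node.
main = StateGraph(State)
main.add_node("orchestrator", orchestrator)
main.add_node("workers", worker_graph)
main.add_node("chat", chat)
main.add_edge(START, "orchestrator")
main.add_edge("orchestrator", "workers")
main.add_edge("workers", "chat")
main.add_edge("chat", END)

# A checkpointer enables multi-turn memory keyed by thread_id.
app = main.compile(checkpointer=MemorySaver())
result = app.invoke(
    {"query": "Summarize the report", "documents": [], "answer": ""},
    config={"configurable": {"thread_id": "demo"}},
)
print(result["answer"])
```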
## ✨ Key Features
- 📄 Multi-Format Support:
  - PDF: Deep document parsing.
  - DOCX: Microsoft Word document integration.
  - TXT: Plain text analysis.
  - Images (OCR): Extraction of text from PNG/JPG using specialized loaders.
- 🤖 Autonomous Orchestration: Uses a Llama 3.3 70B model on AWS Bedrock with a manual JSON fallback mechanism for reliable structured output (see the sketch after this list).
- 🔍 Hybrid Retrieval: Combines local FAISS vector stores with real-time Google Search integration.
- 🧠 Persistence & Memory: Full multi-turn conversation support with LangGraph checkpointers.
- ⚡ Modern Tech Stack: Built with `uv` for lightning-fast dependency management and `FastAPI` for a high-performance backend.
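The manual JSON fallback mentioned above can be pictured like this: try strict parsing first, then fall back to extracting the first JSON object from the raw model text. The schema, regex heuristic, and function name below are illustrative assumptions, not the project's exact code.

```python
# Illustrative sketch of a manual JSON fallback for structured output.
import json
import re

from pydantic import BaseModel, ValidationError


class WorkerPlan(BaseModel):
    workers: list[str]   # e.g. ["pdf", "web_search"] -- hypothetical schema
    rationale: str


def parse_plan(raw_llm_output: str) -> WorkerPlan | None:
    """Try strict parsing first, then fall back to the first JSON object found."""
    candidates = [raw_llm_output]
    match = re.search(r"\{.*\}", raw_llm_output, re.DOTALL)
    if match:
        candidates.append(match.group(0))
    for text in candidates:
        try:
            return WorkerPlan.model_validate(json.loads(text))
        except (json.JSONDecodeError, ValidationError):
            continue
    return None  # caller can re-prompt the model or fall back to a default plan


print(parse_plan('Sure! {"workers": ["pdf"], "rationale": "query cites a PDF"}'))
```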
## 🛠️ Tech Stack
- Core: Python 3.12
- Orchestration: LangGraph & LangChain
- Large Language Models: AWS Bedrock (Llama 3.3 70B)
- Vector Storage: FAISS
- Embeddings: HuggingFace (all-MiniLM-L6-v2)
- Backend API: FastAPI
- Package Management: uv
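The vector-storage and embedding pieces above can be combined in a few lines. The snippet below is a minimal sketch assuming `langchain-huggingface`, `langchain-community`, `faiss-cpu`, and `sentence-transformers` are installed; the example texts and index path are illustrative (the `db/` folder matches the project layout further down).

```python
# Sketch of a FAISS index backed by all-MiniLM-L6-v2 embeddings.
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)

# Build an index from already-extracted document chunks.
store = FAISS.from_texts(
    ["Quarterly revenue grew 12%.", "The warranty covers two years."],
    embedding=embeddings,
)
store.save_local("db/faiss_index")  # persisted locally, e.g. under db/

# Later: reload the index and query it.
store = FAISS.load_local(
    "db/faiss_index", embeddings, allow_dangerous_deserialization=True
)
for doc in store.similarity_search("How long is the warranty?", k=1):
    print(doc.page_content)
```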
## 🚀 Getting Started
### Prerequisites
- Python 3.12+
- `uv` installed (`pip install uv`)
- AWS Credentials (for Bedrock access)
### 1. Installation
```bash
# Clone the repository
git clone https://github.com/VashuTheGreat/Multi-Rag.git
cd Multi-Rag

# Install dependencies
uv sync
```
### 2. Environment Setup

Create a `.env` file in the root directory:
```env
# AWS Bedrock Config
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION_NAME=us-east-1

# Tooling (e.g., Search API keys if applicable)
# ...
```
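For reference, here is a minimal sketch of how these variables might be consumed at startup, assuming `python-dotenv` and `langchain-aws` are available; the model ID and wiring are illustrative, not the repository's exact code.

```python
# Sketch: load .env values and create a Bedrock-backed chat model.
import os

from dotenv import load_dotenv
from langchain_aws import ChatBedrockConverse

load_dotenv()  # exposes AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY to botocore

llm = ChatBedrockConverse(
    model="meta.llama3-3-70b-instruct-v1:0",            # hypothetical model ID
    region_name=os.getenv("AWS_REGION_NAME", "us-east-1"),
)
print(llm.invoke("Reply with OK if you can read this.").content)
```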
### 3. Run the Application

```bash
# Start the FastAPI server
uv run main.py
```
Navigate to http://127.0.0.1:8000 to start chatting with your documents!
## 📂 Project Structure
```
Multi-Rag/
├── api/                 # FastAPI Endpoints & Controllers
├── src/
│   └── MultiRag/
│       ├── components/  # Core graph runners & embedders
│       ├── graph/       # LangGraph definitions (Main & Worker)
│       ├── models/      # Pydantic state & output schemas
│       ├── nodes/       # Individual graph node implementations
│       ├── prompts/     # LLM system prompts
│       └── utils/       # Ingestion & document processing utilities
├── static/              # Frontend assets (CSS, JS)
├── templates/           # Jinja2 HTML templates
└── db/                  # Local FAISS index persistence
```
Built with ❤️ for the future of Agentic RAG.

