---
title: Multi-Rag
emoji: πŸ€–
colorFrom: blue
colorTo: green
sdk: docker
app_file: main.py
pinned: false
short_description: This is the Multi-Rag Agent
---

# πŸš€ Multi-RAG AI Pipeline

Advanced Multi-Agent RAG Orchestration powered by LangGraph, AWS Bedrock, and FAISS



## πŸ“– Overview

Multi-RAG AI is a state-of-the-art, multi-agent RAG (Retrieval-Augmented Generation) pipeline designed for high-performance document intelligence. It leverages LangGraph for sophisticated orchestration, allowing an autonomous "Orchestrator" agent to decide which specialized workers (PDF, DOCX, TXT, Images, Web Search) are needed to answer complex user queries.

### Why Multi-RAG?

- **Intelligent Fan-out:** The orchestrator can trigger multiple workers in parallel to gather information from different sources.
- **Dynamic Routing:** Automatically detects file types and routes tasks to specialized loaders.
- **OCR Integration:** Built-in support for image processing and optical character recognition.
- **Web Search Fallback:** If local documents are insufficient, the agents can autonomously search the live web.
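The dynamic-routing idea boils down to an extension-to-worker lookup with web search as the fallback. The sketch below is illustrative only; the worker names and the `route_file` helper are hypothetical, not the project's actual API:

```python
from pathlib import Path

# Hypothetical mapping from file extension to a specialized worker.
WORKER_BY_EXTENSION = {
    ".pdf": "pdf_worker",
    ".docx": "docx_worker",
    ".txt": "txt_worker",
    ".png": "ocr_worker",
    ".jpg": "ocr_worker",
}

def route_file(path: str) -> str:
    """Return the worker responsible for a file, falling back to web search."""
    return WORKER_BY_EXTENSION.get(Path(path).suffix.lower(), "web_search_worker")

print(route_file("report.PDF"))  # pdf_worker (extension match is case-insensitive)
print(route_file("notes.md"))    # web_search_worker (no local loader, fall back)
```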

πŸ—οΈ Architecture

The system is built as a nested graph structure, providing a clean separation between high-level orchestration and low-level specialized tasks.

### 1. Main Orchestration Graph

The main graph handles the interaction between the user, the orchestrator, and the final chat response.

*(Diagram: Main Graph Architecture)*

### 2. Worker Sub-Graph

The worker sub-graph is responsible for specialized information retrieval from various file formats.

*(Diagram: Worker Sub-Graph)*
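Conceptually, the orchestrator's fan-out runs the selected workers concurrently and merges their retrieved contexts before the final answer is composed. A stdlib-only sketch of that pattern (the worker functions here are placeholders, not the real LangGraph sub-graph nodes):

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder workers; the real system dispatches LangGraph sub-graph nodes.
def pdf_worker(query: str) -> str:
    return f"pdf results for {query!r}"

def web_worker(query: str) -> str:
    return f"web results for {query!r}"

def fan_out(query: str, workers) -> list[str]:
    """Run each selected worker concurrently and collect their contexts in order."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda worker: worker(query), workers))

contexts = fan_out("quarterly revenue", [pdf_worker, web_worker])
print(contexts)
```

`pool.map` preserves input order, so the merged contexts line up with the workers the orchestrator chose.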


## ✨ Key Features

- πŸ“‚ **Multi-Format Support:**
  - **PDF:** Deep document parsing.
  - **DOCX:** Microsoft Word document integration.
  - **TXT:** Plain text analysis.
  - **Images (OCR):** Extraction of text from PNG/JPG using specialized loaders.
- πŸ€– **Autonomous Orchestration:** Uses a Llama-3.3-70B model on AWS Bedrock with a manual JSON fallback mechanism for reliable structured output.
- πŸ” **Hybrid Retrieval:** Combines local FAISS vector stores with real-time Google Search integration.
- 🧠 **Persistence & Memory:** Full multi-turn conversation support with LangGraph checkpointers.
- ⚑ **Modern Tech Stack:** Built with uv for lightning-fast dependency management and FastAPI for a high-performance backend.

πŸ› οΈ Tech Stack


## πŸš€ Getting Started

### Prerequisites

- Python 3.12+
- uv installed (`pip install uv`)
- AWS credentials (for Bedrock access)

### 1. Installation

```bash
# Clone the repository
git clone https://github.com/VashuTheGreat/Multi-Rag.git
cd Multi-Rag

# Install dependencies
uv sync
```

### 2. Environment Setup

Create a `.env` file in the root directory:

```bash
# AWS Bedrock Config
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION_NAME=us-east-1

# Tooling (e.g., Search API keys if applicable)
# ...
```
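If you are not using a helper such as python-dotenv, a minimal stdlib loader is enough to pull these variables into the process environment. This is an illustrative sketch, not how the project itself necessarily loads the file:

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines; blank lines and '#' comments ignored."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault so real environment variables take precedence.
            os.environ.setdefault(key.strip(), value.strip())
```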

### 3. Run the Application

```bash
# Start the FastAPI server
uv run main.py
```

Navigate to http://127.0.0.1:8000 to start chatting with your documents!


## πŸ“‚ Project Structure

```
Multi-Rag/
β”œβ”€β”€ api/                # FastAPI Endpoints & Controllers
β”œβ”€β”€ src/
β”‚   └── MultiRag/
β”‚       β”œβ”€β”€ components/ # Core graph runners & embedders
β”‚       β”œβ”€β”€ graph/      # LangGraph definitions (Main & Worker)
β”‚       β”œβ”€β”€ models/     # Pydantic state & output schemas
β”‚       β”œβ”€β”€ nodes/      # Individual graph node implementations
β”‚       β”œβ”€β”€ prompts/    # LLM system prompts
β”‚       └── utils/      # Ingestion & document processing utilities
β”œβ”€β”€ static/             # Frontend assets (CSS, JS)
β”œβ”€β”€ templates/          # Jinja2 HTML templates
└── db/                 # Local FAISS index persistence
```

Built with πŸ’– for the future of Agentic RAG.