---
title: BeRU Chat - RAG Assistant
emoji: 🤖
colorFrom: indigo
colorTo: yellow
sdk: streamlit
app_file: app.py
pinned: false
short_description: 100% Offline RAG System with Mistral 7B and VLM2Vec
---
# 🤖 BeRU Chat - RAG Assistant
A powerful **100% offline Retrieval-Augmented Generation (RAG) system** combining Mistral 7B LLM with VLM2Vec embeddings for intelligent document search and conversation.
## ✨ Features
- 🔒 **100% Offline Operation** - No internet required after startup
- 🧠 **Advanced RAG Architecture**
- Hybrid retrieval (Vector + BM25 keyword search)
- Ensemble retriever combining multiple strategies
- Re-ranking with FlashRank for relevance
- Multi-turn conversation with history awareness
- ⚡ **Optimized Performance**
- 4-bit quantization with BitsAndBytes
- Flash Attention 2 support
- FAISS vector indexing
- 📚 **Source Citations** - Every answer cites its original sources
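The ensemble idea behind the hybrid retriever can be illustrated in plain Python with reciprocal rank fusion (RRF), one common way to merge a keyword ranking with a vector ranking. This is a conceptual sketch, not the app's actual code; the document IDs and rankings below are hypothetical.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of doc IDs into one combined ranking."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            # Each list contributes 1 / (k + rank); k damps the head of the list.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results from the two retrievers for one query:
bm25_ranking = ["doc_b", "doc_a", "doc_c"]    # keyword (BM25) search
vector_ranking = ["doc_a", "doc_c", "doc_b"]  # FAISS similarity search

fused = reciprocal_rank_fusion([bm25_ranking, vector_ranking])
print(fused[0])  # doc_a ranks first: strong in both lists
```

Documents that score well under *both* strategies rise to the top, which is the point of ensembling before the FlashRank re-ranking pass.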
## 🎯 Models Used
| Component | Model | Details |
|-----------|-------|---------|
| **LLM** | Mistral-7B-Instruct-v0.3 | 7B parameters |
| **Embedding** | VLM2Vec-Qwen2VL-2B | 2B parameters |
| **Vector Store** | FAISS | Meta's similarity-search library |
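As a rough sketch of how the quantized LLM is loaded (assuming the Hugging Face `transformers` + `bitsandbytes` stack; the exact arguments in `app.py` may differ):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization shrinks the 7B model's weight memory to roughly
# a quarter of fp16, so it fits comfortably in 24 GB of VRAM.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

model_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # requires flash-attn installed
    device_map="auto",
)
```

Running this downloads the full model weights from the HF Hub, hence the 5-8 minute cold start noted below.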
## 🚀 Getting Started
1. **Wait for Models** - First load takes 5-8 minutes (models download from HF Hub)
2. **Upload Documents** - Add PDFs or text files for RAG
3. **Ask Questions** - Chat with context-aware answers
4. **Get Sources** - Each answer includes citations
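Under the hood, uploaded documents are split into overlapping chunks before embedding and indexing. A minimal sketch of that chunking step (plain Python for illustration; the actual chunk size and overlap used by the app are not shown here):

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping character chunks for embedding/indexing."""
    chunks = []
    step = chunk_size - overlap  # advance less than a full chunk to overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break  # last chunk already covers the end of the text
    return chunks
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk, which improves answer quality at the cost of a slightly larger index.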
## 💻 System Requirements
- **GPU**: A10G (24GB VRAM) recommended
- **RAM**: 16GB minimum
- **Cold Start**: ~5-8 minutes (first time)
- **Runtime**: Streamlit app on port 7860
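On Spaces the app starts automatically; for a local run on the same port, the launch command would look like this (assuming a standard Streamlit install, treat as a sketch):

```shell
# Serve the app on port 7860, matching the Spaces runtime
streamlit run app.py --server.port 7860
```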
## 📖 Documentation
For more information, visit the [GitHub repository](https://github.com/AnwinJosy/BeRU).