---
license: mit
title: Enterprise RAG Assistant with IBM Granite
sdk: streamlit
emoji: πŸš€
colorFrom: red
colorTo: red
pinned: true
short_description: Retrieval-Augmented Generation app using IBM Granite
sdk_version: 1.46.1
---
# πŸ“„ Enterprise RAG Assistant with IBM Granite
A Retrieval-Augmented Generation (RAG) application that lets you upload PDF documents and ask questions about their content, with answers generated by IBM's Granite language model.
## 🌟 Features
- **PDF Text Extraction**: Extract text from PDF documents with detailed progress tracking
- **Intelligent Chunking**: Split documents into manageable chunks with overlap for better context preservation
- **Semantic Search**: Find relevant content using advanced sentence embeddings
- **AI-Powered Q&A**: Generate answers with the IBM Granite language model, grounded in the retrieved chunks
- **Interactive UI**: User-friendly Streamlit interface with real-time status updates
- **GPU/CPU Support**: Automatically detects and utilizes available hardware
- **Memory Optimization**: Efficient processing for large documents
## πŸš€ Quick Start
### Prerequisites
- Python 3.8 or higher
- pip package manager
- At least 4GB RAM (8GB+ recommended)
- Optional: CUDA-compatible GPU for faster processing
### Installation
1. **Clone the repository:**
```bash
git clone https://huggingface.co/spaces/SimranShaikh/enterprise-rag-assistant.git
cd enterprise-rag-assistant
```
2. **Create a virtual environment:**
```bash
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
```
3. **Install dependencies:**
```bash
pip install -r requirements.txt
```
4. **Run the application:**
```bash
streamlit run app.py
```
5. **Open your browser** and navigate to `http://localhost:8501`
## πŸ“¦ Dependencies
Create a `requirements.txt` file with the following content:
```txt
streamlit>=1.28.0
PyPDF2>=3.0.1
sentence-transformers>=2.2.2
transformers>=4.30.0
torch>=2.0.0
numpy>=1.24.0
scikit-learn>=1.3.0
```
## πŸ”§ Usage
### Step 1: Load Models
1. Click the **"πŸ€– Load Models"** button
2. Wait for the models to download and load (this may take a few minutes on first run)
3. Models are cached locally for faster subsequent loads (a rough loading sketch follows)
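In rough terms, loading boils down to something like this sketch (model ids taken from the Configuration section below; not the app's exact code):
```python
# Minimal loading sketch; model ids follow the Configuration section.
import torch
from sentence_transformers import SentenceTransformer
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

# Embedding model for semantic search
embedder = SentenceTransformer("all-MiniLM-L6-v2", device=device)

# IBM Granite model for answer generation
tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-3-2b-instruct")
granite = AutoModelForCausalLM.from_pretrained(
    "ibm-granite/granite-3-2b-instruct",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)
```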
### Step 2: Upload PDF
1. Click **"Browse files"** and select your PDF document
2. Supported formats: PDF files only
3. Maximum recommended size: 50MB
### Step 3: Process PDF
1. Click **"πŸ“– Process PDF"** after uploading
2. The system will (see the sketch after this list):
- Extract text from all pages
- Split text into overlapping chunks
- Generate embeddings for semantic search
- Display processing progress
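The extraction pass is roughly the following (a sketch assuming the PyPDF2 3.x API, not the app's exact code):
```python
# Sketch of the extraction step; PdfReader is the PyPDF2 3.x API.
from PyPDF2 import PdfReader

def extract_text(pdf_path):
    reader = PdfReader(pdf_path)
    # extract_text() can return None for image-only pages, hence the "or ''"
    return "\n".join(page.extract_text() or "" for page in reader.pages)
```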
### Step 4: Ask Questions
1. Type your question in the text input field
2. Click **"πŸ” Get Answer"**
3. View the AI-generated answer and source references (the generation step is sketched below)
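Under the hood, the answer step stuffs the retrieved chunks into a prompt and generates with Granite; roughly (continuing the loading sketch from Step 1, with `retrieved_context` and `question` as placeholder variables):
```python
# Hypothetical generation step; `granite`, `tokenizer`, and `device` come
# from the loading sketch; `retrieved_context` and `question` stand in for
# the retrieved chunks and the user's query.
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{retrieved_context}\n\nQuestion: {question}\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(device)
output = granite.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the echoed prompt
answer = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)
```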
### Example Questions
- "What is the main topic of this document?"
- "Summarize the key findings"
- "What are the recommendations mentioned?"
- "Who are the main authors or contributors?"
- "What methodology was used?"
## πŸ—οΈ Architecture
```
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   PDF Upload    │───▢│ Text Extraction  │───▢│  Text Chunking  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                                        β”‚
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   User Query    │───▢│ Semantic Search  │◀───│   Embeddings    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                 β”‚
                       β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”
                       β”‚   Answer Gen.    β”‚
                       β”‚  (IBM Granite)   β”‚
                       β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```
## πŸ”§ Configuration
### Model Configuration
You can modify the models used in the `SimplePDFRAG` class:
```python
# Embedding model options
embedding_model = SentenceTransformer('all-MiniLM-L6-v2') # Default
# embedding_model = SentenceTransformer('all-mpnet-base-v2') # Better quality
# Language model options
model_name = "ibm-granite/granite-3-2b-instruct" # Default
# model_name = "ibm-granite/granite-3-8b-instruct" # Higher quality, needs more memory
# model_name = "google/flan-t5-base" # Alternative (seq2seq; load with AutoModelForSeq2SeqLM)
```
### Chunking Parameters
Adjust text chunking settings:
```python
def chunk_text(self, text, chunk_size=400, overlap=50):
# chunk_size: Number of words per chunk
# overlap: Number of overlapping words between chunks
```
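A runnable version of this chunker might look like the following (a word-based sketch, not necessarily the app's exact implementation):
```python
def chunk_text(text, chunk_size=400, overlap=50):
    """Split text into overlapping word-based chunks."""
    words = text.split()
    step = chunk_size - overlap  # advance by chunk_size minus the overlap
    return [
        " ".join(words[i:i + chunk_size])
        for i in range(0, max(len(words) - overlap, 1), step)
    ]
```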
### Search Parameters
Modify search behavior:
```python
def search_documents(self, query, top_k=3, min_threshold=0.1):
    # top_k: Number of relevant chunks to retrieve
    # min_threshold: Minimum similarity score for a chunk to be returned
```
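For reference, a minimal standalone version of this search (assuming chunk embeddings from the sentence-transformers model, and `cosine_similarity` from scikit-learn, which is already in `requirements.txt`):
```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def search_documents(query, embedder, chunks, chunk_embeddings,
                     top_k=3, min_threshold=0.1):
    """Return the top_k chunks most similar to the query, with scores."""
    query_vec = embedder.encode([query])                      # shape (1, dim)
    scores = cosine_similarity(query_vec, chunk_embeddings)[0]
    best = np.argsort(scores)[::-1][:top_k]                   # highest first
    return [(chunks[i], float(scores[i])) for i in best
            if scores[i] >= min_threshold]
```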
## πŸ“Š Performance Tips
### For Better Performance:
- Use a GPU-enabled environment
- Increase chunk overlap for better context
- Use larger language models (8B+ parameters)
- Process smaller PDF files (< 20MB)
### Memory Management:
- The app automatically manages GPU memory
- Use the "Reset Everything" button to clear memory (roughly the cleanup sketched below)
- Process one PDF at a time for optimal performance
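A plausible version of that cleanup, assuming PyTorch (a guess at what the reset does, not the app's exact code):
```python
import gc
import torch

def reset_memory():
    """Run the garbage collector and free cached GPU memory."""
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```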
## πŸ› Troubleshooting
### Common Issues:
**1. Models not loading:**
```
Error: Model loading failed
```
- **Solution**: Check internet connection and try again
- **Alternative**: Use smaller models or CPU-only mode
**2. PDF text extraction fails:**
```
Error: No text could be extracted
```
- **Solution**: Ensure PDF contains selectable text (not just images)
- **Alternative**: Use OCR preprocessing tools
**3. Out of memory errors:**
```
Error: CUDA out of memory
```
- **Solution**: Reduce batch size or use CPU mode
- **Alternative**: Process smaller documents
**4. Slow processing:**
- **Solution**: Enable GPU acceleration
- **Alternative**: Use smaller embedding models
### Debug Mode
Enable debug logging by setting:
```python
import logging

logging.basicConfig(level=logging.DEBUG)
```
## πŸš€ Deployment
### Local Development
```bash
streamlit run app.py
```
### Docker Deployment
```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8501
CMD ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]
```
### Cloud Deployment
**Streamlit Cloud:**
1. Push code to GitHub
2. Connect repository to Streamlit Cloud
3. Deploy with one click
**Heroku:**
```bash
git init
heroku create your-app-name
git add .
git commit -m "Initial commit"
git push heroku main
```
## πŸ“ˆ Advanced Features
### Custom Models
Add support for custom models:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def load_custom_model(self, model_path):
    """Load a custom fine-tuned model from a local path or Hub id"""
    self.granite_model = AutoModelForCausalLM.from_pretrained(model_path)
    self.tokenizer = AutoTokenizer.from_pretrained(model_path)
```
### Batch Processing
Process multiple PDFs:
```python
def process_multiple_pdfs(self, pdf_files):
"""Process multiple PDFs simultaneously"""
all_documents = []
all_embeddings = []
for pdf_file in pdf_files:
# Process each PDF
documents, embeddings = self.process_single_pdf(pdf_file)
all_documents.extend(documents)
all_embeddings.extend(embeddings)
return all_documents, all_embeddings
```
### Export Results
Save Q&A sessions:
```python
import json

def export_qa_session(self, qa_pairs, filename):
    """Export a Q&A session to a JSON file"""
    with open(filename, 'w') as f:
        json.dump(qa_pairs, f, indent=2)
```
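Example usage, assuming `rag` is a `SimplePDFRAG` instance and each entry is a question/answer dict (the real app's session structure may differ):
```python
# Hypothetical call; the qa_pairs shape here is illustrative only.
rag.export_qa_session(
    [{"question": "What is the main topic?", "answer": "…"}],
    "qa_session.json",
)
```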
## 🀝 Contributing
We welcome contributions! Please follow these steps:
1. **Fork the repository**
2. **Create a feature branch:**
```bash
git checkout -b feature/amazing-feature
```
3. **Make your changes and commit:**
```bash
git commit -m "Add amazing feature"
```
4. **Push to your branch:**
```bash
git push origin feature/amazing-feature
```
5. **Create a Pull Request**
### Development Guidelines
- Follow PEP 8 style guidelines
- Add docstrings to all functions
- Include unit tests for new features
- Update documentation as needed
## πŸ“ License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## πŸ™ Acknowledgments
- **IBM** for the Granite language models
- **Hugging Face** for the transformers library
- **Sentence Transformers** for embedding models
- **Streamlit** for the web framework
- **PyPDF2** for PDF processing
---