---
title: NLP Assignment RAG App
emoji: 🧠
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: "4.44.0"
app_file: main.py
pinned: false
python_version: "3.12"
app_build_command: uv sync
---
# NLP Assignment: Research Paper RAG Assistant
This app retrieves and summarizes academic papers related to your research query using a **Retrieval-Augmented Generation (RAG)** pipeline built with:
- **FAISS** vector search (`index.faiss`, `index.pkl`)
- **SentenceTransformer CrossEncoder** reranker
- **Google Gemini (via LangChain)** as the LLM
- **Gradio** UI frontend
- **uv** package manager (modern, fast, Python-native)
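The pipeline's flow (retrieve candidates with FAISS, rerank them with a CrossEncoder, then generate with Gemini) can be sketched in miniature. The scoring functions below are toy stand-ins for illustration only, not the app's actual models:

```python
# Toy retrieve -> rerank -> generate sketch. Word-overlap scoring stands in
# for FAISS vector search and the CrossEncoder; `generate` stands in for the
# Gemini call made through LangChain.

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Stage 1: cheap first-pass retrieval over the whole corpus."""
    def overlap(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def rerank(query: str, candidates: list[str], top_n: int = 2) -> list[str]:
    """Stage 2: score each (query, doc) pair jointly, keep only the best."""
    def score(doc: str) -> int:
        return sum(doc.lower().count(w) for w in query.lower().split())
    return sorted(candidates, key=score, reverse=True)[:top_n]

def generate(query: str, context: list[str]) -> str:
    """Stage 3: stand-in for the LLM answer grounded in the kept passages."""
    return f"Answer to {query!r} based on {len(context)} passages."

corpus = [
    "transformers and attention for language modeling",
    "faiss fast vector similarity search",
    "retrieval augmented generation for knowledge tasks",
]
query = "retrieval augmented generation"
answer = generate(query, rerank(query, retrieve(query, corpus)))
print(answer)  # -> Answer to 'retrieval augmented generation' based on 2 passages.
```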
---
## 🚀 How to Run Locally
You can run this project locally using **uv** (recommended) or plain **Python**.
### 🧩 Prerequisites
- Python 3.12 or higher
- [uv](https://docs.astral.sh/uv/) installed:
```sh
pip install uv
```
1. Clone the repository
```sh
git clone https://huggingface.co/spaces/njayman/nlp-assignment
cd nlp-assignment
```
2. Add your FAISS index files
Make sure your FAISS index folder is structured like this:
```sh
faiss_index/
├── index.faiss
└── index.pkl
```
3. Set up environment
Set your Gemini API key (replace with your own key):
Linux/macOS:
```sh
export GOOGLE_API_KEY="your_google_api_key_here"
```
Windows (PowerShell):
```powershell
$env:GOOGLE_API_KEY="your_google_api_key_here"
```
Or add a `.env` file containing:
```sh
GOOGLE_API_KEY="your_google_api_key_here"
```
4. Create a virtual environment and install dependencies with uv
```sh
uv venv
uv sync
```
5. Run the Gradio app
```sh
uv run main.py
```
Then open the link printed in your terminal, usually:
```sh
http://127.0.0.1:7860
```