# Mixedbread CVE RAG Workflow

Scripts in this directory let you build a Retrieval-Augmented Generation (RAG) workflow over the `cvelistV5` dataset using Mixedbread's embedding and reranking models loaded locally. Vector storage relies on a persisted [Chroma](https://www.trychroma.com/) database. No steps run automatically; invoke them when you're ready.

## Prerequisites

- Python 3.10+
- Packages: `sentence-transformers`, `chromadb`, `numpy`, `torch` (or `torch` with CUDA for GPU)
- Hugging Face account (optional, only needed for private models or rate-limited downloads)
- Optional: `HF_API_TOKEN` environment variable if downloading models requires authentication

Copy `env.example` to `.env` (or export the variables directly) and populate any overrides.

## Workflow

1. **Unzip the CVE archive**

   ```bash
   python scripts/unzip_cvelist.py
   ```

   - Reads `testing/cvelistV5-main.zip`
   - Extracts to `data/cvelistV5-main/`
   - Use `--force` to re-extract if the destination already exists.
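The extraction step can be sketched with the standard library. This is illustrative only; `extract_archive` is a hypothetical name, not the script's actual function, though the skip-unless-`force` behavior mirrors what the bullets above describe:

```python
import zipfile
from pathlib import Path

def extract_archive(archive: Path, dest: Path, force: bool = False) -> None:
    """Extract `archive` into `dest`; skip if dest exists, unless force is set."""
    if dest.exists() and not force:
        return  # mirrors the documented behavior: re-extract only with --force
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
```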
2. **Prepare the corpus**

   ```bash
   python -m rag_mixedbread.prepare_cve_corpus \
       --cve-root data/cvelistV5-main \
       --output rag_mixedbread/artifacts/cve_corpus.jsonl
   ```

   - Walks every CVE JSON file
   - Normalizes metadata and descriptions
   - Splits long descriptions into overlapping character chunks
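The overlapping character chunking in the last bullet can be sketched as follows. The `size` and `overlap` values here are illustrative assumptions, not the module's actual defaults:

```python
def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into character chunks; consecutive chunks share `overlap` chars."""
    if size <= overlap:
        raise ValueError("size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, leaving an overlapping tail
    return chunks
```

The overlap preserves context that a hard cut at a chunk boundary would otherwise split across two embeddings.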
3. **Build the Chroma index with Mixedbread embeddings**

   ```bash
   python -m rag_mixedbread.build_index \
       --corpus rag_mixedbread/artifacts/cve_corpus.jsonl \
       --batch-size 8 \
       --normalize \
       --reset
   ```

   - Loads `mixedbread-ai/mxbai-embed-large-v1` locally (downloads on first run)
   - Embeds all corpus chunks and persists them into Chroma at `rag_mixedbread/index/`
   - `--reset` wipes the existing collection before rebuilding
   - Models run on CPU by default; set `RAG_DEVICE=cuda` for GPU acceleration
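Two of the flags above have simple numeric meanings, sketched below with hypothetical helper names (not the module's own API): `--normalize` scales each embedding to unit length so cosine similarity reduces to a dot product, and `--batch-size` controls how many chunks are embedded per forward pass:

```python
import numpy as np

def l2_normalize(vectors: np.ndarray) -> np.ndarray:
    """Scale each row vector to unit L2 norm (cosine similarity == dot product)."""
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    return vectors / np.clip(norms, 1e-12, None)  # clip guards against zero rows

def batched(items: list, batch_size: int):
    """Yield successive slices of `items`, at most `batch_size` elements each."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]
```

Larger batches reduce per-call overhead at the cost of peak memory, which is why the README suggests tuning `--batch-size` to your hardware.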
4. **Query with reranking**

   ```bash
   python -m rag_mixedbread.query_service \
       "buffer overflow in ssh" \
       --top-k 20 \
       --top-n 5 \
       --normalize
   ```

   - Loads the embedding and reranker models locally (downloads on first run)
   - Retrieves similar chunks from Chroma
   - Reranks candidates using the `mixedbread-ai/mxbai-rerank-base-v2` CrossEncoder
   - Prints human-friendly summaries, or JSON (`--json`) for automation
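Reranking narrows the `--top-k` vector hits down to the `--top-n` best under a pairwise score. The sketch below shows that narrowing step with a toy word-overlap scorer standing in for the real `mxbai-rerank-base-v2` CrossEncoder; the function names are mine, not the service's:

```python
def rerank(query: str, candidates: list[str], score_fn, top_n: int = 5) -> list[str]:
    """Score each (query, candidate) pair and keep the top_n highest scorers."""
    ranked = sorted(candidates, key=lambda doc: score_fn(query, doc), reverse=True)
    return ranked[:top_n]

def overlap_score(query: str, doc: str) -> int:
    """Toy scorer: count shared lowercase words (the real scorer is a CrossEncoder)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))
```

In the actual service the scoring function would be the CrossEncoder's learned relevance score over (query, chunk) pairs, which is far more discriminating than raw embedding distance.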
## Configuration

`rag_mixedbread/config.py` centralizes default paths and settings:

- Archive path: `testing/cvelistV5-main.zip`
- Extracted CVE directory: `data/cvelistV5-main`
- Corpus output: `rag_mixedbread/artifacts/cve_corpus.jsonl`
- Chroma directory: `rag_mixedbread/index/` (collection `cve_chunks` by default)

Environment variables override the defaults:

| Variable | Purpose | Default |
| --- | --- | --- |
| `HF_API_TOKEN` | Optional: for private models or rate-limited downloads | _none_ |
| `RAG_EMBED_MODEL` | Embedding model ID (Hugging Face Hub) | `mixedbread-ai/mxbai-embed-large-v1` |
| `RAG_RERANK_MODEL` | Rerank model ID (Hugging Face Hub) | `mixedbread-ai/mxbai-rerank-base-v2` |
| `RAG_EMBED_BATCH` | Batch size during indexing | `8` |
| `RAG_DEVICE` | Device for model inference (`cpu` or `cuda`) | `cpu` |
| `RAG_CHROMA_COLLECTION` | Collection name inside Chroma | `cve_chunks` |
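The override pattern in the table is a plain environment lookup that falls through to the documented default. A minimal sketch, assuming the real `config.py` may structure this differently:

```python
import os

# Documented defaults from the table above (values kept as strings here;
# numeric settings like RAG_EMBED_BATCH would be cast where they are used).
DEFAULTS = {
    "RAG_EMBED_MODEL": "mixedbread-ai/mxbai-embed-large-v1",
    "RAG_RERANK_MODEL": "mixedbread-ai/mxbai-rerank-base-v2",
    "RAG_EMBED_BATCH": "8",
    "RAG_DEVICE": "cpu",
    "RAG_CHROMA_COLLECTION": "cve_chunks",
}

def setting(name: str) -> str:
    """Return the environment override if set, otherwise the documented default."""
    return os.environ.get(name, DEFAULTS[name])
```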
## Notes

- The scripts intentionally avoid running automatically; invoke them manually when ready.
- Models are downloaded from the Hugging Face Hub on first use and cached in `~/.cache/huggingface/`.
- For GPU acceleration, install PyTorch with CUDA support and set `RAG_DEVICE=cuda`.
- Adjust `--batch-size` to the available memory: larger batches embed faster but use more memory.