# Mixedbread CVE RAG Workflow

Scripts in this directory build a Retrieval-Augmented Generation (RAG) workflow over the `cvelistV5` dataset using Mixedbread's embedding and reranking models, loaded locally. Vector storage relies on a persisted [Chroma](https://www.trychroma.com/) database. No steps have been run yet; execute them when you're ready.

## Prerequisites
- Python 3.10+
- Packages: `sentence-transformers`, `chromadb`, `numpy`, and `torch` (install a CUDA build of `torch` for GPU support)
- Hugging Face account (optional, only needed for private models or rate-limited downloads)
- Optional: `HF_API_TOKEN` environment variable if downloading models requires authentication

Copy `env.example` to `.env` (or export the variables directly) and populate any overrides.

## Workflow

1. **Unzip the CVE archive**
   ```bash
   python scripts/unzip_cvelist.py
   ```
   - Reads `testing/cvelistV5-main.zip`
   - Extracts to `data/cvelistV5-main/`
   - Use `--force` to re-extract if the destination already exists.
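
   For reference, the extraction amounts to roughly the sketch below; it assumes the archive unpacks to a top-level `cvelistV5-main/` directory, which may not match the script exactly.

   ```python
   import shutil
   import zipfile
   from pathlib import Path

   archive = Path("testing/cvelistV5-main.zip")
   dest = Path("data/cvelistV5-main")
   force = False  # mirrors the --force flag

   if force and dest.exists():
       shutil.rmtree(dest)  # wipe the previous extraction before redoing it
   if not dest.exists():
       with zipfile.ZipFile(archive) as zf:
           zf.extractall(dest.parent)  # the zip's top-level folder lands at dest
   ```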

2. **Prepare the corpus**
   ```bash
   python -m rag_mixedbread.prepare_cve_corpus \
     --cve-root data/cvelistV5-main \
     --output rag_mixedbread/artifacts/cve_corpus.jsonl
   ```
   - Walks every CVE JSON file
   - Normalizes metadata + descriptions
   - Splits long descriptions into overlapping character chunks
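
   The chunking is plain overlapping character windows. A minimal sketch, with size and overlap values that are assumptions rather than the script's actual defaults:

   ```python
   def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
       """Split long text into overlapping character chunks (simplified sketch)."""
       step = chunk_size - overlap
       chunks = []
       for start in range(0, len(text), step):
           chunks.append(text[start:start + chunk_size])
           if start + chunk_size >= len(text):
               break  # the last window already reached the end of the text
       return chunks
   ```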

3. **Build the Chroma index with Mixedbread embeddings**
   ```bash
   python -m rag_mixedbread.build_index \
     --corpus rag_mixedbread/artifacts/cve_corpus.jsonl \
     --batch-size 8 \
     --normalize \
     --reset
   ```
   - Loads `mixedbread-ai/mxbai-embed-large-v1` locally (downloads on first run)
   - Embeds all corpus chunks and persists into Chroma at `rag_mixedbread/index/`
   - `--reset` wipes the existing collection before re-building
   - Models run on CPU by default; set `RAG_DEVICE=cuda` for GPU acceleration
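
   Conceptually the indexing step reduces to the sketch below (the sample chunk is invented; the real script reads the JSONL corpus and batches its `add` calls):

   ```python
   import chromadb
   from sentence_transformers import SentenceTransformer

   # Load the embedding model locally; weights download on first use.
   model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1", device="cpu")

   client = chromadb.PersistentClient(path="rag_mixedbread/index")
   collection = client.get_or_create_collection(name="cve_chunks")

   chunks = ["CVE-2024-12345: heap overflow in ExampleLib ..."]  # invented sample
   embeddings = model.encode(chunks, batch_size=8, normalize_embeddings=True)

   collection.add(
       ids=[f"chunk-{i}" for i in range(len(chunks))],
       embeddings=embeddings.tolist(),
       documents=chunks,
   )
   ```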

4. **Query with reranking**
   ```bash
   python -m rag_mixedbread.query_service \
     "buffer overflow in ssh" \
     --top-k 20 \
     --top-n 5 \
     --normalize
   ```
   - Loads embedding and reranker models locally (downloads on first run)
   - Retrieves similar chunks from Chroma
   - Reranks candidates using `mixedbread-ai/mxbai-rerank-base-v2` CrossEncoder
   - Prints human-friendly summaries or JSON (`--json`) for automation
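
   The retrieve-then-rerank flow is roughly the following sketch (top-k/top-n mirror the invocation above):

   ```python
   import chromadb
   from sentence_transformers import CrossEncoder, SentenceTransformer

   embedder = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")
   reranker = CrossEncoder("mixedbread-ai/mxbai-rerank-base-v2")

   client = chromadb.PersistentClient(path="rag_mixedbread/index")
   collection = client.get_collection("cve_chunks")

   query = "buffer overflow in ssh"
   query_embedding = embedder.encode([query], normalize_embeddings=True)

   # Stage 1: vector search pulls the top-k candidate chunks from Chroma.
   results = collection.query(query_embeddings=query_embedding.tolist(), n_results=20)
   docs = results["documents"][0]

   # Stage 2: the cross-encoder rescores (query, chunk) pairs; keep the best top-n.
   scores = reranker.predict([(query, doc) for doc in docs])
   for doc, score in sorted(zip(docs, scores), key=lambda p: p[1], reverse=True)[:5]:
       print(f"{score:.3f}  {doc[:80]}")
   ```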

## Configuration

`rag_mixedbread/config.py` centralizes default paths and settings:

- Archive path: `testing/cvelistV5-main.zip`
- Extracted CVE directory: `data/cvelistV5-main`
- Corpus output: `rag_mixedbread/artifacts/cve_corpus.jsonl`
- Chroma directory: `rag_mixedbread/index/` (collection `cve_chunks` by default)

Environment variables override defaults:

| Variable | Purpose | Default |
| --- | --- | --- |
| `HF_API_TOKEN` | Optional: for private models or rate-limited downloads | _none_ |
| `RAG_EMBED_MODEL` | Embedding model ID (Hugging Face Hub) | `mixedbread-ai/mxbai-embed-large-v1` |
| `RAG_RERANK_MODEL` | Rerank model ID (Hugging Face Hub) | `mixedbread-ai/mxbai-rerank-base-v2` |
| `RAG_EMBED_BATCH` | Batch size during indexing | `8` |
| `RAG_DEVICE` | Device for model inference (`cpu` or `cuda`) | `cpu` |
| `RAG_CHROMA_COLLECTION` | Collection name inside Chroma | `cve_chunks` |
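
The override pattern in `config.py` presumably looks like the sketch below; the constant names are guesses, so check the module itself:

```python
import os
from pathlib import Path

# Each default can be overridden via the environment variables in the table above.
EMBED_MODEL = os.environ.get("RAG_EMBED_MODEL", "mixedbread-ai/mxbai-embed-large-v1")
RERANK_MODEL = os.environ.get("RAG_RERANK_MODEL", "mixedbread-ai/mxbai-rerank-base-v2")
EMBED_BATCH = int(os.environ.get("RAG_EMBED_BATCH", "8"))
DEVICE = os.environ.get("RAG_DEVICE", "cpu")
CHROMA_COLLECTION = os.environ.get("RAG_CHROMA_COLLECTION", "cve_chunks")
CHROMA_DIR = Path("rag_mixedbread/index")
```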

## Notes
- The scripts intentionally avoid running automatically; invoke them manually when ready.
- Models are downloaded from Hugging Face Hub on first use (cached in `~/.cache/huggingface/`).
- For GPU acceleration, install PyTorch with CUDA and set `RAG_DEVICE=cuda`.
- Adjust `--batch-size` to fit available memory; larger batches index faster but use more memory.