# Mixedbread CVE RAG Workflow
Scripts in this directory let you build a Retrieval-Augmented Generation (RAG) workflow over the cvelistV5 dataset using Mixedbread's embedding and reranking models loaded locally. Vector storage relies on a persisted Chroma database. No steps have been executed yet; run them when you're ready.
## Prerequisites
- Python 3.10+
- Packages: `sentence-transformers`, `chromadb`, `numpy`, `torch` (or `torch` with CUDA for GPU)
- Hugging Face account (optional, only needed for private models or rate-limited downloads)
- Optional: `HF_API_TOKEN` environment variable if downloading models requires authentication

You can copy `env.example` to `.env` (or export vars directly) and populate any overrides.
## Workflow
1. **Unzip the CVE archive**

   ```bash
   python scripts/unzip_cvelist.py
   ```

   - Reads `testing/cvelistV5-main.zip`
   - Extracts to `data/cvelistV5-main/`
   - Use `--force` to re-extract if the destination already exists.
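The extraction step can be sketched with the standard library's `zipfile` module. This is a hypothetical re-implementation of the behavior described for `scripts/unzip_cvelist.py` (including the `--force` semantics); the real script may differ:

```python
import zipfile
from pathlib import Path

def extract_archive(archive: str, dest: str, force: bool = False) -> Path:
    """Extract a zip archive into dest, skipping work if dest already exists.

    Illustrative sketch of the described unzip step, not the actual script.
    """
    dest_path = Path(dest)
    if dest_path.exists() and not force:
        # Destination already present: leave it alone unless force (cf. --force).
        return dest_path
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest_path)
    return dest_path
```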
2. **Prepare the corpus**

   ```bash
   python -m rag_mixedbread.prepare_cve_corpus \
     --cve-root data/cvelistV5-main \
     --output rag_mixedbread/artifacts/cve_corpus.jsonl
   ```

   - Walks every CVE JSON file
   - Normalizes metadata + descriptions
   - Splits long descriptions into overlapping character chunks
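Overlapping character chunking can be sketched as sliding windows over the text. The `size` and `overlap` defaults below are illustrative, not the values `prepare_cve_corpus` actually uses:

```python
def chunk_text(text: str, size: int = 800, overlap: int = 200) -> list[str]:
    """Split text into overlapping character chunks.

    Each chunk is at most `size` characters and shares `overlap` characters
    with its predecessor. Sketch only; defaults are assumptions.
    """
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    chunks = []
    # max(len(text), 1) ensures empty input still yields one (empty) chunk
    for start in range(0, max(len(text), 1), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks
```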
3. **Build the Chroma index with Mixedbread embeddings**

   ```bash
   python -m rag_mixedbread.build_index \
     --corpus rag_mixedbread/artifacts/cve_corpus.jsonl \
     --batch-size 8 \
     --normalize \
     --reset
   ```

   - Loads `mixedbread-ai/mxbai-embed-large-v1` locally (downloads on first run)
   - Embeds all corpus chunks and persists them into Chroma at `rag_mixedbread/index/`
   - `--reset` wipes the existing collection before rebuilding
   - Models run on CPU by default; set `RAG_DEVICE=cuda` for GPU acceleration
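The indexing step amounts to batched embedding plus `collection.add` calls. A minimal sketch, with `embed_fn` and `collection` standing in for the real SentenceTransformer and Chroma objects; the actual `build_index` internals may differ:

```python
def batched(items, batch_size):
    """Yield successive slices of at most batch_size items (cf. --batch-size)."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def build_index(chunks, embed_fn, collection, batch_size=8):
    """Embed corpus chunks in batches and persist them into a Chroma collection.

    embed_fn stands in for SentenceTransformer.encode; collection for a
    chromadb collection exposing .add(ids=..., embeddings=..., documents=...).
    """
    next_id = 0
    for batch in batched(chunks, batch_size):
        embeddings = embed_fn(batch)
        ids = [f"chunk-{next_id + j}" for j in range(len(batch))]
        next_id += len(batch)
        collection.add(ids=ids, embeddings=embeddings, documents=batch)

# With the real models (downloads on first run):
#   from sentence_transformers import SentenceTransformer
#   import chromadb
#   model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1", device="cpu")
#   client = chromadb.PersistentClient(path="rag_mixedbread/index")
#   coll = client.get_or_create_collection("cve_chunks")
#   build_index(chunks,
#               lambda b: model.encode(b, normalize_embeddings=True).tolist(),
#               coll)
```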
4. **Query with reranking**

   ```bash
   python -m rag_mixedbread.query_service \
     "buffer overflow in ssh" \
     --top-k 20 \
     --top-n 5 \
     --normalize
   ```

   - Loads the embedding and reranker models locally (downloads on first run)
   - Retrieves similar chunks from Chroma
   - Reranks candidates using the `mixedbread-ai/mxbai-rerank-base-v2` CrossEncoder
   - Prints human-friendly summaries, or JSON (`--json`) for automation
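The rerank stage boils down to scoring (query, chunk) pairs and sorting. Here `score_fn` stands in for `CrossEncoder.predict`; this is an illustration, not the actual `query_service` code:

```python
def rerank(query, candidates, score_fn, top_n=5):
    """Re-order retrieved chunks by cross-encoder relevance and keep top_n.

    score_fn stands in for CrossEncoder.predict over (query, text) pairs;
    in the real pipeline that is the mxbai-rerank-base-v2 CrossEncoder.
    """
    scores = score_fn([(query, text) for text in candidates])
    ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_n]

# With the real reranker (downloads on first run):
#   from sentence_transformers import CrossEncoder
#   ce = CrossEncoder("mixedbread-ai/mxbai-rerank-base-v2")
#   top = rerank("buffer overflow in ssh", retrieved_texts, ce.predict, top_n=5)
```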
## Configuration
`rag_mixedbread/config.py` centralizes default paths and settings:

- Archive path: `testing/cvelistV5-main.zip`
- Extracted CVE directory: `data/cvelistV5-main`
- Corpus output: `rag_mixedbread/artifacts/cve_corpus.jsonl`
- Chroma directory: `rag_mixedbread/index/` (collection `cve_chunks` by default)
Environment variables override the defaults:

| Variable | Purpose | Default |
|---|---|---|
| `HF_API_TOKEN` | Optional: for private models or rate-limited downloads | none |
| `RAG_EMBED_MODEL` | Embedding model ID (Hugging Face Hub) | `mixedbread-ai/mxbai-embed-large-v1` |
| `RAG_RERANK_MODEL` | Rerank model ID (Hugging Face Hub) | `mixedbread-ai/mxbai-rerank-base-v2` |
| `RAG_EMBED_BATCH` | Batch size during indexing | `8` |
| `RAG_DEVICE` | Device for model inference (`cpu` or `cuda`) | `cpu` |
| `RAG_CHROMA_COLLECTION` | Collection name inside Chroma | `cve_chunks` |
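The defaults and overrides above could be resolved with a pattern like the following. This is a hypothetical sketch of what `rag_mixedbread/config.py` might do, not its actual contents:

```python
import os
from pathlib import Path

# Hypothetical resolution of defaults with environment overrides;
# the real rag_mixedbread/config.py may be organized differently.
EMBED_MODEL = os.environ.get("RAG_EMBED_MODEL", "mixedbread-ai/mxbai-embed-large-v1")
RERANK_MODEL = os.environ.get("RAG_RERANK_MODEL", "mixedbread-ai/mxbai-rerank-base-v2")
EMBED_BATCH = int(os.environ.get("RAG_EMBED_BATCH", "8"))
DEVICE = os.environ.get("RAG_DEVICE", "cpu")
CHROMA_COLLECTION = os.environ.get("RAG_CHROMA_COLLECTION", "cve_chunks")
CHROMA_DIR = Path("rag_mixedbread/index")
```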
## Notes
- The scripts intentionally avoid running automatically; invoke them manually when ready.
- Models are downloaded from the Hugging Face Hub on first use (cached in `~/.cache/huggingface/`).
- For GPU acceleration, install PyTorch with CUDA and set `RAG_DEVICE=cuda`.
- Adjust `--batch-size` based on available memory (larger batches are faster but use more memory).