---
dataset_name: embedding-cve-nvd-dataset
language:
- en
license: mit
tags:
- cybersecurity
- cve
- embeddings
- nvd
pretty_name: CVE NVD Embedding Dataset
task_categories:
- text-retrieval
task_ids:
- document-retrieval
size_categories:
- 100K<n<1M
---

# CVE NVD Embedding Dataset

This dataset contains the processed CVE/NVD corpus that was used with the `rag_mixedbread` pipeline. It bundles:

- `cve_corpus.jsonl` (~700 MB): each line is a JSON object with `cve_id`, `title`, `description`, `cvss`, `vendors`, and the pre-computed text chunk that feeds the embedding model.
- `decomposed_query_results.json` (63 KB): a dictionary of exemplar queries, decomposed sub-questions, and the retrieved doc IDs used for quality checks.
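
A single corpus line can be sketched as follows; the field names match the list above, but the values shown here are illustrative only, not taken from the dataset:

```python
import json

# Hypothetical example line from cve_corpus.jsonl; the values are
# illustrative, only the field names are documented above.
line = (
    '{"cve_id": "CVE-2021-44228", "title": "Apache Log4j2 JNDI RCE", '
    '"description": "Remote code execution via crafted JNDI lookups.", '
    '"cvss": 10.0, "vendors": ["apache"], '
    '"chunk": "CVE-2021-44228 Apache Log4j2 JNDI RCE Remote code execution..."}'
)

record = json.loads(line)
print(record["cve_id"], record["cvss"])  # the "chunk" field feeds the embedder
```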

## Generation pipeline

1. Raw CVE/NVD feeds were normalized via `prepare_cve_corpus.py`.
2. Fields were concatenated and deduplicated into retrieval-ready passages.
3. The resulting corpus was used to build MixedBread vector indexes for the RAG workflow.
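
Step 2 above can be sketched roughly as below; `build_passage` and `dedupe` are hypothetical helper names for illustration (the real logic lives in `prepare_cve_corpus.py`, which is not reproduced here):

```python
# Hypothetical sketch of step 2: concatenate fields into a passage,
# then drop exact duplicates while preserving order.
def build_passage(record):
    """Join the retrieval-relevant fields into one passage string."""
    parts = [
        record.get("cve_id", ""),
        record.get("title", ""),
        record.get("description", ""),
    ]
    return " ".join(p for p in parts if p)

def dedupe(passages):
    """Remove exact-duplicate passages, keeping first occurrences."""
    seen = set()
    out = []
    for p in passages:
        if p not in seen:
            seen.add(p)
            out.append(p)
    return out
```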

## Usage

You can stream the JSONL file and index it with any vector database:

```python
import json
from datasets import load_dataset

ds = load_dataset("Kushalkhemka/embedding-cve-nvd-dataset", split="train")
for row in ds:
    payload = json.loads(row["text"])  # each row is a JSON line
    # index payload["chunk"] into your vector store
```
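
As a minimal stand-in for a real embedding model and vector database, the indexing step can be sketched with bag-of-words vectors and brute-force cosine search; in practice you would embed `payload["chunk"]` with your embedding model and push it to FAISS, Qdrant, or similar:

```python
import math
from collections import Counter

# Toy stand-in for an embedding model: bag-of-words token counts.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

index = {}  # cve_id -> vector

def add(cve_id, chunk):
    index[cve_id] = embed(chunk)

def search(query, k=5):
    """Brute-force cosine search over all indexed chunks."""
    qv = embed(query)
    ranked = sorted(index, key=lambda cid: cosine(qv, index[cid]), reverse=True)
    return ranked[:k]
```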

The `decomposed_query_results.json` file is useful for evaluation: each entry has the original user question, the decomposed sub-queries, and the reference CVE IDs that should match during retrieval.
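
A minimal quality check with that file might look like this; the per-entry key names (`question`, `sub_queries`, `reference_cve_ids`, `retrieved`) are assumptions, so adapt them to the file's actual layout:

```python
def recall_at_k(retrieved, reference, k=10):
    """Fraction of reference CVE IDs present in the top-k retrieved IDs."""
    if not reference:
        return 0.0
    hits = set(retrieved[:k]) & set(reference)
    return len(hits) / len(reference)

# Assumed entry shape; check the real file before relying on these keys.
entry = {
    "question": "Which CVEs affect Log4j?",
    "sub_queries": ["log4j remote code execution", "log4j denial of service"],
    "reference_cve_ids": ["CVE-2021-44228", "CVE-2021-45105"],
    "retrieved": ["CVE-2021-44228", "CVE-2020-9488"],
}
print(recall_at_k(entry["retrieved"], entry["reference_cve_ids"]))  # 0.5
```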

## License

MIT. Please respect the original NVD data terms when redistributing.