---
license: mit
task_categories:
- feature-extraction
- text-retrieval
language:
- en
tags:
- epstein-files
- rag
- vector-embeddings
- chroma
- document-embedding
pretty_name: Epstein Files - Vector Embeddings (Chroma DB)
size_categories:
- 100K<n<1M
---

# Dataset Card for Epstein Files - Vector Embeddings (Chroma DB)

Pre-built ChromaDB vector database derived from the Epstein Files 20K dataset. Contains dense vector embeddings of 100K+ semantically chunked document pieces, ready to plug directly into a RAG pipeline — no re-embedding required.

## Dataset Details

### Dataset Description

This dataset is a pre-computed Chroma vector store built from the raw Epstein Files 20K document corpus. The raw text (2.5M+ lines) was cleaned, reconstructed by source filename, semantically chunked, and embedded using `sentence-transformers/all-MiniLM-L6-v2`. The resulting Chroma DB is uploaded here so that users can skip the computationally expensive embedding step (~20–45 minutes) and immediately run RAG queries.

- **Curated by:** [Ankit Kumar Nayak](https://github.com/AnkitNayak-eth)
- **Funded by [optional]:** Self-funded
- **Shared by [optional]:** [Ankit Kumar Nayak](https://github.com/AnkitNayak-eth)
- **Language(s) (NLP):** English
- **License:** MIT

### Dataset Sources [optional]

- **Repository:** [AnkitNayak-eth/EpsteinFiles-RAG](https://github.com/AnkitNayak-eth/EpsteinFiles-RAG)
- **Paper [optional]:** N/A
- **Demo [optional]:** See repository README for Streamlit UI setup

## Uses

### Direct Use

This dataset is intended to be used as a drop-in vector store for RAG (Retrieval-Augmented Generation) applications querying the Epstein Files documents. Load it with LangChain's `Chroma` integration and immediately run semantic search or MMR retrieval without any preprocessing.

```python
# Requires: pip install langchain-community langchain-huggingface chromadb sentence-transformers
from langchain_community.vectorstores import Chroma
from langchain_huggingface import HuggingFaceEmbeddings

# Must match the model used to build the index; a different model embeds
# queries into a different vector space and retrieval silently degrades.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Point persist_directory at the downloaded chroma_db/ folder
vectorstore = Chroma(
    persist_directory="./chroma_db",
    embedding_function=embeddings
)

# MMR fetches the 20 nearest chunks, then keeps the 5 most diverse
retriever = vectorstore.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 5, "fetch_k": 20}
)

docs = retriever.invoke("Who visited Epstein's island?")
```

### Out-of-Scope Use

This dataset is not suitable for training or fine-tuning language models. It should not be used for harassment, targeting individuals, or any illegal purposes. Users are responsible for complying with applicable laws and ethical guidelines.

## Dataset Structure

The dataset contains the ChromaDB persistence directory (`chroma_db/`) with the following files:

| File | Description |
|---|---|
| `chroma.sqlite3` | SQLite metadata store with document text, chunk IDs, and metadata |
| `data_level0.bin` | HNSW vector index for approximate nearest-neighbor search |
| `header.bin` | Index header |
| `length.bin` | Vector length metadata |
| `link_lists.bin` | HNSW graph structure |

Each chunk stores its raw text plus two metadata fields: `source` (original document filename) and `chunk_id` (unique identifier). The collection contains 100K+ chunks with 384-dimensional embeddings (MiniLM-L6-v2), chunked at ~500 tokens with 50-token overlap and indexed via HNSW.
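For illustration, the records below mirror that schema with a small helper that traces retrieved chunks back to their source file. The filenames and chunk IDs are hypothetical placeholders, not real entries from the collection:

```python
# Hypothetical records mirroring the chunk schema above; the filenames
# and chunk IDs are illustrative placeholders, not real entries.
retrieved = [
    {"text": "first excerpt",
     "metadata": {"source": "document_a.txt", "chunk_id": "document_a.txt-0"}},
    {"text": "second excerpt",
     "metadata": {"source": "document_b.txt", "chunk_id": "document_b.txt-0"}},
]

def by_source(chunks, source):
    """Trace retrieved chunks back to their originating document."""
    return [c for c in chunks if c["metadata"]["source"] == source]
```

The `source` field is what makes per-document grouping or filtering possible after retrieval, since chunk order in the index does not follow document order.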

## Dataset Creation

### Curation Rationale

The Epstein Files 20K dataset contains raw, fragmented document text that is difficult to query directly. This vector database was created to enable fast, accurate semantic retrieval over the full document corpus as part of a no-hallucination RAG system. Pre-computing and uploading the embeddings eliminates the biggest time barrier (~45 min embedding job) for anyone wanting to build on top of this data.

### Source Data

#### Data Collection and Processing

The pipeline that produced this dataset follows four stages:

1. **Download** — Raw data fetched from `teyler/epstein-files-20k` on Hugging Face (~2.5M document lines) via the `datasets` library.
2. **Clean & Reconstruct** — Junk rows removed, documents reconstructed and grouped by source filename. Output: `cleaned.json`.
3. **Semantic Chunking** — Documents split into overlapping chunks (~500 tokens, 50-token overlap) using LangChain's `RecursiveCharacterTextSplitter`. Output: `chunks.json`.
4. **Embed & Index** — Chunks embedded with `sentence-transformers/all-MiniLM-L6-v2` and stored in ChromaDB. Output: `chroma_db/`.
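The chunking stage can be sketched as a sliding window that advances by `chunk_size - overlap` each step. This simplified version counts characters rather than tokens and ignores the sentence/paragraph boundary logic of LangChain's `RecursiveCharacterTextSplitter`, so treat it as an illustration of the overlap mechanics only:

```python
# Minimal sketch of stage 3 (chunking). The real pipeline uses
# RecursiveCharacterTextSplitter, which prefers paragraph and sentence
# boundaries; this version uses fixed character windows instead.

def chunk_document(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping windows, advancing chunk_size - overlap."""
    step = chunk_size - overlap
    return [text[start:start + chunk_size]
            for start in range(0, max(len(text) - overlap, 1), step)]

doc = "".join(str(i % 10) for i in range(1200))
chunks = chunk_document(doc)
# 1200 chars with a 450-char step -> 3 chunks; consecutive chunks
# share a 50-character boundary region.
```

The overlap means a sentence cut at a chunk boundary still appears whole in the neighboring chunk, which is why the pipeline accepts some duplicated text in exchange for better retrieval of boundary-spanning passages.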

#### Who are the source data producers?

The original documents are public records from the Jeffrey Epstein court case files, aggregated and published on Hugging Face by [teyler](https://huggingface.co/teyler). This dataset is a derivative of that public corpus.

### Annotations [optional]

#### Annotation process

No manual annotations were added. Metadata fields (`source`, `chunk_id`) are automatically generated during the chunking and embedding pipeline.

#### Who are the annotators?

Automated pipeline — no human annotators.

#### Personal and Sensitive Information

The source corpus contains real names of individuals mentioned in legal documents, including victims, witnesses, and associates. This data is sourced entirely from public court records. No anonymization has been applied. Users should handle this data responsibly and ethically.

## Bias, Risks, and Limitations

- **Named individuals:** The documents contain names of real people from court records. Any application built on this data must handle this with appropriate care.
- **OCR/scan artifacts:** Some source documents may contain OCR errors from the original scan-to-text conversion, which can affect retrieval quality.
- **Chunking boundary effects:** Semantic chunking may split context across boundaries, occasionally resulting in incomplete retrieved passages.
- **Embedding model limitations:** `all-MiniLM-L6-v2` is a general-purpose model; domain-specific legal terminology may not embed with optimal accuracy.
- **No ground truth:** There is no annotated QA benchmark for this corpus, so retrieval quality is evaluated qualitatively.

### Recommendations

Always use MMR retrieval (`search_type="mmr"`) over pure similarity search to avoid redundant results from the same document. Ground all LLM responses strictly in retrieved context. Review results critically — this is a research tool, not a legal or journalistic authority.
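The MMR rule LangChain applies under `search_type="mmr"` can be sketched in pure Python: each pick maximizes `lambda * sim(query, d) - (1 - lambda) * max sim(d, selected)`, so a near-duplicate of an already-selected chunk scores poorly even when it is highly relevant. The vectors below are toy 2-D examples, not real embeddings:

```python
# Pure-Python sketch of greedy Maximal Marginal Relevance selection.
# lambda_mult trades off query relevance against redundancy with chunks
# that have already been picked.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def mmr_select(query_vec, doc_vecs, k=5, lambda_mult=0.5):
    """Greedily pick k doc indices, penalizing similarity to prior picks."""
    candidates = list(range(len(doc_vecs)))
    selected = []
    while candidates and len(selected) < k:
        def score(i):
            relevance = cosine(query_vec, doc_vecs[i])
            redundancy = max((cosine(doc_vecs[i], doc_vecs[j]) for j in selected),
                             default=0.0)
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

query = [1.0, 0.2]
docs = [[1.0, 0.1], [1.0, 0.1], [0.2, 1.0]]  # doc 1 duplicates doc 0
picked = mmr_select(query, docs, k=2)  # picks 0 and 2, skipping the duplicate
```

With pure similarity search the duplicate would rank second; MMR demotes it in favor of the diverse chunk, which is the behavior the recommendation above relies on.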

## Citation [optional]

**BibTeX:**

```bibtex
@misc{nayak2026epsteinfilesrag,
  author       = {Ankit Kumar Nayak},
  title        = {EpsteinFiles-RAG: A RAG Pipeline over the Epstein Files 20K Dataset},
  year         = {2026},
  howpublished = {\url{https://github.com/AnkitNayak-eth/EpsteinFiles-RAG}},
}
```

**APA:**

Nayak, A. K. (2026). *EpsteinFiles-RAG: A RAG Pipeline over the Epstein Files 20K Dataset*. GitHub. https://github.com/AnkitNayak-eth/EpsteinFiles-RAG

## Glossary [optional]

- **RAG (Retrieval-Augmented Generation):** A technique where an LLM answers questions using only context retrieved from a vector database, which substantially reduces hallucination.
- **MMR (Maximal Marginal Relevance):** A retrieval algorithm that balances relevance and diversity to avoid returning redundant chunks from the same document.
- **ChromaDB:** An open-source vector database used to store and query embeddings.
- **Embedding:** A dense numerical vector representation of text that captures semantic meaning.
- **HNSW:** Hierarchical Navigable Small World — the graph-based approximate nearest-neighbor index used by ChromaDB.

## More Information [optional]

Full pipeline source code, API server, and Streamlit UI available at: https://github.com/AnkitNayak-eth/EpsteinFiles-RAG

## Dataset Card Authors [optional]

[Ankit Kumar Nayak](https://github.com/AnkitNayak-eth)

## Dataset Card Contact

GitHub: https://github.com/AnkitNayak-eth