pretty_name: Epstein Files - Vector Embeddings (Chroma DB)
size_categories:
- 100K<n<1M
---

# Dataset Card for Epstein Files - Vector Embeddings (Chroma DB)

Pre-built ChromaDB vector database derived from the Epstein Files 20K dataset. Contains dense vector embeddings of 100K+ semantically chunked document pieces, ready to plug directly into a RAG pipeline — no re-embedding required.

## Dataset Details

### Dataset Description

This dataset is a pre-computed Chroma vector store built from the raw Epstein Files 20K document corpus. The raw text (2.5M+ lines) was cleaned, reconstructed by source filename, semantically chunked, and embedded using `sentence-transformers/all-MiniLM-L6-v2`. The resulting Chroma DB is uploaded here so that users can skip the computationally expensive embedding step (~20–45 minutes) and immediately run RAG queries.

- **Curated by:** [Ankit Kumar Nayak](https://github.com/AnkitNayak-eth)
- **Funded by:** Self-funded
- **Shared by:** [Ankit Kumar Nayak](https://github.com/AnkitNayak-eth)
- **Language(s) (NLP):** English
- **License:** MIT

### Dataset Sources

- **Repository:** [AnkitNayak-eth/EpsteinFiles-RAG](https://github.com/AnkitNayak-eth/EpsteinFiles-RAG)
- **Demo:** See the repository README for Streamlit UI setup

## Uses

### Direct Use

This dataset is intended as a drop-in vector store for Retrieval-Augmented Generation (RAG) applications that query the Epstein Files documents. Load it with LangChain's `Chroma` integration and immediately run semantic search or MMR retrieval without any preprocessing.

```python
from langchain_community.vectorstores import Chroma
from langchain_huggingface import HuggingFaceEmbeddings

# Must match the model the index was built with (384-dim MiniLM).
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Point at the downloaded persistence directory.
vectorstore = Chroma(
    persist_directory="./chroma_db",
    embedding_function=embeddings,
)

# MMR: fetch 20 candidates, return the 5 most relevant-yet-diverse.
retriever = vectorstore.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 5, "fetch_k": 20},
)

docs = retriever.invoke("Who visited Epstein's island?")
```
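Downstream of retrieval, the returned chunks are typically stuffed into a grounded prompt so the LLM answers only from context. Below is a minimal generic sketch of that step; `build_grounded_prompt` is a hypothetical helper, and the repository's actual prompt wording may differ.

```python
def build_grounded_prompt(question: str, contexts: list[str]) -> str:
    """Assemble a prompt that restricts the LLM to the retrieved context."""
    # Number each chunk so answers can point back to a source passage.
    context_block = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(contexts))
    return (
        "Answer the question using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "Who visited Epstein's island?",
    ["chunk one text", "chunk two text"],
)
```

The string returned here would be passed as the user message to whatever chat model sits behind the pipeline.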

### Out-of-Scope Use

This dataset is not suitable for training or fine-tuning language models. It should not be used for harassment, targeting individuals, or any illegal purpose. Users are responsible for complying with applicable laws and ethical guidelines.

## Dataset Structure

The dataset contains the ChromaDB persistence directory (`chroma_db/`) with the following files:

| File | Description |
|---|---|
| `chroma.sqlite3` | SQLite metadata store with document text, chunk IDs, and metadata |
| `data_level0.bin` | HNSW vector index for approximate nearest-neighbor search |
| `header.bin` | Index header |
| `length.bin` | Vector length metadata |
| `link_lists.bin` | HNSW graph structure |

Each embedded chunk contains these metadata fields: `source` (original document filename), `chunk_id` (unique identifier), and `text` (raw chunk content). The collection contains 100K+ chunks with 384-dimensional embeddings (MiniLM-L6-v2), a chunk size of ~500 tokens with 50-token overlap, indexed via HNSW.

## Dataset Creation

### Curation Rationale

The Epstein Files 20K dataset contains raw, fragmented document text that is difficult to query directly. This vector database was created to enable fast, accurate semantic retrieval over the full document corpus as part of a no-hallucination RAG system. Pre-computing and uploading the embeddings eliminates the biggest time barrier (the ~45-minute embedding job) for anyone wanting to build on top of this data.

### Source Data

#### Data Collection and Processing

The pipeline that produced this dataset follows four stages:

1. **Download** — Raw data fetched from `teyler/epstein-files-20k` on Hugging Face (~2.5M document lines) via the `datasets` library.
2. **Clean & Reconstruct** — Junk rows removed; documents reconstructed and grouped by source filename. Output: `cleaned.json`.
3. **Semantic Chunking** — Documents split into overlapping chunks (~500 tokens, 50-token overlap) using LangChain's `RecursiveCharacterTextSplitter`. Output: `chunks.json`.
4. **Embed & Index** — Chunks embedded with `sentence-transformers/all-MiniLM-L6-v2` and stored in ChromaDB. Output: `chroma_db/`.
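The overlap logic in the chunking stage can be sketched at the character level. This is a simplified stand-in for intuition only: the actual pipeline uses `RecursiveCharacterTextSplitter`, which splits on separators rather than fixed offsets.

```python
def chunk_with_overlap(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into chunks where each chunk repeats the last
    `overlap` characters of its predecessor."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # Advance by less than a full chunk so consecutive chunks overlap.
        start += chunk_size - overlap
    return chunks

# 1200 characters -> 3 chunks; consecutive chunks share 50 characters.
text = "".join(chr(97 + i % 26) for i in range(1200))
chunks = chunk_with_overlap(text)
```

The overlap exists so that a sentence falling on a chunk boundary still appears whole in at least one chunk, which matters for retrieval quality.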

#### Who are the source data producers?

The original documents are public records from the Jeffrey Epstein court case files, aggregated and published on Hugging Face by [teyler](https://huggingface.co/teyler). This dataset is a derivative of that public corpus.

### Annotations

#### Annotation process

No manual annotations were added. Metadata fields (`source`, `chunk_id`) are automatically generated during the chunking and embedding pipeline.

#### Who are the annotators?

Automated pipeline — no human annotators.

#### Personal and Sensitive Information

The source corpus contains real names of individuals mentioned in legal documents, including victims, witnesses, and associates. This data is sourced entirely from public court records. No anonymization has been applied. Users should handle this data responsibly and ethically.

## Bias, Risks, and Limitations

- **Named individuals:** The documents contain names of real people from court records. Any application built on this data must handle them with appropriate care.
- **OCR/scan artifacts:** Some source documents may contain OCR errors from the original scan-to-text conversion, which can degrade retrieval quality.
- **Chunking boundary effects:** Semantic chunking may split context across chunk boundaries, occasionally producing incomplete retrieved passages.
- **Embedding model limitations:** `all-MiniLM-L6-v2` is a general-purpose model; domain-specific legal terminology may not be embedded accurately.
- **No ground truth:** There is no annotated QA benchmark for this corpus, so retrieval quality can only be evaluated qualitatively.

### Recommendations

Always use MMR retrieval (`search_type="mmr"`) over pure similarity search to avoid redundant results from the same document. Ground all LLM responses strictly in retrieved context. Review results critically — this is a research tool, not a legal or journalistic authority.
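The relevance-versus-diversity trade-off behind MMR can be illustrated with a toy greedy implementation. This is a sketch for intuition, not LangChain's actual code; `lam` is the λ weight balancing the two terms.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def mmr(query, candidates, k=2, lam=0.5):
    """Greedily pick candidates maximizing
    lam * sim(query, c) - (1 - lam) * max over selected s of sim(c, s)."""
    selected, remaining = [], list(range(len(candidates)))
    while remaining and len(selected) < k:
        best = max(
            remaining,
            key=lambda i: lam * cosine(query, candidates[i])
            - (1 - lam)
            * max((cosine(candidates[i], candidates[j]) for j in selected), default=0.0),
        )
        selected.append(best)
        remaining.remove(best)
    return selected

# Candidates 0 and 1 are near-duplicates; plain top-2 similarity returns
# both, while MMR swaps the duplicate for the diverse candidate 2.
query = [1.0, 1.0]
candidates = [[1.0, 0.1], [1.0, 0.05], [0.0, 1.0]]
by_similarity = sorted(range(3), key=lambda i: -cosine(query, candidates[i]))[:2]
picked = mmr(query, candidates, k=2)
```

Here `by_similarity` selects both near-duplicates, while `picked` keeps one and adds the diverse vector — the behavior that `fetch_k` and `k` control in the retriever above.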

## Citation

**BibTeX:**

```bibtex
@misc{nayak2026epsteinfilesrag,
  author       = {Ankit Kumar Nayak},
  title        = {EpsteinFiles-RAG: A RAG Pipeline over the Epstein Files 20K Dataset},
  year         = {2026},
  howpublished = {\url{https://github.com/AnkitNayak-eth/EpsteinFiles-RAG}},
}
```

**APA:**

Nayak, A. K. (2026). *EpsteinFiles-RAG: A RAG Pipeline over the Epstein Files 20K Dataset*. GitHub. https://github.com/AnkitNayak-eth/EpsteinFiles-RAG

## Glossary

- **RAG (Retrieval-Augmented Generation):** A technique in which an LLM answers questions using only context retrieved from a vector database, reducing hallucination.
- **MMR (Maximal Marginal Relevance):** A retrieval algorithm that balances relevance and diversity to avoid returning redundant chunks from the same document.
- **ChromaDB:** An open-source vector database used to store and query embeddings.
- **Embedding:** A dense numerical vector representation of text that captures semantic meaning.
- **HNSW (Hierarchical Navigable Small World):** The graph-based approximate nearest-neighbor index used by ChromaDB.

## More Information

Full pipeline source code, API server, and Streamlit UI are available at: https://github.com/AnkitNayak-eth/EpsteinFiles-RAG

## Dataset Card Authors

[Ankit Kumar Nayak](https://github.com/AnkitNayak-eth)

## Dataset Card Contact

GitHub: https://github.com/AnkitNayak-eth