# Data Provenance

This document describes the origin, processing method, and lineage of every table in the Epstein Document Archive.

## Overview

The dataset combines two OCR pipelines and multiple community-curated layers:

| Source | Files | Method | Quality |
|--------|-------|--------|---------|
| Gemini OCR | 848,228 | Gemini 2.5 Flash Lite | High (structured JSON output) |
| Community Tesseract | 537,622 | Tesseract OCR via community repos | Variable |
| Community curation | -- | Manual + automated analysis | Curated |
| ML recovery | 39,588 pages | Redaction recovery model | Experimental |

**Success rate: 99.97%** (1,385,850 / 1,386,322 attempted).

## Per-table provenance

### `documents` (1,413,765 rows)

| Subset | Rows | Source | Method |
|--------|------|--------|--------|
| `ocr_source IS NULL` | 848,228 | DOJ FOIA PDFs | Gemini 2.5 Flash Lite (`gemini-2.5-flash-lite`) via the Google AI API. Each PDF page is sent as an image; the model returns structured JSON with full text, document type, date, entity extraction, and metadata. |
| `ocr_source = 'tesseract-community'` | 537,622 | Community repositories | Tesseract OCR. Primary source: [rhowardstone/Epstein-research-data](https://github.com/rhowardstone/Epstein-research-data) for DataSet 9 (531K files). Remaining community files fill gaps in DataSets 2-5, 12, FBIVault, and HouseOversightEstate. |

**Identifying OCR source**: check the `ocr_source` column -- `NULL` means Gemini; `"tesseract-community"` means community Tesseract.
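As a sketch, the split can be tallied directly from a SQLite export of the `documents` table. The database filename and the `'gemini'` label for `NULL` rows are assumptions for illustration:

```python
import sqlite3

def ocr_source_counts(conn: sqlite3.Connection) -> dict:
    """Count `documents` rows per OCR pipeline.

    A NULL ocr_source marks Gemini output; the 'gemini' label
    is added here via COALESCE for readability only.
    """
    rows = conn.execute(
        "SELECT COALESCE(ocr_source, 'gemini') AS src, COUNT(*) "
        "FROM documents GROUP BY src"
    ).fetchall()
    return dict(rows)

# conn = sqlite3.connect("epstein.db")  # filename is an assumption
# print(ocr_source_counts(conn))
```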

### `entities` (8,542,849 rows)

Extracted entities (people, organizations, locations, dates, reference numbers) linked to their parent document via `file_key`.

- **Gemini documents**: Entities extracted by Gemini as part of OCR (structured JSON output includes entity arrays).
- **Community documents**: Entities extracted by post-processing NER on Tesseract OCR text.
- **Entity types**: `person`, `organization`, `location`, `date`, `reference_number`.

### `chunks` (2,039,205 rows)

Text chunks derived from `documents.full_text` using ~800-token splitting with overlap. Used for retrieval-augmented generation (RAG).

- **Source**: Derived from the full text of each document.
- **Method**: Token-count-based splitting with character offset tracking.
- **Fields**: `file_key`, `chunk_index`, `content`, `token_count`, `char_start`, `char_end`.
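The splitting scheme above can be sketched as follows. The whitespace tokenizer and the 80-token overlap are assumptions for illustration, not the pipeline's actual tokenizer or overlap size:

```python
import re

def chunk_text(text: str, max_tokens: int = 800, overlap: int = 80) -> list[dict]:
    """Split text into overlapping chunks with character offsets.

    Whitespace runs approximate tokens; the real pipeline's
    tokenizer and overlap size are not documented here.
    """
    # (start, end) character span of every whitespace-delimited token
    tokens = [(m.start(), m.end()) for m in re.finditer(r"\S+", text)]
    chunks = []
    i = 0
    while i < len(tokens):
        window = tokens[i:i + max_tokens]
        start, end = window[0][0], window[-1][1]
        chunks.append({
            "chunk_index": len(chunks),
            "content": text[start:end],
            "token_count": len(window),
            "char_start": start,
            "char_end": end,
        })
        if i + max_tokens >= len(tokens):
            break  # final window reached the end of the text
        i += max_tokens - overlap
    return chunks
```

The `char_start`/`char_end` offsets let a retrieved chunk be located back in `documents.full_text` without re-tokenizing.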

### `embeddings_chunk` (1,958,052 rows)

768-dimensional float32 embedding vectors for text chunks.

- **Model**: `gemini-embedding-001` (Gemini Embedding v1).
- **Input**: `chunks.content` (the chunk text).
- **Coverage**: 96% of chunks (1,958,052 / 2,039,205).
- **Format**: In Parquet, stored as `list<float32>` with fixed size 768. In SQLite, stored as a raw `float32` BLOB (3,072 bytes per embedding).

### `embeddings_summary` (1,413,508 rows)

768-dimensional float32 embedding vectors for document summaries.

- **Model**: `gemini-embedding-001` (Gemini Embedding v1).
- **Input**: First ~2,000 characters of `documents.full_text`.
- **Coverage**: 99.98% of documents (1,413,508 / 1,413,765).
- **Format**: Same as chunk embeddings.
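A minimal sketch for decoding the SQLite BLOB format described above. Little-endian byte order is an assumption (it is what most writers emit on common platforms); verify against a known vector before relying on it:

```python
import struct

EMBED_DIM = 768  # per the dataset card: 768-dim float32 vectors

def decode_embedding(blob: bytes) -> list[float]:
    """Decode a 3,072-byte BLOB into a 768-float vector.

    '<' assumes little-endian float32; if vectors look wrong,
    try '>' (big-endian) instead.
    """
    assert len(blob) == EMBED_DIM * 4, f"unexpected blob size {len(blob)}"
    return list(struct.unpack(f"<{EMBED_DIM}f", blob))
```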

### `persons` (1,614 rows)

Curated registry of people mentioned across the corpus.

- **Source**: Community curation, combining automated entity extraction with manual verification.
- **Fields**: `canonical_name`, `slug`, `category` (perpetrator/victim/associate/other), `aliases` (JSON array), `search_terms`, `sources`, `notes`.
- **Upstream**: `release/persons_registry.json`.

### `kg_entities` (467 rows)

Knowledge graph entities with metadata.

- **Source**: Community curation -- manually identified key entities with occupation, legal status, and mention counts.
- **Fields**: `name`, `entity_type`, `description`, `metadata` (JSON with occupation, legal_status, mention counts).
- **Upstream**: `release/knowledge_graph_entities.json`.

### `kg_relationships` (4,190 rows)

Knowledge graph relationships between entities.

- **Source**: Community curation -- relationships extracted from document analysis.
- **Fields**: `source_name`, `target_name`, `relationship_type` (e.g., `traveled_with`, `associated_with`, `employed_by`), `weight`, `evidence`, `metadata`.
- **Relationship types**: `traveled_with`, `associated_with`, `employed_by`, `legal_representative`, `financial_connection`, and others.
- **Upstream**: `release/knowledge_graph_relationships.json`.
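For graph traversal, the rows can be indexed by source entity. This sketch assumes the table has been loaded as a list of dicts carrying the fields listed above; the exact layout of the upstream JSON is not specified here:

```python
from collections import defaultdict

def build_adjacency(relationships: list[dict]) -> dict:
    """Index kg_relationships rows by source entity.

    Expects dicts with at least source_name, target_name,
    relationship_type, and weight (field names taken from the
    table schema; a missing weight defaults to 1 here).
    """
    adj = defaultdict(list)
    for rel in relationships:
        adj[rel["source_name"]].append(
            (rel["target_name"], rel["relationship_type"], rel.get("weight", 1))
        )
    return dict(adj)
```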

### `recovered_redactions` (39,588 rows)

Text recovered from behind redaction bars in DOJ documents.

- **Source**: Community ML analysis of redacted document pages.
- **Method**: Machine learning model trained to reconstruct text obscured by redaction bars.
- **Fields**: `file_key`, `page_number`, `reconstructed_text`, `interest_score`, `names_found`, `document_type`.
- **Quality**: Experimental. A higher `interest_score` indicates more significant recovered content.
- **Upstream**: `release/redacted_text_recovered.json.gz`.

### `provenance` (pipeline audit trail)

Three sub-tables from the OCR processing pipeline:

| Sub-table | Rows | Description |
|-----------|------|-------------|
| `provenance/files` | 1,386,322 | Per-file processing record: SHA-256 hashes, status, tokens, latency, model |
| `provenance/audit_log` | ~3.6M | Append-only event log: every processing step recorded |
| `provenance/runs` | 120 | Pipeline run metadata: start/end times, success/failure counts |

- **Source**: `epstein_provenance.db` -- the forensic audit trail from the OCR pipeline.
- **Key fields on `files`**: `file_key`, `pdf_sha256`, `output_sha256`, `status`, `input_tokens`, `output_tokens`, `api_latency_ms`, `model_used`.

## Dataset-level breakdown

### DOJ FOIA Releases

| Dataset | Total Files | Gemini | Community | Description |
|---------|-------------|--------|-----------|-------------|
| DataSet 1 | 3,158 | 3,158 | 0 | Initial release |
| DataSet 2 | 574 | 49 | 525 | |
| DataSet 3 | 67 | 49 | 18 | |
| DataSet 4 | 152 | 49 | 103 | |
| DataSet 5 | 120 | 49 | 71 | |
| DataSet 6 | 13 | 13 | 0 | |
| DataSet 7 | 17 | 17 | 0 | |
| DataSet 8 | 10,595 | 10,595 | 0 | |
| DataSet 9 | 531,279 | 0 | 531,279 | Entirely community-processed |
| DataSet 10 | 503,154 | 502,548 | 606 | |
| DataSet 11 | 331,655 | 331,651 | 4 | |
| DataSet 12 | 152 | 50 | 102 | |

### Non-DOJ Sources

| Dataset | Files | Source | OCR |
|---------|-------|--------|-----|
| FBIVault | 22 | FBI Vault FOIA | Community Tesseract |
| HouseOversightEstate | 4,892 | House Oversight Committee | Community Tesseract |

## Processing failures

472 source PDFs could not be processed. These are documented in `release/epstein_problems.json` with:

- `file_key` -- EFTA identifier
- `doj_url` -- Original DOJ download URL
- `error_message` -- Why processing failed
- `category` -- Failure type (`empty_source_pdf`, `corrupt_source_pdf`, `api_disconnect`, `doj_file_unavailable`)
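A quick way to see the failure breakdown is to tally records by category. This sketch assumes the file is a JSON array of records with the fields listed above:

```python
import json
from collections import Counter

def failure_breakdown(path: str) -> Counter:
    """Tally processing failures by category.

    Assumes release/epstein_problems.json is a JSON array of
    records, each carrying a 'category' field.
    """
    with open(path, encoding="utf-8") as f:
        problems = json.load(f)
    return Counter(p["category"] for p in problems)
```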

## Reproducibility

The full processing pipeline is available at [github.com/kevinnbass/epstein](https://github.com/kevinnbass/epstein):

- `process_pdfs_gemini.py` -- Gemini OCR pipeline
- `import_community.py` -- Community data importer
- `validate_dataset.py` -- 14-check validation suite
- `epstein_audit.db` -- Complete audit trail with per-file SHA-256 checksums

Every output JSON has a SHA-256 checksum recorded in the provenance database, enabling verification that published data matches pipeline output.
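Verification reduces to hashing a local file and comparing against the recorded digest. The commented query below assumes the sub-table is exposed as a SQLite table named `files` with the digest stored as lowercase hex; both are assumptions:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # read in 1 MiB blocks to keep memory flat on large files
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

# Hypothetical check against the provenance records:
# row = conn.execute(
#     "SELECT output_sha256 FROM files WHERE file_key = ?", (key,)
# ).fetchone()
# assert sha256_of(local_json_path) == row[0]
```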