kabasshouse committed · Commit 133ef9f · verified · 1 Parent(s): 0f932c5

v2: Add financial, events, curated docs; upgrade 604 DS10 files

Files changed (50). This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full list.
  1. PROVENANCE.md +86 -100
  2. README.md +143 -125
  3. data/chunks/chunks-00000-of-00011.parquet +3 -0
  4. data/chunks/chunks-00001-of-00011.parquet +3 -0
  5. data/chunks/chunks-00002-of-00011.parquet +3 -0
  6. data/chunks/chunks-00003-of-00011.parquet +3 -0
  7. data/chunks/chunks-00004-of-00011.parquet +3 -0
  8. data/chunks/chunks-00005-of-00011.parquet +3 -0
  9. data/chunks/chunks-00006-of-00011.parquet +3 -0
  10. data/chunks/chunks-00007-of-00011.parquet +3 -0
  11. data/chunks/chunks-00008-of-00011.parquet +3 -0
  12. data/chunks/chunks-00009-of-00011.parquet +3 -0
  13. data/chunks/chunks-00010-of-00011.parquet +3 -0
  14. data/communication_records/communication_records-00000-of-00001.parquet +3 -0
  15. data/curated_docs/curated_docs-00000-of-00001.parquet +3 -0
  16. data/derived_events/derived_events-00000-of-00001.parquet +3 -0
  17. data/documents/documents-00000-of-00015.parquet +3 -0
  18. data/documents/documents-00001-of-00015.parquet +3 -0
  19. data/documents/documents-00002-of-00015.parquet +3 -0
  20. data/documents/documents-00003-of-00015.parquet +3 -0
  21. data/documents/documents-00004-of-00015.parquet +3 -0
  22. data/documents/documents-00005-of-00015.parquet +3 -0
  23. data/documents/documents-00006-of-00015.parquet +3 -0
  24. data/documents/documents-00007-of-00015.parquet +3 -0
  25. data/documents/documents-00008-of-00015.parquet +3 -0
  26. data/documents/documents-00009-of-00015.parquet +3 -0
  27. data/documents/documents-00010-of-00015.parquet +3 -0
  28. data/documents/documents-00011-of-00015.parquet +3 -0
  29. data/documents/documents-00012-of-00015.parquet +3 -0
  30. data/documents/documents-00013-of-00015.parquet +3 -0
  31. data/documents/documents-00014-of-00015.parquet +3 -0
  32. data/embeddings_chunk/embeddings_chunk-00000-of-00022.parquet +3 -0
  33. data/embeddings_chunk/embeddings_chunk-00001-of-00022.parquet +3 -0
  34. data/embeddings_chunk/embeddings_chunk-00002-of-00022.parquet +3 -0
  35. data/embeddings_chunk/embeddings_chunk-00003-of-00022.parquet +3 -0
  36. data/embeddings_chunk/embeddings_chunk-00004-of-00022.parquet +3 -0
  37. data/embeddings_chunk/embeddings_chunk-00005-of-00022.parquet +3 -0
  38. data/embeddings_chunk/embeddings_chunk-00006-of-00022.parquet +3 -0
  39. data/embeddings_chunk/embeddings_chunk-00007-of-00022.parquet +3 -0
  40. data/embeddings_chunk/embeddings_chunk-00008-of-00022.parquet +3 -0
  41. data/embeddings_chunk/embeddings_chunk-00009-of-00022.parquet +3 -0
  42. data/embeddings_chunk/embeddings_chunk-00010-of-00022.parquet +3 -0
  43. data/embeddings_chunk/embeddings_chunk-00011-of-00022.parquet +3 -0
  44. data/embeddings_chunk/embeddings_chunk-00012-of-00022.parquet +3 -0
  45. data/embeddings_chunk/embeddings_chunk-00013-of-00022.parquet +3 -0
  46. data/embeddings_chunk/embeddings_chunk-00014-of-00022.parquet +3 -0
  47. data/embeddings_chunk/embeddings_chunk-00015-of-00022.parquet +3 -0
  48. data/embeddings_chunk/embeddings_chunk-00016-of-00022.parquet +3 -0
  49. data/embeddings_chunk/embeddings_chunk-00017-of-00022.parquet +3 -0
  50. data/embeddings_chunk/embeddings_chunk-00018-of-00022.parquet +3 -0
PROVENANCE.md CHANGED
@@ -1,145 +1,131 @@
  # Data Provenance

- This document describes the origin, processing method, and lineage of every table in the Epstein Document Archive.

- ## Overview

- The dataset combines two OCR pipelines and multiple community-curated layers:

- | Source | Files | Method | Quality |
- |--------|-------|--------|---------|
- | Gemini OCR | 848,228 | Gemini 2.5 Flash Lite | High (structured JSON output) |
- | Community Tesseract | 537,622 | Tesseract OCR via community repos | Variable |
- | Community curation | -- | Manual + automated analysis | Curated |
- | ML recovery | 39,588 pages | Redaction recovery model | Experimental |

- **Success rate: 99.97%** (1,385,850 / 1,386,322 attempted).

- ## Per-table provenance

- ### `documents` (1,413,765 rows)

- | Subset | Rows | Source | Method |
- |--------|------|--------|--------|
- | `ocr_source IS NULL` | 848,228 | DOJ FOIA PDFs | Gemini 2.5 Flash Lite (`gemini-2.5-flash-lite`) via Google AI API. Each PDF page sent as image, returns structured JSON with full text, document type, date, entity extraction, and metadata. |
- | `ocr_source = 'tesseract-community'` | 537,622 | Community repositories | Tesseract OCR. Primary source: [rhowardstone/Epstein-research-data](https://github.com/rhowardstone/Epstein-research-data) for DataSet 9 (531K files). Remaining community files fill gaps in DataSets 2-5, 12, FBIVault, and HouseOversightEstate. |

- **Identifying OCR source**: Check the `ocr_source` column. NULL = Gemini, `"tesseract-community"` = community Tesseract.

- ### `entities` (8,542,849 rows)

- Extracted entities (people, organizations, locations, dates, reference numbers) linked to their parent document via `file_key`.

- - **Gemini documents**: Entities extracted by Gemini as part of OCR (structured JSON output includes entity arrays).
- - **Community documents**: Entities extracted by post-processing NER on Tesseract OCR text.
- - **Entity types**: `person`, `organization`, `location`, `date`, `reference_number`.

- ### `chunks` (2,039,205 rows)

- Text chunks derived from `documents.full_text` using ~800-token splitting with overlap. Used for retrieval-augmented generation (RAG).

- - **Source**: Derived from the full text of each document.
- - **Method**: Token-count-based splitting with character offset tracking.
- - **Fields**: `file_key`, `chunk_index`, `content`, `token_count`, `char_start`, `char_end`.

- ### `embeddings_chunk` (1,956,803 rows)

- 768-dimensional float32 embedding vectors for text chunks.

- - **Model**: `gemini-embedding-001` (Gemini Embedding v1).
- - **Input**: `chunks.content` (the chunk text).
- - **Coverage**: 96% of chunks (1,956,803 / 2,039,205). 1,249 malformed embeddings excluded.
- - **Format**: In Parquet, stored as `list<float32>` with fixed size 768. In SQLite, stored as raw `float32` BLOB (3,072 bytes per embedding).
- - **Note**: Summary embeddings (one per document) were removed from the published dataset. 92% of documents contain a single chunk, making summary and chunk embeddings identical. For multi-chunk documents, use the first chunk embedding or mean-pool across chunks for a document-level vector.

- ### `persons` (1,614 rows)

- Curated registry of people mentioned across the corpus.

- - **Source**: Community curation, combining automated entity extraction with manual verification.
- - **Fields**: `canonical_name`, `slug`, `category` (perpetrator/victim/associate/other), `aliases` (JSON array), `search_terms`, `sources`, `notes`.
- - **Upstream**: `release/persons_registry.json`.

- ### `kg_entities` (467 rows)

- Knowledge graph entities with metadata.

- - **Source**: Community curation -- manually identified key entities with occupation, legal status, and mention counts.
- - **Fields**: `name`, `entity_type`, `description`, `metadata` (JSON with occupation, legal_status, mention counts).
- - **Upstream**: `release/knowledge_graph_entities.json`.

- ### `kg_relationships` (4,190 rows)

- Knowledge graph relationships between entities.

- - **Source**: Community curation -- relationships extracted from document analysis.
- - **Fields**: `source_name`, `target_name`, `relationship_type` (e.g., traveled_with, associated_with, employed_by), `weight`, `evidence`, `metadata`.
- - **Relationship types**: traveled_with, associated_with, employed_by, legal_representative, financial_connection, and others.
- - **Upstream**: `release/knowledge_graph_relationships.json`.

- ### `recovered_redactions` (39,588 rows)

- Text recovered from behind redaction bars in DOJ documents.

- - **Source**: Community ML analysis of redacted document pages.
- - **Method**: Machine learning model trained to reconstruct text obscured by redaction bars.
- - **Fields**: `file_key`, `page_number`, `reconstructed_text`, `interest_score`, `names_found`, `document_type`.
- - **Quality**: Experimental. Higher `interest_score` indicates more significant recovered content.
- - **Upstream**: `release/redacted_text_recovered.json.gz`.

- ### `provenance` (pipeline audit trail)

- Three sub-tables from the OCR processing pipeline:

- | Sub-table | Rows | Description |
- |-----------|------|-------------|
- | `provenance/files` | 1,386,322 | Per-file processing record: SHA-256 hashes, status, tokens, latency, model |
- | `provenance/audit_log` | ~3.6M | Append-only event log: every processing step recorded |
- | `provenance/runs` | 120 | Pipeline run metadata: start/end times, success/failure counts |

- - **Source**: `epstein_provenance.db` -- the forensic audit trail from the OCR pipeline.
- - **Key fields on `files`**: `file_key`, `pdf_sha256`, `output_sha256`, `status`, `input_tokens`, `output_tokens`, `api_latency_ms`, `model_used`.

- ## Dataset-level breakdown

- ### DOJ FOIA Releases

- | Dataset | Total Files | Gemini | Community | Description |
- |---------|-------------|--------|-----------|-------------|
- | DataSet 1 | 3,158 | 3,158 | 0 | Initial release |
- | DataSet 2 | 574 | 49 | 525 | |
- | DataSet 3 | 67 | 49 | 18 | |
- | DataSet 4 | 152 | 49 | 103 | |
- | DataSet 5 | 120 | 49 | 71 | |
- | DataSet 6 | 13 | 13 | 0 | |
- | DataSet 7 | 17 | 17 | 0 | |
- | DataSet 8 | 10,595 | 10,595 | 0 | |
- | DataSet 9 | 531,279 | 0 | 531,279 | Entirely community-processed |
- | DataSet 10 | 503,154 | 502,548 | 606 | |
- | DataSet 11 | 331,655 | 331,651 | 4 | |
- | DataSet 12 | 152 | 50 | 102 | |

- ### Non-DOJ Sources

- | Dataset | Files | Source | OCR |
- |---------|-------|--------|-----|
- | FBIVault | 22 | FBI Vault FOIA | Community Tesseract |
- | HouseOversightEstate | 4,892 | House Oversight Committee | Community Tesseract |

- ## Processing failures

- 472 source PDFs could not be processed. These are documented in `release/epstein_problems.json` with:
- - `file_key` -- EFTA identifier
- - `doj_url` -- Original DOJ download URL
- - `error_message` -- Why processing failed
- - `category` -- Failure type (empty_source_pdf, corrupt_source_pdf, api_disconnect, doj_file_unavailable)

- ## Reproducibility

- The full processing pipeline is available at [github.com/kevinnbass/epstein](https://github.com/kevinnbass/epstein):
- - `process_pdfs_gemini.py` -- Gemini OCR pipeline
- - `import_community.py` -- Community data importer
- - `validate_dataset.py` -- 14-check validation suite
- - `epstein_audit.db` -- Complete audit trail with per-file SHA-256 checksums

- Every output JSON has a SHA-256 checksum recorded in the provenance database, enabling verification that published data matches pipeline output.
  # Data Provenance

+ Per-table documentation of data sources, extraction methods, and quality notes.

+ ## documents

+ **1,424,673 rows.** One row per PDF file from the DOJ Epstein release.

+ - **856,028 files** processed with **Gemini 2.5 Flash Lite** ($0.10/$0.40 per 1M tokens). Full structured extraction: document type classification, date parsing, entity extraction, handwriting/stamp detection, photo description.
+ - **531,279 files** (DataSet 9) imported from the [rhowardstone/Epstein-research-data](https://github.com/rhowardstone/Epstein-research-data) community project using **Tesseract OCR**. Raw text only: no entity extraction or document classification. These have `ocr_source = 'tesseract-community'`.
+ - **1,377 files** originally Tesseract, upgraded to Gemini in v2 (604 from DS10, 773 from other datasets).
+ - **37,369 files** have `is_photo = true` (photos, stamps, blank pages).
+ - `email_fields` column (new in v2): JSON-encoded email metadata (from, to, cc, subject, date) for email-type documents.

+ Distinguishing OCR source: `ocr_source IS NULL` = Gemini; `ocr_source = 'tesseract-community'` = community Tesseract.
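The `ocr_source` convention can be sanity-checked in a few lines. This is a toy sketch with made-up rows, not a pipeline utility; real rows come from the `data/documents/*.parquet` shards:

```python
from collections import Counter

# Hypothetical sample rows mirroring the `documents` schema;
# real rows come from data/documents/*.parquet.
docs = [
    {"file_key": "EFTA00000001", "ocr_source": None},
    {"file_key": "EFTA00000002", "ocr_source": None},
    {"file_key": "EFTA00000003", "ocr_source": "tesseract-community"},
]

# NULL (None) means Gemini OCR; the string marks community Tesseract.
by_source = Counter(
    "gemini" if d["ocr_source"] is None else d["ocr_source"] for d in docs
)
print(by_source)  # Counter({'gemini': 2, 'tesseract-community': 1})
```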
 
+ ## entities

+ **10,629,198 rows.** Named entities extracted from documents.

+ - Gemini documents: entities come from the structured JSON extraction prompt (types: person, organization, location, date, reference_number, email_address, phone_number, monetary_amount).
+ - Community documents: entities come from a post-processing NER pipeline.
+ - Entity `normalized_value` provides cleaned/canonical forms where available.
+ ## chunks

+ **2,193,090 rows.** Text chunks for RAG (retrieval-augmented generation).

+ - ~800-token target per chunk, with overlap at sentence boundaries.
+ - `char_start` and `char_end` map back to the parent document's `full_text`.
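The offset contract can be illustrated with a toy document (the text and offsets below are invented, not real corpus rows): each chunk's content equals `full_text[char_start:char_end]`.

```python
# Toy example of the chunk offset contract; values are illustrative,
# not real corpus rows.
full_text = "Meeting at 9am. Flight to Palm Beach at noon. Dinner at 8pm."

chunks = [
    {"chunk_index": 0, "char_start": 0, "char_end": 45},
    {"chunk_index": 1, "char_start": 16, "char_end": 60},  # overlaps chunk 0
]

# Recover each chunk's content by slicing the parent text.
for c in chunks:
    content = full_text[c["char_start"]:c["char_end"]]
    print(c["chunk_index"], repr(content))
```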
 
+ ## embeddings_chunk

+ **2,111,356 rows.** 768-dimensional float32 vectors from `gemini-embedding-001`.

+ - ~96% coverage (chunks with malformed text excluded).
+ - Stored as `list<float32>` in Parquet.
+ - `source_text_hash` links to the chunk text that was embedded.
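For multi-chunk documents, chunk vectors can be mean-pooled into a single document-level vector, as the v1 notes suggested. A minimal stdlib sketch, with 3-dim toy vectors standing in for the real 768-dim embeddings:

```python
def mean_pool(vectors):
    """Element-wise mean of equal-length vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

# Toy 3-dim stand-ins for the real 768-dim chunk embeddings.
chunk_vecs = [
    [1.0, 0.0, 2.0],
    [3.0, 2.0, 0.0],
]
doc_vec = mean_pool(chunk_vecs)
print(doc_vec)  # [2.0, 1.0, 1.0]
```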
 
+ ## persons

+ **1,614 rows.** Curated person registry with canonical names, aliases, and categories.

+ - Categories: perpetrator, victim, associate, other.
+ - `aliases` field: JSON array of known alternate names/spellings.
+ - `search_terms`: additional search patterns for entity resolution.
+ - Community-curated from multiple sources.

+ ## kg_entities / kg_relationships

+ **467 entities, 2,198 relationships.** Knowledge graph connecting people, organizations, and locations.

+ - Relationship types: traveled_with, associated_with, employed_by, legal_representative, financial_connection, etc.
+ - `weight` indicates strength of connection (co-occurrence frequency).
+ - `evidence` field links to source document file_keys.

+ ## recovered_redactions

+ **37,870 rows.** Text recovered from under redaction bars using ML reconstruction.

+ - `interest_score` (0-100) ranks significance of recovered content.
+ - `names_found` lists person names detected in recovered text.
+ - Experimental quality: treat as leads, not verified text.

+ ## financial_transactions (NEW in v2)

+ **49,770 rows.** Credit card and bank transaction records extracted with DeepSeek.

+ - Source: DOJ-released credit card statements and bank records (DataSet 10/11).
+ - Extraction model: DeepSeek (per-page structured extraction).
+ - 31% of raw extractions quarantined for quality issues (hallucinated amounts, duplicate entries, garbled OCR). Only clean records included.
+ - Flight fields (flight_from, flight_to, flight_carrier, flight_passenger) populated for airline purchases.
+ - Key cardholders: JEFFREY E EPSTEIN, GHISLAINE MAXWELL, KARYNA SHULIAK, HBRK ASSOCIATES, TERRAMAR PROJECT.

+ ## communication_records (NEW in v2)

+ **128 rows.** Phone call and cell-site/CDR records.

+ - Source: DOJ-released phone records.
+ - 99.8% of raw extractions quarantined (CDR data has very high hallucination rates). Only 128 high-confidence records included.
+ - Fields: call date/time, duration, direction, location, number called, provider.

+ ## investigative_records (NEW in v2)

+ **143 rows.** Law enforcement reports, evidence recovery logs, and vehicle/property records.

+ - Source: FBI, PBSO, and other agency reports in the DOJ release.
+ - 87% of raw extractions quarantined. Only 143 verified records included.
+ - Record types: law_enforcement_report, evidence_recovery_log, vehicle_property_record.

+ ## derived_events (NEW in v2)

+ **3,038 events** with **5,751 participants** and **21,910 source document links.**

+ Three analysis tracks reconstruct Epstein's activities:

+ - **Calendar track**: Meetings, dinners, appointments from Lesley Groff's daily schedule emails (2011-2016).
+ - **Travel track**: Flights, hotel stays, ground transport from schedule transitions and AmEx bookings.
+ - **Financial track**: Gift exchanges, institutional donations, major purchases from bank statements and emails.

+ Each event links to source EFTA documents via `event_sources` and participants via `event_participants`.
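The three event tables can be joined in the obvious way. The rows below and the `event_id` join key are assumptions for illustration only; check the published schema before relying on them:

```python
# Sketch of joining derived_events to participants and sources.
# Toy rows; the `event_id` join key is an assumption, verify it
# against the actual Parquet schema.
events = [{"event_id": 1, "track": "travel", "summary": "Flight to Palm Beach"}]
participants = [
    {"event_id": 1, "person": "Person A"},
    {"event_id": 1, "person": "Person B"},
]
sources = [{"event_id": 1, "file_key": "EFTA00000001"}]

# For each event, gather its participants and source documents.
for ev in events:
    who = [p["person"] for p in participants if p["event_id"] == ev["event_id"]]
    docs = [s["file_key"] for s in sources if s["event_id"] == ev["event_id"]]
    print(ev["summary"], who, docs)
```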
 
+ ## curated_docs (NEW in v2)

+ **5,766 gold documents** across 5 investigation subjects: Hoffman (1,526), Gates (2,069), Summers (739), Clinton (765), Black (667).

+ - Each document annotated with: tier (NUCLEAR/CRITICAL/HIGH/MEDIUM/SUPPORTING), category, date, sender, recipient, headline, key quote, investigative detail.
+ - `also_appears_as`: JSON array of duplicate EFTA file_keys for the same document.
+ - Only gold-status documents exported (24,706 rejected documents excluded).

+ ## provenance/files

+ **1,387,775 rows.** One row per processed file with full pipeline metadata.

+ - `pdf_sha256`: SHA-256 hash of the input PDF.
+ - `output_sha256`: SHA-256 hash of the output JSON.
+ - `model_used`: OCR model (gemini-2.5-flash-lite, tesseract-community, etc.).
+ - `input_tokens` / `output_tokens`: API token consumption.
+ - `validation_score`: automated quality score (0-100).

+ ## provenance/audit_log

+ **3,711,609 rows.** Append-only forensic audit trail.

+ - Every pipeline action (file processed, error, retry, fix applied) is logged.
+ - `checksum` field provides tamper detection on critical operations.
+ - Timestamps are in ISO-8601 format.

+ ## provenance/runs

+ **123 rows.** Pipeline execution records with timing, worker counts, and costs.
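The recorded hashes support end-to-end verification: recompute the SHA-256 of a downloaded output and compare it with `output_sha256`. A minimal sketch with toy bytes (not real pipeline data):

```python
import hashlib

# Illustrative payload; in practice, read the published output JSON and
# compare against `output_sha256` from provenance/files.
payload = b'{"file_key": "EFTA00000001", "full_text": "..."}'
recorded = hashlib.sha256(payload).hexdigest()  # stand-in for `output_sha256`

def verify(data: bytes, expected_sha256: str) -> bool:
    """Return True when the recomputed digest matches the recorded one."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

print(verify(payload, recorded))         # True: bytes unchanged
print(verify(payload + b" ", recorded))  # False: any byte change breaks the match
```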
README.md CHANGED
@@ -1,178 +1,196 @@
- # Epstein Document Archive
-
- **1.39 million OCR'd documents** from the DOJ Jeffrey Epstein document release, with extracted entities, text embeddings, a knowledge graph, and full pipeline provenance.
-
- This is the data behind [epstein.academy](https://epstein.academy).
-
- ## What's in the dataset
-
- | Layer | Rows | Download Size | Description |
- |-------|------|---------------|-------------|
- | `documents` | 1,413,765 | ~800 MB | Full text of every document with metadata |
- | `entities` | 8,542,849 | ~200 MB | Extracted people, organizations, locations, dates |
- | `chunks` | 2,039,205 | ~1.5 GB | ~800-token text chunks for RAG |
- | `embeddings_chunk` | 1,956,803 | ~5 GB | 768-dim Gemini embeddings per chunk |
- | `provenance` | 4.9M rows | ~400 MB | Full pipeline audit trail |
- | `persons` | 1,614 | <1 MB | Curated person registry with aliases |
- | `kg_entities` | 467 | <1 MB | Knowledge graph entities |
- | `kg_relationships` | 4,190 | <1 MB | Knowledge graph relationships |
- | `recovered_redactions` | 39,588 | ~3 MB | Recovered text from redacted pages |
-
- **Total: ~8 GB compressed** (Parquet with zstd). Each layer is independent -- download only what you need.
-
- ## Quick start
-
- ### With HuggingFace `datasets`

  ```python
  from datasets import load_dataset

- # Stream documents (no full download needed)
- ds = load_dataset("kabasshouse/epstein-data", "documents", split="train", streaming=True)
- for doc in ds:
-     print(doc["file_key"], doc["dataset"], doc["document_type"])
-     print(doc["full_text"][:200])
      break
-
- # Load entities into memory
- entities = load_dataset("kabasshouse/epstein-data", "entities", split="train")
- print(f"{len(entities):,} entities loaded")
-
- # Filter to a specific dataset
- ds10 = load_dataset("kabasshouse/epstein-data", "documents", split="train")
- ds10 = ds10.filter(lambda x: x["dataset"] == "DataSet10")
  ```

- ### With DuckDB (no download)

  ```sql
- -- Query directly from HuggingFace (auto-downloads Parquet)
- SELECT file_key, dataset, document_type, date
  FROM 'hf://datasets/kabasshouse/epstein-data/data/documents/*.parquet'
- WHERE dataset = 'DataSet10' AND document_type = 'Email'
- LIMIT 10;
-
- -- Count entities by type
- SELECT entity_type, COUNT(*) AS cnt
- FROM 'hf://datasets/kabasshouse/epstein-data/data/entities/*.parquet'
- GROUP BY entity_type
- ORDER BY cnt DESC;
  ```

- ### With pandas

  ```python
  import pandas as pd

- # Read a specific shard
- df = pd.read_parquet("hf://datasets/kabasshouse/epstein-data/data/documents/documents-00000.parquet")
- print(df.shape)
- print(df.columns.tolist())
  ```

- ### Assemble a local SQLite database

- ```bash
- pip install pyarrow numpy tqdm huggingface_hub
-
- # Core tables only (documents + entities, ~1 GB download)
- python assemble_db.py --layers core --output epstein.db
-
- # With text chunks (~2.5 GB download)
- python assemble_db.py --layers text --output epstein.db
-
- # Full database with embeddings (~7.5 GB download)
- python assemble_db.py --layers full --output epstein.db
-
- # Everything including provenance (~8 GB download)
- python assemble_db.py --layers all --output epstein.db
-
- # From a local Parquet export
- python assemble_db.py --local ./hf_export/ --layers core --output epstein.db
- ```

- ## Source documents

- The documents come from 12 DOJ FOIA dataset releases plus two community-sourced collections:

  | Dataset | Files | Source |
  |---------|-------|--------|
- | DataSet 1 | 3,158 | DOJ FOIA |
- | DataSet 2 | 574 | DOJ FOIA |
- | DataSet 3 | 67 | DOJ FOIA |
- | DataSet 4 | 152 | DOJ FOIA |
- | DataSet 5 | 120 | DOJ FOIA |
- | DataSet 6 | 13 | DOJ FOIA |
- | DataSet 7 | 17 | DOJ FOIA |
- | DataSet 8 | 10,595 | DOJ FOIA |
- | DataSet 9 | 531,279 | DOJ FOIA |
- | DataSet 10 | 503,154 | DOJ FOIA |
- | DataSet 11 | 331,655 | DOJ FOIA |
- | DataSet 12 | 152 | DOJ FOIA |
- | FBIVault | 22 | FBI Vault FOIA |
  | HouseOversightEstate | 4,892 | House Oversight Committee |

- **Total: 1,385,850 successful** + 472 unrecoverable failures (documented in `release/epstein_problems.json`).
-
- ## OCR provenance
-
- Two OCR sources were used:

- - **Gemini 2.5 Flash Lite** (848,228 files): Primary OCR engine. These documents have `ocr_source` = NULL.
- - **Tesseract (community)** (537,622 files): Gap-fill from community repositories. These have `ocr_source` = `"tesseract-community"`.

- See [PROVENANCE.md](PROVENANCE.md) for per-table source documentation.

- ## Schema

- Every document has a unique `file_key` (e.g., `EFTA00000001`) that serves as the primary identifier across all tables. The Parquet files use `file_key` everywhere -- no opaque integer IDs.

- Key fields on `documents`:
- - `file_key` -- unique identifier (EFTA number)
- - `dataset` -- source dataset (e.g., "DataSet10")
- - `full_text` -- complete OCR text
- - `document_type` -- classified type (Email, Form, Letter, Photo, etc.)
- - `date` -- extracted date if available
- - `is_photo` -- whether the document is a photograph
- - `ocr_source` -- NULL for Gemini, "tesseract-community" for community OCR

- See `schema.sql` for the full SQLite schema used by `assemble_db.py`.

- ## Known issues

- - 472 source PDFs could not be processed (corrupt, empty, or unavailable). These are cataloged in `release/epstein_problems.json` with DOJ download URLs.
- - DataSet 9 (531K files) was entirely community-processed with Tesseract OCR, which has lower quality than Gemini.
- - Some documents are heavily redacted. `recovered_redactions` contains ML-recovered text from 39,588 redacted pages.
- - Embedding coverage is ~96% for chunks (1,249 malformed embeddings excluded). Summary embeddings were removed as redundant -- 92% of documents have a single chunk, making summary and chunk embeddings identical.

- ## Release artifacts

- Small reference files are included directly in this repo under `release/`:

- | File | Size | Description |
- |------|------|-------------|
- | `epstein_problems.json` | 280 KB | 472 processing failures with DOJ URLs |
- | `efta_dataset_mapping.json` | 4 KB | EFTA file key to DOJ URL mapping |
- | `persons_registry.json` | 436 KB | 1,614 curated person records |
- | `knowledge_graph_entities.json` | 172 KB | 467 KG entities |
- | `knowledge_graph_relationships.json` | 932 KB | 4,190 KG relationships |
- | `extracted_entities_filtered.json` | 1.9 MB | Filtered entity export |
- | `redacted_text_recovered.json.gz` | 2.5 MB | 39,588 recovered redacted pages |
- | `document_summary.csv.gz` | 1.8 MB | Document metadata summary |
- | `image_catalog.csv.gz` | 15 MB | Photo/image catalog |

  ## License

- This dataset is released under [CC-BY-4.0](LICENSE). The underlying documents are U.S. government records released under FOIA.

  ## Citation

  ```bibtex
- @dataset{epstein_data_2026,
-   title={Epstein Document Archive},
-   author={Kevin Bass},
    year={2026},
    url={https://huggingface.co/datasets/kabasshouse/epstein-data},
-   note={1.39M OCR'd DOJ documents with entities, embeddings, and knowledge graph}
  }
  ```
1
+ ---
2
+ license: cc-by-4.0
3
+ task_categories:
4
+ - text-classification
5
+ - question-answering
6
+ - feature-extraction
7
+ language:
8
+ - en
9
+ tags:
10
+ - legal
11
+ - ocr
12
+ - documents
13
+ - foia
14
+ - knowledge-graph
15
+ - financial
16
+ size_categories:
17
+ - 1M<n<10M
18
+ pretty_name: "Epstein DOJ Document Archive (OCR + Structured Data)"
19
+ ---
20
+
21
+ # Epstein DOJ Document Archive v2
22
+
23
+ **1.42 million OCR'd documents** from the Department of Justice Jeffrey Epstein document release, with structured entity extraction, vector embeddings, financial transactions, communication records, and a forensic audit trail.
24
+
25
+ Frontend: [epstein.academy](https://epstein.academy)
26
+
27
+ ## What's New in v2
28
+
29
+ - **10.6M entities** (up from 8.5M) — expanded NER extraction
30
+ - **2.1M chunk embeddings** (up from 1.96M) — more documents embedded
31
+ - **49,770 financial transactions** — credit card and bank records (DeepSeek extraction)
32
+ - **3,038 derived events** — reconstructed calendar, travel, and financial timeline
33
+ - **5,766 curated gold documents** — expert-annotated research catalog across 5 subjects
34
+ - **143 investigative records** — law enforcement reports and evidence logs
35
+ - **128 communication records** — phone call and CDR data
36
+ - **604 DS10 files upgraded** from Tesseract to Gemini OCR
37
+
38
+ ## Quick Start
39
+
40
+ ### HuggingFace Datasets (streaming)
41
 
42
  ```python
43
  from datasets import load_dataset
44
 
45
+ # Stream documents without downloading everything
46
+ ds = load_dataset("kabasshouse/epstein-data", "documents", streaming=True)
47
+ for doc in ds["train"]:
48
+ print(doc["file_key"], doc["document_type"], len(doc["full_text"] or ""))
 
49
  break
 
 
 
 
 
 
 
 
50
  ```
51
 
52
+ ### DuckDB (direct Parquet queries)
53
 
54
  ```sql
55
+ -- Query directly from HuggingFace without downloading
56
+ SELECT file_key, document_type, date, char_count
57
  FROM 'hf://datasets/kabasshouse/epstein-data/data/documents/*.parquet'
58
+ WHERE document_type = 'Email'
59
+ AND date LIKE '2015%'
60
+ ORDER BY date
61
+ LIMIT 20;
62
+
63
+ -- Financial transactions
64
+ SELECT transaction_date, amount, merchant_name, cardholder
65
+ FROM 'hf://datasets/kabasshouse/epstein-data/data/financial_transactions/*.parquet'
66
+ WHERE cardholder LIKE '%EPSTEIN%'
67
+ AND amount > 1000
68
+ ORDER BY amount DESC
69
+ LIMIT 20;
70
+
71
+ -- Curated gold documents
72
+ SELECT file_key, subject, tier, headline, key_quote
73
+ FROM 'hf://datasets/kabasshouse/epstein-data/data/curated_docs/*.parquet'
74
+ WHERE tier = 'NUCLEAR'
75
+ ORDER BY subject, doc_date;
76
  ```
77
 
78
+ ### Pandas
79
 
80
  ```python
81
  import pandas as pd
82
 
83
+ docs = pd.read_parquet("hf://datasets/kabasshouse/epstein-data/data/documents/")
84
+ print(f"{len(docs):,} documents")
85
+ print(docs.groupby("dataset").size().sort_values(ascending=False))
 
86
  ```
87
 
88
+ ## Data Layers
89
 
90
+ ### Core Content
 
91
 
92
+ | Layer | Rows | Description |
93
+ |-------|------|-------------|
94
+ | `documents` | 1,424,673 | Full OCR text, document type, date, photo flag |
95
+ | `entities` | 10,629,198 | Named entities (person, org, location, date, etc.) |
96
+ | `chunks` | 2,193,090 | ~800-token text chunks for RAG |
97
+ | `embeddings_chunk` | 2,111,356 | 768-dim Gemini embeddings per chunk |
98
 
99
+ ### Knowledge & Analysis
 
100
 
101
+ | Layer | Rows | Description |
102
+ |-------|------|-------------|
103
+ | `persons` | 1,614 | Curated person registry (name, aliases, category) |
104
+ | `kg_entities` | 467 | Knowledge graph nodes |
105
+ | `kg_relationships` | 2,198 | Knowledge graph edges (traveled_with, associated_with, etc.) |
106
+ | `recovered_redactions` | 37,870 | ML-recovered text from redacted pages |
107
+ | `curated_docs` | 5,766 | Expert-annotated gold documents (5 subjects, tiered) |
108
 
109
+ ### Structured Records (NEW in v2)
 
110
 
111
+ | Layer | Rows | Description |
112
+ |-------|------|-------------|
113
+ | `financial_transactions` | 49,770 | Credit card & bank transactions |
114
+ | `derived_events` | 3,038 | Reconstructed calendar/travel/financial events |
115
+ | `event_participants` | 5,751 | People linked to derived events |
116
+ | `event_sources` | 21,910 | Source documents for derived events |
117
+ | `investigative_records` | 143 | Law enforcement reports & evidence logs |
118
+ | `communication_records` | 128 | Phone call & CDR records |
119
 
120
+ ### Provenance
121
 
122
+ | Layer | Rows | Description |
123
+ |-------|------|-------------|
124
+ | `provenance/files` | 1,387,775 | Per-file processing metadata + SHA-256 checksums |
125
+ | `provenance/audit_log` | 3,711,609 | Append-only forensic audit trail |
126
+ | `provenance/runs` | 123 | Pipeline execution records |
127
+
128
+ ## Datasets
129
 
130
  | Dataset | Files | Source |
131
  |---------|-------|--------|
132
+ | DataSet 1 | 3,158 | DOJ EFTA release |
133
+ | DataSet 2 | 574 | DOJ EFTA release |
134
+ | DataSet 3 | 67 | DOJ EFTA release |
135
+ | DataSet 4 | 152 | DOJ EFTA release |
136
+ | DataSet 5 | 120 | DOJ EFTA release |
137
+ | DataSet 6 | 13 | DOJ EFTA release |
138
+ | DataSet 7 | 17 | DOJ EFTA release |
139
+ | DataSet 8 | 10,595 | DOJ EFTA release |
140
+ | DataSet 9 | 531,279 | DOJ EFTA release (community Tesseract OCR) |
141
+ | DataSet 10 | 503,154 | DOJ EFTA release |
142
+ | DataSet 11 | 331,655 | DOJ EFTA release |
143
+ | DataSet 12 | 152 | DOJ EFTA release |
144
+ | FBIVault | 22 | FBI Vault FOIA release |
145
  | HouseOversightEstate | 4,892 | House Oversight Committee |

+ **468 unrecoverable failures**: source PDFs that were corrupt or empty and could not be processed. The full failure catalog is in `release/epstein_problems.json`.

+ ## OCR Sources

+ - **Gemini 2.5 Flash Lite**: 856,028 files — structured JSON output with entities, document classification, and metadata
+ - **Tesseract (community)**: 531,279 files — raw text only (DataSet 9, community gap-fill imports)
+ - **Upgraded**: 1,377 files originally processed with Tesseract, now re-processed with Gemini

+ Distinguish OCR source via the `ocr_source` column: `NULL` = Gemini, `'tesseract-community'` = community Tesseract.
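For example, with pandas (illustrative rows shown; in practice read the `data/documents/*.parquet` shards):

```python
import pandas as pd

# Illustrative rows -- real data is in data/documents/documents-*.parquet.
docs = pd.DataFrame({
    "doc_id": ["d1", "d2", "d3"],
    "ocr_source": [None, "tesseract-community", None],
})

gemini_docs = docs[docs["ocr_source"].isna()]                       # Gemini OCR
tesseract_docs = docs[docs["ocr_source"] == "tesseract-community"]  # community Tesseract
```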

+ ## Curated Documents

+ The `curated_docs` layer contains 5,766 expert-annotated gold documents across 5 investigation subjects:

+ | Subject | Gold Docs | Tiers |
+ |---------|-----------|-------|
+ | Hoffman | 1,526 | NUCLEAR / CRITICAL / HIGH / MEDIUM / SUPPORTING |
+ | Gates | 2,069 | NUCLEAR / CRITICAL / HIGH / MEDIUM / SUPPORTING |
+ | Summers | 739 | NUCLEAR / CRITICAL / HIGH / MEDIUM / SUPPORTING |
+ | Clinton | 765 | NUCLEAR / CRITICAL / HIGH / MEDIUM / SUPPORTING |
+ | Black | 667 | NUCLEAR / CRITICAL / HIGH / MEDIUM / SUPPORTING |

+ Each entry includes: tier, category, date, sender/recipient, headline, key quote, and investigative detail.
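A sketch of pulling the highest-tier documents for one subject with pandas. Only `tier` is confirmed above; the `subject` and `headline` column names are assumptions, and the rows are illustrative, not real annotations:

```python
import pandas as pd

# Illustrative rows -- real data is in data/curated_docs/*.parquet.
# `subject` and `headline` column names are assumptions; verify the schema.
gold = pd.DataFrame({
    "subject": ["Gates", "Gates", "Clinton"],
    "tier": ["NUCLEAR", "MEDIUM", "CRITICAL"],
    "headline": ["...", "...", "..."],
})

top_gates = gold[(gold["subject"] == "Gates") & (gold["tier"].isin(["NUCLEAR", "CRITICAL"]))]
```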

+ ## Financial Transactions

+ 49,770 clean records extracted from credit card statements and bank records using DeepSeek. Each record includes:

+ - Transaction date, amount, currency, and merchant
+ - Cardholder name (Epstein, Maxwell, Shuliak, etc.)
+ - Flight data (origin, destination, carrier, passenger) for airline purchases
+ - Merchant category classification

+ 31% of raw extractions were quarantined for quality issues and excluded from this release.
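A sketch of a per-cardholder spend summary, assuming column names like `cardholder` and `amount` (verify against the parquet schema; the rows below are illustrative):

```python
import pandas as pd

# Illustrative rows -- real data is in data/financial_transactions parquet
# shards. Column names here are assumptions, not the confirmed schema.
txns = pd.DataFrame({
    "cardholder": ["Epstein", "Maxwell", "Epstein"],
    "amount": [120.00, 75.50, 300.00],
    "merchant_category": ["airline", "retail", "airline"],
})

spend = txns.groupby("cardholder")["amount"].sum().sort_values(ascending=False)
```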
  ## License
+ CC-BY-4.0. Source documents are U.S. government public records.

 ## Citation

 ```bibtex
+ @dataset{epstein_archive_2026,
+   title={Epstein DOJ Document Archive},
+   author={kabasshouse},
   year={2026},
   url={https://huggingface.co/datasets/kabasshouse/epstein-data},
+   version={2.0}
 }
 ```
data/chunks/chunks-00000-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a981d273fec089a29a4974e3a2af050202679a1866d7d697838ff4b0b3a8a652
+ size 42764173
data/chunks/chunks-00001-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d297fd634712e97d74b2cb6bf044896f5c4c663c8453c06aa33a580bfd86607
+ size 33380833
data/chunks/chunks-00002-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4e40c6aca5a9aea79a1def8a6df7bf46c4db51b5c950ba8e631c8f845ee88e87
+ size 29118656
data/chunks/chunks-00003-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:be7f48ed65262b15c339ebca9aa17643f7688cca33ec117ca2146645bb4ad904
+ size 42233793
data/chunks/chunks-00004-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d2df4e8a30985bfff388c710d5f1fd0f083e03dca874993c180bb47f6c13082e
+ size 87261378
data/chunks/chunks-00005-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc38b2316707eb4a6b1352c2640345c876fbc3895aacee242caafcdc825ab543
+ size 45014063
data/chunks/chunks-00006-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ca9ce7e668a7d0489caf7906828c35596dea52326e6820f5130b8193fb88d8b
+ size 70822072
data/chunks/chunks-00007-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d93698d86e8b5651928f89230afbf2fc6f62c465cd6da99ecd0fb3f98b2ea65
+ size 49084476
data/chunks/chunks-00008-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bbad34373a91f59c77f05c055fe645ffe04e7741982c2773b6b780e1de7fb5a3
+ size 89484554
data/chunks/chunks-00009-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:06e5502ee0d7a23815b629163ab15c1c9da4ce91c46557e747a047a5c828c19c
+ size 73717067
data/chunks/chunks-00010-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f649bc2bf8558e7cb069f157218e3e909471222dceebea3914038b597d7b89bd
+ size 88033223
data/communication_records/communication_records-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a141c3718414c26778bd0042c3c91f4540b2e61ddc2a0b82ebabd43f1a0f15e9
+ size 11056
data/curated_docs/curated_docs-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:92dfd415fafda8467d6d1ca15834c4c451aec61f6e9e5453033ac06adaf110a1
+ size 1629169
data/derived_events/derived_events-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3e67707faf0c02e3ad8606a89ace4959fecfa060e27f0244e885ee537a2e44ca
+ size 291804
data/documents/documents-00000-of-00015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f7cbb617509ee7a80fba0131258a10c28540e88af6cc89f57d85cd2b4e5c582d
+ size 32160053
data/documents/documents-00001-of-00015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de6b953b10b7c127441fe01f98c1932dca33c0a0cc1ae89ad43088a8a4a5c7a6
+ size 17896030
data/documents/documents-00002-of-00015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20dcade7851269ecabfef1706bbd3c0d3b8d9b5d7c76de6127a384f096b4f1dc
+ size 19282498
data/documents/documents-00003-of-00015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:40283258fd792d74b30cfac709be1580252fb0d3b755f794c3abe5e5018ef5bc
+ size 19455797
data/documents/documents-00004-of-00015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa27211d9f5730e8585fee6ec413aecf27f5fe88d343294c15568df991a76a9b
+ size 17039369
data/documents/documents-00005-of-00015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:335f064da47bffeacaded9fe1021b26d2481e88fb71915b01285ec883b6876a8
+ size 17440809
data/documents/documents-00006-of-00015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e25f3d8f31833bf103173dad70e52ca54d7945d397f5cecd23588c4c2c6a6db
+ size 17857487
data/documents/documents-00007-of-00015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c950c8ab37e16d698d1fe5480977d0447831dad78957a15942ce75787ce3a10b
+ size 109974347
data/documents/documents-00008-of-00015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:882668db59453182f3dc50560da56f1acbfcfc6b2a0e5ba09cd95faa1127c363
+ size 25698673
data/documents/documents-00009-of-00015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:593199750fd761c17fe590d9c4e7bea71e23bf2e8d9c43f01a0b49cee4b57d23
+ size 57062658
data/documents/documents-00010-of-00015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1bcdd7b92050f8eac4e2f758b9489064df9265b6f6523f621965ae61461860f9
+ size 45063376
data/documents/documents-00011-of-00015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a169499d8d62d68e80740b8e16418ca3d086c6d28314ab33e77af75b9a1f7cb9
+ size 30585491
data/documents/documents-00012-of-00015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c6e9112404eea9d15a08401dba574250bb51e396e63f1ae759eb66c05f66b730
+ size 99331623
data/documents/documents-00013-of-00015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c6cf89ef12831f9240d1b4747d8014cd4961edc0a963eced9ba7c20861c71371
+ size 91335664
data/documents/documents-00014-of-00015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eade2f5a3e0b115bc1dc72a8c45e84e8dd2c03858b949bcf4728c479bcf3d5be
+ size 34729689
data/embeddings_chunk/embeddings_chunk-00000-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:86b02fb452416c4030ad2ddb88432fdc272492f89f645da7683abdf26960d4c9
+ size 288483749
data/embeddings_chunk/embeddings_chunk-00001-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:148d386cd71cd4851ebbc4a635b059c33f9807b20dd362663bfbb1b801f618c2
+ size 285222350
data/embeddings_chunk/embeddings_chunk-00002-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4c2ef04c2ec3a7a06bc723d20e2e359a517a46aa18f6f17827b4f669c6c3a7c4
+ size 289276438
data/embeddings_chunk/embeddings_chunk-00003-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4cb3c5462ba3550dd12356b621fc69c31d3057a06acdeeae675cdfb4393a8cd0
+ size 289338036
data/embeddings_chunk/embeddings_chunk-00004-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:be1803d2be01c0d42bd5cca026226eb8b6b0c8d60bcc0eb08cbab620a7714393
+ size 289401040
data/embeddings_chunk/embeddings_chunk-00005-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f8367b73b4f07ad199e51f901114e18285d332ebe8b7f177a6769aa450b80ed6
+ size 289392645
data/embeddings_chunk/embeddings_chunk-00006-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8708706541bad5ec2c4e2aa0f4f2c2ac0da13bffbd35ad65f2c2c1060a8c7c00
+ size 289367235
data/embeddings_chunk/embeddings_chunk-00007-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce160de0b2308be6c77c8564afdd20df9031e3bbe5021c0f4567780816d0e6cd
+ size 288539090
data/embeddings_chunk/embeddings_chunk-00008-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d038982c4f8a25368d4e6be209fd55bb6a18c1a751bf93f17cb223fe09627d4
+ size 287019691
data/embeddings_chunk/embeddings_chunk-00009-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:07ddf238792ec25ef0de6c02652036f2faf85f60ad7d4e59388460cb8f624061
+ size 282093097
data/embeddings_chunk/embeddings_chunk-00010-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:33daa455fce4d8f2003668f737452361775e8381511fe187b072eb751c396f66
+ size 288806736
data/embeddings_chunk/embeddings_chunk-00011-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cca3204fb9e03c2ff5a372de2ffac3e4d5c1e0bd3db3b0ff2d1dc9e48ea60b90
+ size 286994949
data/embeddings_chunk/embeddings_chunk-00012-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2561d7f17910d13d91a733b9eac39b9b8ea00f85a9bdee2c66c19cf320832d43
+ size 288108603
data/embeddings_chunk/embeddings_chunk-00013-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4da100983a1f559f614686080c1e6cdf6b83fa165ebeffc9dc5954838e800be6
+ size 289185371
data/embeddings_chunk/embeddings_chunk-00014-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:51f74efd7e6751b179d7b4122eb5869a74029f4be2480eca38e8465e1ad096bb
+ size 288657815
data/embeddings_chunk/embeddings_chunk-00015-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:437de34dd97ec77377c2200214343bc26b0d7802611805637439b82ae6a89188
+ size 289344303
data/embeddings_chunk/embeddings_chunk-00016-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6551e7141e768bd62da4eb85f04be0b7136ba4ed16ecee55f73218acd503ac9f
+ size 287325025
data/embeddings_chunk/embeddings_chunk-00017-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91e0bae0551f51fa8353c4c44db4e05ad301112e7f6eb31260452937bfedd6a5
+ size 286707491
data/embeddings_chunk/embeddings_chunk-00018-of-00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9b6917c6d46af116184d9c95c1343467b0675a7df0736c921dacb8619ba250b7
+ size 288816406