---
pretty_name: Internet Archive Historical Texts (0001-1899)
tags:
- internet-archive
- historical-texts
- ocr
language:
- en
- fr
- nl
- sl
- cs
task_categories:
- text-generation
size_categories:
- 100K<n<1M
---

# Internet Archive Historical Texts (0001-1899)

## TL;DR
- 711,680 cleaned public-domain-style documents harvested from the Internet Archive via a high-throughput text-to-Parquet pipeline.
- Coverage targets items with textual content dated between 0001 and 1899, ranked by download count; ~715k IDs were attempted and ~4.1k documents were filtered out during preprocessing.
- Stored in 620 Zstandard-compressed Parquet shards (`shard_00000.parquet` ... `shard_00619.parquet`), occupying ~240 GB on disk and totaling ~622 billion characters uncompressed.
- Texts underwent aggressive OCR cleanup (disclaimer removal, page-number stripping, ASCII-ratio checks, minimum length of 100 characters) to match the fineweb/nanochat training format.

## Repository Layout
- `shard_#####.parquet` – text-only Parquet shards with a single string column `text`; row groups are sized at 1024 documents, and many shards contain two groups (2048 docs). A loading sketch follows this list.
- `checkpoint_processed_ids.txt` – resume log containing the 715,776 processed Archive item identifiers (kept + filtered).
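
To peek at one shard, a minimal `pyarrow` sketch such as the following works; the local file path is an assumption (download a shard first, e.g. with `huggingface_hub`):

```python
# Read one shard and stream its row groups; the path is illustrative.
import pyarrow.parquet as pq

pf = pq.ParquetFile("shard_00000.parquet")
print(pf.metadata.num_rows, "docs in", pf.metadata.num_row_groups, "row groups")

# Iterate in 1024-document batches, matching the row-group size noted above.
for batch in pf.iter_batches(batch_size=1024, columns=["text"]):
    first_doc = batch.column(0).to_pylist()[0]
    print(first_doc[:200])  # first 200 characters of the first document
    break
```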

## Dataset Card

Detection used `langdetect` on the first 2k characters per sampled document. Results are indicative, not exhaustive; rarer languages may be underrepresented due to the small sample.
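
For reference, a sampling pass in this spirit might look like the sketch below; the sample size and shard choice are illustrative assumptions, not the exact audit script:

```python
# Detect languages over a small sample, using the first 2k characters
# per document as described above. Sample size is an assumption.
from collections import Counter

import pyarrow.parquet as pq
from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # make langdetect deterministic

counts = Counter()
texts = pq.read_table("shard_00000.parquet", columns=["text"]).column("text")
for text in texts.to_pylist()[:500]:
    try:
        counts[detect(text[:2000])] += 1
    except Exception:  # raised for empty or undetectable text
        counts["unknown"] += 1

print(counts.most_common())
```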

### Data Collection and Preprocessing
- **Acquisition pipeline**: A bespoke high-concurrency downloader queues Archive.org identifiers, retrieves OCR'd text files, and writes batched Parquet shards while checkpointing processed IDs (a minimal sketch follows this list).
- **Filters applied** (see the second sketch below):
  - Removal of common Internet Archive, Google Books, and JSTOR disclaimers.
  - Page-number and bracketed page-annotation stripping.
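
A compressed sketch of that downloader shape, under loudly labeled assumptions: the OCR-text URL pattern, worker count, and file names are illustrative, not the actual pipeline.

```python
# Sketch only: concurrent fetch of OCR'd text, one Parquet shard out,
# plus an append-only resume log of processed identifiers.
from concurrent.futures import ThreadPoolExecutor

import pyarrow as pa
import pyarrow.parquet as pq
import requests

def fetch_text(identifier: str) -> str | None:
    # Assumed URL pattern for an item's OCR'd text; real items vary.
    url = f"https://archive.org/download/{identifier}/{identifier}_djvu.txt"
    resp = requests.get(url, timeout=60)
    return resp.text if resp.ok else None

def run(identifiers: list[str], shard_path: str, done_log: str) -> None:
    texts, done = [], []
    with ThreadPoolExecutor(max_workers=32) as pool:
        for ident, text in zip(identifiers, pool.map(fetch_text, identifiers)):
            done.append(ident)  # checkpoint kept and filtered IDs alike
            if text:
                texts.append(text)
    pq.write_table(pa.table({"text": texts}), shard_path, compression="zstd")
    with open(done_log, "a") as f:  # resume log, one identifier per line
        f.write("\n".join(done) + "\n")
```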
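
And a correspondingly hedged sketch of the filter pass; the regex patterns and thresholds below are placeholders for the real rules, not the dataset's actual script.

```python
# Sketch of the cleaning pass: disclaimer removal, page-number stripping,
# ASCII-ratio check, minimum length. Patterns/thresholds are assumptions.
import re

DISCLAIMER_PATTERNS = [
    re.compile(r"(?im)^.*digitized by google.*$"),
    re.compile(r"(?im)^.*downloaded from .*jstor.*$"),
]
PAGE_NUMBER = re.compile(r"(?m)^\s*\[?\d{1,4}\]?\s*$")  # bare/bracketed page numbers

def clean(text: str, min_len: int = 100, min_ascii: float = 0.9) -> str | None:
    """Return cleaned text, or None if the document should be filtered out."""
    for pat in DISCLAIMER_PATTERNS:
        text = pat.sub("", text)
    text = PAGE_NUMBER.sub("", text).strip()
    if len(text) < min_len:
        return None
    ascii_ratio = sum(c.isascii() for c in text) / len(text)
    return text if ascii_ratio >= min_ascii else None
```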

- The dataset targets historical materials; nevertheless, manual review is advised before deploying outputs in production settings.

### Suggested Citation
> “Internet Archive Historical Texts (0001-1899) dataset, assembled via a high-concurrency Internet Archive downloader from items sorted by download counts.”

Please also cite the Internet Archive and the original works when appropriate.

## Acknowledgements
- Thanks to the Internet Archive for maintaining open access to historical texts.
- The acquisition pipeline builds on prior high-concurrency scraping work developed for large-scale language-model pretraining.