Internet Archive Historical Texts (0001-1899)
TL;DR
- 711,680 cleaned, public-domain-style documents harvested from the Internet Archive via a high-throughput text-to-parquet pipeline.
- Coverage targets items that contain textual content dated between 0001 and 1899, ranked by download counts; ~715k IDs were attempted, ~4.1k were filtered during preprocessing.
- Stored in 620 Zstandard-compressed Parquet shards (`shard_00000.parquet` ... `shard_00619.parquet`) occupying ~240 GB on disk and ~622 billion characters uncompressed.
- Texts underwent aggressive OCR cleanup (disclaimer removal, page-number stripping, ASCII-ratio checks, minimum length of 100 characters) to match the fineweb/nanochat training format.
- Sample-based language detection shows the collection is overwhelmingly English (~97%), with trace amounts of French, Dutch, Slovene, and Czech.
Repository Layout
- `shard_#####.parquet` – text-only Parquet shards with a single string column `text`; row groups are sized at 1,024 documents, and many shards contain two groups (2,048 docs). A quick inspection example follows this list.
- `checkpoint_processed_ids.txt` – resume log containing 715,776 processed Archive item identifiers (kept + filtered).
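A shard's layout can be verified before launching heavier jobs. The snippet below is a minimal sketch that inspects one shard's schema and row groups with pyarrow and counts the resume-log entries, under the assumption that the log stores one identifier per line.

```python
import pyarrow.parquet as pq

# Inspect a single shard's metadata without reading the text column.
pf = pq.ParquetFile("shard_00000.parquet")
print(pf.schema_arrow)                           # expected: a single string column named "text"
print("rows:", pf.metadata.num_rows)             # typically 1,024 or 2,048
print("row groups:", pf.metadata.num_row_groups)

# Count processed Archive IDs in the resume log (assumed: one identifier per line).
with open("checkpoint_processed_ids.txt") as fh:
    processed = sum(1 for line in fh if line.strip())
print("processed IDs:", processed)               # should report 715,776
```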
Dataset Card
Data Summary
| Metric | Value |
|---|---|
| Total documents kept | 711,680 |
| Processed Archive IDs (kept + filtered) | 715,776 |
| Filtered during preprocessing | 4,096 (~0.6%) |
| Parquet shards | 620 |
| Rows per shard | 1,024–2,048 (avg 1,148) |
| On-disk size (`shard_*.parquet`) | 240 GB |
| Total characters (uncompressed) | 622,091,938,957 |
| Mean characters per doc | 874,117 |
| Std deviation | 1,625,026 |
| Min / Max characters | 100 / 67,609,272 |
| P25 / P50 / P75 | 152,401 / 483,738 / 1,076,868 |
| P90 / P95 / P99 | 1,891,420 / 2,737,610 / 6,333,235 |
Language Profile (sample of 200 docs across evenly spaced shards)
| ISO code | Language | Count | Share |
|---|---|---|---|
| en | English | 195 | 97.5% |
| fr | French | 2 | 1.0% |
| nl | Dutch | 1 | 0.5% |
| sl | Slovene | 1 | 0.5% |
| cs | Czech | 1 | 0.5% |
| unknown | Detection failure | 0 | 0% |
Detection used langdetect on the first 2k characters per sampled document. Results are indicative, not exhaustive; rarer languages may be underrepresented due to the small sample.
Data Collection and Preprocessing
- Acquisition pipeline: A bespoke high-concurrency downloader queues Archive.org identifiers, retrieves OCR’d text files, and writes batched Parquet shards while checkpointing processed IDs.
- Filters applied (sketched in the example after this list):
- Removal of common Internet Archive, Google Books, and JSTOR disclaimers.
- Page-number and bracketed page annotation stripping.
- OCR artifact smoothing (single-letter noise, em/en dash normalization, whitespace compaction).
- Printable-character filtering and ASCII ratio threshold (≥70% ASCII).
- Length filter: documents shorter than 100 characters dropped.
- Shard writing: Zstandard compression level 1, Arrow row group size 1,024. Each shard targets ~500M characters but varies with the document length distribution.
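The exact cleanup rules live inside the acquisition pipeline and are not shipped with the shards. The sketch below only illustrates the shape of the filters: the regular expressions, the dash normalization, and the helper name `clean` are simplified, hypothetical stand-ins, while the 100-character minimum and the 70% ASCII threshold come from the card above.

```python
import re

MIN_LENGTH = 100        # documents shorter than this were dropped
MIN_ASCII_RATIO = 0.70  # ASCII ratio threshold used by the pipeline

# Hypothetical, simplified patterns; the real pipeline matches many more variants.
DISCLAIMER_RE = re.compile(r"Digitized by (the )?Internet Archive.*", re.IGNORECASE)
PAGE_NUMBER_RE = re.compile(r"^\s*\[?\d{1,4}\]?\s*$", re.MULTILINE)

def clean(text: str):
    """Return cleaned text, or None if the document would be filtered out."""
    text = DISCLAIMER_RE.sub("", text)
    text = PAGE_NUMBER_RE.sub("", text)
    text = text.replace("\u2014", "-").replace("\u2013", "-")  # em/en dash normalization
    text = re.sub(r"[ \t]{2,}", " ", text)                     # whitespace compaction
    text = "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")
    if len(text) < MIN_LENGTH:
        return None
    if sum(ch.isascii() for ch in text) / len(text) < MIN_ASCII_RATIO:
        return None
    return text
```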
Known Issues and Limitations
- Residual OCR errors remain, especially in very long volumes where heuristic cleaning is limited.
- The dataset stores plain text only; metadata such as author, title, year, or download counts are not preserved in the shards.
- Some public-domain disclaimers survive when pattern variants were not recognized.
- Documents can be extremely long (max > 67M characters), leading to significant memory pressure when loaded naively; see the sketch after this list for one way to bound per-document reads.
- Language balance is skewed toward English due to query bias (download count ranking).
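One practical mitigation for the long-document issue, sketched below under the assumption that a fixed character cap is acceptable for the downstream task, is to truncate each document inside Arrow before any Python strings are materialized.

```python
import glob

import pyarrow.dataset as ds
import pyarrow.compute as pc

MAX_CHARS = 1_000_000  # illustrative cap; tune to your memory budget

# ds.dataset() does not expand glob patterns itself, so collect the shard paths first.
files = sorted(glob.glob("shard_*.parquet"))
dataset = ds.dataset(files, format="parquet")

for batch in dataset.scanner(columns=["text"], batch_size=256).to_batches():
    # Truncate in Arrow so 67M-character volumes never reach Python as full strings.
    clipped = pc.utf8_slice_codeunits(batch["text"], start=0, stop=MAX_CHARS)
    for doc in clipped.to_pylist():
        pass  # downstream processing on bounded-size strings
```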
Ethical Considerations
- All texts were sourced from the Internet Archive. Users must ensure their downstream use complies with the Archive’s Terms of Use and the legal status of individual works in their jurisdiction.
- The dataset targets historical materials; nevertheless, manual review is advised before deploying outputs in production settings.
Suggested Citation
“Internet Archive Historical Texts (0001-1899) dataset, assembled via a high-concurrency Internet Archive downloader from items sorted by download counts.”
Please also cite the Internet Archive and the original works when appropriate.
Working With the Dataset
High-throughput Reading Tips
The environment used to build this dataset offered 52 CPU cores, ~700 GB RAM, and NVMe storage rated around 10 GB/s. To exploit similar hardware when loading the data:
```python
import glob

import pyarrow.dataset as ds
import pyarrow.compute as pc

# ds.dataset() does not expand glob patterns itself, so collect the shard paths first.
files = sorted(glob.glob("shard_*.parquet"))
dataset = ds.dataset(files, format="parquet")

scanner = dataset.scanner(
    columns=["text"],
    use_threads=True,  # leverage multi-core CPU
    batch_size=4096,   # larger batches amortize I/O
)

for batch in scanner.to_batches():
    # operate on Arrow arrays without converting to Python when possible
    lengths = pc.utf8_length(batch["text"])
    # ... downstream processing ...
```
Additional practical tips:
- Enable Arrow memory mapping (`dataset = ds.dataset(files, format="parquet", filesystem=...)`) if the filesystem supports it; this avoids copying data into Python space.
- For PyTorch/NumPy pipelines, stream batches instead of materializing the entire dataset (`scanner.to_reader()`); a sketch follows this list.
- When sampling large texts, slice in Arrow before conversion (`pc.utf8_slice_codeunits`) to avoid pulling multi-megabyte strings into Python.
- Use `pyarrow.parquet.ParquetFile` to inspect shard metadata (row counts, row groups) before launching heavy jobs.
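As a minimal sketch of the streaming tip above (the iterator shape and the three-document preview are illustrative choices, not part of the pipeline), `scanner.to_reader()` yields record batches lazily so the ~240 GB collection never has to fit in memory at once:

```python
import glob

import pyarrow.dataset as ds

files = sorted(glob.glob("shard_*.parquet"))
dataset = ds.dataset(files, format="parquet")
reader = dataset.scanner(columns=["text"], batch_size=1024, use_threads=True).to_reader()

def iter_documents(reader):
    """Yield one document string at a time without materializing all shards."""
    for batch in reader:
        for doc in batch.column("text").to_pylist():
            yield doc

# Peek at a few documents, then stop; a training loop would keep consuming the iterator.
for i, doc in enumerate(iter_documents(reader)):
    if i >= 3:
        break
    print(len(doc), doc[:80].replace("\n", " "))
```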
Quick Statistics Script
Recompute the headline metrics directly from the shards:
```bash
python - <<'PY'
import glob
from math import sqrt

import pyarrow.dataset as ds
import pyarrow.compute as pc

# ds.dataset() does not expand glob patterns itself, so collect the shard paths first.
files = sorted(glob.glob("shard_*.parquet"))
dataset = ds.dataset(files, format="parquet")
scanner = dataset.scanner(columns=["text"], use_threads=True)

count = 0
char_sum = 0
char_sq_sum = 0
min_len = None
max_len = 0

for batch in scanner.to_batches():
    lengths = pc.utf8_length(batch["text"])
    count += batch.num_rows
    char_sum += pc.sum(lengths).as_py()
    # accumulate the sum of squares in float64 for the variance estimate
    sq = pc.multiply(pc.cast(lengths, "float64"), pc.cast(lengths, "float64"))
    char_sq_sum += pc.sum(sq).as_py()
    batch_min = pc.min(lengths).as_py()
    batch_max = pc.max(lengths).as_py()
    min_len = batch_min if min_len is None else min(min_len, batch_min)
    max_len = max(max_len, batch_max)

mean = char_sum / count
variance = max(0.0, (char_sq_sum / count) - mean**2)
print(f"docs={count:,} chars={char_sum:,} mean={mean:,.0f} std={sqrt(variance):,.0f} min={min_len:,} max={max_len:,}")
PY
```
Sampling Languages
If langdetect is available, you can reproduce the language profile on a lightweight subset:
```bash
python - <<'PY'
import glob
import random
from collections import Counter

import pyarrow.parquet as pq
from langdetect import detect, DetectorFactory

DetectorFactory.seed = 0

files = sorted(glob.glob("shard_*.parquet"))
# pick 10 evenly spaced shards across the collection
indices = [round(k * (len(files) - 1) / 9) for k in range(10)]

lang_counts = Counter()
for idx in indices:
    table = pq.read_table(files[idx], columns=["text"], use_threads=True)
    for row in random.sample(range(table.num_rows), 20):
        snippet = table.column(0)[row].as_py()[:2000]
        try:
            lang = detect(snippet)
        except Exception:
            lang = "unknown"
        lang_counts[lang] += 1

print(lang_counts)
PY
```
Acknowledgements
- Thanks to the Internet Archive for maintaining open access to historical texts.
- The acquisition pipeline builds on prior high-concurrency scraping work developed for large-scale language-model pretraining.