meettilavat committed
Commit 68dfc56 · verified · 1 Parent(s): 6714c6b

Add files using upload-large-folder tool
.ipynb_checkpoints/README-checkpoint.md CHANGED
@@ -1,7 +1,25 @@
 # Internet Archive Historical Texts (0001-1899)
 
 ## TL;DR
- - 711,680 cleaned public-domain style documents harvested from the Internet Archive using `download_texts_improved.py`.
 - Coverage targets items that contain textual content dated between 0001 and 1899, ranked by download counts; ~715k IDs were attempted, ~4.1k were filtered during preprocessing.
 - Stored in 620 Zstandard-compressed Parquet shards (`shard_00000.parquet` ... `shard_00619.parquet`) occupying ~240 GB on disk and ~622 billion characters uncompressed.
 - Texts underwent aggressive OCR cleanup (disclaimer removal, page-number stripping, ASCII ratio checks, min length=100) to match the fineweb/nanochat training format.
@@ -9,6 +27,7 @@
 
 ## Repository Layout
 - `shard_#####.parquet` – text-only Parquet shards with string column `text`; row groups are sized at 1024 documents, and many shards contain two groups (2048 docs).
 
 ## Dataset Card
 
@@ -48,7 +67,7 @@
 Detection used `langdetect` on the first 2k characters per sampled document. Results are indicative, not exhaustive; rarer languages may be underrepresented due to the small sample.
 
 ### Data Collection and Preprocessing
- - **Acquisition pipeline**: `download_texts_improved.py` queues Archive.org identifiers, downloads OCR’d text files with high concurrency (default 256 threads), and writes batched Parquet shards while checkpointing processed IDs.
 - **Filters applied**:
   - Removal of common Internet Archive, Google Books, and JSTOR disclaimers.
   - Page-number and bracketed page annotation stripping.
@@ -69,7 +88,7 @@ Detection used `langdetect` on the first 2k characters per sampled document. Res
 - The dataset targets historical materials; nevertheless, manual review is advised before deploying outputs in production settings.
 
 ### Suggested Citation
- > “Internet Archive Historical Texts (0001-1899) dataset, assembled via `download_texts_improved.py` from Archive.org items sorted by download counts.”
 
 Please also cite the Internet Archive and the original works when appropriate.
 
@@ -169,5 +188,4 @@ PY
 
 ## Acknowledgements
 - Thanks to the Internet Archive for maintaining open access to historical texts.
- - The acquisition pipeline is based on the `download_texts_improved.py` script (London v0) tuned for high-concurrency environments.
-
 
+ ---
+ pretty_name: Internet Archive Historical Texts (0001-1899)
+ tags:
+ - internet-archive
+ - historical-texts
+ - ocr
+ language:
+ - en
+ - fr
+ - nl
+ - sl
+ - cs
+ task_categories:
+ - text-generation
+ size_categories:
+ - 100K<n<1M
+ ---
+
 # Internet Archive Historical Texts (0001-1899)
 
 ## TL;DR
+ - 711,680 cleaned public-domain style documents harvested from the Internet Archive via a high-throughput text-to-parquet pipeline.
 - Coverage targets items that contain textual content dated between 0001 and 1899, ranked by download counts; ~715k IDs were attempted, ~4.1k were filtered during preprocessing.
 - Stored in 620 Zstandard-compressed Parquet shards (`shard_00000.parquet` ... `shard_00619.parquet`) occupying ~240 GB on disk and ~622 billion characters uncompressed.
 - Texts underwent aggressive OCR cleanup (disclaimer removal, page-number stripping, ASCII ratio checks, min length=100) to match the fineweb/nanochat training format.
 
 
 ## Repository Layout
 - `shard_#####.parquet` – text-only Parquet shards with string column `text`; row groups are sized at 1024 documents, and many shards contain two groups (2048 docs).
+ - `checkpoint_processed_ids.txt` – resume log containing 715,776 processed Archive item identifiers (kept + filtered).
 
 ## Dataset Card
 
 Detection used `langdetect` on the first 2k characters per sampled document. Results are indicative, not exhaustive; rarer languages may be underrepresented due to the small sample.
68
 
69
  ### Data Collection and Preprocessing
70
+ - **Acquisition pipeline**: A bespoke high-concurrency downloader queues Archive.org identifiers, retrieves OCR’d text files, and writes batched Parquet shards while checkpointing processed IDs.
71
  - **Filters applied**:
72
  - Removal of common Internet Archive, Google Books, JSTOR disclaimers.
73
  - Page-number and bracketed page annotation stripping.
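The ID checkpointing mentioned in the acquisition-pipeline bullet could look roughly like this. A minimal sketch: the file name matches `checkpoint_processed_ids.txt` from the repository layout, but the append-only format, function names, and the example identifiers are assumptions.

```python
from pathlib import Path

CHECKPOINT = Path("checkpoint_processed_ids.txt")

def load_processed_ids() -> set[str]:
    """IDs already handled (kept or filtered); lets a crashed run resume."""
    if CHECKPOINT.exists():
        return set(CHECKPOINT.read_text().split())
    return set()

def mark_processed(item_id: str) -> None:
    """Append-only log: one Archive.org identifier per line."""
    with CHECKPOINT.open("a") as f:
        f.write(item_id + "\n")

processed = load_processed_ids()
for item_id in ["exampleitem001", "exampleitem002"]:  # illustrative IDs only
    if item_id in processed:
        continue  # downloaded (or filtered) in a previous run
    # ... fetch and clean the item's OCR text, append to the current shard ...
    mark_processed(item_id)
```

An append-only log like this explains why the checkpoint holds 715,776 IDs: it records every attempted item, kept and filtered alike.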
 
 - The dataset targets historical materials; nevertheless, manual review is advised before deploying outputs in production settings.
 
 ### Suggested Citation
+ > “Internet Archive Historical Texts (0001-1899) dataset, assembled via a high-concurrency Internet Archive downloader from items sorted by download counts.”
 
 Please also cite the Internet Archive and the original works when appropriate.
 
 ## Acknowledgements
 - Thanks to the Internet Archive for maintaining open access to historical texts.
+ - The acquisition pipeline builds on prior high-concurrency scraping work developed for large-scale language-model pretraining.
data/shard_00578.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4e899ef5430e617aafc80bdb9dbadf3040021f44425845e3b64afe8ea5f7c707
+ size 354437853
data/shard_00579.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:29e7ec8b3524285390a418adb4f9a9036f46896bcf24db76a4cfc668aec453ef
+ size 339766218
data/shard_00580.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e108b1ca2712ce3e5c4980f4ad9ab286158729dfc6a1bd35b28ba7563e2201c7
+ size 381064361
data/shard_00581.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5693ff913c417f429ec387c68540c7d1a2efd4327fbde667517f2260e82cea7b
+ size 389062768
data/shard_00582.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:88dda5f8dc2726a52ea5eccf7a06f64e4e26ffd5a14f4e022181715ad99b0320
+ size 339954898
data/shard_00583.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1abb35c4c0ebe6d057b5efbbd95fa2466773b3ac81985d5c93d55ca0e8ad61ba
+ size 405809774
data/shard_00584.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:09965e32f5a1177d0bdf73b99c568a34f559d80aba7ef557028caa43246576e3
+ size 342250813
data/shard_00585.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b9b654a628666c7dcd7c89f8c1671ea3eedd6a6f53f68f74a039032b08e1f51f
+ size 387711535
data/shard_00586.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17a6914300f1ecce0847ed54f54e8c802f8b671c0a121564d1b16c7694d5c2fb
+ size 272172391
data/shard_00587.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8dad4516f07bafcec4303a95be1e1eef9808b8de279653267ef9f672dcf37ca0
+ size 439427604
data/shard_00590.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:300390dd67ac5ca14daf37541dd0422aafe1529ee0f834ab40f871df3f61ea84
+ size 403883897