beyarkay committed
Commit · 2abde28 · Parent(s): 9269662

Add dl archive.org script for downloading indices

Browse files:
- README.md (+32 −10)
- dl_proj_gut.sh → src/dl_proj_gut.sh (+0 −0)
- src/download_archive_dot_org.py (+91 −0)
- src/download_chronicling_america.py (+2 −5)
README.md
CHANGED

@@ -2,7 +2,6 @@
 license: mit
 task_categories:
 - text-generation
-- text2text-generation
 language:
 - en
 tags:
@@ -19,18 +18,18 @@ size_categories:
 
 # A dataset of pre-1950 English text
 
-
-capabilities and developments
+This dataset is for training LLMs on old science, in order to quiz them about
+current capabilities and developments. The goal is to explore how much
+prompting an old LLM would require in order to "invent" modern technology, with
+the hope that this will inform how to get current LLMs to truly invent
+next-generation technology.
 
-
+# Data sources to investigate
 
-- www.chroniclingamerica.loc.gov: https://chroniclingamerica.loc.gov/lccn/sn86072192/1897-08-01/ed-1/seq-1/
-- www.gutenberg.org/
-- https://www.newspapers.com (https://www.newspapers.com/paper/the-45th-division-news/29237/) (licensing unclear, requires sign-in)
 - wikipedia looks like it's got a big list of newspaper archives: https://en.wikipedia.org/wiki/Wikipedia:List_of_online_newspaper_archives
 - also see https://github.com/haykgrigo3/TimeCapsuleLLM
 
-# Data Sources
+# Data Sources in use
 
 ## Project Gutenberg
 
@@ -70,18 +69,24 @@ Information: https://chroniclingamerica.loc.gov/ocr/
 
 JSON listing of files: https://chroniclingamerica.loc.gov/ocr.json
 
-Download
+Download the full dataset, one archive at a time (total size is 2,115 GB):
 
 ```
 uv run src/download_chronicling_america.py
 ```
 
-
+Conveniently, they're all organised by date, so we can find all directories
+indicating content after 1950 and delete them:
 
 ```
 find -E data/chronicling-america -type d -regex '.*/(1950|19[5-9][0-9]|20[0-9]{2})$' -exec rm -rf {} +
 ```
 
+TODO: the resulting files are pretty noisy. The OCR has many artefacts, and not
+all of them are obvious how to fix, since the source scans/images apparently
+aren't available. Not sure how to fix these without using modern LLMs and
+potentially contaminating the dataset.
+
 ## Biodiversity Heritage Library
 
 60+ million pages of OCR content (~41 GB)
@@ -122,6 +127,23 @@ exactly how to do so.
 
 https://data.uspto.gov/apis/getting-started
 
+## Hathi Trust
+
+> HathiTrust was founded in 2008 as a not-for-profit collaborative of academic
+> and research libraries now preserving 18+ million digitized items in the
+> HathiTrust Digital Library. We offer reading access to the fullest extent
+> allowable by U.S. and international copyright law, text and data mining tools
+> for the entire corpus, and other emerging services based on the combined
+> collection.
+
+Looks like it has a lot of information, although this might all just be
+duplicated from the data available in the Internet Archive. It's also harder
+to download, and Google Books has some pretty restrictive licensing.
+
+https://babel.hathitrust.org/cgi/pt?id=mdp.39015082239875&seq=26&format=plaintext
+
+[Advanced search URL](https://babel.hathitrust.org/cgi/ls?lmt=ft&a=srchls&adv=1&c=148631352&q1=*&field1=ocr&anyall1=all&op1=AND&yop=before&pdate_end=1949&facet_lang=language008_full%3AEnglish&facet_lang=language008_full%3AEnglish%2C+Middle+%281100-1500%29&facet_lang=language008_full%3AEnglish%2C+Old+%28ca.+450-1100%29&facet_format=format%3ADictionaries&facet_format=format%3AEncyclopedias&facet_format=format%3AJournal&facet_format=format%3AManuscript&facet_format=format%3ANewspaper&facet_format=format%3ABiography&facet_format=format%3ABook)
+
 # Cleaning up
 
 List all files containing at least 1% of lines with non-English characters
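A minimal sketch of that check (the 1% threshold is from above; treating any non-ASCII character as "non-English" is a simplifying heuristic, and the `data/` root, the `*.txt` glob, and the `non_english_ratio` helper name are assumptions for illustration):

```python
from pathlib import Path

def non_english_ratio(path: Path) -> float:
    """Fraction of lines that contain at least one non-ASCII character."""
    lines = path.read_text(encoding="utf-8", errors="replace").splitlines()
    if not lines:
        return 0.0
    flagged = sum(1 for line in lines if any(ord(ch) > 127 for ch in line))
    return flagged / len(lines)

# List every text file where at least 1% of lines look non-English.
root = Path("data")
if root.exists():
    for f in sorted(root.rglob("*.txt")):
        if non_english_ratio(f) >= 0.01:
            print(f)
```

A character-level ratio (rather than line-level) would be stricter; the line-level version flags files where OCR noise is concentrated on a few lines.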
dl_proj_gut.sh → src/dl_proj_gut.sh
RENAMED

File without changes
src/download_archive_dot_org.py
ADDED

@@ -0,0 +1,91 @@
```python
# /// script
# requires-python = ">=3.11"
# dependencies = ["tqdm"]
# ///
import csv
import urllib.parse
import urllib.request
from pathlib import Path
from datetime import date, timedelta
from tqdm import tqdm

BASE_URL = "https://archive.org/advancedsearch.php"
FIELDS = ['date', 'identifier', 'item_size']
ROWS_PER_PAGE = 1000

OUT_ROOT = Path("data/archive-dot-org/indices")
BY_YEAR = OUT_ROOT / "by_year"
BY_DAY = OUT_ROOT / "by_day"
BY_YEAR.mkdir(parents=True, exist_ok=True)
BY_DAY.mkdir(parents=True, exist_ok=True)

def parse_csv(body: str) -> list[dict]:
    lines = body.strip().splitlines()
    if not lines or lines[0].startswith('<!DOCTYPE html>'):
        return []
    reader = csv.DictReader(lines)
    return list(reader)

def fetch_range(start: str, end: str) -> list[dict]:
    all_rows = []
    page = 1
    while True:
        q = f'mediatype:(texts) AND language:(English) AND date:[{start} TO {end}]'
        params = {
            'q': q,
            'fl[]': FIELDS,
            'sort[]': 'date asc',
            'rows': str(ROWS_PER_PAGE),
            'page': str(page),
            'output': 'csv',
        }
        url = BASE_URL + '?' + urllib.parse.urlencode(params, doseq=True)
        try:
            with urllib.request.urlopen(url) as r:
                body = r.read().decode('utf-8')
        except Exception as e:
            print(f"  Failed {start}–{end} page {page}: {e}")
            break
        rows = parse_csv(body)
        if not rows:
            break
        all_rows.extend(rows)
        if len(rows) < ROWS_PER_PAGE:
            break
        page += 1
    return all_rows

# 1. Pre-1900: yearly
pbar = tqdm(range(1600, 1900), desc="Year bins")
for year in pbar:
    out_file = BY_YEAR / f"{year}.csv"
    if out_file.exists():
        continue
    pbar.set_description(f"Fetching {year}")
    rows = fetch_range(f"{year}-01-01", f"{year}-12-31")
    if rows:
        pbar.set_description(f"Fetching {year} (writing {len(rows)} rows)")
        with out_file.open("w", newline='', encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            writer.writeheader()
            writer.writerows(rows)

# 2. 1900–1949: daily
start_date = date(1900, 1, 1)
end_date = date(1950, 1, 1)
cur = start_date
pbar = tqdm(total=(end_date - start_date).days, desc="Daily bins")
while cur < end_date:
    day_str = cur.isoformat()
    out_file = BY_DAY / f"{day_str}.csv"
    if not out_file.exists():
        pbar.set_description(f"Fetching {day_str}")
        rows = fetch_range(day_str, day_str)
        if rows:
            pbar.set_description(f"Fetching {day_str} (writing {len(rows)} rows)")
            with out_file.open("w", newline='', encoding="utf-8") as f:
                writer = csv.DictWriter(f, fieldnames=FIELDS)
                writer.writeheader()
                writer.writerows(rows)
    cur += timedelta(days=1)
    pbar.update(1)
```
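Once the index CSVs are on disk, a quick sanity check is to total the `item_size` column and estimate how much a full download would involve. A minimal sketch, assuming `item_size` is reported in bytes (the `total_size_bytes` helper name and the aggregation itself are illustrative, not part of the script above):

```python
import csv
from pathlib import Path

def total_size_bytes(index_dir: Path) -> int:
    """Sum the item_size column across every index CSV under index_dir."""
    total = 0
    for csv_path in index_dir.rglob("*.csv"):
        with csv_path.open(newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                size = row.get("item_size") or "0"
                if size.isdigit():  # skip blank or malformed size fields
                    total += int(size)
    return total

if __name__ == "__main__":
    root = Path("data/archive-dot-org/indices")
    if root.exists():
        print(f"{total_size_bytes(root) / 1e9:.1f} GB indexed")
```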
src/download_chronicling_america.py
CHANGED

@@ -37,10 +37,9 @@ def main() -> None:
         out_dir = DEST_ROOT / name.removesuffix(".tar.bz2")
 
         if out_dir.exists():
+            print(f"Directory exists, skipping {out_dir}")
             continue
-
-
-        if not out_dir.exists():
+        else:
             if not archive_path.exists():
                 print(f"downloading {name}")
                 urllib.request.urlretrieve(url, archive_path)
@@ -48,8 +47,6 @@ def main() -> None:
             with tarfile.open(archive_path, "r:bz2") as tar:
                 tar.extractall(out_dir)  # simple, unsafe but short
             os.remove(archive_path)
-        else:
-            print(f"Directory exists, skipping {out_dir}")
 
     print("done")
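The `extractall` call is commented "simple, unsafe but short" because tar members can carry absolute or `..` path components that escape the target directory. A hedged sketch of a safer variant (the `safe_extract` name is illustrative; on Python 3.12+ the stdlib's extraction filters, e.g. `tar.extractall(out_dir, filter="data")`, cover the same ground):

```python
import tarfile
from pathlib import Path

def safe_extract(archive_path: Path, out_dir: Path) -> None:
    """Extract a .tar.bz2, refusing any member that would land outside out_dir."""
    out_dir = out_dir.resolve()
    with tarfile.open(archive_path, "r:bz2") as tar:
        for member in tar.getmembers():
            # Resolve the destination and reject paths escaping out_dir.
            target = (out_dir / member.name).resolve()
            if not target.is_relative_to(out_dir):
                raise ValueError(f"unsafe member path: {member.name}")
        tar.extractall(out_dir)
```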