beyarkay committed on
Commit 1462867 · 1 Parent(s): 75d1d09

Update chronicling america dl script to rm .xml files

Files changed (2)
  1. README.md +59 -33
  2. src/download_chronicling_america.py +9 -0
README.md CHANGED
@@ -37,33 +37,28 @@ Download (~8GB), excluding most file types except for `.txt` (while keeping
  unknown file types, in case they're useful):

  ```
- rsync -av \
- --del \
- --exclude='*.gif' \
- --exclude='*.htm' \
- --exclude='*.html' \
- --exclude='*.iso' \
- --exclude='*.jpeg' \
- --exclude='*.jpg' \
- --exclude='*.m4a' \
- --exclude='*.m4b' \
- --exclude='*.mid' \
- --exclude='*.mp3' \
- --exclude='*.ogg' \
- --exclude='*.png' \
- --exclude='*.spx' \
- --exclude='*.spx' \
- --exclude='*.xml' \
- --exclude='*.zip' \
  ftp.ibiblio.org::gutenberg \
  data/gutenberg
  ```

  After (or during) the download, there'll be a lot of non-text files.
  Remove them using:

  ```
- find data/gutenberg/ -type f \( -iname '*.gif' -o -iname '*.jpg' -o -iname '*.jpeg' -o -iname '*.html' -o -iname '*.htm' -o -iname '*.png' -o -iname '*.mp3' -o -iname '*.rst' -o -iname '*.rtf' -o -iname '*.doc' -o -iname '*.lit' -o -iname '*.xml' -o -iname '*.iso.*' -o -iname '*.prc' \) -delete
  ```

  List all txt files and their sizes (in human readable numbers)
@@ -110,6 +105,18 @@ And also anything published after 1950

  And also anything not in English

  ## Chronicling America

  Information: https://chroniclingamerica.loc.gov/ocr/
@@ -129,6 +136,14 @@ indicating content after 1950 and delete them:
  find -E data/chronicling-america -type d -regex '.*/(1950|19[5-9][0-9]|20[0-9]{2})$' -exec rm -rf {} +
  ```

  TODO the resulting files are pretty bad. The OCR has many, many artefacts,
  and it's not obvious how to fix all of them, since the source scans/images
  apparently aren't available. Not sure how to fix these without using modern LLMs and
@@ -159,20 +174,31 @@ find data/bhl/ -type f \( -iname '*.jpg' -o -iname '*.jpeg' -o -iname '*.html' -

  ## Archive.org

- Dataset query (1800-1950): https://archive.org/search?query=date%3A%5B1800-01-01%20TO%201949-12-31%5D
-
- Advanced search: https://archive.org/advancedsearch.php
-
- Query: `mediatype:(texts) AND language:(English) AND date:[1800-01-01 TO 1949-12-31]`
-
- URL with query:
-
- ```
- https://archive.org/advancedsearch.php?q=mediatype%3A%28texts%29+AND+language%3A%28English%29+AND+date%3A%5B1800-01-01+TO+1949-12-31%5D+&fl%5B%5D=identifier&sort%5B%5D=date+asc&sort%5B%5D=&sort%5B%5D=&rows=50&page=1&callback=callback&save=yes&output=csv
- ```
-
- Theoretically should be able to download lots of stuff, but I don't know
- exactly how to do so.

  ## US Post Office

 
  unknown file types, in case they're useful):

  ```
+ rsync -av --del \
+ --include='*/' \
+ --include='*.txt' \
+ --include='*.TXT' \
+ --include='*.text' \
+ --exclude='*' \
+ --info=progress2 \
  ftp.ibiblio.org::gutenberg \
  data/gutenberg
  ```
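
A note on ordering: rsync evaluates include/exclude filter rules first-match-wins, so `--include='*/'` has to come before the catch-all `--exclude='*'` (otherwise no directory would ever be descended into). A minimal Python sketch of that matching logic, purely to illustrate (the file names are made up; this helper is not part of the repo):

```python
from fnmatch import fnmatch

# Same rules, in the same order, as the rsync flags above.
# (action, pattern) -- the first pattern that matches decides.
RULES = [
    ("include", "*/"),     # keep descending into directories
    ("include", "*.txt"),
    ("include", "*.TXT"),
    ("include", "*.text"),
    ("exclude", "*"),      # catch-all: everything else is skipped
]

def transferred(name: str, is_dir: bool = False) -> bool:
    """First-match-wins, like rsync's filter chain (basenames only)."""
    candidate = name + "/" if is_dir else name
    for action, pattern in RULES:
        if fnmatch(candidate, pattern):
            return action == "include"
    return True  # rsync transfers anything no rule matches
```

(Real rsync filters also distinguish anchored and path-containing patterns; the sketch only covers the basename case used here.)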

+ List all unique file extensions:
+
+ ```
+ find data/gutenberg/ -type f | sed -n 's/.*\.\([^.\/]\+\)$/\1/p' | sort -u
+ ```
+
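The same inventory can be sketched in Python, which additionally counts how many files carry each extension (a hypothetical helper, not in the repo; like the `sed` one-liner, only the last dot-component counts):

```python
from collections import Counter
from pathlib import Path

def extension_counts(root: str) -> Counter:
    """Tally file extensions under root (lower-cased; '' = no extension)."""
    return Counter(
        p.suffix.lower() for p in Path(root).rglob("*") if p.is_file()
    )
```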
  After (or during) the download, there'll be a lot of non-text files.
  Remove them using:

  ```
+ find data/gutenberg/ -type f \( -iname '*.m4a' -o -iname '*.m4b' -o -iname '*.gif' -o -iname '*.jpg' -o -iname '*.jpeg' -o -iname '*.html' -o -iname '*.htm' -o -iname '*.png' -o -iname '*.mp3' -o -iname '*.rst' -o -iname '*.rtf' -o -iname '*.doc' -o -iname '*.lit' -o -iname '*.xml' -o -iname '*.iso.*' -o -iname '*.prc' \) -delete
  ```

  List all txt files and their sizes (in human readable numbers)
 

  And also anything not in English

+ Hmm, a bit problematic: Project Gutenberg explicitly does not include the
+ original publication date of the items in its catalogue
+ [link](https://www.gutenberg.org/ebooks/offline_catalogs.html#the-gutindex-listings-of-ebooks):
+
+ > Project Gutenberg metadata does not include the original print source
+ > publication date(s). Because Project Gutenberg eBooks are substantially
+ > different from the source book(s), we track the Project Gutenberg publication
+ > date (“release date”), but do not include print source information in the
+ > metadata.
+
+ So we'll need to date all the items manually. Hrmm
+
  ## Chronicling America

  Information: https://chroniclingamerica.loc.gov/ocr/

  find -E data/chronicling-america -type d -regex '.*/(1950|19[5-9][0-9]|20[0-9]{2})$' -exec rm -rf {} +
  ```
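
For reference, that alternation keeps 1800–1949 and flags 1950, 1951–1999, and 2000–2099 for deletion. The `find` version anchors it to the full path (`.*/(...)$`); the Python sketch below just checks the final directory name, which is enough to sanity-check the pattern:

```python
import re

# Same alternation as the find -E command above, applied to a bare dirname.
YEAR_DIR = re.compile(r"^(1950|19[5-9][0-9]|20[0-9]{2})$")

def should_delete(dirname: str) -> bool:
    """True for year-named directories from 1950 onwards."""
    return YEAR_DIR.match(dirname) is not None
```

(The explicit `1950` alternative is already covered by `19[5-9][0-9]`, so the regex could be shortened.)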

+ We'll also want to delete all the XML files:
+
+ ```
+ find data/chronicling-america -type f -iname '*.xml' -delete
+ ```
+
+ (this will clear a few hundred GB)
+
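To see how much space the XML files are actually taking before deleting them, a quick sketch (assuming the same directory layout; nothing here is in the repo):

```python
from pathlib import Path

def xml_bytes(root: str) -> int:
    """Total size in bytes of all .xml files under root (case-insensitive)."""
    return sum(
        p.stat().st_size
        for p in Path(root).rglob("*")
        if p.is_file() and p.suffix.lower() == ".xml"
    )
```

Usage would be something like `print(xml_bytes("data/chronicling-america") / 1e9, "GB")`.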
  TODO the resulting files are pretty bad. The OCR has many, many artefacts,
  and it's not obvious how to fix all of them, since the source scans/images
  apparently aren't available. Not sure how to fix these without using modern LLMs and


  ## Archive.org

+ The script `src/download_archive_dot_org.py` downloads an _index_ of all the
+ archive files matching the query below. These indices take up about 259MB and
+ are stored in `data/archive-dot-org/indices/`; each entry contains the date,
+ the ID, and the size of the item in bytes. The ID can then be used to download
+ the actual files. To download all the text files associated with the IDs listed
+ in a file `./itemlist.txt`, you can use this command:
+
+ ```
+ wget \
+ --recursive \
+ --span-hosts \
+ --no-clobber \
+ --no-parent \
+ --no-host-directories \
+ --cut-dirs=1 \
+ --accept=txt \
+ --execute robots=off \
+ --level=1 \
+ --input-file=./itemlist.txt \
+ --base='http://archive.org/download/'
+ ```
+
+ - Dataset query (1800-1950): https://archive.org/search?query=date%3A%5B1800-01-01%20TO%201949-12-31%5D
+ - Advanced search: https://archive.org/advancedsearch.php
+ - Query: `mediatype:(texts) AND language:(English) AND date:[1800-01-01 TO 1949-12-31]`
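
For completeness, the URL that wget derives from `--base` plus each line of `itemlist.txt` can be sketched as below (a hypothetical helper; it assumes the file holds one bare archive.org identifier per line, and the identifiers in the test are made up):

```python
def download_urls(itemlist: str) -> list[str]:
    """Join each archive.org identifier onto the download endpoint."""
    base = "http://archive.org/download/"
    with open(itemlist) as f:
        return [base + line.strip() for line in f if line.strip()]
```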

  ## US Post Office

src/download_chronicling_america.py CHANGED
@@ -23,6 +23,14 @@ CATALOG_URL = "https://chroniclingamerica.loc.gov/ocr.json"
  DEST_ROOT = Path("data/chronicling-america")
  ARCHIVES_DIR = DEST_ROOT / "_archives"

  def main() -> None:
      DEST_ROOT.mkdir(parents=True, exist_ok=True)
      ARCHIVES_DIR.mkdir(parents=True, exist_ok=True)
@@ -46,6 +54,7 @@ def main() -> None:
      print(f"extracting {name}")
      with tarfile.open(archive_path, "r:bz2") as tar:
          tar.extractall(out_dir)  # simple, unsafe but short
      os.remove(archive_path)

      print("done")
 
  DEST_ROOT = Path("data/chronicling-america")
  ARCHIVES_DIR = DEST_ROOT / "_archives"

+ def remove_non_txt_files(root: Path) -> None:
+     deleted = 0
+     for p in root.rglob("*"):
+         if p.is_file() and p.suffix.lower() != ".txt":
+             p.unlink()
+             deleted += 1
+     print(f"deleted {deleted} non-txt files in {root}")
+
  def main() -> None:
      DEST_ROOT.mkdir(parents=True, exist_ok=True)
      ARCHIVES_DIR.mkdir(parents=True, exist_ok=True)

      print(f"extracting {name}")
      with tarfile.open(archive_path, "r:bz2") as tar:
          tar.extractall(out_dir)  # simple, unsafe but short
+     remove_non_txt_files(out_dir)
      os.remove(archive_path)

      print("done")