ltgoslo committed · verified
Commit cffb01a · 1 parent: bff998e

Downloading script

Files changed (1): README.md (+73, −0)
README.md CHANGED
@@ -264,6 +264,79 @@ wget https://data.hplt-project.org/three/sorted/eng_Latn.map
  To speed up large downloads, it can be beneficial to use multiple parallel connections, for example using the `--max-threads` option in `wget2`.
  We recommend limiting download parallelization to 16–32 threads to avoid server-side rate limiting; this should still allow download rates of around 250 gigabytes per hour.
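
For scripted bulk downloads, a capped thread pool achieves the same effect. Below is a minimal Python sketch, assuming the `eng_Latn.map` file from above has already been saved locally; the `hplt_files` output directory and the 16-worker cap are illustrative choices, not project defaults.

```python
# Hypothetical parallel downloader; stays within the recommended
# 16-32 connection limit via max_workers (an illustrative choice).
import os
from concurrent.futures import ThreadPoolExecutor

import requests

OUT_DIR = "hplt_files"  # illustrative output directory


def download(url: str) -> str:
    """Stream one shard to disk and return its local path."""
    path = os.path.join(OUT_DIR, url.rsplit("/", 1)[-1])
    with requests.get(url, stream=True, timeout=60) as r:
        r.raise_for_status()
        with open(path, "wb") as f:
            for chunk in r.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
                f.write(chunk)
    return path


os.makedirs(OUT_DIR, exist_ok=True)
with open("eng_Latn.map") as f:
    urls = [line.strip() for line in f if line.strip()]

with ThreadPoolExecutor(max_workers=16) as pool:
    for path in pool.map(download, urls):
        print("done:", path)
```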
 
+ ## Downloading with HuggingFace Datasets
+ To load a monolingual portion of the **HPLT v3.0** dataset in the _HuggingFace Datasets_ format, you can run the following code to download the map of dataset files and then load the `.jsonl.zst` files directly using `load_dataset()`.
+ The `datasets` package will then handle downloading the files.
+ If you would like to **stream** the files instead of downloading them all at once, set `streaming=True` within the `load_dataset()` call; a short streaming sketch follows the full example below.
+
+ ```python
+ from datasets import load_dataset, Features, Value, List
+ import requests
+
+ lang_code = "yor_Latn"  # your language-script code, or "multilingual" for the full multilingual portion minus English
+
+ # Fetch the map file listing the .jsonl.zst shard URLs for this language.
+ r = requests.get(f"https://data.hplt-project.org/three/sorted/{lang_code}.map")
+ r.raise_for_status()  # fail early if the map file could not be fetched
+
+ source_urls = r.text.strip().split("\n")
+
+ # Explicit schema for the document fields in the HPLT v3 JSON lines.
+ features = Features(
+     {
+         "f": Value("string"),
+         "o": Value("int64"),
+         "s": Value("int64"),
+         "rs": Value("int64"),
+         "u": Value("string"),
+         "c": Value("string"),
+         "ts": Value("timestamp[s]"),
+         "de": Value("string"),
+         "crawl_id": Value("string"),
+         "lang": List(Value("string")),
+         "prob": List(Value("float64")),
+         "text": Value("string"),
+         "xml": Value("string"),
+         "html_lang": List(Value("string")),
+         "cluster_size": Value("int64"),
+         "seg_langs": List(Value("string")),
+         "id": Value("string"),
+         "filter": Value("string"),
+         "pii": List(List(Value("int64"))),
+         "doc_scores": List(Value("float64")),
+         # Scores from the web register classifier.
+         "web-register": {
+             "MT": Value("float64"),
+             "LY": Value("float64"),
+             "SP": Value("float64"),
+             "ID": Value("float64"),
+             "NA": Value("float64"),
+             "HI": Value("float64"),
+             "IN": Value("float64"),
+             "OP": Value("float64"),
+             "IP": Value("float64"),
+             "it": Value("float64"),
+             "ne": Value("float64"),
+             "sr": Value("float64"),
+             "nb": Value("float64"),
+             "re": Value("float64"),
+             "en": Value("float64"),
+             "ra": Value("float64"),
+             "dtp": Value("float64"),
+             "fi": Value("float64"),
+             "lt": Value("float64"),
+             "rv": Value("float64"),
+             "ob": Value("float64"),
+             "rs": Value("float64"),
+             "av": Value("float64"),
+             "ds": Value("float64"),
+             "ed": Value("float64"),
+         },
+     }
+ )
+
+ ds = load_dataset("json", data_files=source_urls, features=features)
+
+ print(ds)
+ ```
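
If you set `streaming=True` as mentioned above, nothing is downloaded up front and the result is iterable rather than indexable. A minimal streaming sketch, reusing `source_urls` and `features` from the example above; restricting the quick look to the first shard and three documents is an illustrative choice:

```python
from datasets import load_dataset

# Stream documents lazily instead of downloading all shards first.
ds_stream = load_dataset(
    "json",
    data_files=source_urls[:1],  # first shard only, for a quick peek
    features=features,
    streaming=True,
)

# Streaming mode yields an iterable dataset; preview a few documents.
for i, doc in enumerate(ds_stream["train"]):
    print(doc["u"], doc["text"][:80].replace("\n", " "))
    if i == 2:
        break
```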
+
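As a usage note, the nested `web-register` scores make it straightforward to select documents by register once the dataset is loaded. A small sketch, assuming the non-streaming `ds` from the main example; the 0.5 cutoff on the `NA` score is an illustrative threshold, not an official one:

```python
# Keep only documents with a high "NA" register score.
# The 0.5 cutoff is an illustrative assumption.
narrative = ds["train"].filter(
    lambda doc: doc["web-register"]["NA"] is not None
    and doc["web-register"]["NA"] > 0.5
)
print(narrative.num_rows)
```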
  ## New in this release compared to HPLT v2
  - Reflects substantially more raw web data, primarily from the Common Crawl
  - Additional metadata, including more information from the underlying crawl