winobes committed · Commit 04566be · 1 Parent(s): 48b11ee

integrate source lists

Files changed (1): README.md +3 -14
README.md CHANGED
@@ -65,22 +65,11 @@ The focus of this huggingface dataset is to organise the data for fine-grained d
  - data includes: date, document_type, document_id, target_word, and text.
 
  The dataset builder requires a `years` argument, which must be an iterable of years between 1979 and 2019 (inclusive). This can be supplied to the `load_dataset` function as a keyword argument.
- For example, to load all of the data:
+ For example, to load raw sentences from the `prop` and `bet` data sources, run:
 
  ```python
  from datasets import load_dataset
- data = load_dataset('ChangeIsKey/openRD-103', years=range(1979,2020))
+ data = load_dataset('ChangeIsKey/open-riksdag', 'sentences', years=range(1999,2000), sources=['prop', 'bet'])
  ```
 
- The data can take some time to load/extract. Using [dataset streaming](https://huggingface.co/docs/datasets/stream) may be an option. Size of data by decade can be found below:
-
- |       | bytes | sentences | tokens |
- |-------|-------|-----------|--------|
- | 1979  | 118Mb | 0.409M    | 10M    |
- | 1980s | 1.4Gb | 4.7M      | 118M   |
- | 1990s | 2.2Gb | 5.3M      | 202M   |
- | 2000s | 4.0Gb | 11.8M     | 338M   |
- | 2010s | 4.4Gb | 14.1M     | 361M   |
- | total | 13Gb  | 36.9M     | 279M   |
-
- License is CC BY 4.0 with attribution.
+ License is CC BY 4.0 with attribution.
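As the diff notes, the `years` keyword must be an iterable of years between 1979 and 2019 (inclusive). A quick sanity check before starting a long download might look like the sketch below; `check_years` is a hypothetical helper written for illustration here, not part of the dataset builder.

```python
def check_years(years):
    """Return the given years as a sorted list, rejecting anything outside 1979-2019."""
    ys = sorted(set(years))
    bad = [y for y in ys if not 1979 <= y <= 2019]
    if bad:
        raise ValueError(f"years outside the 1979-2019 range: {bad}")
    return ys

# A single year and an unsorted list are both acceptable iterables:
print(check_years(range(1999, 2000)))  # [1999]
print(check_years([2019, 1979]))       # [1979, 2019]
```

The validated list can then be passed as the `years` keyword in the `load_dataset` call shown in the diff above.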