Commit 098953a (verified, 0 parents)
Authors: Tejaswi2525 and mykolaskrynnyk

Duplicate from UNDP/sdgi-corpus

Co-authored-by: Mykola Skrynnyk <mykolaskrynnyk@users.noreply.huggingface.co>
.gitattributes ADDED
@@ -0,0 +1,55 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,286 @@
---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: embedding
    sequence: float64
  - name: labels
    sequence: int64
  - name: metadata
    struct:
    - name: country
      dtype: string
    - name: file_id
      dtype: string
    - name: language
      dtype: string
    - name: locality
      dtype: string
    - name: size
      dtype: string
    - name: type
      dtype: string
    - name: year
      dtype: int64
  splits:
  - name: train
    num_bytes: 124052504
    num_examples: 5880
  - name: test
    num_bytes: 36948683
    num_examples: 1470
  download_size: 129951175
  dataset_size: 161001187
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- en
- es
- fr
tags:
- sustainable-development-goals
- sdgs
pretty_name: SDGi Corpus
size_categories:
- 1K<n<10K
---
# Dataset Card for SDGi Corpus

<!-- Provide a quick summary of the dataset. -->

SDGi Corpus is a curated dataset for text classification by the [United Nations Sustainable Development Goals (SDGs)](https://www.un.org/sustainabledevelopment/sustainable-development-goals/).

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

SDG Integration Corpus (SDGi Corpus) is the most comprehensive multilingual collection of texts labelled by Sustainable Development Goals (SDGs) to date. Designed for multi-label multilingual classification, SDGi Corpus contains over 7,000 examples in English, French and Spanish. Leveraging years of international SDG reporting on the national and subnational levels, we hand-picked texts from Voluntary National Reviews (VNRs) and Voluntary Local Reviews (VLRs) from more than 180 countries to create an inclusive dataset that provides both focused and broad perspectives on the SDGs. The dataset comes with a predefined train/test split.

- **Curated by:** United Nations Development Programme
- **Language(s):** English, French and Spanish
- **License:** CC BY-NC-SA 4.0

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** https://github.com/UNDP-Data/dsc-sdgi-corpus (benchmarks)
- **Paper:** https://ceur-ws.org/Vol-3764/paper3.pdf
- **Demo:** TBA.

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

The dataset is designed primarily for text classification tasks – including binary, multiclass and multi-label classification – in one or more of the three supported languages. The dataset includes rich metadata with provenance information and can be used for other text mining tasks like topic modelling or quantitative text analysis with a focus on the 2030 Agenda for Sustainable Development.

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

The dataset can be directly used for training machine learning models for text classification tasks. It can also be used for topic modelling to identify the main themes that occur in the corpus or a specific subset of it. The rich metadata provided makes it possible to conduct both targeted and comparative analyses along linguistic, geographic (country and/or locality) and temporal dimensions.
### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

The dataset is not suitable for tasks that require information not included in the dataset, such as image analysis or audio processing. It cannot be used for predicting future trends or patterns in the SDGs and is not linked to SDG indicator data directly.

## Dataset Structure

The dataset consists of `7350` examples, with `5880` in the training set and `1470` in the test set. Each example includes the following fields:

- `text`: `str` – the text of the example in the original language.
- `embedding`: `list[float]` – 1536-dimensional embedding from OpenAI's `text-embedding-ada-002` model.
- `labels`: `list[int]` – one or more integer labels corresponding to SDGs. About 89% of the examples have just one label.
- `metadata`: `dict` – a dictionary containing metadata information, including:
  - `country`: `str` – ISO 3166-1 alpha-3 code.
  - `file_id`: `str` – internal ID of the original file. Used for provenance and troubleshooting only.
  - `language`: `str` – one of the three supported languages, i.e., `en` (English), `fr` (French), `es` (Spanish).
  - `locality`: `str` – name of the locality within `country` for examples from VLRs, e.g., city, province or region name.
  - `size`: `str` – the size group of the example in terms of tokens, i.e., `s` (small, approx. < 512 tokens), `m` (medium, approx. 512–2048 tokens), `l` (large, approx. > 2048 tokens).
  - `type`: `str` – one of the two document types, i.e., `vnr` (Voluntary National Review) or `vlr` (Voluntary Local Review).
  - `year`: `int` – year of the publication.

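As an illustration of such targeted and comparative analyses, the metadata fields can be used to slice the corpus with plain Python. A minimal sketch over hypothetical examples that mimic the dataset's layout (the values are made up):

```python
from collections import Counter

# Hypothetical examples mimicking the dataset's layout (values are made up).
examples = [
    {"labels": [5], "metadata": {"language": "en", "type": "vnr", "year": 2021}},
    {"labels": [5, 13], "metadata": {"language": "es", "type": "vlr", "year": 2019}},
]

# Count examples per language, e.g. for a linguistic comparison.
by_language = Counter(ex["metadata"]["language"] for ex in examples)

# Restrict a targeted analysis to VNRs published from 2020 onwards.
recent_vnrs = [
    ex for ex in examples
    if ex["metadata"]["type"] == "vnr" and ex["metadata"]["year"] >= 2020
]
```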
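Because `labels` may contain several SDGs per example, multi-label training typically requires a fixed-length target vector. A minimal sketch, assuming the labels are the SDG numbers 1–17 (the helper name is ours, not part of the dataset):

```python
NUM_SDGS = 17

def to_multi_hot(labels: list[int]) -> list[int]:
    """Convert a list of SDG numbers (1-17) into a 17-dim multi-hot vector."""
    vec = [0] * NUM_SDGS
    for sdg in labels:
        vec[sdg - 1] = 1  # SDG numbers are 1-based
    return vec
```

For example, `to_multi_hot([1, 13])` sets positions 0 and 12 of the vector.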
<aside class="note">
<b>Note:</b>
the embeddings were produced from texts after masking digits. Embedding raw `text` will not produce the same result. After applying the following replacement, you should be able to obtain similar embedding vectors:
</aside>

```python
import re

text = re.sub(r'(\b\d+[\.\,]?\d*\b)', 'NUM', text)
```

The dataset comes with a predefined train/test split. The examples for the test set were not sampled at random. Instead, they were sampled in a stratified fashion using weights proportional to the cross-entropy loss of a simple classifier fitted on the full dataset. For details on the sampling process, refer to the paper.

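The loss-weighted sampling described above can be sketched as follows. This is a toy illustration, not the authors' actual pipeline: the losses are synthetic, and weighted sampling without replacement is done with the Efraimidis–Spirakis key trick rather than whatever method the paper used.

```python
import random

random.seed(42)

# Synthetic per-example cross-entropy losses from a simple classifier
# fitted on the full dataset (the values here are made up).
losses = [random.gammavariate(2.0, 1.0) for _ in range(100)]

# Efraimidis-Spirakis: draw a key u**(1/w) per example and keep the
# largest keys to sample without replacement with probability ~ w.
keys = [(random.random() ** (1.0 / w), i) for i, w in enumerate(losses)]
test_idx = sorted(i for _, i in sorted(keys, reverse=True)[:20])
train_idx = [i for i in range(100) if i not in set(test_idx)]
```

High-loss (hard) examples are thus more likely to end up in the test split, which makes the test set deliberately challenging.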
## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

The dataset was created to facilitate automated analysis of large corpora with respect to the 2030 Agenda for Sustainable Development. The dataset comprises texts from Voluntary National Reviews (VNRs) and Voluntary Local Reviews (VLRs), which are arguably the most authoritative sources of SDG-related texts. The dataset is a collection of texts labelled by the source data producers; the curators have not labelled any data themselves.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

All examples were collected from one of the two sources:

- [Voluntary National Reviews (VNRs)](https://hlpf.un.org/vnrs)
- [Voluntary Local Reviews (VLRs)](https://sdgs.un.org/topics/voluntary-local-reviews)

Only Reviews in English, French and Spanish published between January 2016 and December 2023 were included.

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

To create SDGi Corpus, we manually analysed each document, searching for and extracting specific parts clearly linked to SDGs. Our curation process can be summarised in 4 steps as follows:

1. Manually examine a given document to identify SDG-labelled content.
2. Extract pages containing relevant content into SDG-specific folders.
3. Edit extracted pages to redact (mask) irrelevant content before and after the relevant content.
4. For content linked to multiple SDGs, fill out a metadata sheet.

For details on the curation process, refer to the paper.

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

Voluntary National Reviews (VNRs) and Voluntary Local Reviews (VLRs) are typically produced by government agencies, national statistical offices, and other relevant national and subnational institutions within each country. These entities are responsible for collecting, analysing, and reporting on the progress of their respective countries towards the SDGs. In addition, international organisations, civil society organisations, academia, and other stakeholders may also contribute to the data collection and reporting process for VNRs and VLRs.

### Annotations

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

The labels in the dataset come directly from the source documents. No label annotation has been performed to produce SDGi Corpus.

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

Not applicable.

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

Not applicable.

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

While VNR and VLR texts are unlikely to contain any sensitive Personally Identifiable Information (PII) due to their public nature and intended use, users should adhere to ethical standards and best practices when handling the dataset. Should sensitive PII be found in the dataset, you are strongly encouraged to notify the curators.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

- **Language Bias**: The dataset includes texts in three languages, with English (71.9%) examples dominating the dataset, followed by examples in Spanish (15.9%) and French (12.2%). The performance of models trained on this dataset may be biased towards these languages and may not generalise well to texts in other languages. Multilingual classifiers should ensure consistent performance across the languages of interest.

- **Geographical Bias**: The dataset includes data from various countries. However, because VNRs and VLRs are self-reported documents, some countries have produced more reports than others and are therefore overrepresented, while some others are underrepresented in the dataset. This could lead to geographical bias in the models trained on this dataset.

- **Temporal Limitations**: The dataset includes data from reports published between 2016 and 2023. Some earlier reports did not have the right structure to derive SDG labels and were not included in the dataset. As a text corpus, the dataset does not lend itself to predictive modelling to determine future trends or patterns in the SDGs.

- **Labelling Bias**: While the labels in the dataset come from the source documents directly, they may not be entirely bias-free. The biases of the authors of the source documents might be reflected in the content of the section or the labels they assigned to it.

- **Domain Bias**: VNRs and VLRs are formal public documents. Models trained on the data from these sources may not generalise well to other types of documents or contexts.

- **Sociotechnical Risks**: The use of this dataset for decision-making in policy or other areas related to the SDGs should be done with caution, considering all the potential biases and limitations of the dataset. Misinterpretation or misuse of the data could lead to unfair or ineffective decisions.

- **Corrupted texts**: A small fraction of texts in the dataset were not properly extracted from source PDFs and are corrupted. Affected examples will be removed from the dataset in the next version.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset.

Concerning the corrupted texts, users are advised to remove them early in the processing/training pipeline. To identify such examples, one can look for a large share of non-alphanumeric or special characters, as well as a high share of single-character tokens.

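The screening suggested above might look like the following sketch; the thresholds are our assumptions for illustration, not values recommended by the curators:

```python
def looks_corrupted(text: str,
                    max_special_share: float = 0.4,
                    max_single_char_share: float = 0.5) -> bool:
    """Flag texts with an unusually high share of non-alphanumeric
    characters or of single-character tokens (assumed thresholds)."""
    if not text.strip():
        return True
    chars = [c for c in text if not c.isspace()]
    special = sum(1 for c in chars if not c.isalnum()) / max(len(chars), 1)
    tokens = text.split()
    single = sum(1 for t in tokens if len(t) == 1) / max(len(tokens), 1)
    return special > max_special_share or single > max_single_char_share
```

In practice, the thresholds should be tuned by inspecting a sample of flagged texts.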
## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```
@inproceedings{skrynnyk2024sdgi,
  author = {Mykola Skrynnyk and Gedion Disassa and Andrey Krachkov and Janine DeVera},
  title = {SDGi Corpus: A Comprehensive Multilingual Dataset for Text Classification by Sustainable Development Goals},
  booktitle = {Proceedings of the 2nd Symposium on NLP for Social Good},
  year = {2024},
  editor = {Procheta Sen and Tulika Saha and Danushka Bollegala},
  volume = {3764},
  series = {CEUR Workshop Proceedings},
  pages = {32--42},
  publisher = {CEUR-WS.org},
  address = {Aachen},
  venue = {Liverpool, United Kingdom},
  issn = {1613-0073},
  url = {https://ceur-ws.org/Vol-3764/paper3.pdf},
  eventdate = {2024-04-25},
}
```

**APA:**

Skrynnyk, M., Disassa, G., Krachkov, A., & DeVera, J. (2024). SDGi Corpus: A Comprehensive Multilingual Dataset for Text Classification by Sustainable Development Goals. In P. Sen, T. Saha, & D. Bollegala (Eds.), Proceedings of the 2nd Symposium on NLP for Social Good (Vol. 3764, pp. 32–42). CEUR-WS.org. https://ceur-ws.org/Vol-3764/paper3.pdf

## Glossary

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

- **SDGs (Sustainable Development Goals)**: A collection of 17 global goals set by the United Nations General Assembly in 2015 for the year 2030. They cover social and economic development issues including poverty, hunger, health, education, climate change, gender equality, water, sanitation, energy, urbanization, environment and social justice.
- **VLR (Voluntary Local Review)**: A process undertaken by local and regional governments to evaluate their progress towards the 2030 Agenda. Note that unlike VNRs, VLRs were not originally envisioned in the 2030 Agenda but emerged as a popular means of communication about SDG localisation.
- **VNR (Voluntary National Review)**: A process undertaken by national governments to evaluate their progress towards the 2030 Agenda.

## More Information

The dataset is a product of the DFx. [Data Futures Platform (DFx)](https://data.undp.org) is an open-source, central hub for data innovation for development impact. Guided by UNDP's thematic focus areas, we use a systems approach and advanced analytics to identify actions to accelerate sustainable development around the world.

## Dataset Card Contact

For inquiries regarding data sources, technical assistance, or general information, please feel free to reach out to us at data@undp.org.
data/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:76983e01fa47e71a39c330ee86c5ab36b0510b33e04e59055cd6d6f48f6bc95b
size 28359122
data/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e3ccde110c094d6e09f60c5472a421a64ea047b41cb1275635f0a967a455e9b
size 101592053