---
annotations_creators:
- no-annotation
language:
- de
language_creators:
- expert-generated
license:
- other
multilinguality:
- translation
- monolingual
pretty_name: DEplain-APA-doc
size_categories:
- <1K
source_datasets:
- original
tags:
- web-text
- plain language
- easy-to-read language
- document simplification
task_categories:
- text2text-generation
task_ids:
- text-simplification
---

# DEplain-APA-doc: A corpus for German Document Simplification

DEplain-APA-doc is a subcorpus of DEplain [(Stodden et al., 2023)](https://arxiv.org/abs/2305.18939) for document simplification.
The corpus consists of 483 (387/48/48) parallel documents from the Austrian Press Agency (APA) in German, written in plain language for readers at CEFR level B1 and at CEFR level A2. All documents are either published under an open license or the copyright holders gave us permission to share the data.

Human annotators also aligned the 483 documents sentence-wise to build a corpus for sentence simplification.
For the sentence-level version of this corpus, please see [https://huggingface.co/datasets/DEplain/DEplain-APA-sent](https://huggingface.co/datasets/DEplain/DEplain-APA-sent).
The data of the APA (Austrian Press Agency) is restricted to non-commercial research purposes. To get access to DEplain-APA, please request access via Zenodo ([https://zenodo.org/record/7674560](https://zenodo.org/record/7674560)).

# Dataset Card for DEplain-APA-doc

### Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

### Dataset Description

- **Repository:** [DEplain-APA zenodo repository](https://zenodo.org/record/7674560)
- **Paper:** Regina Stodden, Omar Momen, and Laura Kallmeyer. 2023. ["DEplain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification."](https://arxiv.org/abs/2305.18939) In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada. Association for Computational Linguistics.
- **Point of Contact:** [Regina Stodden](mailto:regina.stodden@hhu.de)

#### Dataset Summary

DEplain-APA [(Stodden et al., 2023)](https://arxiv.org/abs/2305.18939) is a dataset for the training and evaluation of sentence and document simplification in German. All texts of this dataset are provided by the Austrian Press Agency. The simple-complex sentence pairs are manually aligned.

#### Supported Tasks and Leaderboards

The dataset supports the training and evaluation of `text-simplification` systems. Success in this task is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf).

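As a rough intuition for what SARI measures, the sketch below compares a system output against both the source and a reference in terms of correctly kept, added, and deleted words. This is a simplified single-reference, unigram illustration only, not the real metric, which works on n-grams and averages precision and recall over these operations:

```python
# Simplified illustration of SARI's idea: reward words the system correctly
# keeps, adds, and deletes relative to the source and the reference.
def keep_add_delete(source, prediction, reference):
    src, pred, ref = (set(s.split()) for s in (source, prediction, reference))
    kept_ok = src & pred & ref       # kept words the reference also keeps
    added_ok = (pred - src) & ref    # new words that appear in the reference
    deleted_ok = (src - pred) - ref  # removed words the reference also removed
    return kept_ok, added_ok, deleted_ok

kept, added, deleted = keep_add_delete(
    source="etwa 25 arten sind derzeit anerkannt",
    prediction="etwa 25 arten sind derzeit bekannt",
    reference="etwa 25 arten sind bekannt",
)
# kept = {"etwa", "25", "arten", "sind"}, added = {"bekannt"}, deleted = {"anerkannt"}
```

For actual evaluation, use the SARI implementations linked above; this sketch only shows which word operations the metric rewards.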
#### Languages

The text in this dataset is in Austrian German (`de-at`).

#### Domains

All texts in this dataset are from the news domain.

## Dataset Structure

#### Data Access

The dataset is licensed with restricted access for academic purposes only. To download the dataset, please request access on [zenodo](https://zenodo.org/record/7674560).

#### Data Instances

- `document-simplification` configuration: an instance consists of an original document and one reference simplification (in plain-text format).
- `sentence-simplification` configuration: an instance consists of original sentence(s) and one manually aligned reference simplification (including one or more sentences).

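To illustrate the two configurations, a hypothetical instance of each might look as follows. The field names follow the data-fields table in this card; all values are invented placeholders, since the real texts require access via Zenodo:

```python
# Hypothetical instances; the field values are invented placeholders,
# not real corpus content.
doc_instance = {
    "original": "Ein Nachrichtentext auf Sprachniveau B1 ...",     # full original document
    "simplification": "Ein vereinfachter Text auf Niveau A2 ...",  # full simplified document
    "pair_id": "apa_0001",                                         # document pair id
}

sent_instance = {
    "original": "Ein Satz aus dem Originaldokument.",  # original sentence(s)
    "simplification": "Ein einfacher Satz.",           # aligned simplified sentence(s)
    "alignment": "1:1",                                # alignment type
}
```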
#### Data Fields

| data field | data field description |
|------------|------------------------|
| `original` | an original text from the source dataset |
| `simplification` | a simplified text from the source dataset |
| `pair_id` | document pair id |
| `complex_document_id` (on doc-level) | id of the complex document (-1) |
| `simple_document_id` (on doc-level) | id of the simple document (-0) |
| `original_id` (on sent-level) | id of the sentence(s) of the original text |
| `simplification_id` (on sent-level) | id of the sentence(s) of the simplified text |
| `domain` | text domain of the document pair |
| `corpus` | subcorpus name |
| `simple_url` | origin URL of the simplified document |
| `complex_url` | origin URL of the original document |
| `simple_level` or `language_level_simple` | required CEFR language level to understand the simplified document |
| `complex_level` or `language_level_original` | required CEFR language level to understand the original document |
| `simple_location_html` | location on hard disk where the HTML file of the simple document is stored |
| `complex_location_html` | location on hard disk where the HTML file of the original document is stored |
| `simple_location_txt` | location on hard disk where the content extracted from the HTML file of the simple document is stored |
| `complex_location_txt` | location on hard disk where the content extracted from the HTML file of the original document is stored |
| `alignment_location` | location on hard disk where the alignment is stored |
| `simple_author` | author (or copyright owner) of the simplified document |
| `complex_author` | author (or copyright owner) of the original document |
| `simple_title` | title of the simplified document |
| `complex_title` | title of the original document |
| `license` | license of the data |
| `last_access` or `access_date` | origin date of the data, or date when the HTML files were downloaded |
| `rater` | id of the rater who annotated the sentence pair |
| `alignment` | type of alignment, e.g., 1:1, 1:n, n:1 or n:m |

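The `alignment` field labels how many sentences are involved on each side of a pair. A minimal sketch of how such a label could be derived from the sentence counts (a hypothetical helper, not part of the released corpus tooling):

```python
def alignment_type(n_original: int, n_simple: int) -> str:
    """Return the alignment label (1:1, 1:n, n:1 or n:m) for a pair of
    n_original original sentences and n_simple simplified sentences."""
    if n_original == 1 and n_simple == 1:
        return "1:1"
    if n_original == 1:
        return "1:n"  # one original sentence split into several simple ones
    if n_simple == 1:
        return "n:1"  # several original sentences merged into one
    return "n:m"      # several-to-several alignment
```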
#### Data Splits

DEplain-APA is randomly split into a training, development and test set. The training set of the sentence-simplification configuration contains only texts of documents which are part of the training set of the document-simplification configuration; the same holds for the dev and test sets.
The statistics are given below.

| | Train | Dev | Test | Total |
| ----- | ------ | ------ | ---- | ----- |
| Document Pairs | 387 | 48 | 48 | 483 |
| Sentence Pairs | 10660 | 1231 | 1231 | 13122 |

Inter-Annotator-Agreement: 0.7497 (moderate)

More information on simplification operations will follow soon.

### Dataset Creation

#### Curation Rationale

DEplain-APA was created to improve the training and evaluation of German document and sentence simplification. The data comes from the same data provider as the APA-LHA data. In comparison to APA-LHA (automatically aligned), the sentence pairs of DEplain-APA are all manually aligned. Further, DEplain-APA aligns the texts at language level B1 with the texts at level A2, which results in mostly mild simplifications.

Furthermore, DEplain-APA contains parallel documents as well as parallel sentence pairs.

#### Source Data

##### Initial Data Collection and Normalization

The original news texts (in CEFR level B2) were manually simplified by professional translators, i.e., capito – CFS GmbH, and provided to us by the Austrian Press Agency.
All documents date from 2019 to 2021.
Two German native speakers manually aligned the sentence pairs using the text simplification annotation tool TS-ANNO. The data was split into sentences using a German model of spaCy.

##### Who are the source language producers?

The original news texts (in CEFR level B2) were manually simplified by professional translators, i.e., capito – CFS GmbH. No other demographic or compensation information is known.

#### Annotations

##### Annotation process

The instructions given to the annotators are available [here](https://github.com/rstodden/TS_annotation_tool/tree/master/annotation_schema).

##### Who are the annotators?

The annotators are two German native speakers, who are trained in linguistics. Both were compensated with at least the minimum wage of their country of residence.
They are not part of any target group of text simplification.

#### Personal and Sensitive Information

No sensitive data.

### Considerations for Using the Data

#### Social Impact of Dataset

Many people cannot understand certain texts due to their complexity. With automatic text simplification methods, these texts can be simplified for them. Our new training data can be beneficial for training a TS model.

#### Discussion of Biases

No bias is known.

#### Other Known Limitations

The dataset is provided for research purposes only. Please check the dataset license for additional information.

### Additional Information

#### Dataset Curators

Researchers at the Heinrich-Heine-University Düsseldorf, Germany, developed DEplain-APA. This research is part of the PhD program `Online Participation`, supported by the North Rhine-Westphalian (German) funding scheme `Forschungskolleg`.

#### Licensing Information

The dataset (DEplain-APA) is provided for research purposes only. Please request access using the following form: [https://zenodo.org/record/7674560](https://zenodo.org/record/7674560).

#### Citation Information

If you use part of this work, please cite our paper:

```
@inproceedings{stodden-etal-2023-deplain,
    title = "{DE}plain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification",
    author = "Stodden, Regina and
      Momen, Omar and
      Kallmeyer, Laura",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    note = "preprint: https://arxiv.org/abs/2305.18939",
}
```

#### Contributions

This dataset card uses material written by [Juan Diego Rodriguez](https://github.com/juand-r) and [Yacine Jernite](https://github.com/yjernite).