lzc03211 parquet-converter committed on
Commit 45086b8 · verified · 0 Parent(s)

Duplicate from bea2019st/wi_locness


Co-authored-by: Parquet-converter (BOT) <parquet-converter@users.noreply.huggingface.co>

Files changed (3)
  1. .gitattributes +27 -0
  2. README.md +343 -0
  3. wi_locness.py +234 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,343 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - crowdsourced
+ language:
+ - en
+ license:
+ - other
+ multilinguality:
+ - monolingual
+ - other-language-learner
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - text2text-generation
+ task_ids: []
+ paperswithcode_id: locness-corpus
+ pretty_name: Cambridge English Write & Improve + LOCNESS
+ tags:
+ - grammatical-error-correction
+ dataset_info:
+ - config_name: default
+   features:
+   - name: id
+     dtype: string
+   - name: userid
+     dtype: string
+   - name: cefr
+     dtype: string
+   - name: text
+     dtype: string
+   - name: edits
+     sequence:
+     - name: start
+       dtype: int32
+     - name: end
+       dtype: int32
+     - name: text
+       dtype: string
+   splits:
+   - name: train
+     num_bytes: 4375795
+     num_examples: 3000
+   - name: validation
+     num_bytes: 447055
+     num_examples: 300
+   download_size: 6120469
+   dataset_size: 4822850
+ - config_name: wi
+   features:
+   - name: id
+     dtype: string
+   - name: userid
+     dtype: string
+   - name: cefr
+     dtype: string
+   - name: text
+     dtype: string
+   - name: edits
+     sequence:
+     - name: start
+       dtype: int32
+     - name: end
+       dtype: int32
+     - name: text
+       dtype: string
+   splits:
+   - name: train
+     num_bytes: 4375795
+     num_examples: 3000
+   - name: validation
+     num_bytes: 447055
+     num_examples: 300
+   download_size: 6120469
+   dataset_size: 4822850
+ - config_name: locness
+   features:
+   - name: id
+     dtype: string
+   - name: cefr
+     dtype: string
+   - name: text
+     dtype: string
+   - name: edits
+     sequence:
+     - name: start
+       dtype: int32
+     - name: end
+       dtype: int32
+     - name: text
+       dtype: string
+   splits:
+   - name: validation
+     num_bytes: 138176
+     num_examples: 50
+   download_size: 6120469
+   dataset_size: 138176
+ config_names:
+ - locness
+ - wi
+ ---
+
+ # Dataset Card for Cambridge English Write & Improve + LOCNESS Dataset
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://www.cl.cam.ac.uk/research/nl/bea2019st/#data
+ - **Repository:**
+ - **Paper:** https://www.aclweb.org/anthology/W19-4406/
+ - **Leaderboard:** https://competitions.codalab.org/competitions/20228#results
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native English students with their writing. Specifically, students from around the world submit letters, stories, articles and essays in response to various prompts, and the W&I system provides instant feedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these submissions and assigned them a CEFR level.
+
+ The LOCNESS corpus (Granger, 1998) consists of essays written by native English students. It was originally compiled by researchers at the Centre for English Corpus Linguistics at the University of Louvain. Since native English students also sometimes make mistakes, we asked the W&I annotators to annotate a subsection of LOCNESS so researchers can test the effectiveness of their systems on the full range of English levels and abilities.
+
+ ### Supported Tasks and Leaderboards
+
+ Grammatical error correction (GEC) is the task of automatically correcting grammatical errors in text, e.g. [I follows his advices -> I followed his advice]. It can be used not only to help language learners improve their writing skills, but also to alert native speakers to accidental mistakes or typos.
+
+ The aim of the task is to correct all types of errors in written text, including grammatical, lexical and orthographical errors.
+
+ The following Codalab competition contains the latest leaderboard, along with information on how to submit to the withheld W&I+LOCNESS test set: https://competitions.codalab.org/competitions/20228
+
+ ### Languages
+
+ The dataset is in English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example from the `wi` configuration:
+
+ ```
+ {
+     'id': '1-140178',
+     'userid': '21251',
+     'cefr': 'A2.i',
+     'text': 'My town is a medium size city with eighty thousand inhabitants. It has a high density population because its small territory. Despite of it is an industrial city, there are many shops and department stores. I recommend visiting the artificial lake in the certer of the city which is surrounded by a park. Pasteries are very common and most of them offer the special dessert from the city. There are a comercial zone along the widest street of the city where you can find all kind of establishments: banks, bars, chemists, cinemas, pet shops, restaurants, fast food restaurants, groceries, travel agencies, supermarkets and others. Most of the shops have sales and offers at least three months of the year: January, June and August. The quality of the products and services are quite good, because there are a huge competition, however I suggest you taking care about some fakes or cheats.',
+     'edits': {
+         'start': [13, 77, 104, 126, 134, 256, 306, 375, 396, 402, 476, 484, 579, 671, 774, 804, 808, 826, 838, 850, 857, 862, 868],
+         'end': [24, 78, 104, 133, 136, 262, 315, 379, 399, 411, 480, 498, 588, 671, 777, 807, 810, 835, 845, 856, 861, 867, 873],
+         'text': ['medium-sized', '-', ' of', 'Although', '', 'center', None, 'of', 'is', 'commercial', 'kinds', 'businesses', 'grocers', ' in', 'is', 'is', '', '. However,', 'recommend', 'be', 'careful', 'of', '']
+     }
+ }
+ ```
+
+ An example from the `locness` configuration:
+
+ ```
+ {
+     'id': '7-5819177',
+     'cefr': 'N',
+     'text': 'Boxing is a common, well known and well loved sport amongst most countries in the world however it is also punishing, dangerous and disliked to the extent that many people want it banned, possibly with good reason.\nBoxing is a dangerous sport, there are relatively common deaths, tragic injuries and even disease. All professional boxers are at risk from being killed in his next fight. If not killed then more likely paralysed. There have been a number of cases in the last ten years of the top few boxers having tragic losses throughout their ranks. This is just from the elite few, and theres more from those below them.\nMore deaths would occur through boxing if it were banned. The sport would go underground, there would be no safety measures like gloves, a doctor, paramedics or early stopping of the fight if someone looked unable to continue. With this going on the people taking part will be dangerous, and on the streets. Dangerous dogs who were trained to kill and maim in similar underound dog fights have already proved deadly to innocent people, the new boxers could be even more at risk.\nOnce boxing is banned and no-one grows up knowing it as acceptable there will be no interest in boxing and hopefully less all round interest in violence making towns and cities much safer places to live in, there will be less fighting outside pubs and clubs and less violent attacks with little or no reason.\nchange the rules of boxing slightly would much improve the safety risks of the sport and not detract form the entertainment. There are all sorts of proposals, lighter and more cushioning gloves could be worn, ban punches to the head, headguards worn or make fights shorter, as most of the serious injuries occur in the latter rounds, these would all show off the boxers skill and tallent and still be entertaining to watch.\nEven if a boxer is a success and manages not to be seriously hurt he still faces serious consequences in later life diseases that attack the brains have been known to set in as a direct result of boxing, even Muhamed Ali, who was infamous(?) both for his boxing and his quick-witted intelligence now has Alzheimer disease and can no longer do many everyday acts.\nMany other sports are more dangerous than boxing, motor sports and even mountaineering has risks that are real. Boxers chose to box, just as racing drivers drive.',
+     'edits': {
+         'start': [24, 39, 52, 87, 242, 371, 400, 528, 589, 713, 869, 992, 1058, 1169, 1209, 1219, 1255, 1308, 1386, 1412, 1513, 1569, 1661, 1731, 1744, 1781, 1792, 1901, 1951, 2038, 2131, 2149, 2247, 2286],
+         'end': [25, 40, 59, 95, 249, 374, 400, 538, 595, 713, 869, 1001, 1063, 1169, 1209, 1219, 1255, 1315, 1390, 1418, 1517, 1570, 1661, 1737, 1751, 1781, 1799, 1901, 1960, 2044, 2131, 2149, 2248, 2289],
+         'text': ['-', '-', 'in', '. However,', '. There', 'their', ',', 'among', "there's", ' and', ',', 'underground', '. The', ',', ',', ',', ',', '. There', 'for', 'Changing', 'from', ';', ',', 'later', '. These', "'", 'talent', ',', '. Diseases', '. Even', ',', "'s", ';', 'have']
+     }
+ }
+ ```
+
+ ### Data Fields
+
+ The fields of the dataset are:
+ - `id`: the id of the text as a string
+ - `cefr`: the [CEFR level](https://www.cambridgeenglish.org/exams-and-tests/cefr/) of the text as a string
+ - `userid`: the id of the user as a string (`wi` configuration only)
+ - `text`: the text of the submission as a string
+ - `edits`: the edits from W&I:
+   - `start`: start indexes of each edit as a list of integers
+   - `end`: end indexes of each edit as a list of integers
+   - `text`: the text content of each edit as a list of strings
+   - `from`: the original text of each edit as a list of strings
+
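+ To make the relationship between `text` and `edits` concrete, here is a minimal sketch (not part of the dataset or its loading script) of applying the span edits to reconstruct a corrected version of a text. It assumes the convention visible in the examples above: each edit replaces the character span `[start, end)` with its `text` entry, and a `None` replacement marks a span that was flagged but left uncorrected. The `apply_edits` helper name and the toy example are ours; the toy example mirrors the GEC example in the task description.
+
+ ```python
+ def apply_edits(text, edits):
+     """Apply span edits ({"start": [...], "end": [...], "text": [...]}) to text.
+
+     Edits are applied from right to left so that earlier character offsets
+     stay valid while later spans are being replaced. A replacement of None
+     (a flagged-but-uncorrected span) leaves the original span unchanged;
+     an empty string deletes the span.
+     """
+     spans = sorted(
+         zip(edits["start"], edits["end"], edits["text"]),
+         key=lambda e: (e[0], e[1]),
+         reverse=True,
+     )
+     for start, end, replacement in spans:
+         if replacement is None:
+             continue
+         text = text[:start] + replacement + text[end:]
+     return text
+
+
+ sample_text = "I follows his advices ."
+ sample_edits = {"start": [2, 14], "end": [9, 21], "text": ["followed", "advice"]}
+ print(apply_edits(sample_text, sample_edits))  # -> I followed his advice .
+ ```
+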
+ ### Data Splits
+
+ | name    | train | validation |
+ |---------|------:|-----------:|
+ | wi      |  3000 |        300 |
+ | locness |   N/A |         50 |
+
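+ As a usage sketch, both configurations can be loaded with the `datasets` library. The dataset id below is an assumption: use the Hub id of whichever copy of this repository you are working with (the upstream copy lives at `bea2019st/wi_locness`). The split sizes in the comments match the table above.
+
+ ```python
+ from datasets import load_dataset
+
+ # "wi_locness" is a placeholder dataset id; point it at the copy you use.
+ wi = load_dataset("wi_locness", "wi")            # splits: train (3000), validation (300)
+ locness = load_dataset("wi_locness", "locness")  # split: validation (50)
+
+ print(wi["train"][0]["cefr"])      # CEFR level of the first W&I essay
+ print(len(locness["validation"]))  # 50
+ ```
+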
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ Write & Improve License:
+
+ ```
+ Cambridge English Write & Improve (CEWI) Dataset Licence Agreement
+
+ 1. By downloading this dataset and licence, this licence agreement is
+ entered into, effective this date, between you, the Licensee, and the
+ University of Cambridge, the Licensor.
+
+ 2. Copyright of the entire licensed dataset is held by the Licensor.
+ No ownership or interest in the dataset is transferred to the
+ Licensee.
+
+ 3. The Licensor hereby grants the Licensee a non-exclusive
+ non-transferable right to use the licensed dataset for
+ non-commercial research and educational purposes.
+
+ 4. Non-commercial purposes exclude without limitation any use of the
+ licensed dataset or information derived from the dataset for or as
+ part of a product or service which is sold, offered for sale,
+ licensed, leased or rented.
+
+ 5. The Licensee shall acknowledge use of the licensed dataset in all
+ publications of research based on it, in whole or in part, through
+ citation of the following publication:
+
+ Helen Yannakoudakis, Øistein E. Andersen, Ardeshir Geranpayeh,
+ Ted Briscoe and Diane Nicholls. 2018. Developing an automated writing
+ placement system for ESL learners. Applied Measurement in Education.
+
+ 6. The Licensee may publish excerpts of less than 100 words from the
+ licensed dataset pursuant to clause 3.
+
+ 7. The Licensor grants the Licensee this right to use the licensed dataset
+ "as is". Licensor does not make, and expressly disclaims, any express or
+ implied warranties, representations or endorsements of any kind
+ whatsoever.
+
+ 8. This Agreement shall be governed by and construed in accordance with
+ the laws of England and the English courts shall have exclusive
+ jurisdiction.
+ ```
+
+ LOCNESS License:
+
+ ```
+ LOCNESS Dataset Licence Agreement
+
+ 1. The corpus is to be used for non-commercial purposes only
+
+ 2. All publications on research partly or wholly based on the corpus should give credit to the Centre for English Corpus Linguistics (CECL), Université catholique de Louvain, Belgium. A scanned copy or offprint of the publication should also be sent to <sylviane.granger@uclouvain.be>.
+
+ 3. No part of the corpus is to be distributed to a third party without specific authorization from CECL. The corpus can only be used by the person agreeing to the licence terms and researchers working in close collaboration with him/her or students under his/her supervision, attached to the same institution, within the framework of the research project.
+ ```
+
+ ### Citation Information
+
+ ```
+ @inproceedings{bryant-etal-2019-bea,
+     title = "The {BEA}-2019 Shared Task on Grammatical Error Correction",
+     author = "Bryant, Christopher and
+       Felice, Mariano and
+       Andersen, {\O}istein E. and
+       Briscoe, Ted",
+     booktitle = "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
+     month = aug,
+     year = "2019",
+     address = "Florence, Italy",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/W19-4406",
+     doi = "10.18653/v1/W19-4406",
+     pages = "52--75",
+     abstract = "This paper reports on the BEA-2019 Shared Task on Grammatical Error Correction (GEC). As with the CoNLL-2014 shared task, participants are required to correct all types of errors in test data. One of the main contributions of the BEA-2019 shared task is the introduction of a new dataset, the Write{\&}Improve+LOCNESS corpus, which represents a wider range of native and learner English levels and abilities. Another contribution is the introduction of tracks, which control the amount of annotated data available to participants. Systems are evaluated in terms of ERRANT F{\_}0.5, which allows us to report a much wider range of performance statistics. The competition was hosted on Codalab and remains open for further submissions on the blind test set.",
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@aseifert](https://github.com/aseifert) for adding this dataset.
wi_locness.py ADDED
@@ -0,0 +1,234 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native
+ English students with their writing. Specifically, students from around the world submit letters,
+ stories, articles and essays in response to various prompts, and the W&I system provides instant
+ feedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these
+ submissions and assigned them a CEFR level.
+
+ The LOCNESS corpus (Granger, 1998) consists of essays written by native English students.
+ It was originally compiled by researchers at the Centre for English Corpus Linguistics at the
+ University of Louvain. Since native English students also sometimes make mistakes, we asked
+ the W&I annotators to annotate a subsection of LOCNESS so researchers can test the effectiveness
+ of their systems on the full range of English levels and abilities."""
+
+
+ import json
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{bryant-etal-2019-bea,
+     title = "The {BEA}-2019 Shared Task on Grammatical Error Correction",
+     author = "Bryant, Christopher and
+       Felice, Mariano and
+       Andersen, {\\O}istein E. and
+       Briscoe, Ted",
+     booktitle = "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
+     month = aug,
+     year = "2019",
+     address = "Florence, Italy",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/W19-4406",
+     doi = "10.18653/v1/W19-4406",
+     pages = "52--75",
+     abstract = "This paper reports on the BEA-2019 Shared Task on Grammatical Error Correction (GEC). As with the CoNLL-2014 shared task, participants are required to correct all types of errors in test data. One of the main contributions of the BEA-2019 shared task is the introduction of a new dataset, the Write{\\&}Improve+LOCNESS corpus, which represents a wider range of native and learner English levels and abilities. Another contribution is the introduction of tracks, which control the amount of annotated data available to participants. Systems are evaluated in terms of ERRANT F{\\_}0.5, which allows us to report a much wider range of performance statistics. The competition was hosted on Codalab and remains open for further submissions on the blind test set.",
+ }
+ """
+
+ _DESCRIPTION = """\
+ Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native
+ English students with their writing. Specifically, students from around the world submit letters,
+ stories, articles and essays in response to various prompts, and the W&I system provides instant
+ feedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these
+ submissions and assigned them a CEFR level.
+ """
+
+ _HOMEPAGE = "https://www.cl.cam.ac.uk/research/nl/bea2019st/#data"
+
+ _LICENSE = ""
+
+ # The HuggingFace datasets library doesn't host the datasets but only points to the original files
+ # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
+ _URL = "https://www.cl.cam.ac.uk/research/nl/bea2019st/data/wi+locness_v2.1.bea19.tar.gz"
+
+
+ class WiLocness(datasets.GeneratorBasedBuilder):
+     """\
+     Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native
+     English students with their writing. Specifically, students from around the world submit letters,
+     stories, articles and essays in response to various prompts, and the W&I system provides instant
+     feedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these
+     submissions and assigned them a CEFR level."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     # This is an example of a dataset with multiple configurations.
+     # If you don't want/need to define several sub-sets in your dataset,
+     # just remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.
+
+     # If you need to make complex sub-parts in the datasets with configurable options
+     # you can create your own builder configuration class to store attributes, inheriting from datasets.BuilderConfig
+     # BUILDER_CONFIG_CLASS = MyBuilderConfig
+
+     # You will be able to load one or the other configuration in the following list with
+     # data = datasets.load_dataset('my_dataset', 'first_domain')
+     # data = datasets.load_dataset('my_dataset', 'second_domain')
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="wi",
+             version=VERSION,
+             description="This part of the dataset includes the Write & Improve data for levels A, B and C",
+         ),
+         datasets.BuilderConfig(
+             name="locness",
+             version=VERSION,
+             description="This part of the dataset includes the Locness part of the W&I-Locness dataset",
+         ),
+     ]
+
+     # DEFAULT_CONFIG_NAME = "first_domain"  # It's not mandatory to have a default configuration. Just use one if it makes sense.
+
+     def _info(self):
+         if self.config.name == "wi":
+             features = datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "userid": datasets.Value("string"),
+                     "cefr": datasets.Value("string"),
+                     "text": datasets.Value("string"),
+                     "edits": datasets.Sequence(
+                         {
+                             "start": datasets.Value("int32"),
+                             "end": datasets.Value("int32"),
+                             "text": datasets.Value("string"),
+                         }
+                     ),
+                 }
+             )
+         elif self.config.name == "locness":
+             features = datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "cefr": datasets.Value("string"),
+                     "text": datasets.Value("string"),
+                     "edits": datasets.Sequence(
+                         {
+                             "start": datasets.Value("int32"),
+                             "end": datasets.Value("int32"),
+                             "text": datasets.Value("string"),
+                         }
+                     ),
+                 }
+             )
+         else:
+             assert False
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=features,  # Here we define them above because they are different between the two configurations
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+
+         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs.
+         # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
+         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive.
+         archive = dl_manager.download(_URL)
+         data_dir = "wi+locness/json"
+
+         if self.config.name == "wi":
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     # These kwargs will be passed to _generate_examples
+                     gen_kwargs={"data_dir": data_dir, "split": "train", "files": dl_manager.iter_archive(archive)},
+                 ),
+                 datasets.SplitGenerator(
+                     name=datasets.Split.VALIDATION,
+                     # These kwargs will be passed to _generate_examples
+                     gen_kwargs={
+                         "data_dir": data_dir,
+                         "split": "validation",
+                         "files": dl_manager.iter_archive(archive),
+                     },
+                 ),
+             ]
+         elif self.config.name == "locness":
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.VALIDATION,
+                     # These kwargs will be passed to _generate_examples
+                     gen_kwargs={
+                         "data_dir": data_dir,
+                         "split": "validation",
+                         "files": dl_manager.iter_archive(archive),
+                     },
+                 ),
+             ]
+         else:
+             assert False
+
+     def _generate_examples(self, data_dir, split, files):
+         """Yields examples."""
+
+         if split == "validation":
+             split = "dev"
+
+         if self.config.name == "wi":
+             levels = ["A", "B", "C"]
+         elif self.config.name == "locness":
+             levels = ["N"]
+         else:
+             assert False
+         filepaths = [f"{data_dir}/{level}.{split}.json" for level in levels]
+         id_ = 0
+         for path, fp in files:
+             if not filepaths:
+                 break
+             if path in filepaths:
+                 filepaths.remove(path)
+                 for line in fp:
+                     o = json.loads(line.decode("utf-8"))
+
+                     edits = []
+                     for (start, end, text) in o["edits"][0][1:][0]:
+                         edits.append({"start": start, "end": end, "text": text})
+
+                     out = {
+                         "id": o["id"],
+                         "cefr": o["cefr"],
+                         "text": o["text"],
+                         "edits": edits,
+                     }
+
+                     if self.config.name == "wi":
+                         out["userid"] = o.get("userid", "")
+
+                     yield id_, out
+                     id_ += 1