BramVanroy committed
Commit 4ffbd53 · verified · 1 Parent(s): 775a7a1

Update README.md

Files changed (1): README.md +246 -0

README.md CHANGED
```diff
@@ -1,4 +1,25 @@
 ---
+annotations_creators:
+- crowdsourced
+language_creators:
+- found
+language:
+- en
+license:
+- other
+multilinguality:
+- monolingual
+size_categories:
+- 10K<n<100K
+source_datasets:
+- extended|other-reuters-corpus
+task_categories:
+- token-classification
+task_ids:
+- named-entity-recognition
+- part-of-speech
+paperswithcode_id: conll-2003
+pretty_name: CoNLL-2003
 dataset_info:
   features:
   - name: id
@@ -117,4 +138,229 @@ configs:
     path: data/validation-*
   - split: test
     path: data/test-*
+train-eval-index:
+- config: conll2003
+  task: token-classification
+  task_id: entity_extraction
+  splits:
+    train_split: train
+    eval_split: test
+  col_mapping:
+    tokens: tokens
+    ner_tags: tags
+  metrics:
+  - type: seqeval
+    name: seqeval
 ---
```

**This is an exact duplicate of https://huggingface.co/datasets/eriktks/conll2003, which is no longer compatible with modern versions of `datasets` because loading data via a custom script is no longer supported. All credit goes to the original creators.** The original README follows below.
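
Since this copy ships its splits as plain data files rather than a loading script, a standard `load_dataset` call should work with recent versions of `datasets`. A minimal sketch, assuming (hypothetically) that this repository is named `BramVanroy/conll2003`:

```python
from datasets import load_dataset

# The repository id below is an assumption; substitute the id of this repo.
ds = load_dataset("BramVanroy/conll2003")

print(ds)  # DatasetDict with train, validation and test splits
```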

# Dataset Card for "conll2003"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB

### Dataset Summary

The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on
four types of named entities: persons, locations, organizations and names of miscellaneous entities that do
not belong to the previous three groups.

The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on
a separate line and there is an empty line after each sentence. The first item on each line is a word, the second
a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags
and the named entity tags have the format I-TYPE, which means that the word is inside a phrase of type TYPE. Only
if two phrases of the same type immediately follow each other does the first word of the second phrase receive the
tag B-TYPE to show that it starts a new phrase. A word with tag O is not part of a phrase. Note that this dataset
uses the IOB2 tagging scheme, whereas the original dataset uses IOB1.
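
The scheme difference matters when comparing against results reported on the original files. As a purely illustrative sketch (the helper below is not part of the dataset), IOB1 tags can be converted to IOB2 by promoting every chunk-initial `I-TYPE` to `B-TYPE`:

```python
def iob1_to_iob2(tags):
    """Convert IOB1 tags (B- appears only between adjacent same-type chunks)
    to IOB2 tags (every chunk starts with B-)."""
    converted = []
    prev_type = None  # entity type of the previous token, or None after O
    for tag in tags:
        if tag == "O":
            converted.append(tag)
            prev_type = None
            continue
        prefix, entity_type = tag.split("-", 1)
        if prefix == "I" and entity_type != prev_type:
            # Chunk-initial token: IOB1 writes I-, IOB2 wants B-.
            converted.append("B-" + entity_type)
        else:
            converted.append(tag)
        prev_type = entity_type
    return converted


print(iob1_to_iob2(["I-PER", "I-PER", "B-PER", "O", "I-LOC"]))
# -> ['B-PER', 'I-PER', 'B-PER', 'O', 'B-LOC']
```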

For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### conll2003

- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB

An example of 'train' looks as follows.

```
{
    "chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
    "id": "0",
    "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
    "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
}
```
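
The integer tags can be decoded back to label names via the dataset's `ClassLabel` features. A short sketch, again under the assumption that this repository is named `BramVanroy/conll2003`:

```python
from datasets import load_dataset

# Repository id is an assumption; substitute the id of this repo.
ds = load_dataset("BramVanroy/conll2003", split="train")

# `ner_tags` is a Sequence of ClassLabel; int2str maps indices to names.
ner_labels = ds.features["ner_tags"].feature
example = ds[0]
print(list(zip(example["tokens"], ner_labels.int2str(example["ner_tags"])))[:3])
# -> [('The', 'O'), ('European', 'B-ORG'), ('Commission', 'I-ORG')]
```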

The original data files contain `-DOCSTART-` lines, special lines that act as boundaries between two different documents; they are filtered out in this implementation.

### Data Fields

The data fields are the same among all splits.

#### conll2003

- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'"': 0, "''": 1, '#': 2, '$': 3, '(': 4, ')': 5, ',': 6, '.': 7, ':': 8, '``': 9, 'CC': 10, 'CD': 11, 'DT': 12,
 'EX': 13, 'FW': 14, 'IN': 15, 'JJ': 16, 'JJR': 17, 'JJS': 18, 'LS': 19, 'MD': 20, 'NN': 21, 'NNP': 22, 'NNPS': 23,
 'NNS': 24, 'NN|SYM': 25, 'PDT': 26, 'POS': 27, 'PRP': 28, 'PRP$': 29, 'RB': 30, 'RBR': 31, 'RBS': 32, 'RP': 33,
 'SYM': 34, 'TO': 35, 'UH': 36, 'VB': 37, 'VBD': 38, 'VBG': 39, 'VBN': 40, 'VBP': 41, 'VBZ': 42, 'WDT': 43,
 'WP': 44, 'WP$': 45, 'WRB': 46}
```

- `chunk_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'O': 0, 'B-ADJP': 1, 'I-ADJP': 2, 'B-ADVP': 3, 'I-ADVP': 4, 'B-CONJP': 5, 'I-CONJP': 6, 'B-INTJ': 7, 'I-INTJ': 8,
 'B-LST': 9, 'I-LST': 10, 'B-NP': 11, 'I-NP': 12, 'B-PP': 13, 'I-PP': 14, 'B-PRT': 15, 'I-PRT': 16, 'B-SBAR': 17,
 'I-SBAR': 18, 'B-UCP': 19, 'I-UCP': 20, 'B-VP': 21, 'I-VP': 22}
```

- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8}
```
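
For use outside `datasets`, these mappings can simply be inverted to decode integer labels. A small illustrative sketch with the NER tagset above:

```python
ner_tag2id = {'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4,
              'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8}
ner_id2tag = {index: tag for tag, index in ner_tag2id.items()}

# The first few `ner_tags` of the train example above decode as expected:
print([ner_id2tag[i] for i in [0, 3, 4, 0]])  # -> ['O', 'B-ORG', 'I-ORG', 'O']
```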

### Data Splits

| name      | train | validation | test |
|-----------|------:|-----------:|-----:|
| conll2003 | 14041 |       3250 | 3453 |
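
These counts can be verified after loading (repository id again assumed to be `BramVanroy/conll2003`):

```python
from datasets import load_dataset

ds = load_dataset("BramVanroy/conll2003")  # id is an assumption
print({split: ds[split].num_rows for split in ds})
# Expected: {'train': 14041, 'validation': 3250, 'test': 3453}
```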

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

From the [CoNLL-2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page:

> The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.

The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html):

> The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
>
> [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html)
>
> This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
>
> [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html)
>
> This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.

### Citation Information

```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
    title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
    author = "Tjong Kim Sang, Erik F. and
      De Meulder, Fien",
    booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
    year = "2003",
    url = "https://www.aclweb.org/anthology/W03-0419",
    pages = "142--147",
}
```

### Contributions

Thanks to [@jplu](https://github.com/jplu), [@vblagoje](https://github.com/vblagoje), [@lhoestq](https://github.com/lhoestq) for adding this dataset.