---
language:
- en
task_categories:
- token-classification
task_ids:
- named-entity-recognition
tags:
- ner
- conll2003
size_categories:
- 10K<n<100K
---

# CoNLL-2003 Named Entity Recognition Dataset

This is a self-contained version of the CoNLL-2003 dataset for Named Entity Recognition (NER).

**🔒 No `trust_remote_code` required** - This dataset uses only standard parquet files with no custom loading scripts.

## Dataset Description

The CoNLL-2003 shared task dataset consists of newswire text from the Reuters corpus tagged with four entity types: persons (PER), locations (LOC), organizations (ORG), and miscellaneous (MISC).

## Dataset Structure

### Data Instances

Each instance contains:
- `id`: Unique identifier for the example
- `tokens`: List of tokens (words)
- `pos_tags`: List of part-of-speech tags
- `chunk_tags`: List of chunk tags
- `ner_tags`: List of named entity tags

### Data Splits

- **train**: 14,041 examples
- **validation**: 3,250 examples
- **test**: 3,453 examples

### Features

- **id** (string): Unique identifier for the example
- **tokens** (list of strings): The words in the sentence
- **pos_tags** (list of ClassLabel): Part-of-speech tags
- **chunk_tags** (list of ClassLabel): Chunk tags (phrases)
- **ner_tags** (list of ClassLabel): Named entity tags with BIO scheme
  - O: Outside any named entity
  - B-PER: Beginning of a person name
  - I-PER: Inside a person name
  - B-ORG: Beginning of an organization name
  - I-ORG: Inside an organization name
  - B-LOC: Beginning of a location name
  - I-LOC: Inside a location name
  - B-MISC: Beginning of a miscellaneous entity
  - I-MISC: Inside a miscellaneous entity
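
A BIO sequence like the one above can be decoded into entity spans. The following is a minimal sketch (not part of the dataset itself) of one common decoding convention, in which an `I-` tag that does not continue the current entity is treated like `O`:

```python
def bio_to_spans(labels):
    """Convert a BIO label sequence into (type, start, end_exclusive) spans."""
    spans = []
    start, ent_type = None, None
    for i, label in enumerate(labels):
        if label.startswith("B-"):
            # A B- tag always closes any open entity and opens a new one
            if start is not None:
                spans.append((ent_type, start, i))
            start, ent_type = i, label[2:]
        elif label.startswith("I-") and ent_type == label[2:]:
            continue  # still inside the current entity
        else:
            # "O", or an I- tag that does not match the open entity's type
            if start is not None:
                spans.append((ent_type, start, i))
            start, ent_type = None, None
    if start is not None:
        spans.append((ent_type, start, len(labels)))
    return spans

labels = ["B-ORG", "O", "B-MISC", "O", "O", "O", "B-MISC", "O", "O"]
print(bio_to_spans(labels))
# [('ORG', 0, 1), ('MISC', 2, 3), ('MISC', 6, 7)]
```

The spans index into the `tokens` list, so `tokens[start:end]` recovers the entity surface form.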

## Usage

This dataset is completely self-contained and does NOT require `trust_remote_code=True`. All data is bundled in parquet files.

### Loading from Hugging Face Hub

```python
from datasets import load_dataset

# Load the dataset directly from the Hub
dataset = load_dataset("jacobmitchinson/conll2003")

# Access splits
train_data = dataset["train"]
validation_data = dataset["validation"]
test_data = dataset["test"]
```

### Loading from Local Files

```python
from datasets import load_dataset

# Load the dataset from local parquet files
dataset = load_dataset('parquet', data_files={
    'train': 'data/train.parquet',
    'validation': 'data/validation.parquet',
    'test': 'data/test.parquet'
})
```

### Example Usage

```python
# Get an example
example = train_data[0]
print("Tokens:", example["tokens"])
# Output: ['EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'lamb', '.']

print("NER tags:", example["ner_tags"])
# Output: [3, 0, 7, 0, 0, 0, 7, 0, 0]

# Convert NER tags to readable labels
ner_feature = train_data.features["ner_tags"].feature
ner_labels = [ner_feature.int2str(tag) for tag in example["ner_tags"]]
print("NER labels:", ner_labels)
# Output: ['B-ORG', 'O', 'B-MISC', 'O', 'O', 'O', 'B-MISC', 'O', 'O']
```
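
The CoNLL-2003 shared task scores systems by exact-match entity F1. The official scorer is the `conlleval` script; the sketch below is a simplified, illustrative version that assumes gold and predicted spans have already been extracted (e.g. with a BIO decoder):

```python
def span_f1(gold_spans, pred_spans):
    """Micro-averaged precision/recall/F1 over exact (type, start, end) matches."""
    gold = set(gold_spans)
    pred = set(pred_spans)
    tp = len(gold & pred)  # exact matches on both span boundaries and type
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

gold = [("ORG", 0, 1), ("MISC", 2, 3), ("MISC", 6, 7)]
pred = [("ORG", 0, 1), ("MISC", 2, 3), ("PER", 6, 7)]
# Two of three predictions are exact matches: precision = recall = f1 = 2/3
print(span_f1(gold, pred))
```

Note that a prediction with the right boundaries but the wrong type (as with `PER` above) counts as both a false positive and a false negative under this metric.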

## Citation

```bibtex
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
    title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
    author = "Tjong Kim Sang, Erik F. and De Meulder, Fien",
    booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
    year = "2003",
    pages = "142--147",
    url = "https://www.aclweb.org/anthology/W03-0419",
}
```

## License

The dataset is licensed under the same terms as the original CoNLL-2003 dataset.