---
dataset_info:
  features:
  - name: tweet_id
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': B-PROFESION
          '1': I-PROFESION
          '2': O
  splits:
  - name: train
    num_bytes: 1835289
    num_examples: 2786
  - name: validation
    num_bytes: 614287
    num_examples: 999
  - name: test
    num_bytes: 623453
    num_examples: 1001
  download_size: 805537
  dataset_size: 3073029
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
# NER-PROFESION Dataset (Based on PROFNER)
This dataset is built upon the original **[PROFNER](https://zenodo.org/records/4563995)** corpus, which focuses on Named Entity Recognition (NER) of **professions in Spanish**.
To ensure consistency and simplicity, the original fine-grained entity tags have been **normalized** into the following three labels:
- `B-PROFESION`: beginning of a profession entity
- `I-PROFESION`: continuation of a profession entity
- `O`: outside any named entity
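Under this BIO scheme, contiguous entity mentions can be recovered from the tag sequence. The helper below is a minimal sketch (not part of the dataset's tooling) that decodes string tags into `(start, end)` token spans, with `end` exclusive:

```python
def bio_to_spans(tags):
    """Decode B-PROFESION/I-PROFESION/O tags into (start, end) token spans."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B-PROFESION":
            if start is not None:        # close the previous span
                spans.append((start, i))
            start = i                    # open a new span
        elif tag == "I-PROFESION":
            if start is None:            # tolerate a stray I- tag
                start = i
        else:                            # "O"
            if start is not None:
                spans.append((start, i))
                start = None
    if start is not None:                # span running to the end
        spans.append((start, len(tags)))
    return spans
```

For example, `bio_to_spans(["B-PROFESION", "I-PROFESION", "O", "B-PROFESION"])` yields the spans `[(0, 2), (3, 4)]`.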
---
## 📦 Dataset Structure
The dataset is provided as a Hugging Face `DatasetDict` with the following splits:
| Split | Description |
|--------------|-------------|
| `train` | Balanced: roughly 50% of the documents contain at least one `B-PROFESION` tag; the remaining 50% contain no profession entities. |
| `validation` | Balanced in the same way, and constructed to match the `test` split in its proportion of documents with profession entities. |
| `test` | Balanced: half of the documents contain at least one `B-PROFESION` tag, half contain none. |
---
## ⚙️ Dataset Generation
Two utility functions were used to prepare the dataset:
### `procesar_training_set_balanceado`
This function loads the original training data and performs the following steps:
1. Groups the data by document ID.
2. Splits documents into two groups: those containing at least one `B-` label (indicating a profession) and those without.
3. Selects an equal number of documents from both groups to ensure a 50/50 balance between positive and negative samples.
4. Converts the documents into CoNLL-style format: one token-label pair per line, separated by empty lines between documents.
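The steps above can be sketched as follows. This is an illustrative reimplementation, not the original function: the names `balance_by_entity_presence` and `to_conll` are hypothetical, and it assumes documents are dicts with string tags (as in the raw CoNLL files) rather than integer-encoded ones:

```python
import random

def balance_by_entity_presence(documents, seed=42):
    """Split documents by whether they contain a B- tag, then downsample
    so positives and negatives are 50/50 (steps 2-3 above)."""
    with_entity, without_entity = [], []
    for doc in documents:
        has_entity = any(tag.startswith("B-") for tag in doc["ner_tags"])
        (with_entity if has_entity else without_entity).append(doc)
    n = min(len(with_entity), len(without_entity))
    rng = random.Random(seed)
    balanced = rng.sample(with_entity, n) + rng.sample(without_entity, n)
    rng.shuffle(balanced)
    return balanced

def to_conll(documents):
    """Step 4: one token<TAB>label pair per line, blank line between docs."""
    blocks = [
        "\n".join(f"{tok}\t{tag}" for tok, tag in zip(doc["tokens"], doc["ner_tags"]))
        for doc in documents
    ]
    return "\n\n".join(blocks)
```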
### `procesar_dev_test_balanceado`
This function splits the original development set into two subsets:
1. Documents are separated into two groups:
- With at least one `B-` tag
- Without any `B-` tag
2. Each group is split equally into two halves:
- Half goes into the `validation` set
- Half goes into the `test` set
3. This guarantees that both `validation` and `test` are balanced with respect to the presence of annotated profession entities.
The selected document IDs are stored separately (`dev_ids.txt` and `test_ids.txt`) for reproducibility.
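The splitting logic can be sketched like this. Again an illustrative reimplementation with a hypothetical name, assuming string-tagged documents:

```python
import random

def split_dev_into_val_test(documents, seed=42):
    """Separate docs by presence of a B- tag, then halve each group so
    validation and test receive the same mix of positives and negatives."""
    with_entity, without_entity = [], []
    for doc in documents:
        has_entity = any(tag.startswith("B-") for tag in doc["ner_tags"])
        (with_entity if has_entity else without_entity).append(doc)
    rng = random.Random(seed)
    validation, test = [], []
    for group in (with_entity, without_entity):
        rng.shuffle(group)
        half = len(group) // 2
        validation.extend(group[:half])  # first half -> validation
        test.extend(group[half:])        # second half -> test
    return validation, test
```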
---
## 🧾 Format
Each instance is a dictionary with:
- `tokens`: list of tokenized words
- `ner_tags`: list of integer-encoded entity labels
The `ner_tags` feature uses Hugging Face's `ClassLabel`, which maps integers to string labels:
```python
label_list = ["B-PROFESION", "I-PROFESION", "O"]
```
Example:
```python
{
"tokens": ["Nuestros", "colaboradores", "y", "conductores"],
"ner_tags": [0, 1, 2, 0] # Using ClassLabel -> ["B-PROFESION", "I-PROFESION", "O", "B-PROFESION"]
}
```
---
## 🔁 Label Mappings
The dataset includes a `label_mappings.json` file with:
```json
{
"label_list": ["B-PROFESION", "I-PROFESION", "O"],
"label2id": {
"B-PROFESION": 0,
"I-PROFESION": 1,
"O": 2
},
"id2label": {
"0": "B-PROFESION",
"1": "I-PROFESION",
"2": "O"
}
}
```
These can be loaded to configure a model or to interpret predictions.
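One practical detail when loading the file: JSON object keys are always strings, so the `id2label` ids must be cast back to `int` before passing them to a model config. A minimal sketch (the JSON literal below simply mirrors the file contents shown above):

```python
import json

# Mirrors the contents of label_mappings.json shipped with the dataset.
raw = json.loads("""{
  "label_list": ["B-PROFESION", "I-PROFESION", "O"],
  "label2id": {"B-PROFESION": 0, "I-PROFESION": 1, "O": 2},
  "id2label": {"0": "B-PROFESION", "1": "I-PROFESION", "2": "O"}
}""")

# JSON keys are strings; model configs expect integer ids.
id2label = {int(k): v for k, v in raw["id2label"].items()}
label2id = raw["label2id"]

# These kwargs can then be passed to, e.g.,
# AutoModelForTokenClassification.from_pretrained(
#     model_name,
#     num_labels=len(raw["label_list"]),
#     id2label=id2label,
#     label2id=label2id,
# )
```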
---
## 📥 Loading the Dataset
```python
from datasets import load_dataset
dataset = load_dataset("luisgasco/profner_ner_master")
tokens = dataset["train"][0]["tokens"]
tags = dataset["train"].features["ner_tags"].feature.int2str(dataset["train"][0]["ner_tags"])
print(list(zip(tokens, tags)))
```
---
## 🔗 Original Data Source
This dataset is based on the **PROFNER** corpus:
- Zenodo: [LINK](https://zenodo.org/records/4563995)
- [Original task website](https://temu.bsc.es/smm4h-spanish/)
---
## ✍️ Author
This version has been processed and curated by [Luis Gasco](https://huggingface.co/luisgasco), based on the PROFNER dataset, for educational purposes.
---