---
dataset_info:
  features:
  - name: tweet_id
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': B-PROFESION
          '1': I-PROFESION
          '2': O
  splits:
  - name: train
    num_bytes: 1835289
    num_examples: 2786
  - name: validation
    num_bytes: 614287
    num_examples: 999
  - name: test
    num_bytes: 623453
    num_examples: 1001
  download_size: 805537
  dataset_size: 3073029
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
# NER-PROFESION Dataset (Based on PROFNER)
This dataset is built upon the original PROFNER corpus, which focuses on Named Entity Recognition (NER) of professions in Spanish.
To ensure consistency and simplicity, the original fine-grained entity tags have been normalized into the following three labels:
- `B-PROFESION`: beginning of a profession entity
- `I-PROFESION`: continuation of a profession entity
- `O`: outside any named entity
## Dataset Structure
The dataset is provided as a Hugging Face `DatasetDict` with the following splits:
| Split | Description |
|---|---|
| `train` | Balanced: approximately 50% of the documents contain at least one `B-PROFESION` tag, while the other 50% contain no profession entities. |
| `validation` | Balanced with respect to the number of documents containing profession entities, matching the `test` split. |
| `test` | Also balanced: half of the documents contain at least one `B-PROFESION` tag, half contain none. |
## Dataset Generation
Two utility functions were used to prepare the dataset:
### `procesar_training_set_balanceado`
This function loads the original training data and performs the following steps:
- Groups the data by document ID.
- Splits documents into two groups: those containing at least one `B-` label (indicating a profession) and those without.
- Selects an equal number of documents from both groups to ensure a 50/50 balance between positive and negative samples.
- Converts the documents into CoNLL-style format: one token-label pair per line, with empty lines separating documents.
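The original script is not included in this card, but the steps above can be sketched as follows (the function signature, `tweet_id` grouping key, and random selection are assumptions based on the description and the dataset schema):

```python
import random
from collections import defaultdict

def procesar_training_set_balanceado(rows, b_label_id=0, seed=42):
    """Hypothetical re-implementation of the balancing steps described above.

    `rows` is a list of dicts with "tweet_id", "tokens" and "ner_tags" keys.
    Returns CoNLL-style text with a 50/50 mix of documents with and without
    at least one B-PROFESION tag.
    """
    # Group rows by document ID.
    docs = defaultdict(list)
    for row in rows:
        docs[row["tweet_id"]].append(row)

    # Split into documents with at least one B- label and those without.
    positives, negatives = [], []
    for doc_id, doc_rows in docs.items():
        has_b = any(b_label_id in r["ner_tags"] for r in doc_rows)
        (positives if has_b else negatives).append(doc_id)

    # Keep an equal number from each group for a 50/50 balance.
    rng = random.Random(seed)
    n = min(len(positives), len(negatives))
    selected = rng.sample(positives, n) + rng.sample(negatives, n)

    # Emit CoNLL-style text: one token-label pair per line,
    # with an empty line separating documents.
    lines = []
    for doc_id in selected:
        for r in docs[doc_id]:
            for token, tag in zip(r["tokens"], r["ner_tags"]):
                lines.append(f"{token}\t{tag}")
        lines.append("")  # document separator
    return "\n".join(lines)
```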
### `procesar_dev_test_balanceado`
This function splits the original development set into two subsets:
- Documents are separated into two groups:
  - with at least one `B-` tag
  - without any `B-` tag
- Each group is split equally into two halves:
  - half goes into the `validation` set
  - half goes into the `test` set
- This guarantees that both `validation` and `test` are balanced with respect to the presence of annotated profession entities.
The selected document IDs are stored separately (`dev_ids.txt` and `test_ids.txt`) for reproducibility.
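A minimal sketch of this splitting logic, under the same assumptions as above (the exact signature and shuffling strategy are hypothetical):

```python
import random
from collections import defaultdict

def procesar_dev_test_balanceado(rows, b_label_id=0, seed=42):
    """Hypothetical sketch: split the original dev set into balanced
    validation/test halves, as described above."""
    # Group rows by document ID.
    docs = defaultdict(list)
    for row in rows:
        docs[row["tweet_id"]].append(row)

    # Separate document IDs by presence of a B- tag.
    positives, negatives = [], []
    for doc_id, doc_rows in docs.items():
        has_b = any(b_label_id in r["ner_tags"] for r in doc_rows)
        (positives if has_b else negatives).append(doc_id)

    # Each group contributes half of its documents to each split,
    # so both splits stay balanced.
    rng = random.Random(seed)
    dev_ids, test_ids = [], []
    for group in (positives, negatives):
        rng.shuffle(group)
        half = len(group) // 2
        dev_ids.extend(group[:half])
        test_ids.extend(group[half:])
    return dev_ids, test_ids
```

The returned ID lists correspond to what the card says is saved to `dev_ids.txt` and `test_ids.txt`.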
## Format
Each instance is a dictionary with:
- `tokens`: list of tokenized words
- `ner_tags`: list of integer-encoded entity labels
The `ner_tags` feature uses Hugging Face's `ClassLabel`, which maps integers to string labels:

```python
label_list = ["B-PROFESION", "I-PROFESION", "O"]
```
Example:

```python
{
    "tokens": ["Nuestros", "colaboradores", "y", "conductores"],
    "ner_tags": [0, 1, 2, 0]  # Using ClassLabel -> ["B-PROFESION", "I-PROFESION", "O", "B-PROFESION"]
}
```
## Label Mappings
The dataset includes a `label_mappings.json` file with:

```json
{
    "label_list": ["B-PROFESION", "I-PROFESION", "O"],
    "label2id": {
        "B-PROFESION": 0,
        "I-PROFESION": 1,
        "O": 2
    },
    "id2label": {
        "0": "B-PROFESION",
        "1": "I-PROFESION",
        "2": "O"
    }
}
```
These can be loaded to configure a model or to interpret predictions.
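For example, the mappings can be read like this (a minimal sketch; note that JSON object keys are always strings, so `id2label` keys need converting back to integers before passing them to a model config):

```python
import json

def load_label_mappings(path="label_mappings.json"):
    """Load the dataset's label mapping file."""
    with open(path) as f:
        mappings = json.load(f)
    # JSON keys are strings; convert id2label keys back to ints.
    id2label = {int(k): v for k, v in mappings["id2label"].items()}
    return mappings["label_list"], mappings["label2id"], id2label

# These could then configure a token-classification model, e.g.:
# from transformers import AutoModelForTokenClassification
# label_list, label2id, id2label = load_label_mappings()
# model = AutoModelForTokenClassification.from_pretrained(
#     "bert-base-multilingual-cased",  # any encoder checkpoint
#     num_labels=len(label_list),
#     id2label=id2label,
#     label2id=label2id,
# )
```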
## Loading the Dataset
```python
from datasets import load_dataset

dataset = load_dataset("luisgasco/profner_ner_master")

tokens = dataset["train"][0]["tokens"]
tags = dataset["train"].features["ner_tags"].feature.int2str(dataset["train"][0]["ner_tags"])
print(list(zip(tokens, tags)))
```
## Original Data Source
This dataset is based on the PROFNER corpus:
- Zenodo: LINK
- Original task website
## Author
This version has been processed and curated by Luis Gasco, based on the PROFNER dataset, for educational purposes.