
# Azerbaijani NER Benchmark

This dataset is the evaluation benchmark used to test and compare four Azerbaijani Named Entity Recognition (NER) models trained on the LocalDoc/azerbaijani-ner-dataset.

## Entity Types

The dataset uses IOB2 annotation with four entity types, i.e. nine tags including `O`:

| Tag | Description |
|-----|-------------|
| O | Outside (non-entity token) |
| B-PERSON / I-PERSON | Person names (e.g., İlham Əliyev) |
| B-LOCATION / I-LOCATION | Geographic locations (e.g., Bakı, Azərbaycan) |
| B-ORGANISATION / I-ORGANISATION | Organizations (e.g., universitetlər, şirkətlər) |
| B-DATE / I-DATE | Date expressions (e.g., 2014-cü il, yanvar ayı) |

## Model Comparison

The following four models were evaluated on this benchmark:

| Model | Parameters | F1-Score | Hugging Face |
|-------|------------|----------|--------------|
| mBERT Azerbaijani NER | 180M | 67.70% | IsmatS/mbert-az-ner |
| XLM-RoBERTa Base Azerbaijani NER | 125M | 75.22% | IsmatS/xlm-roberta-az-ner |
| XLM-RoBERTa Large Azerbaijani NER | 355M | 75.48% | IsmatS/xlm_roberta_large_az_ner |
| Azerbaijani-Turkish BERT Base NER | 110M | 73.55% | IsmatS/azeri-turkish-bert-ner |

XLM-RoBERTa Large achieves the highest F1-score of 75.48% and is used in the production deployment at named-entity-recognition.fly.dev.

## How to Use for Evaluation

### Quick Start

```python
from datasets import load_dataset

dataset = load_dataset("IsmatS/azerbaijani-ner-benchmark", split="test")
print(dataset)
# Dataset({features: ['tokens', 'ner_tags'], num_rows: 2915})
```

### Evaluate a Model

Use the provided `evaluate_models.py` script to reproduce the benchmark results:

```shell
pip install transformers datasets seqeval
python evaluate_models.py
```

Or evaluate a single model programmatically:

```python
from transformers import pipeline
from datasets import load_dataset
from seqeval.metrics import f1_score

# Load benchmark
dataset = load_dataset("IsmatS/azerbaijani-ner-benchmark", split="test")

# Load model
ner_pipeline = pipeline(
    "token-classification",
    model="IsmatS/xlm-roberta-az-ner",
    aggregation_strategy="simple",
)

# Run evaluation
# See evaluate_models.py for the full evaluation loop
```
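One way to fill in that evaluation loop is sketched below. This is an illustration, not the actual `evaluate_models.py`: the `LABELS` order is an assumption (verify it against `dataset.features["ner_tags"].feature.names`), and the helper `entities_to_iob2`, which maps the pipeline's character-span output back to per-token IOB2 tags, is a hypothetical name introduced here.

```python
# Assumed IOB2 label order for this benchmark (an assumption; check
# dataset.features["ner_tags"].feature.names before relying on it).
LABELS = ["O", "B-PERSON", "I-PERSON", "B-LOCATION", "I-LOCATION",
          "B-ORGANISATION", "I-ORGANISATION", "B-DATE", "I-DATE"]

def entities_to_iob2(tokens, entities):
    """Map the pipeline's character-span entities back to per-token IOB2 tags."""
    # Character offsets of each token in the space-joined sentence
    offsets, pos = [], 0
    for tok in tokens:
        offsets.append((pos, pos + len(tok)))
        pos += len(tok) + 1  # +1 for the joining space
    tags = ["O"] * len(tokens)
    for ent in entities:
        first = True
        for i, (start, end) in enumerate(offsets):
            if start < ent["end"] and end > ent["start"]:  # token overlaps entity
                tags[i] = ("B-" if first else "I-") + ent["entity_group"]
                first = False
    return tags

def evaluate(dataset, ner_pipeline):
    """Return (y_true, y_pred) tag sequences suitable for seqeval metrics."""
    y_true, y_pred = [], []
    for example in dataset:
        tokens = example["tokens"]
        y_true.append([LABELS[t] for t in example["ner_tags"]])
        y_pred.append(entities_to_iob2(tokens, ner_pipeline(" ".join(tokens))))
    return y_true, y_pred
```

With the `dataset` and `ner_pipeline` from the snippet above, `f1_score(*evaluate(dataset, ner_pipeline))` should then produce a seqeval F1 comparable to the comparison table.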

## Evaluation Script

The full evaluation script (`evaluate_models.py`) in this repository:

1. Loads each of the four Azerbaijani NER models from the Hugging Face Hub
2. Runs inference on all 2,915 benchmark sentences
3. Computes precision, recall, and F1-score using seqeval
4. Prints a comparison table with all results

## Dataset Loading

```python
from datasets import load_dataset

# Load the test split (the full benchmark)
benchmark = load_dataset("IsmatS/azerbaijani-ner-benchmark", split="test")

# Inspect a sample
print(benchmark[0])
# {
#   'tokens': ['2014-cü', 'ildə', 'Azərbaycan', ...],
#   'ner_tags': [7, 8, 3, ...]
# }
```
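The integer `ner_tags` can be decoded to string labels for readability. The `LABEL_NAMES` order below is an assumption (the authoritative mapping is `benchmark.features["ner_tags"].feature.names`); under that assumption the sample above decodes as shown.

```python
# Assumed IOB2 label order (an assumption; verify against
# benchmark.features["ner_tags"].feature.names):
LABEL_NAMES = ["O", "B-PERSON", "I-PERSON", "B-LOCATION", "I-LOCATION",
               "B-ORGANISATION", "I-ORGANISATION", "B-DATE", "I-DATE"]

def decode_tags(tag_ids):
    """Map integer ner_tags to their string IOB2 labels."""
    return [LABEL_NAMES[i] for i in tag_ids]

print(decode_tags([7, 8, 3]))
# ['B-DATE', 'I-DATE', 'B-LOCATION']  (under the assumed order)
```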

## Citation

If you use this benchmark in your research, please cite the original dataset:

```bibtex
@dataset{azerbaijani_ner_benchmark,
  title     = {Azerbaijani NER Benchmark},
  author    = {Ismat Samadov},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/IsmatS/azerbaijani-ner-benchmark},
  note      = {Derived from LocalDoc/azerbaijani-ner-dataset}
}
```

## Related Resources

- LocalDoc/azerbaijani-ner-dataset – the source training dataset
- IsmatS/mbert-az-ner, IsmatS/xlm-roberta-az-ner, IsmatS/xlm_roberta_large_az_ner, IsmatS/azeri-turkish-bert-ner – the four evaluated models
- named-entity-recognition.fly.dev – production deployment of the best-scoring model

## License

MIT License
