AWED-FiNER Collection
Fine-grained Named Entity Recognition for more than 6 billion people.
XLM-RoBERTa fine-tuned on the Chinese portion of the MultiCoNER v2 dataset for fine-grained Named Entity Recognition.
The MultiCoNER v2 tagset is fine-grained: each fine-grained entity type maps to one of a small set of coarse-grained categories.
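As a sketch of how such a fine-to-coarse mapping can be applied to BIO-tagged model output, here is a minimal example. The tag names follow the MultiCoNER v2 paper, but only an illustrative subset of the mapping is shown; the full tagset is larger.

```python
# Illustrative (partial) fine-to-coarse mapping for MultiCoNER v2.
# Tag names follow the MultiCoNER v2 paper; this subset is for illustration only.
FINE_TO_COARSE = {
    "Scientist": "Person", "Artist": "Person", "Athlete": "Person",
    "Politician": "Person", "OtherPER": "Person",
    "Facility": "Location", "HumanSettlement": "Location", "Station": "Location",
    "VisualWork": "CreativeWork", "MusicalWork": "CreativeWork",
    "WrittenWork": "CreativeWork",
    "PublicCorp": "Group", "PrivateCorp": "Group", "SportsGRP": "Group",
    "Clothing": "Product", "Vehicle": "Product", "Food": "Product",
    "Disease": "Medical", "Symptom": "Medical",
}

def coarsen(bio_tag: str) -> str:
    """Map a BIO-prefixed fine-grained tag (e.g. 'B-Scientist') to its coarse tag."""
    if bio_tag == "O":
        return "O"
    prefix, _, fine = bio_tag.partition("-")
    # Fall back to the fine tag itself if it is not in the (partial) mapping.
    return f"{prefix}-{FINE_TO_COARSE.get(fine, fine)}"

print(coarsen("B-Scientist"))  # B-Person
print(coarsen("I-Disease"))    # I-Medical
```

This lets downstream consumers evaluate or display predictions at either granularity without re-running the model.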
Evaluation results:
- Precision: 64.66
- Recall: 69.42
- F1: 66.95

Training hyperparameters:
- Epochs: 6
- Optimizer: AdamW
- Learning Rate: 5e-5
- Weight Decay: 0.01
- Batch Size: 64
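The reported F1 is the harmonic mean of precision and recall; a quick check confirms the three numbers above are mutually consistent up to rounding:

```python
# Sanity-check that the reported F1 matches 2PR/(P+R) for the reported P and R.
precision = 64.66
recall = 69.42

f1 = 2 * precision * recall / (precision + recall)
print(f1)  # ~66.96, matching the reported 66.95 up to rounding
```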
If you use this model, please cite the following papers:
@inproceedings{fetahu2023multiconer,
  title={MultiCoNER v2: a Large Multilingual dataset for Fine-grained and Noisy Named Entity Recognition},
  author={Fetahu, Besnik and Chen, Zhiyu and Kar, Sudipta and Rokhlenko, Oleg and Malmasi, Shervin},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2023},
  pages={2027--2051},
  year={2023}
}
@inproceedings{kaushik2026sampurner,
  title={SampurNER: Fine-grained Named Entity Recognition Dataset for 22 Indian Languages},
  author={Kaushik, Prachuryya and Anand, Ashish},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={40},
  year={2026}
}
Base model
FacebookAI/xlm-roberta-large