Title: AWED-FiNER: Agents, Web applications, and Expert Detectors for Fine-grained Named Entity Recognition across 36 Languages for 6.6 Billion Speakers

URL Source: https://arxiv.org/html/2601.10161

Prachuryya Kaushik and Ashish Anand

Department of Computer Science and Engineering

Indian Institute of Technology Guwahati

Guwahati, Assam, India

{k.prachuryya, anand.ashish}@iitg.ac.in

###### Abstract

We introduce AWED-FiNER, an open-source ecosystem designed to bridge the gap in Fine-grained Named Entity Recognition (FgNER) for 36 global languages spoken by more than 6.6 billion people. While Large Language Models (LLMs) dominate general Natural Language Processing (NLP) tasks, they often struggle with low-resource languages and fine-grained NLP tasks. AWED-FiNER provides a collection of agentic toolkits, web applications, and state-of-the-art expert models that together deliver FgNER solutions across 36 languages. The agentic tools route multilingual text to specialized expert models and return FgNER annotations within seconds. The web-based platforms provide a ready-to-use FgNER annotation service for non-technical users. Moreover, the collection of extremely small, language-specific, open-source state-of-the-art expert models facilitates offline deployment in resource-constrained scenarios, including edge devices. AWED-FiNER covers languages spoken by over 6.6 billion people, with a specific focus on vulnerable languages such as Bodo, Manipuri, Bishnupriya, and Mizo. The resources can be accessed here: [Agentic Tool](https://github.com/PrachuryyaKaushik/AWED-FiNER), [Web Application](https://hf.co/spaces/prachuryyaIITG/AWED-FiNER), and [49 Expert Detector Models](https://hf.co/collections/prachuryyaIITG/awed-finer).

1 Introduction
--------------

Digital equity in Natural Language Processing (NLP) requires robust support for the "long tail" of global languages. Current state-of-the-art models often prioritize high-resource languages, leaving millions of speakers behind. AWED-FiNER addresses this for the NLP task of Fine-grained Named Entity Recognition (FgNER) by providing a unified interface for 36 languages, ranging from the most spoken languages, such as English, Chinese, Spanish, and Hindi, to vulnerable languages UNESCO ([2017](https://arxiv.org/html/2601.10161v1#bib.bib6 "UNESCO atlas of the world’s languages in danger")) such as Bishnupriya, Bodo, Manipuri, and Mizo.

```python
from smolagents import CodeAgent, HfApiModel
from tool import AWEDFiNERTool

ner_tool = AWEDFiNERTool()

agent = CodeAgent(tools=[ner_tool], model=HfApiModel())

agent.run("Recognize the named entities in this Mizo sentence: 'Pu Lal Thanhawla chu Aizawl-ah a cheng thin.'")
```

Listing 1: Implementation of the AWED-FiNER Agentic Tool using smolagents.

AWED-FiNER is a collection of **a**gentic tools, **w**eb apps, and state-of-the-art **e**xpert **d**etector models for **fi**ne-grained **n**amed **e**ntity **r**ecognition. The expert detector models integrate fine-tuned models based on the SampurNER Kaushik and Anand ([2026](https://arxiv.org/html/2601.10161v1#bib.bib1 "SampurNER: fine-grained named entity recognition dataset for 22 indian languages")), CLASSER Kaushik and Anand ([2025](https://arxiv.org/html/2601.10161v1#bib.bib2 "CLASSER: cross-lingual annotation projection enhancement through script similarity for fine-grained named entity recognition")), MultiCoNER2 Fetahu et al. ([2023](https://arxiv.org/html/2601.10161v1#bib.bib3 "MultiCoNER v2: a large multilingual dataset for fine-grained and noisy named entity recognition")), FewNERD Ding et al. ([2021](https://arxiv.org/html/2601.10161v1#bib.bib4 "Few-nerd: a few-shot named entity recognition dataset")), TAFSIL Kaushik et al. ([2025](https://arxiv.org/html/2601.10161v1#bib.bib5 "TAFSIL: taxonomy adaptable fine-grained entity recognition through distant supervision for indian languages")), FiNE-MiBBiC, FiNERVINER, and APTFiNER datasets into the agentic tool AWED-FiNER, allowing seamless "one-line" integration into modern AI workflows. To the best of our knowledge, this is the first comprehensive contribution covering an agentic tool, an interactive web app, and a collection of expert models across 36 languages, serving 6.6 billion speakers, for the fine-grained named entity recognition task.

Table 1: Performance (Macro-F1) of the best expert models across 22 languages under the MultiCoNER2 taxonomy, using the MultiCoNER2 benchmark Fetahu et al. ([2023](https://arxiv.org/html/2601.10161v1#bib.bib3 "MultiCoNER v2: a large multilingual dataset for fine-grained and noisy named entity recognition")) and additional datasets including CLASSER Kaushik and Anand ([2025](https://arxiv.org/html/2601.10161v1#bib.bib2 "CLASSER: cross-lingual annotation projection enhancement through script similarity for fine-grained named entity recognition")), FiNERVINER, and APTFiNER to extend language coverage.

2 Related Works
---------------

Early influential work in Named Entity Recognition (NER) tools includes the FreeLing suite (Carreras et al., 2003) and German-specific models utilizing Wikipedia-based clusters (Chrupała and Klakow, 2010), both of which established robust baselines for multilingual and language-specific tasks. Some of the most used NLP tools, such as NLTK Loper and Bird ([2002](https://arxiv.org/html/2601.10161v1#bib.bib21 "NLTK: the natural language toolkit")), CoreNLP Manning et al. ([2014](https://arxiv.org/html/2601.10161v1#bib.bib17 "The stanford corenlp natural language processing toolkit")), spaCy Choi et al. ([2015](https://arxiv.org/html/2601.10161v1#bib.bib20 "It depends: dependency parser comparison using a web-based evaluation tool")), Flair Akbik et al. ([2019](https://arxiv.org/html/2601.10161v1#bib.bib18 "FLAIR: an easy-to-use framework for state-of-the-art nlp")), and Stanza Qi et al. ([2020](https://arxiv.org/html/2601.10161v1#bib.bib19 "Stanza: a python natural language processing toolkit for many human languages")), include NER pipelines for specific high-resource languages. A significant subset of this research prioritizes fine-grained and hierarchical entity taxonomies to overcome the limitations of traditional coarse-grained categories. For instance, NameTag 1 Straková et al. ([2013](https://arxiv.org/html/2601.10161v1#bib.bib14 "A new state-of-the-art czech named entity recognizer")) introduces a complex hierarchy for Czech entities, while PolDeepNer Marcinczuk et al. ([2018](https://arxiv.org/html/2601.10161v1#bib.bib15 "Recognition of named entities for polish-comparison of deep learning and conditional random fields approaches")) utilizes nested annotations to distinguish granular geographic and personal subtypes. Similarly, the Finnish Tagtools Ruokolainen et al. ([2020](https://arxiv.org/html/2601.10161v1#bib.bib13 "A finnish news corpus for named entity recognition")) extend traditional classifications into detailed categories, reflecting a broader trend toward more expressive and precise information extraction models. Although NameTag 3 Straková and Straka ([2025](https://arxiv.org/html/2601.10161v1#bib.bib16 "NameTag 3: a tool and a service for multilingual/multitagset NER")) extended coarse-grained and nested entity recognition to Arabic, Chinese, and 15 European languages, there is still a lack of fine-grained NER tools spanning various languages, especially low-resource ones.

3 AWED-FiNER
------------

AWED-FiNER is composed of distinct layers designed for high throughput and low-latency agentic calling, as described in the code shown in Listing [1](https://arxiv.org/html/2601.10161v1#LST1 "Listing 1 ‣ 1 Introduction ‣ AWED-FiNER: Agents, Web applications, and Expert Detectors for Fine-grained Named Entity Recognition across 36 Languages for 6.6 Billion Speakers").

### 3.1 Agentic Toolkit

The toolkit, hosted on [GitHub](https://github.com/PrachuryyaKaushik/AWED-FiNER), implements the smolagents paradigm. The Tool class provides the calling agent with a description of the 36 available languages and fine-grained taxonomies.

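How a smolagents-style tool might advertise its language inventory can be sketched as follows; the class, field names, and language table below are illustrative stand-ins, not the actual `AWEDFiNERTool` implementation:

```python
# Illustrative sketch of a tool that advertises its supported languages and
# taxonomies to a calling agent. Names are hypothetical, not the real API.

SUPPORTED = {
    "Mizo": "fewnerd",        # vulnerable language, FewNERD taxonomy
    "Hindi": "multiconer2",   # high-resource language, MultiCoNER2 taxonomy
    "English": "fewnerd",
}

class FgNERTool:
    """Toy stand-in for a smolagents-style Tool subclass."""
    name = "fgner"
    # The calling agent reads this description to decide when to use the tool.
    description = (
        "Fine-grained NER for the following languages (language: taxonomy): "
        + ", ".join(f"{lang}: {tax}" for lang, tax in SUPPORTED.items())
    )
    inputs = {"text": {"type": "string"}, "language": {"type": "string"}}
    output_type = "string"

    def forward(self, text: str, language: str) -> str:
        if language not in SUPPORTED:
            raise ValueError(f"unsupported language: {language}")
        # A real tool would route to the remote expert model here.
        return f"[{SUPPORTED[language]}] annotations for: {text}"

tool = FgNERTool()
print(tool.description)
```

The key design point is that the language/taxonomy inventory lives in the tool's metadata, so the agent can reason about coverage before issuing a call.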
### 3.2 Hugging Face Router API

A centralized backend hosted on Hugging Face Spaces handles the routing logic. It dynamically loads the required specialized model via a serverless inference architecture, minimizing memory overhead on the client side.

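The routing behaviour described here can be pictured as a lazy dispatch table from language to expert checkpoint, so that only requested experts occupy memory; the model ids and stub inference below are hypothetical, not the Space's actual code:

```python
from functools import lru_cache

# Hypothetical language -> expert checkpoint table (ids are placeholders).
EXPERTS = {
    "Hindi": "prachuryyaIITG/hindi-expert",
    "Mizo": "prachuryyaIITG/mizo-expert",
}

@lru_cache(maxsize=4)  # keep only a few experts resident, mimicking lazy loading
def load_expert(checkpoint: str):
    # A real backend would build a token-classification pipeline here.
    # We return a stub callable so the sketch stays self-contained.
    return lambda text: [{"entity": "PER-Politician", "word": text.split()[0]}]

def route(text: str, language: str):
    """Dispatch text to the expert model registered for its language."""
    checkpoint = EXPERTS.get(language)
    if checkpoint is None:
        raise KeyError(f"no expert model registered for {language}")
    return load_expert(checkpoint)(text)

print(route("Pu Lal Thanhawla chu Aizawl-ah a cheng thin.", "Mizo"))
```

The `lru_cache` bound mirrors the serverless constraint: evicting cold experts keeps the backend's memory footprint independent of the total number of supported languages.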
Table 2: Performance (Macro-F1) of the best expert models across 27 languages under the FewNERD taxonomy Ding et al. ([2021](https://arxiv.org/html/2601.10161v1#bib.bib4 "Few-nerd: a few-shot named entity recognition dataset")), using the FewNERD benchmark, and additional datasets including SampurNER Kaushik and Anand ([2026](https://arxiv.org/html/2601.10161v1#bib.bib1 "SampurNER: fine-grained named entity recognition dataset for 22 indian languages")) and FiNE-MiBBiC to extend language coverage.

### 3.3 Interactive Web Applications

The [AWED-FiNER](https://hf.co/spaces/prachuryyaIITG/AWED-FiNER) and [SampurNER](https://hf.co/spaces/prachuryyaIITG/SampurNER-Demo) web applications provide an interactive, user-friendly platform built with the Gradio framework to demonstrate fine-grained NER capabilities across 36 languages. The interface allows users to input text in various languages, including vulnerable languages such as Bodo and Mizo, and visualize entity extractions in real time. By providing a unified portal with the expert models, the space serves as a practical deployment benchmark for evaluating multilingual performance on complex entity taxonomies, allowing for direct comparison across different linguistic contexts.

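The visualization step such an interface performs can be sketched in plain Python: given character spans and fine-grained labels, wrap each span in markup for highlighting. The span format and labels here are assumed for illustration, not taken from the app's code:

```python
def highlight(text, spans):
    """Wrap each (start, end, label) span in bracket markup.

    Spans are applied right to left so earlier character offsets stay valid.
    """
    out = text
    for start, end, label in sorted(spans, reverse=True):
        out = out[:start] + f"[{out[start:end]}]({label})" + out[end:]
    return out

marked = highlight(
    "Pu Lal Thanhawla chu Aizawl-ah a cheng thin.",
    [(0, 16, "PER-Politician"), (21, 27, "LOC-City")],
)
print(marked)
```

A Gradio app would typically hand such spans to a highlighted-text component instead of building markup by hand, but the offset arithmetic is the same.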
### 3.4 Multilingual Expert Model Collection

Our contribution includes a centralized [repository](https://hf.co/collections/prachuryyaIITG/awed-finer) of 49 fine-tuned expert models specialized for fine-grained NER, targeting a global audience of 6.6 billion speakers. These models leverage state-of-the-art multilingual backbones, including IndicBERTv2 Doddapaneni et al. ([2023](https://arxiv.org/html/2601.10161v1#bib.bib7 "Towards leaving no indic language behind: building monolingual corpora, benchmark and models for indic languages")), MuRIL Khanuja et al. ([2021](https://arxiv.org/html/2601.10161v1#bib.bib9 "Muril: multilingual representations for indian languages")), and XLM-RoBERTa Conneau et al. ([2020](https://arxiv.org/html/2601.10161v1#bib.bib10 "Unsupervised cross-lingual representation learning at scale")), and were trained on diverse datasets such as SampurNER Kaushik and Anand ([2026](https://arxiv.org/html/2601.10161v1#bib.bib1 "SampurNER: fine-grained named entity recognition dataset for 22 indian languages")), CLASSER Kaushik and Anand ([2025](https://arxiv.org/html/2601.10161v1#bib.bib2 "CLASSER: cross-lingual annotation projection enhancement through script similarity for fine-grained named entity recognition")), MultiCoNER2 Fetahu et al. ([2023](https://arxiv.org/html/2601.10161v1#bib.bib3 "MultiCoNER v2: a large multilingual dataset for fine-grained and noisy named entity recognition")), FewNERD Ding et al. ([2021](https://arxiv.org/html/2601.10161v1#bib.bib4 "Few-nerd: a few-shot named entity recognition dataset")), FiNERVINER, APTFiNER, and FiNE-MiBBiC. This collection addresses the disparity in linguistic diversity by providing resources for languages ranging from the most spoken global languages to extremely underrepresented, vulnerable languages.

4 Experimental Setup
--------------------

The state-of-the-art approach for sequence labeling tasks involves fine-tuning pre-trained language models (PLMs) with the NER datasets (Venkataramana et al., [2022](https://arxiv.org/html/2601.10161v1#bib.bib28 "HiNER: a large hindi named entity recognition dataset"); Litake et al., [2022](https://arxiv.org/html/2601.10161v1#bib.bib29 "L3cube-mahaner: a marathi named entity recognition dataset and bert models"); Malmasi et al., [2022](https://arxiv.org/html/2601.10161v1#bib.bib26 "MultiCoNER: a large-scale multilingual dataset for complex named entity recognition"); Mhaske et al., [2023](https://arxiv.org/html/2601.10161v1#bib.bib25 "Naamapadam: a large-scale named entity annotated data for Indic languages"); Fetahu et al., [2023](https://arxiv.org/html/2601.10161v1#bib.bib3 "MultiCoNER v2: a large multilingual dataset for fine-grained and noisy named entity recognition"); Tulajiang et al., [2025](https://arxiv.org/html/2601.10161v1#bib.bib22 "A bilingual legal ner dataset and semantics-aware cross-lingual label transfer method for low-resource languages"); del Moral-González et al., [2025](https://arxiv.org/html/2601.10161v1#bib.bib23 "Comparative analysis of generative llms for labeling entities in clinical notes")). Similarly, we have fine-tuned mBERT (bert-base-multilingual-cased) (Devlin et al., [2019](https://arxiv.org/html/2601.10161v1#bib.bib8 "BERT: pre-training of deep bidirectional transformers for language understanding")), IndicBERTv2 (IndicBERTv2-MLM-Sam-TLM) (Doddapaneni et al., [2023](https://arxiv.org/html/2601.10161v1#bib.bib7 "Towards leaving no indic language behind: building monolingual corpora, benchmark and models for indic languages")), MuRIL (muril-large-cased) (Khanuja et al., [2021](https://arxiv.org/html/2601.10161v1#bib.bib9 "Muril: multilingual representations for indian languages")), and XLM-RoBERTa (XLM-RoBERTa-large) (Conneau et al., [2020](https://arxiv.org/html/2601.10161v1#bib.bib10 "Unsupervised cross-lingual representation learning at scale")) for fine-grained NER using the Hugging Face Transformers library (Wolf et al., [2020](https://arxiv.org/html/2601.10161v1#bib.bib24 "Transformers: state-of-the-art natural language processing")). The models were trained for six epochs with a batch size of 64, utilizing AdamW optimization (learning rate: 5e-5, weight decay: 0.01). Training was performed on an NVIDIA A100 GPU, with evaluation based on SeqEval metrics, and the best performance determined by the F1-score following Golde et al. ([2025](https://arxiv.org/html/2601.10161v1#bib.bib30 "Familarity: better evaluation of zero-shot named entity recognition by quantifying label shifts in synthetic training data")).

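The optimization settings above correspond to the standard decoupled-weight-decay Adam (AdamW) update; written out for a single scalar parameter it looks as follows. This is the textbook rule with the paper's hyperparameters plugged in, not the authors' training code:

```python
import math

def adamw_step(param, grad, m, v, t, lr=5e-5, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.01):
    """One AdamW update for a scalar parameter (decoupled weight decay)."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias corrections for step t
    v_hat = v / (1 - beta2 ** t)
    # Weight decay is applied to the parameter directly, not folded into grad;
    # this decoupling is what distinguishes AdamW from Adam + L2 regularization.
    param = param - lr * (m_hat / (math.sqrt(v_hat) + eps) + weight_decay * param)
    return param, m, v

p, m, v = 1.0, 0.0, 0.0
for t in range(1, 4):          # three toy steps with a constant gradient
    p, m, v = adamw_step(p, 0.5, m, v, t)
print(p)
```

With a constant gradient the bias-corrected ratio is close to 1, so each step shrinks the parameter by roughly `lr * (1 + weight_decay * param)`.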
5 Results
---------

The best performing PLMs fine-tuned on the MultiCoNER2 taxonomy for 22 languages are shown in Table [1](https://arxiv.org/html/2601.10161v1#S1.T1 "Table 1 ‣ 1 Introduction ‣ AWED-FiNER: Agents, Web applications, and Expert Detectors for Fine-grained Named Entity Recognition across 36 Languages for 6.6 Billion Speakers"), and those fine-tuned on the FewNERD taxonomy for 27 languages are shown in Table [2](https://arxiv.org/html/2601.10161v1#S3.T2 "Table 2 ‣ 3.2 Hugging Face Router API ‣ 3 AWED-FiNER ‣ AWED-FiNER: Agents, Web applications, and Expert Detectors for Fine-grained Named Entity Recognition across 36 Languages for 6.6 Billion Speakers"). For languages where expert models are available for both taxonomies, those with higher Macro-F1 scores are preferred for the web application AWED-FiNER. However, the agentic tool can be used to call any of these 49 expert models, depending on the purpose and the required granularity of entity types.

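The per-language preference described above (pick the taxonomy whose expert scores the higher Macro-F1) reduces to a small argmax over the available scores; the values below are made up for illustration:

```python
# Hypothetical Macro-F1 scores per language and taxonomy (values illustrative).
SCORES = {
    "Hindi": {"multiconer2": 0.71, "fewnerd": 0.68},
    "Mizo": {"fewnerd": 0.62},   # only one taxonomy has an expert for Mizo
}

def preferred_expert(language: str) -> str:
    """Return the taxonomy whose expert achieves the highest Macro-F1."""
    by_taxonomy = SCORES[language]
    return max(by_taxonomy, key=by_taxonomy.get)

print(preferred_expert("Hindi"))
print(preferred_expert("Mizo"))
```

The web application would apply this rule once per language; the agentic tool skips it and lets the caller name any of the 49 experts directly.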
6 Conclusion
------------

AWED-FiNER provides a scalable solution for global FgNER, serving 6.6 billion people. By combining specialized expert models with agentic reasoning, we ensure that vulnerable and low-resource languages are preserved in the era of Generative AI. To the best of our knowledge, this is the first comprehensive contribution covering an agentic tool, an interactive web app, and a collection of expert models across 36 languages, serving 6.6 billion speakers, for the fine-grained named entity recognition task.

7 Limitations
-------------

While our system covers 36 languages, performance may vary across extremely low-resource languages due to training data imbalances, both in the pre-training of the PLMs and during fine-tuning for the FgNER task. Additionally, the agentic architecture introduces higher computational overhead and inference latency compared to standard, single-task token classification models. Moreover, the performance of these tools is limited by the quality of the source data, the cross-lingual capabilities of the PLMs, and the technical overhead of the hosted platform.

Ethical Statement
-----------------

We emphasize the preservation of low-resource languages (Bodo, Manipuri, Bishnupriya, and Mizo) to prevent digital linguistic extinction. The datasets used for fine-tuning the models are released under [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) and [CC0](https://creativecommons.org/public-domain/cc0/) licenses. We have cited all the sources of resources, tools, packages, and models used in this work. All the contributions in this work are released under the [MIT license](https://opensource.org/license/MIT).

References
----------

* A. Akbik, T. Bergmann, D. Blythe, K. Rasul, S. Schweter, and R. Vollgraf (2019). FLAIR: an easy-to-use framework for state-of-the-art NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pp. 54–59.
* J. D. Choi, J. Tetreault, and A. Stent (2015). It depends: dependency parser comparison using a web-based evaluation tool. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 387–396.
* A. Conneau, K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzmán, E. Grave, M. Ott, L. Zettlemoyer, and V. Stoyanov (2020). Unsupervised cross-lingual representation learning at scale. In ACL.
* R. del Moral-González, H. Gómez-Adorno, and O. Ramos-Flores (2025). Comparative analysis of generative LLMs for labeling entities in clinical notes. Genomics & Informatics 23(1), pp. 1–8.
* J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019). BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT (1).
* N. Ding, G. Xu, Y. Chen, X. Wang, X. Han, P. Xie, H. Zheng, and Z. Liu (2021). Few-NERD: a few-shot named entity recognition dataset. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 3198–3213.
* S. Doddapaneni, R. Aralikatte, G. Ramesh, S. Goyal, M. M. Khapra, A. Kunchukuttan, and P. Kumar (2023). Towards leaving no Indic language behind: building monolingual corpora, benchmark and models for Indic languages. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 12402–12426.
* B. Fetahu, Z. Chen, S. Kar, O. Rokhlenko, and S. Malmasi (2023). MultiCoNER v2: a large multilingual dataset for fine-grained and noisy named entity recognition. In Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 2027–2051.
* J. Golde, P. Haller, M. Ploner, F. Barth, N. P. Jedema, and A. Akbik (2025). Familarity: better evaluation of zero-shot named entity recognition by quantifying label shifts in synthetic training data. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pp. 820–834.
* P. Kaushik and A. Anand (2025). CLASSER: cross-lingual annotation projection enhancement through script similarity for fine-grained named entity recognition. In Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics.
* P. Kaushik and A. Anand (2026). SampurNER: fine-grained named entity recognition dataset for 22 Indian languages. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 40.
* P. Kaushik, S. Mishra, and A. Anand (2025). TAFSIL: taxonomy adaptable fine-grained entity recognition through distant supervision for Indian languages. In Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 3753–3763. https://doi.org/10.1145/3726302.3730341
* S. Khanuja, D. Bansal, S. Mehtani, S. Khosla, A. Dey, B. Gopalan, D. K. Margam, P. Aggarwal, R. T. Nagipogu, S. Dave, et al. (2021). MuRIL: multilingual representations for Indian languages. arXiv preprint arXiv:2103.10730.
* O. Litake, M. R. Sabane, P. S. Patil, A. A. Ranade, and R. Joshi (2022). L3Cube-MahaNER: a Marathi named entity recognition dataset and BERT models. In Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference, pp. 29–34.
* E. Loper and S. Bird (2002). NLTK: the Natural Language Toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics, pp. 63–70.
* S. Malmasi, A. Fang, B. Fetahu, S. Kar, and O. Rokhlenko (2022). MultiCoNER: a large-scale multilingual dataset for complex named entity recognition. In Proceedings of the 29th International Conference on Computational Linguistics, pp. 3798–3809.
* C. D. Manning, M. Surdeanu, J. Bauer, J. R. Finkel, S. Bethard, and D. McClosky (2014). The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 55–60.
* M. Marcinczuk, J. Kocon, and M. Gawor (2018). Recognition of named entities for Polish: comparison of deep learning and conditional random fields approaches. In Proceedings of the PolEval 2018 Workshop, pp. 77–92.
* A. Mhaske, H. Kedia, S. Doddapaneni, M. M. Khapra, P. Kumar, R. Murthy, and A. Kunchukuttan (2023). Naamapadam: a large-scale named entity annotated data for Indic languages. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 10441–10456.
* P. Qi, Y. Zhang, Y. Zhang, J. Bolton, and C. D. Manning (2020). Stanza: a Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 101–108.
* T. Ruokolainen, P. Kauppinen, M. Silfverberg, and K. Lindén (2020). A Finnish news corpus for named entity recognition. Language Resources and Evaluation 54(1), pp. 247–272.
* J. Straková, M. Straka, and J. Hajič (2013). A new state-of-the-art Czech named entity recognizer. In International Conference on Text, Speech and Dialogue, pp. 68–75.
* J. Straková and M. Straka (2025). NameTag 3: a tool and a service for multilingual/multitagset NER. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), pp. 31–39.
* P. Tulajiang, Y. Sun, Y. Zhang, Y. Le, K. Xiao, and H. Lin (2025). A bilingual legal NER dataset and semantics-aware cross-lingual label transfer method for low-resource languages. ACM Transactions on Asian and Low-Resource Language Information Processing. https://doi.org/10.1145/3748325
* UNESCO (2017). UNESCO atlas of the world’s languages in danger. UNESCO.
* R. M. Venkataramana, P. Bhattacharjee, R. Sharnagat, J. Khatri, D. Kanojia, and P. Bhattacharyya (2022). HiNER: a large Hindi named entity recognition dataset. In International Conference on Language Resources and Evaluation.
* T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, et al. (2020). Transformers: state-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45.