Title: 1-800-SHARED-TASKS @ NLU of Devanagari Script Languages: Detection of Language, Hate Speech, and Targets using LLMs

URL Source: https://arxiv.org/html/2411.06850

Published Time: Tue, 12 Nov 2024 02:13:20 GMT

Markdown Content:

Jebish Purbey
Pulchowk Campus, IoE
jebishpurbey@gmail.com

Siddartha Pullakhandam
University of Wisconsin
pullakh2@uwm.edu

Kanwal Mehreen *
Traversaal.ai
kanwal@traversaal.ai

Muhammad Arham *
NUST/Traversaal.ai
arhamm40182@gmail.com

Drishti Sharma *
Cohere For AI Community
drishtishrma@gmail.com

Ashay Srivastava
University of Maryland
ashays06@umd.edu

Ram Mohan Rao Kadiyala
University of Maryland, College Park
rkadiyal@umd.edu
###### Abstract

This paper presents a detailed system description of our entry for the CHiPSAL 2025 shared task, focusing on language detection, hate speech identification, and target detection in Devanagari script languages. We experimented with large language models and their ensembles, including MuRIL, IndicBERT, and Gemma-2, and leveraged techniques such as focal loss to address challenges in the natural language understanding of Devanagari languages, including multilingual processing and class imbalance. Our approach achieved competitive results across all tasks: F1 scores of 0.9980, 0.7652, and 0.6804 for Sub-tasks A, B, and C respectively. This work provides insights into the effectiveness of transformer models on tasks with domain-specific and linguistic challenges, as well as areas for potential improvement in future iterations.
* equal contribution
1 Introduction
--------------

Large language models (LLMs) have revolutionized natural language processing (NLP), yet South Asian languages remain largely underrepresented in these advancements, despite the region being home to over 700 languages, 25 major scripts, and approximately 1.97 billion people. Addressing these gaps, this paper focuses on the three NLP tasks of CHiPSAL 2025 Sarveswaran et al. ([2025](https://arxiv.org/html/2411.06850v1#bib.bib20)) in Devanagari-script languages: 5-way classification of a text by its language (Sub-task A), binary classification for detecting hate speech in a text (Sub-task B), and 3-way classification of the target of hate speech in a text (Sub-task C) Thapa et al. ([2025](https://arxiv.org/html/2411.06850v1#bib.bib21)). Our system leverages the multilingual capabilities of open-source LLMs, namely IndicBERT V2 Doddapaneni et al. ([2023](https://arxiv.org/html/2411.06850v1#bib.bib5)), MuRIL Khanuja et al. ([2021](https://arxiv.org/html/2411.06850v1#bib.bib11)), and Gemma-2 GemmaTeam ([2024](https://arxiv.org/html/2411.06850v1#bib.bib6)), and their ensembles for natural language understanding of Devanagari script languages. Our work contributes to advancing language technology in South Asia, aiming for inclusivity and deeper understanding across diverse linguistic landscapes.
2 Dataset & Task
----------------

The goal of Sub-task A is to determine the language of a given Devanagari-script text among five languages, addressing the critical need for accurate multilingual identification. The dataset consists of text in Nepali Thapa et al. ([2023](https://arxiv.org/html/2411.06850v1#bib.bib22)); Rauniyar et al. ([2023](https://arxiv.org/html/2411.06850v1#bib.bib18)), Marathi Kulkarni et al. ([2021](https://arxiv.org/html/2411.06850v1#bib.bib12)), Sanskrit Aralikatte et al. ([2021](https://arxiv.org/html/2411.06850v1#bib.bib1)), Bhojpuri Ojha ([2019](https://arxiv.org/html/2411.06850v1#bib.bib16)), and Hindi Jafri et al. ([2024](https://arxiv.org/html/2411.06850v1#bib.bib9), [2023](https://arxiv.org/html/2411.06850v1#bib.bib10)). For Sub-task B, the goal is to determine whether a text contains hate speech. The dataset consists of social media text (tweets) in Hindi and Nepali. Sub-task C follows Sub-task B: the goal is to identify the target of hate speech as "individual", "organization", or "community". As in Sub-task B, the dataset for Sub-task C is in Hindi and Nepali. The label distributions for the three datasets are shown in Tables [1](https://arxiv.org/html/2411.06850v1#S2.T1 "Table 1 ‣ 2 Dataset & Task ‣ 1-800-SHARED-TASKS @ NLU of Devanagari Script Languages: Detection of Language, Hate Speech, and Targets using LLMs"), [2](https://arxiv.org/html/2411.06850v1#S2.T2 "Table 2 ‣ 2 Dataset & Task ‣ 1-800-SHARED-TASKS @ NLU of Devanagari Script Languages: Detection of Language, Hate Speech, and Targets using LLMs"), and [3](https://arxiv.org/html/2411.06850v1#S2.T3 "Table 3 ‣ 2 Dataset & Task ‣ 1-800-SHARED-TASKS @ NLU of Devanagari Script Languages: Detection of Language, Hate Speech, and Targets using LLMs") respectively.
Table 1: Class distribution for Subtask A

Table 2: Class distribution for Subtask B

![Image 1: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/system1.png)

Figure 1: System design workflow. The development set is initially used to select the best-performing models, which are then retrained on the combined train and development set. Selected models are ensembled to generate final predictions on the test set.

Table 3: Class distribution for Subtask C
3 Methodology
-------------

The common approach to all three Sub-tasks was to fine-tune a number of multilingual models on the train set and use the dev set to select the best few models during the Evaluation phase. The selected models were then fine-tuned again on both the train and dev sets, and their ensemble, by majority voting, was used for the final prediction on the test set during the Testing phase, as shown in Figure [1](https://arxiv.org/html/2411.06850v1#S2.F1 "Figure 1 ‣ 2 Dataset & Task ‣ 1-800-SHARED-TASKS @ NLU of Devanagari Script Languages: Detection of Language, Hate Speech, and Targets using LLMs"). The models fine-tuned under this approach include decoder-only models such as Gemma-2 9B, Llama 3.1 8B LlamaTeam ([2024](https://arxiv.org/html/2411.06850v1#bib.bib14)), and Mistral Nemo Base 12B MistralAI ([2024](https://arxiv.org/html/2411.06850v1#bib.bib15)), and BERT Devlin et al. ([2019](https://arxiv.org/html/2411.06850v1#bib.bib4)) based models such as IndicBERT V2, MuRIL, XLM Roberta Conneau et al. ([2019](https://arxiv.org/html/2411.06850v1#bib.bib2)), mDistilBERT Sanh et al. ([2019](https://arxiv.org/html/2411.06850v1#bib.bib19)), and mBERT Devlin et al. ([2018](https://arxiv.org/html/2411.06850v1#bib.bib3)). For the decoder-only models, each Sub-task was formulated as a text-generation task in which the model was asked to generate only one option among the given choices. For the BERT-based models, each Sub-task was formulated as a multi-class classification task by adding a classification head to the model.
For Sub-task A, the decoder-only models were trained for 1 epoch with a learning rate of 2e-4, and the BERT-based models for 5 epochs with a learning rate of 4e-5 using weighted cross-entropy loss. For Sub-task B, the decoder-only models were trained for 2-4 epochs with a learning rate of 2e-4, and the BERT-based models for 5 epochs with a learning rate of 4e-5.
To handle the class imbalance in Sub-task B, focal loss Lin et al. ([2018](https://arxiv.org/html/2411.06850v1#bib.bib13)) was used for the BERT-based models. Focal loss modifies cross-entropy by reducing the relative loss for well-classified examples, focusing more on hard, misclassified examples. The focal loss is given by formula [1](https://arxiv.org/html/2411.06850v1#S3.E1 "In 3 Methodology ‣ 1-800-SHARED-TASKS @ NLU of Devanagari Script Languages: Detection of Language, Hate Speech, and Targets using LLMs"):

$$\mathcal{L}_{\text{focal}} = -\alpha_t \,(1 - p_t)^{\gamma} \log(p_t) \qquad (1)$$

where $\alpha_t$ is the balancing factor for class $t$, $p_t$ is the model's estimated probability for the correct class, and $\gamma$ is the focusing parameter that adjusts the rate at which easy examples are down-weighted. The hyperparameters $\alpha_t$ and $\gamma$ were determined by grid search as 0.35 and 4.0 respectively.
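As a concrete illustration of Eq. (1), a minimal NumPy sketch of the per-example focal loss with the grid-searched values above (this is an illustration of the loss formula, not the actual training code):

```python
import numpy as np

def focal_loss(p_t, alpha_t=0.35, gamma=4.0):
    """Per-example focal loss from Eq. (1): -alpha_t * (1 - p_t)^gamma * log(p_t).

    Defaults are the grid-searched values reported above (alpha_t = 0.35, gamma = 4.0).
    """
    p_t = np.clip(np.asarray(p_t, dtype=float), 1e-12, 1.0)  # guard against log(0)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

def cross_entropy(p_t):
    """Plain cross-entropy on the correct-class probability, for comparison."""
    return -np.log(np.clip(np.asarray(p_t, dtype=float), 1e-12, 1.0))

# An easy, well-classified example (p_t = 0.95) is down-weighted far more
# aggressively than a hard one (p_t = 0.30):
print(focal_loss(0.95), cross_entropy(0.95))
print(focal_loss(0.30), cross_entropy(0.30))
```

The `(1 - p_t)^gamma` factor is what shrinks the contribution of easy examples: with $\gamma = 4$, an example classified at 0.95 confidence contributes almost nothing, while a hard example at 0.30 keeps most of its cross-entropy loss.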
For Sub-task C, only decoder-only models were used during the Testing phase, as the BERT-based models massively underperformed in limited tests. An additional Gemma-2 27B model was fine-tuned for Sub-tasks B and C using Odds Ratio Preference Optimization (ORPO) Hong et al. ([2024](https://arxiv.org/html/2411.06850v1#bib.bib7)) for better alignment. All fine-tuning of decoder-only models was carried out using Unsloth with Low-Rank Adaptation of Large Language Models (LoRA) Hu et al. ([2021](https://arxiv.org/html/2411.06850v1#bib.bib8)), with both the rank ($r$) and alpha ($\alpha$) set to 16.
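The LoRA update itself is simple to sketch: the frozen weight $W$ is augmented by a trainable low-rank product scaled by $\alpha / r$. A minimal NumPy illustration with the paper's $r = \alpha = 16$ (the dimensions here are illustrative, and the actual fine-tuning used Unsloth, not this code):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 16, 16   # r and alpha both 16, as in the paper

W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight (not trained)
A = rng.normal(size=(r, d_in)) * 0.01    # trainable low-rank factor
B = np.zeros((d_out, r))                 # trainable; zero init means the adapter
                                         # initially leaves the model unchanged

def lora_forward(x):
    # Equivalent to (W + (alpha / r) * B @ A) @ x, but the factored form never
    # materializes the d_out x d_in delta and trains only 2 * r * 64 parameters
    # instead of the full 64 * 64.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
assert np.allclose(lora_forward(x), W @ x)  # holds while B is still zero
```

This is why LoRA keeps fine-tuning cheap: only the small $A$ and $B$ matrices receive gradients, while $W$ stays frozen (and, as noted in the Limitations, can even be held in 4-bit precision).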
4 Results and Discussion
------------------------

Table 4: Performance metrics for Subtask A on the dev set

| Model | Description | F1 |
| --- | --- | --- |
| MuRIL | Fine-tuned on train+dev set | 0.9968 |
| IndicBERT V2 | Fine-tuned on train+dev set | 0.9977 |
| Gemma-2 9B | Fine-tuned on train+dev set | 0.9973 |
| Ensemble-1 | MuRIL's prediction as fallback in case of no majority | 0.9979 |
| Ensemble-2 | IndicBERT V2's prediction as fallback in case of no majority | 0.9980 |
| Ensemble-3 | Gemma-2 9B's prediction as fallback in case of no majority | 0.9979 |
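The majority-vote-with-fallback scheme behind the ensembles can be sketched as follows (an illustrative sketch only; the model names and label strings below are placeholders, not the task's actual label set):

```python
from collections import Counter

def ensemble_predict(model_preds: dict, fallback: str) -> list:
    """Majority vote across models; where no strict majority exists,
    defer to the designated fallback model's prediction."""
    names = list(model_preds)
    final = []
    for i, votes in enumerate(zip(*(model_preds[m] for m in names))):
        label, count = Counter(votes).most_common(1)[0]
        # A strict majority means more than half the models agree.
        final.append(label if count > len(votes) // 2 else model_preds[fallback][i])
    return final

# Three models agree on example 1, split 2-1 on example 2, and fully
# disagree on example 3, where the fallback model's vote decides:
preds = {
    "muril":     ["ne", "hi", "mr"],
    "indicbert": ["ne", "hi", "sa"],
    "gemma2":    ["ne", "mr", "bh"],
}
print(ensemble_predict(preds, fallback="indicbert"))  # ['ne', 'hi', 'sa']
```

Ensemble-1, Ensemble-2, and Ensemble-3 differ only in which model plays the `fallback` role (MuRIL, IndicBERT V2, and Gemma-2 9B respectively).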
Table 5: Performance metrics for Subtask A on the test set

Table 6: Performance metrics for Subtask B on the dev set

Table 7: Performance metrics for Subtask B on the test set

Table 8: Performance metrics for Subtask C on the dev set

Table 9: Performance metrics for Subtask C on the test set
### 4.1 Evaluation Phase

During the Evaluation phase, various models were assessed across Sub-tasks A, B, and C using the dev set to identify the top-performing models for each task. For Sub-task A (Table [4](https://arxiv.org/html/2411.06850v1#S4.T4 "Table 4 ‣ 4 Results and Discussion ‣ 1-800-SHARED-TASKS @ NLU of Devanagari Script Languages: Detection of Language, Hate Speech, and Targets using LLMs")), both the BERT-based and decoder-only models delivered strong performances, with IndicBERT V2 and MuRIL emerging as the best models, each achieving an F1 score of 0.9978. They also had high recall and precision, indicating their robustness in effectively balancing sensitivity and specificity in Sub-task A classification. mBERT, XLM Roberta, and larger generative models like Gemma-2 and Mistral Nemo also scored close to the top contenders, demonstrating that both BERT-based models and recent LLMs possess considerable ability in text classification. For Sub-task B (Table [6](https://arxiv.org/html/2411.06850v1#S4.T6 "Table 6 ‣ 4 Results and Discussion ‣ 1-800-SHARED-TASKS @ NLU of Devanagari Script Languages: Detection of Language, Hate Speech, and Targets using LLMs")), model performance varied more significantly, reflecting the increased complexity compared to Sub-task A. Among the evaluated models, fine-tuned Gemma-2 9B with few-shot prompting yielded an F1 score of 0.7412, showing Gemma-2's effective adaptation in low-resource scenarios even with limited examples. IndicBERT V2 and XLM Roberta also provided competitive results, with IndicBERT V2 achieving an F1 score of 0.7298, reinforcing its efficacy across both tasks. This marked Gemma-2 9B and IndicBERT V2 as the top choices to be further evaluated for Sub-task B during the Testing phase.

In Sub-task C (Table [8](https://arxiv.org/html/2411.06850v1#S4.T8 "Table 8 ‣ 4 Results and Discussion ‣ 1-800-SHARED-TASKS @ NLU of Devanagari Script Languages: Detection of Language, Hate Speech, and Targets using LLMs")), Gemma-2 9B demonstrated superior results with an F1 score of 0.6937, significantly better than all other models, indicating Gemma-2's robust performance on tasks with limited examples. XLM Roberta achieved the second-highest F1 score of 0.5455. The performance of the remaining models reflects the complexity of the task: except for Gemma-2, no model crossed an F1 score of 0.6.
### 4.2 Testing Phase

For the Testing phase, we retrained the top models selected in the Evaluation phase on both the train and dev sets to create more generalized models for final testing. For Sub-task A (Table [5](https://arxiv.org/html/2411.06850v1#S4.T5 "Table 5 ‣ 4 Results and Discussion ‣ 1-800-SHARED-TASKS @ NLU of Devanagari Script Languages: Detection of Language, Hate Speech, and Targets using LLMs")), ensemble techniques were applied to enhance accuracy further, leading to notable improvements in performance. Three ensembles were constructed, each with a different fallback model for cases without a majority prediction. Among these, Ensemble-2, which defaulted to IndicBERT V2's predictions when no majority was reached, yielded the highest F1 score of 0.9980. This ensemble strategy refined classification outcomes by leveraging the strengths of multiple models while relying on IndicBERT V2's consistency as a fallback. As a result, Sub-task A saw an optimal performance boost, indicating the success of ensembling in classification tasks with high base accuracy. For Sub-task B (Table [7](https://arxiv.org/html/2411.06850v1#S4.T7 "Table 7 ‣ 4 Results and Discussion ‣ 1-800-SHARED-TASKS @ NLU of Devanagari Script Languages: Detection of Language, Hate Speech, and Targets using LLMs")), we employed a similar ensemble approach to maximize prediction performance. The ensemble demonstrated improved robustness and balance across the metrics, culminating in an F1 score of 0.7652, with strong recall (0.7441) and precision (0.7925). For this ensemble, we added a Gemma-2 27B model trained using ORPO to the two models selected during the Evaluation phase. The overall gains from the ensemble approach on this task underscore its potential for tasks with more nuanced, challenging data patterns.

In Sub-task C (Table [9](https://arxiv.org/html/2411.06850v1#S4.T9 "Table 9 ‣ 4 Results and Discussion ‣ 1-800-SHARED-TASKS @ NLU of Devanagari Script Languages: Detection of Language, Hate Speech, and Targets using LLMs")), instead of ensembling, we selected Gemma-2 27B ORPO as the optimal model for its strong performance during testing. This model achieved an F1 score of 0.6804, with balanced recall (0.6669) and precision (0.7183), showcasing its capability to handle more granular classification without ensemble interventions. The decision to forgo ensembling was based on the observation that Gemma-2 27B offered robust, reliable performance on its own, suggesting that, for some tasks, a single, finely tuned model can match or exceed ensemble outcomes.
5 Conclusion
------------

Our results demonstrate the importance of leveraging tailored approaches to tackle complex natural language understanding tasks across multiple languages in Devanagari script. By combining the multilingual strengths of the BERT-based models, focal loss for class sensitivity, and the generative power of Gemma-2, we achieved notable performance improvements across the subtasks. These findings highlight the value of adapting model architectures and training strategies to the nuances of each task, especially in handling multilingual contexts and imbalanced classes. This work lays a foundation for more refined, scalable hate speech detection systems for South Asian languages that can respond effectively to diverse and complex online discourse.
Limitations
-----------

The datasets used for training and evaluation in hate speech and target detection are relatively small, which may limit the generalizability of the models in real-world applications. Challenges such as unbalanced datasets, difficulties in data collection, and issues with code-mixed languages, as noted in prior research Parihar et al. ([2021](https://arxiv.org/html/2411.06850v1#bib.bib17)), remain significant hurdles in the accurate detection of hate speech. Although techniques like focal loss and Odds Ratio Preference Optimization (ORPO) were applied to improve performance, the models still struggle with fine-grained distinctions in ambiguous hate speech contexts. Additionally, the decoder-only models were trained in 4-bit precision due to computational limitations and may perform better at full precision. While these models performed well in most tasks, they are computationally intensive, requiring substantial resources for both fine-tuning and inference. On the other hand, the BERT-based models performed well in Sub-tasks A and B, and with larger datasets they may offer better performance on Sub-task C at a lower computational cost than decoder-only models.
Ethical Considerations
----------------------

When developing models for detecting hate speech and its targets, it is important to address several ethical concerns. A major issue is the potential for bias in both the data and the model's outputs. Since the datasets used in development are limited and might not fully represent all social contexts, there is a risk that the models could unintentionally reinforce biases or target specific groups unfairly. These models might also be used in ways that cause harm, such as censoring or flagging content incorrectly without human oversight. Given the complex nuances of hate speech, it is crucial to avoid over-censorship, which may otherwise lead to the unjust targeting of certain communities or the stifling of legitimate free speech.
References
----------

* Aralikatte et al. (2021) Rahul Aralikatte, Miryam De Lhoneux, Anoop Kunchukuttan, and Anders Søgaard. 2021. Itihasa: A large-scale corpus for Sanskrit to English translation. In _Proceedings of the 8th Workshop on Asian Translation (WAT2021)_, pages 191–197.
* Conneau et al. (2019) Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. [Unsupervised cross-lingual representation learning at scale](https://arxiv.org/abs/1911.02116). _CoRR_, abs/1911.02116.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. [BERT: Pre-training of deep bidirectional transformers for language understanding](https://arxiv.org/abs/1810.04805). _CoRR_, abs/1810.04805.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. [BERT: Pre-training of deep bidirectional transformers for language understanding](https://arxiv.org/abs/1810.04805). _Preprint_, arXiv:1810.04805.
* Doddapaneni et al. (2023) Sumanth Doddapaneni, Rahul Aralikatte, Gowtham Ramesh, Shreya Goyal, Mitesh M. Khapra, Anoop Kunchukuttan, and Pratyush Kumar. 2023. [Towards leaving no Indic language behind: Building monolingual corpora, benchmark and models for Indic languages](https://doi.org/10.18653/v1/2023.acl-long.693). In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 12402–12426, Toronto, Canada. Association for Computational Linguistics.
* GemmaTeam (2024) GemmaTeam. 2024. [Gemma: Open models based on Gemini research and technology](https://arxiv.org/abs/2403.08295). _Preprint_, arXiv:2403.08295.
* Hong et al. (2024) Jiwoo Hong, Noah Lee, and James Thorne. 2024. [ORPO: Monolithic preference optimization without reference model](https://arxiv.org/abs/2403.07691). _Preprint_, arXiv:2403.07691.
* Hu et al. (2021) Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. [LoRA: Low-rank adaptation of large language models](https://arxiv.org/abs/2106.09685). _Preprint_, arXiv:2106.09685.
* Jafri et al. (2024) Farhan Ahmad Jafri, Kritesh Rauniyar, Surendrabikram Thapa, Mohammad Aman Siddiqui, Matloob Khushi, and Usman Naseem. 2024. Chunav: Analyzing Hindi hate speech and targeted groups in Indian election discourse. _ACM Transactions on Asian and Low-Resource Language Information Processing_.
* Jafri et al. (2023) Farhan Ahmad Jafri, Mohammad Aman Siddiqui, Surendrabikram Thapa, Kritesh Rauniyar, Usman Naseem, and Imran Razzak. 2023. Uncovering political hate speech during Indian election campaign: A new low-resource dataset and baselines.
* Khanuja et al. (2021) Simran Khanuja, Diksha Bansal, Sarvesh Mehtani, Savya Khosla, Atreyee Dey, Balaji Gopalan, Dilip Kumar Margam, Pooja Aggarwal, Rajiv Teja Nagipogu, Shachi Dave, Shruti Gupta, Subhash Chandra Bose Gali, Vish Subramanian, and Partha Talukdar. 2021. [MuRIL: Multilingual representations for Indian languages](https://arxiv.org/abs/2103.10730). _Preprint_, arXiv:2103.10730.
* Kulkarni et al. (2021) Atharva Kulkarni, Meet Mandhane, Manali Likhitkar, Gayatri Kshirsagar, and Raviraj Joshi. 2021. L3CubeMahaSent: A Marathi tweet-based sentiment analysis dataset. In _Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis_, pages 213–220.
* Lin et al. (2018) Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. 2018. [Focal loss for dense object detection](https://arxiv.org/abs/1708.02002). _Preprint_, arXiv:1708.02002.
* LlamaTeam (2024) LlamaTeam. 2024. [The Llama 3 herd of models](https://arxiv.org/abs/2407.21783). _Preprint_, arXiv:2407.21783.
* MistralAI (2024) MistralAI. 2024. [Mistral Nemo](https://mistral.ai/news/mistral-nemo/).
* Ojha (2019) Atul Kr. Ojha. 2019. English-Bhojpuri SMT system: Insights from the Karaka model. _arXiv preprint arXiv:1905.02239_.
* Parihar et al. (2021) Anil Singh Parihar, Surendrabikram Thapa, and Sushruti Mishra. 2021. Hate speech detection using natural language processing: Applications and challenges. In _2021 5th International Conference on Trends in Electronics and Informatics (ICOEI)_, pages 1302–1308. IEEE.
* Rauniyar et al. (2023) Kritesh Rauniyar, Sweta Poudel, Shuvam Shiwakoti, Surendrabikram Thapa, Junaid Rashid, Jungeun Kim, Muhammad Imran, and Usman Naseem. 2023. Multi-aspect annotation and analysis of Nepali tweets on anti-establishment election discourse. _IEEE Access_.
* Sanh et al. (2019) Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. _ArXiv_, abs/1910.01108.
* Sarveswaran et al. (2025) Kengatharaiyer Sarveswaran, Bal Krishna Bal, Surendrabikram Thapa, Ashwini Vaidya, and Sana Shams. 2025. A brief overview of the First Workshop on Challenges in Processing South Asian Languages (CHiPSAL). In _Proceedings of the First Workshop on Challenges in Processing South Asian Languages (CHiPSAL)_.
* Thapa et al. (2025) Surendrabikram Thapa, Kritesh Rauniyar, Farhan Ahmad Jafri, Surabhi Adhikari, Kengatharaiyer Sarveswaran, Bal Krishna Bal, Hariram Veeramani, and Usman Naseem. 2025. Natural language understanding of Devanagari script languages: Language identification, hate speech and its target detection. In _Proceedings of the First Workshop on Challenges in Processing South Asian Languages (CHiPSAL)_.
* Thapa et al. (2023) Surendrabikram Thapa, Kritesh Rauniyar, Shuvam Shiwakoti, Sweta Poudel, Usman Naseem, and Mehwish Nasim. 2023. Nehate: Large-scale annotated data shedding light on hate speech in Nepali local election discourse. In _ECAI 2023_, pages 2346–2353. IOS Press.
Appendix A Appendix
-------------------

### A.1 Confusion Matrix

We provide the confusion matrices for all the models we tested below.
#### A.1.1 Sub-task A: Language Detection

Evaluation Phase

![Image 2: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/mbert_A_Eval.png)

Figure 2: mBERT's Confusion Matrix for Language Detection

![Image 3: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/mdistilbert_A_Eval.png)

Figure 3: mDistilBERT's Confusion Matrix for Language Detection

![Image 4: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/xlm_A_Eval.png)

Figure 4: XLM Roberta's Confusion Matrix for Language Detection

![Image 5: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/muril_A_Eval.png)

Figure 5: MuRIL's Confusion Matrix for Language Detection

![Image 6: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/indic_A_Eval.png)

Figure 6: IndicBERT V2's Confusion Matrix for Language Detection

![Image 7: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/llama_A_Eval.png)

Figure 7: Llama 3.1 8B's Confusion Matrix for Language Detection

![Image 8: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/gemma_A_Eval.png)

Figure 8: Gemma-2 9B's Confusion Matrix for Language Detection

![Image 9: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/mistral_A_Eval.png)

Figure 9: Mistral Nemo's Confusion Matrix for Language Detection

Testing Phase

![Image 10: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/muril_A_Test.png)

Figure 10: MuRIL's Confusion Matrix for Language Detection

![Image 11: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/indic_A_Test.png)

Figure 11: IndicBERT V2's Confusion Matrix for Language Detection

![Image 12: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/gemma_A_Test.png)

Figure 12: Gemma-2 9B's Confusion Matrix for Language Detection

![Image 13: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/ensemble1_A_Test.png)

Figure 13: Ensemble-1's Confusion Matrix for Language Detection

![Image 14: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/ensemble2_A_Test.png)

Figure 14: Ensemble-2's Confusion Matrix for Language Detection

![Image 15: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/ensemble3_A_Test.png)

Figure 15: Ensemble-3's Confusion Matrix for Language Detection
#### A.1.2 Sub-task B: Hate Speech Detection

Evaluation Phase

![Image 16: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/mbert_B_Eval.png)

Figure 16: mBERT's Confusion Matrix for Hate Speech Detection

![Image 17: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/mdistilbert_B_Eval.png)

Figure 17: mDistilBERT's Confusion Matrix for Hate Speech Detection

![Image 18: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/roberta_B_Eval.png)

Figure 18: XLM Roberta's Confusion Matrix for Hate Speech Detection

![Image 19: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/indic_B_Eval.png)

Figure 19: IndicBERT V2's Confusion Matrix for Hate Speech Detection

![Image 20: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/gemma_B_Eval.png)

Figure 20: Gemma-2 9B's Confusion Matrix for Hate Speech Detection

![Image 21: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/gemmafew_B_Eval.png)

Figure 21: Gemma-2 9B (Few-shot)'s Confusion Matrix for Hate Speech Detection

Testing Phase

![Image 22: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/indic_B_Test.png)

Figure 22: IndicBERT V2's Confusion Matrix for Hate Speech Detection

![Image 23: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/gemma_B_Test.png)

Figure 23: Gemma-2 9B (Few-shot)'s Confusion Matrix for Hate Speech Detection

![Image 24: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/gemma27_B_Test.png)

Figure 24: Gemma-2 27B ORPO's Confusion Matrix for Hate Speech Detection

![Image 25: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/ensemble_B_Test.png)

Figure 25: Ensemble's Confusion Matrix for Hate Speech Detection
#### A.1.3 Sub-task C: Hate Speech Target Detection

Evaluation Phase

![Image 26: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/mbert_C_Eval.png)

Figure 26: mBERT's Confusion Matrix for Hate Speech Target Detection

![Image 27: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/mdistilbert_C_Eval.png)

Figure 27: mDistilBERT's Confusion Matrix for Hate Speech Target Detection

![Image 28: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/roberta_C_Eval.png)

Figure 28: XLM Roberta's Confusion Matrix for Hate Speech Target Detection

![Image 29: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/indic_C_Eval.png)

Figure 29: IndicBERT V2's Confusion Matrix for Hate Speech Target Detection

![Image 30: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/gemma29b_C_Eval.png)

Figure 30: Gemma-2 9B's Confusion Matrix for Hate Speech Target Detection

Testing Phase

![Image 31: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/gemmaalpha_C_Test.png)

Figure 31: Gemma-2 9B Alpha's Confusion Matrix for Hate Speech Target Detection

![Image 32: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/gemmabeta_C_Test.png)

Figure 32: Gemma-2 9B Beta's Confusion Matrix for Hate Speech Target Detection

![Image 33: Refer to caption](https://arxiv.org/html/2411.06850v1/extracted/5991648/gemma27_C_Test.png)

Figure 33: Gemma-2 27B's Confusion Matrix for Hate Speech Target Detection
322
### A.2 System Replication

To allow replication of the training process, we provide the hyperparameters used in Table [10](https://arxiv.org/html/2411.06850v1#A1.T10 "Table 10 ‣ A.2 System Replication ‣ Appendix A Appendix ‣ 1-800-SHARED-TASKS @ NLU of Devanagari Script Languages: Detection of Language, Hate Speech, and Targets using LLMs") and Table [11](https://arxiv.org/html/2411.06850v1#A1.T11 "Table 11 ‣ A.2 System Replication ‣ Appendix A Appendix ‣ 1-800-SHARED-TASKS @ NLU of Devanagari Script Languages: Detection of Language, Hate Speech, and Targets using LLMs").

Table 10: Hyperparameter values for decoder-only models across tasks

Table 11: Hyperparameter values for BERT-based models

Table [10](https://arxiv.org/html/2411.06850v1#A1.T10 "Table 10 ‣ A.2 System Replication ‣ Appendix A Appendix ‣ 1-800-SHARED-TASKS @ NLU of Devanagari Script Languages: Detection of Language, Hate Speech, and Targets using LLMs") presents the hyperparameters for the decoder-only models across tasks. Core values, such as the learning rate, weight decay, and LoRA settings, are shared across tasks, while task-specific parameters, such as maximum token length, batch size, gradient accumulation steps, warmup steps, and number of epochs, were tuned to meet the requirements of each task. For hyperparameters not listed, each model’s default values were used.
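The scheme described above (shared core values plus per-task overrides) can be sketched as plain configuration dictionaries. All numeric values below are illustrative placeholders, not the paper’s actual settings, which are given in Tables 10 and 11; the function and key names are our own scaffolding.

```python
# Shared core values: learning rate, weight decay, and LoRA settings
# are kept identical across tasks (values here are placeholders).
SHARED = {
    "learning_rate": 2e-4,   # placeholder
    "weight_decay": 0.01,    # placeholder
    "lora_r": 16,            # placeholder LoRA rank
    "lora_alpha": 32,        # placeholder LoRA scaling factor
}

# Task-specific parameters tuned per task (placeholders).
TASK_OVERRIDES = {
    "A_language_detection": {"max_len": 128, "batch_size": 16, "epochs": 1},
    "B_hate_speech":        {"max_len": 256, "batch_size": 8,  "epochs": 2},
    "C_target_detection":   {"max_len": 256, "batch_size": 8,  "epochs": 3},
}

def config_for(task: str) -> dict:
    """Merge the shared core values with the task-specific overrides."""
    return {**SHARED, **TASK_OVERRIDES[task]}
```

Any hyperparameter absent from both dictionaries would fall back to the model’s defaults, mirroring the convention stated above.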
### A.3 Prompts

The prompts used for decoder-only models are provided below:

#### A.3.1 Task A: Language Detection

Task: You are an expert linguist specializing in Devanagari script languages. Your task is to identify the language of the given text.

###Instruction:

Analyze the following Devanagari script text and determine its language. Choose the correct language code from these options:

0: Nepali

1: Marathi

2: Sanskrit

3: Bhojpuri

4: Hindi

###Input:

Text: {text}

###Response:

The language code for the given text is: {label}
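At inference time, the Task A template above is filled with the input text and the model completes the label. A minimal sketch of that assembly, assuming a simple f-string-style fill (the function name and label map are hypothetical, not from the paper):

```python
# Label codes as listed in the Task A prompt.
LANG_LABELS = {0: "Nepali", 1: "Marathi", 2: "Sanskrit", 3: "Bhojpuri", 4: "Hindi"}

# Template mirroring the prompt in this section; {text} is the slot
# filled per example.
TASK_A_TEMPLATE = (
    "Task: You are an expert linguist specializing in Devanagari script "
    "languages. Your task is to identify the language of the given text.\n\n"
    "###Instruction:\n"
    "Analyze the following Devanagari script text and determine its language. "
    "Choose the correct language code from these options:\n"
    + "".join(f"{code}: {name}\n" for code, name in LANG_LABELS.items())
    + "\n###Input:\nText: {text}\n\n###Response:\n"
      "The language code for the given text is:"
)

def build_task_a_prompt(text: str) -> str:
    # At training time the gold label is appended after the final colon;
    # at inference time the model generates it.
    return TASK_A_TEMPLATE.format(text=text)
```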
#### A.3.2 Task B: Hate Speech Detection

Task: You are fluent in Nepali and Hindi languages. Your task is to classify if the given input text contains hate speech or not.

###Instruction:

The goal of this subtask is to identify the targets of hate speech in a given text. Choose the correct category from these options:

1: Hate

0: Non-Hate

###Examples:

Input: {example_text1}

Response: {example_text1_label}

Input: {example_text2}

Response: {example_text2_label}

Input: {example_text3}

Response: {example_text3_label}

Input: {example_text4}

Response: {example_text4_label}

Input: {example_text5}

Response: {example_text5_label}

###Input:

{text}

###Response:

{label}
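The few-shot structure above (five in-context Input/Response pairs followed by the query) could be assembled as follows. This is a sketch under our own naming; only the structure follows the template, and the header text is abridged.

```python
def build_task_b_prompt(examples, text):
    """examples: list of (text, label) pairs, label 1 (Hate) or 0 (Non-Hate)."""
    header = (
        "Task: You are fluent in Nepali and Hindi languages. Your task is to "
        "classify if the given input text contains hate speech or not.\n\n"
        "###Instruction:\n"
        "Choose the correct category from these options:\n"
        "1: Hate\n0: Non-Hate\n\n###Examples:\n"
    )
    # Each in-context example is rendered as an Input/Response pair.
    shots = "".join(f"Input: {t}\nResponse: {l}\n\n" for t, l in examples)
    # The query text goes last; the model completes the final Response.
    return header + shots + f"###Input:\n{text}\n\n###Response:\n"
```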
#### A.3.3 Task C: Hate Speech Target Detection

You are an expert linguist specializing in detecting hate speech targets in Devanagari-script tweets. Your task is to classify the target of hate speech.

###Instruction:

Analyze the given tweet in Devanagari script and determine who the hate speech is targeting.

Step 1: First, decide if the target is an individual or a group.

Step 2 (if group): If it’s a group, further classify it as either an organization or a community.

Classify the final label according to these categories:

0: Individual: A specific person or a small set of identifiable individuals

1: Organization: A formal entity, institution, or company

2: Community: A broader group based on ethnicity, religion, gender, or other shared characteristics

###Input:

{}

###Response:

{}
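The two-step decision in the Task C instruction (individual vs. group, then organization vs. community) maps onto the three final labels as follows. This hypothetical helper is shown only to make the label scheme explicit, not to suggest the model predicts in two separate calls.

```python
def final_target_label(is_individual: bool, is_organization: bool = False) -> int:
    # Step 1: individual targets get label 0.
    if is_individual:
        return 0  # Individual
    # Step 2 (group): distinguish organizations (1) from communities (2).
    return 1 if is_organization else 2
```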