Machlovi committed on
Commit efd7892 · verified · 1 Parent(s): 088d2ac

Update README.md

Files changed (1)
  1. README.md +17 -8
README.md CHANGED
@@ -128,12 +128,21 @@ This resource accompanies our paper accepted in the **Late Breaking Work** track
  ## 📖 Citation
 
  ```bibtex
- @misc{machlovi2025saferaimoderationevaluating,
- title={Towards Safer AI Moderation: Evaluating LLM Moderators Through a Unified Benchmark Dataset and Advocating a Human-First Approach},
- author={Naseem Machlovi and Maryam Saleki and Innocent Ababio and Ruhul Amin},
- year={2025},
- eprint={2508.07063},
- archivePrefix={arXiv},
- primaryClass={cs.AI},
- url={https://arxiv.org/abs/2508.07063},
+ @InProceedings{10.1007/978-3-032-13184-3_24,
+ author="Machlovi, Naseem
+ and Saleki, Maryam
+ and Ababio, Innocent
+ and Amin, Ruhul",
+ editor="Degen, Helmut
+ and Ntoa, Stavroula",
+ title="Towards Safer AI Moderation: Evaluating LLM Moderators Through a Unified Benchmark Dataset and Advocating a Human-First Approach",
+ booktitle="HCI International 2025 -- Late Breaking Papers",
+ year="2026",
+ publisher="Springer Nature Switzerland",
+ address="Cham",
+ pages="386--403",
+ abstract="As AI systems become more integrated into daily life, the need for safer and more reliable moderation has never been greater. Large Language Models (LLMs) have demonstrated remarkable capabilities, surpassing earlier models in complexity and performance. Their evaluation across diverse tasks has consistently showcased their potential, enabling the development of adaptive and personalized agents. However, despite these advancements, LLMs remain prone to errors, particularly in areas requiring nuanced moral reasoning. They struggle with detecting implicit hate, offensive language, and gender biases due to the subjective and context-dependent nature of these issues. Moreover, their reliance on training data can inadvertently reinforce societal biases, leading to inconsistencies and ethical concerns in their outputs. To explore the limitations of LLMs in this role, we developed an experimental framework based on state-of-the-art (SOTA) models to assess human emotions and offensive behaviors. The framework introduces a unified benchmark dataset encompassing 49 distinct categories spanning the wide spectrum of human emotions, offensive and hateful text, and gender and racial biases. Furthermore, we introduced SafePhi, a QLoRA fine-tuned version of Phi-4, adapting diverse ethical contexts and outperforming benchmark moderators by achieving a Macro F1 score of 0.89, where OpenAI Moderator and Llama Guard score 0.77 and 0.74, respectively. This research also highlights the critical domains where LLM moderators consistently underperformed, pressing the need to incorporate more heterogeneous and representative data with human-in-the-loop, for better model robustness and explainability.",
+ isbn="978-3-032-13184-3"
  }
+
+ ```