Update README.md
README.md (CHANGED)

@@ -70,3 +70,62 @@ from datasets import load_dataset

dataset = load_dataset("your-hf-username/combined-dataset")
print(dataset['train'][0])
# [Your Model/Dataset Name]

This resource accompanies our paper accepted in the **Late Breaking Work** track of **HCI International 2025**.

📄 **Paper Title:** _"Towards Safer AI Moderation: Evaluating LLM Moderators Through a Unified Benchmark Dataset and Advocating a Human-First Approach"_
👩‍💻 **Authors:** Naseem Machlovi, [Other Authors]
📍 **Conference:** HCI International 2025 – Late Breaking Work
🔗 [Link to Proceedings](https://2025.hci.international/proceedings.html)

---

## ✨ Description

As AI systems become more integrated into daily life, the need for safer and more reliable moderation has never been greater. Large Language Models (LLMs) have demonstrated remarkable capabilities, surpassing earlier models in complexity and performance. Their evaluation across diverse tasks has consistently showcased their potential, enabling the development of adaptive and personalized agents. However, despite these advancements, LLMs remain prone to errors, particularly in areas requiring nuanced moral reasoning. They struggle to detect implicit hate, offensive language, and gender bias because these issues are subjective and context-dependent. Moreover, their reliance on training data can inadvertently reinforce societal biases, leading to inconsistencies and ethical concerns in their outputs. To explore the limitations of LLMs in this role, we developed an experimental framework based on state-of-the-art (SOTA) models to assess human emotions and offensive behaviors. The framework introduces a unified benchmark dataset encompassing 49 distinct categories spanning the wide spectrum of human emotions, offensive and hateful text, and gender and racial biases. Furthermore, we introduced SafePhi, a QLoRA fine-tuned version of Phi-4 adapted to diverse ethical contexts, which outperforms benchmark moderators with a Macro F1 score of 0.89, where OpenAI Moderator and Llama Guard score 0.77 and 0.74, respectively. This research also highlights the critical domains where LLM moderators consistently underperform, underscoring the need to incorporate more heterogeneous and representative data with a human-in-the-loop for better model robustness and explainability.
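
As a rough illustration of the QLoRA fine-tuning described above, the sketch below shows a standard 4-bit-quantization-plus-LoRA setup with Hugging Face `transformers` and `peft`. The base-model ID and all hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal QLoRA sketch (illustrative; not the paper's exact recipe).
# Assumes: pip install transformers peft bitsandbytes accelerate
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-4",               # assumed base-model ID
    quantization_config=bnb_config,
    device_map="auto",
)

# Small trainable low-rank adapters on top (the "LoRA" part);
# r / alpha / dropout here are placeholders, not the paper's values
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```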

## 🚀 Usage
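
A minimal usage sketch, assuming the dataset repository from above and a hypothetical SafePhi checkpoint ID (`your-hf-username/SafePhi`); replace both IDs with the published ones.

```python
# Minimal usage sketch; repository IDs below are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the unified benchmark dataset (49 categories)
dataset = load_dataset("your-hf-username/combined-dataset")
print(dataset["train"][0])

# Load SafePhi (hypothetical checkpoint ID) and moderate one message
tokenizer = AutoTokenizer.from_pretrained("your-hf-username/SafePhi")
model = AutoModelForCausalLM.from_pretrained(
    "your-hf-username/SafePhi", device_map="auto"
)

prompt = "Classify the following message for unsafe content:\nYou people are the worst."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```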

## 📖 Citation

```bibtex
@inproceedings{machlovi2025hci,
  title     = {Towards Safer AI Moderation: Evaluating LLM Moderators Through a Unified Benchmark Dataset and Advocating a Human-First Approach},
  author    = {Naseem Machlovi and ...},
  booktitle = {HCI International 2025 Late Breaking Work -- Proceedings},
  year      = {2025},
  note      = {Accepted. Session [XX], Paper ID [XYZ]},
  url       = {https://2025.hci.international/proceedings.html}
}
```