---
license: mit
task_categories:
  - text-classification
  - token-classification
language:
  - zh
  - ja
  - pt
  - fr
  - de
  - ru
tags:
  - toxicity
  - hatespeech
pretty_name: Multi-Lingual Social Network Toxicity
size_categories:
  - 100K<n<1M
---

# MLSNT: Multi-Lingual Social Network Toxicity Dataset

MLSNT is a multi-lingual toxicity-detection dataset created through an LLM-assisted label transfer pipeline. It supports span-level and category-specific classification of toxic content, enabling efficient and scalable moderation across languages and platforms.

This dataset is introduced in the following paper:

> **Unified Game Moderation: Soft-Prompting and LLM-Assisted Label Transfer for Resource-Efficient Toxicity Detection**
> 🏆 Accepted at KDD 2025, Applied Data Science Track


## 🧩 Overview

MLSNT harmonizes 15 publicly available toxicity datasets across 7 languages using GPT-4o-mini to create consistent binary and fine-grained labels. It is suitable for both training and evaluating toxicity classifiers in multi-lingual, real-world moderation systems.
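
For quick experimentation, the dataset can be loaded with the 🤗 `datasets` library. The sketch below is minimal; the repo id `RSTZZZZ/MLSNT` and the `train` split name are assumptions inferred from this page, so check the Hub page header for the exact values.

```python
# Minimal loading sketch. The repo id and split name below are assumptions
# inferred from this page, not confirmed identifiers.
from datasets import load_dataset

ds = load_dataset("RSTZZZZ/MLSNT")  # assumed repo id
print(ds)              # available splits and row counts
print(ds["train"][0])  # first row, assuming a "train" split exists
```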


## 🌍 Supported Languages

- 🇫🇷 French (`fr`)
- 🇩🇪 German (`de`)
- 🇵🇹 Portuguese (`pt`)
- 🇷🇺 Russian (`ru`)
- 🇨🇳 Simplified Chinese (`zh-cn`)
- 🇹🇼 Traditional Chinese (`zh-tw`)
- 🇯🇵 Japanese (`ja`)

## 🏗️ Construction Method

1. **Source Datasets.** 15 human-annotated datasets were gathered from [hatespeechdata.com](https://hatespeechdata.com) and peer-reviewed publications.
2. **LLM-Assisted Label Transfer.** GPT-4o-mini was prompted to re-annotate each instance into a unified label schema. Only examples where the human and LLM annotations agreed were retained; a minimal sketch of this agreement filter follows the list.
3. **Toxicity Categories.** Labels are fine-grained categories (e.g., threat, hate, harassment).
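
To make step 2 concrete, here is a minimal, illustrative sketch of agreement filtering; the field names `human_toxic` and `llm_toxic` and the example rows are hypothetical, and the paper's actual prompts and label schema are not reproduced here.

```python
# Illustrative agreement filter: keep a row only when the original human
# label and the GPT-4o-mini re-annotation agree. Field names are hypothetical.

def keep_if_agreed(rows):
    """Return only the rows where human and LLM annotations agree."""
    return [r for r in rows if r["human_toxic"] == r["llm_toxic"]]

rows = [
    {"text": "example a", "human_toxic": True, "llm_toxic": True},   # retained
    {"text": "example b", "human_toxic": True, "llm_toxic": False},  # discarded
]
print(keep_if_agreed(rows))  # only "example a" survives
```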


## 📊 Dataset Statistics

| Language | Total Samples | % Discarded | Toxic % (Processed) |
|---|---:|---:|---:|
| German (HASOC, etc.) | ~13,800 | 28–69% | 32–56% |
| French (MLMA) | ~3,200 | 20% | 94% |
| Russian | ~14,300 | ~40% | 33–54% |
| Portuguese | ~21,000 | 20–44% | 26–50% |
| Japanese | ~2,000 | 10–25% | 17–45% |
| Chinese (Simplified) | ~34,000 | 29–46% | 48–61% |
| Chinese (Traditional) | ~65,000 | 37% | ~9% |

## 💾 Format

Each row in the dataset includes the following fields (the sketch after this list shows how they fit together):

- `full_text`: the original utterance or message.
- `start_string_index`: a list of character offsets where toxic spans start in `full_text`.
- `end_string_index`: a list of character offsets where toxic spans end.
- `category_id`: a list of toxic category IDs (integer values).
- `final_label`: a list of toxic category names (string values).
- `min_category_id`: the minimum toxic category ID in the row, used as the primary label.
- `match_id`: a unique identifier composed of the original dataset name and a row-level ID.
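
A sketch of how the span fields line up, using an invented record (not a real MLSNT row) and assuming end offsets are exclusive, Python-slice style:

```python
# Invented record for illustration only; offsets assume Python-slice
# (end-exclusive) semantics, which is an assumption about the format.
row = {
    "full_text": "you are awful and I hate you",
    "start_string_index": [8, 18],
    "end_string_index": [13, 28],
    "category_id": [7, 4],
    "final_label": ["Insults", "Hate"],
    "min_category_id": 4,
    "match_id": "source_dataset-12345",
}

# The i-th entries of the parallel lists describe the same toxic span.
for start, end, label in zip(
    row["start_string_index"], row["end_string_index"], row["final_label"]
):
    print(f"{label}: {row['full_text'][start:end]!r}")
# Insults: 'awful'
# Hate: 'I hate you'
```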

## 🗂️ Category ID Mapping

| ID | Friendly Name |
|---|---|
| 0 | Non Toxic |
| 1 | Threats (Life Threatening) |
| 2 | Minor Endangerment |
| 3 | Threats (Non-Life Threatening) |
| 4 | Hate |
| 5 | Sexual Content / Harassment |
| 6 | Extremism |
| 7 | Insults |
| 8 | Controversial / Potentially Toxic Topic |
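
The mapping transcribes directly into code. A small sketch follows, assuming non-toxic rows carry only category 0, so any `min_category_id` above 0 marks the row as toxic; that binary reading is an inference from the field descriptions above, not stated in the paper.

```python
# Category names transcribed from the table above.
CATEGORY_NAMES = {
    0: "Non Toxic",
    1: "Threats (Life Threatening)",
    2: "Minor Endangerment",
    3: "Threats (Non-Life Threatening)",
    4: "Hate",
    5: "Sexual Content / Harassment",
    6: "Extremism",
    7: "Insults",
    8: "Controversial / Potentially Toxic Topic",
}

def primary_label(min_category_id: int) -> str:
    """Readable name for a row's primary label (its min_category_id)."""
    return CATEGORY_NAMES[min_category_id]

def is_toxic(min_category_id: int) -> bool:
    # Assumption: non-toxic rows carry only category 0.
    return min_category_id > 0

print(primary_label(4), is_toxic(4))  # Hate True
```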

## 🔬 Applications

- Fine-tuning multi-lingual moderation systems
- Cross-lingual toxicity benchmarking
- Training span-level and category-specific toxicity detectors
- Studying LLM label-transfer reliability and agreement filtering

## 📜 Citation

If you use MLSNT in academic work, please cite:

```bibtex
@inproceedings{yang2025mlsnt,
  title={Unified Game Moderation: Soft-Prompting and LLM-Assisted Label Transfer for Resource-Efficient Toxicity Detection},
  author={Zachary Yang and Domenico Tullo and Reihaneh Rabbany},
  booktitle={Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD)},
  year={2025}
}
```