---
license: apache-2.0
task_categories:
  - text-classification
language:
  - en
tags:
  - Harmful
  - toxic
  - spam
  - negative
size_categories:
  - 1K<n<10K
---

# 🦣 Mastodon Wild Data for Harmful Content Detection

## Overview

The Harmful Texts on Mastodon dataset is a human-annotated corpus of 3,000 English posts collected from the decentralized social media platform Mastodon between December 2024 and February 2025.
It is designed to evaluate the robustness, generalization, and personalization capabilities of large language models (LLMs) and in-context learning (ICL) approaches for harmful content detection in real-world scenarios.

Unlike existing benchmark datasets, which are typically curated and balanced, this dataset captures the natural distribution, domain shifts, and semantic overlaps present in real-world social discourse.
Each post is annotated at three granularities (binary, multi-class, and multi-label), allowing flexible evaluation under multiple task formulations.


## 📊 Dataset Structure

| Granularity | Labels | Description |
|---|---|---|
| Binary | benign, harmful | Basic harmfulness classification. |
| Multi-class | benign, toxic, spam, negative | Mutually exclusive fine-grained categories. |
| Multi-label | one or more of {benign, toxic, spam, negative} | Allows overlapping or composite labels for nuanced real-world cases. |
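
A minimal loading sketch with the 🤗 `datasets` library. The repo id below is a placeholder (this card does not state it), so substitute the actual id shown on this page:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the actual id from this dataset page.
ds = load_dataset("raphaelzrf/mastodon-harmful-texts", split="train")

print(ds)     # features (post text plus the three label columns) and row count
print(ds[0])  # one post with its binary, multi-class, and multi-label fields
```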

## 🧠 Motivation

Existing datasets such as SST-2, TextDetox, and UCI SMS provide clean, well-curated benchmarks for harmful content detection.
However, real-world moderation is far more complex: social media posts are ambiguous, noisy, and often contain overlapping intents.
For example, a post can simultaneously express anger (negative) while using profanity (toxic) or contain excessive hashtags (spam-like) without malicious intent.

The Mastodon Wild Data dataset addresses these limitations by introducing a "wild" benchmark that captures the messiness and richness of real-world online discourse.
It aims to:

- Evaluate the robustness and generalization of LLMs under domain shift.
- Reflect the compositional nature of harmful content (e.g., toxic + negative).
- Provide a unified resource for studying multi-task, multi-class, and multi-label formulations.

πŸ—οΈ Data Construction

- **Source:** public Mastodon posts (Dec 2024 – Feb 2025).
- **Initial corpus:** 8,998,738 posts, reduced to 3,948,831 unique English entries.
- **Filtering strategy** (sketched in code after this list):
  1. Randomly sample 15,000 English posts.
  2. Use a Llama-3 (48-shot Random) ICL model for preliminary harmfulness prediction.
  3. Select 1,500 predicted-benign and 1,500 predicted-harmful posts for manual annotation.
- **Final dataset:** 3,000 annotated posts, balanced between harmful and benign examples.
- **Annotation:** each sample is labeled at three levels (binary, multi-class, and multi-label) by trained human annotators.
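
The sampling step can be pictured with a short pandas sketch. The file name, column names, and the toy `predict_harmful` function are all assumptions; the real pipeline used a Llama-3 (48-shot Random) ICL model for the preliminary predictions:

```python
import pandas as pd

def predict_harmful(text: str) -> bool:
    """Toy stand-in for the Llama-3 (48-shot Random) ICL classifier."""
    return any(w in text.lower() for w in ("buy now", "#follow", "idiot"))

# Assumed file of the 3,948,831 unique English posts, with a "text" column.
corpus = pd.read_csv("mastodon_english_unique.csv")

# Step 1: randomly sample 15,000 English posts.
sample = corpus.sample(n=15_000, random_state=0)

# Step 2: preliminary harmfulness prediction on each sampled post.
sample["pred_harmful"] = sample["text"].map(predict_harmful)

# Step 3: 1,500 predicted-benign and 1,500 predicted-harmful posts
# go forward for manual annotation (3,000 total).
to_annotate = pd.concat([
    sample[~sample["pred_harmful"]].head(1_500),
    sample[sample["pred_harmful"]].head(1_500),
])
```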

## 🧾 Label Statistics

### Multi-Class Distribution

| Label | Count | Percentage |
|---|---|---|
| Benign | 1798 | 59.9% |
| Negative | 755 | 25.2% |
| Toxic | 259 | 8.6% |
| Spam | 188 | 6.3% |

### Multi-Label Distribution

| Labels | Count | Labels | Count | Labels | Count |
|---|---|---|---|---|---|
| Benign | 1437 | Benign, Negative | 249 | Benign, Negative, Spam | 11 |
| Negative | 517 | Benign, Spam | 184 | Benign, Negative, Toxic | 13 |
| Spam | 60 | Benign, Toxic | 8 | Benign, Spam, Toxic | 5 |
| Toxic | 6 | Negative, Spam | 10 | Negative, Spam, Toxic | 3 |
| – | – | Negative, Toxic | 339 | – | – |
| – | – | Spam, Toxic | 98 | – | – |
| **Sum** | 2020 | – | 948 | – | 32 |
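
The tables above can be recomputed from the released CSV along these lines; the file name and the column names (`multi_class`, `multi_label`) are assumptions about the schema, not the documented one:

```python
import pandas as pd

df = pd.read_csv("mastodon_harmful_texts.csv")  # assumed file name

# Multi-class distribution (counts and percentages).
counts = df["multi_class"].value_counts()
print(pd.DataFrame({"count": counts,
                    "pct": (100 * counts / len(df)).round(1)}))

# Multi-label combinations, assuming each row stores its labels as a
# comma-separated string (e.g. "Negative, Toxic").
print(df["multi_label"].value_counts())
```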

## 📚 Recommended Usage

This dataset is well-suited for:

- Evaluating In-Context Learning (ICL) and prompt-based personalization methods (see the prompt sketch after this list).
- Studying robustness and domain generalization in harmful content detection.
- Training or testing multi-label or reason-augmented classification frameworks.
- Benchmarking cross-task, multi-task, and multi-modal content moderation models.
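
For the ICL use case, a few-shot prompt for the binary task might be assembled roughly as follows. The demonstrations are invented and the template is one plausible choice, not the paper's exact 48-shot configuration:

```python
# Invented demonstration pairs -- not drawn from the dataset.
DEMOS = [
    ("Lovely sunrise over the harbour this morning.", "benign"),
    ("Buy followers NOW!!! #promo #promo #promo #promo", "harmful"),
]

def build_prompt(query: str) -> str:
    """Concatenate demonstrations and the query into a few-shot prompt."""
    lines = ["Classify each post as benign or harmful.", ""]
    for text, label in DEMOS:
        lines.append(f"Post: {text}\nLabel: {label}\n")
    lines.append(f"Post: {query}\nLabel:")
    return "\n".join(lines)

print(build_prompt("This is the worst take I have read all week."))
```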

βš–οΈ License

The dataset is distributed under the CC BY 4.0 License.
Users should also check the Terms of Service of the specific Mastodon instance the data was collected from (e.g., the mastodon.social Terms of Service) when redistributing or reusing data derived from public posts.


## 🧩 Citation

If you use this dataset, please cite:

```bibtex
@misc{zhang2025onesizefitsallpersonalizedharmfulcontent,
      title={Beyond One-Size-Fits-All: Personalized Harmful Content Detection with In-Context Learning},
      author={Rufan Zhang and Lin Zhang and Xianghang Mi},
      year={2025},
      eprint={2511.05532},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2511.05532},
}
```