xlm-twitter-counter_rad_level
This model is a classifier fine-tuned on the anonymized version of the Counter dataset (Riabi et al., 2025; Riabi et al., 2024) to predict radicalization levels in radical content. It is based on XLM-T.
The model was introduced as part of the research presented in:
- Beyond Dataset Creation: Critical View of Annotation Variation and Bias Probing of a Dataset for Online Radical Content Detection (Riabi et al., 2025)
- Cloaked Classifiers: Pseudonymization Strategies on Sensitive Classification Tasks (Riabi et al., 2024)
Intended Use
- Research on extremist or radical language detection
- Analysis of online hate speech and coded in-group language
- Supporting moderation and intervention efforts in academic or policy contexts
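For research use, the classifier can be loaded with the `transformers` library. The exact Hub repository ID is not shown on this card, so the `MODEL_ID` below is a placeholder based on the model name above; the label set is also an assumption, since the card does not enumerate the radicalization levels. A minimal sketch:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Placeholder -- replace with the actual Hub path of this model.
MODEL_ID = "xlm-twitter-counter_rad_level"

def predict_rad_level(texts, model_id=MODEL_ID):
    """Return the predicted radicalization-level label index for each text."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    model.eval()
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.argmax(dim=-1).tolist()

# Usage (downloads the model weights on first call):
# predict_rad_level(["example tweet text"])
```

Map the returned indices to label names via `model.config.id2label` once the actual checkpoint is loaded.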
Ethical Considerations
Handling extremist text data carries significant ethical risks. This model was developed under strict research protocols and is released only for responsible, academic, and policy research purposes. Repeated exposure to extremist content can be harmful; proper support and mental health considerations are advised for practitioners using this model.
Citation
If you use this model, please cite the following works:
@inproceedings{riabi2025beyond,
  title     = {Beyond Dataset Creation: Critical View of Annotation Variation and Bias Probing of a Dataset for Online Radical Content Detection},
  author    = {Riabi, Arij and Mouilleron, Virginie and Mahamdi, Menel and Antoun, Wissam and Seddah, Djamé},
  booktitle = {Proceedings of the 31st International Conference on Computational Linguistics},
  year      = {2025},
  url       = {https://aclanthology.org/2025.coling-main.578/}
}

@inproceedings{riabi2024counter,
  title     = {Cloaked Classifiers: Pseudonymization Strategies on Sensitive Classification Tasks},
  author    = {Riabi, Arij and Mouilleron, Virginie and Mahamdi, Menel and Seddah, Djamé},
  booktitle = {Proceedings of the Workshop on Privacy in NLP},
  year      = {2024},
  url       = {https://aclanthology.org/2024.privatenlp-1.13/}
}