---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- bn
tags:
- hate-speech-classification
- text-classification
pretty_name: 'BanglaMultiHate: Multi-task Bangla Hate-speech Dataset'
size_categories:
- 10K<n<100K
dataset_info:
- config_name: BanglaMultiHate
  splits:
  - name: train
    num_examples: 35522
  - name: dev
    num_examples: 5024
  - name: test
    num_examples: 10200
configs:
- config_name: BanglaMultiHate
  data_files:
  - split: train
    path: data/train.json
  - split: dev
    path: data/dev.json
  - split: test
    path: data/test.json
---

# BanglaMultiHate: Multi-task Bangla Hate-speech Dataset
The BanglaMultiHate dataset consists of public comments collected from YouTube videos via the YouTube API, primarily from Somoy TV, a popular Bangla news channel. The comments span 19 categories (Business, Celebrities, Disaster, Entertainment, Fashion, Geopolitics, Health, History, International, Lifestyle, Literature, Miscellaneous, National, Opinion, Politics, Religion, Science, Sports, and Technology) and 120 subcategories. The dataset was manually annotated, with inter-annotator agreement scores of $0.71$, $0.84$, and $0.79$ for the type-of-hate, severity-of-hate, and target-of-hate tasks, respectively, indicating substantial to almost perfect agreement.
## Shared Task: Bangla Hate Speech Identification

Check out our shared task on this dataset: https://github.com/AridHasan/blp25_task1
## Dataset

### Data format

Each file is in JSON format: a list of objects, where each object adheres to the following structure:

```json
{
  "id": "",
  "comment": "",
  "category": "",
  "subcategory": "",
  "type_of_hate": "",
  "severity_of_hate": "",
  "target_of_hate": ""
}
```
Where:
- `id`: a unique identifier for the comment
- `comment`: the comment text
- `category`: the video's category
- `subcategory`: the video's subcategory
- `type_of_hate`: Abusive, Sexism, Religious Hate, Political Hate, Profane, or None
- `severity_of_hate`: Little to None, Mild, or Severe
- `target_of_hate`: Individuals, Organizations, Communities, or Society
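The field and label constraints above can be checked programmatically. The following is a minimal sketch; the `validate_record` helper and the hard-coded label sets are illustrative (transcribed from the field descriptions above), not part of the dataset's official tooling:

```python
# Illustrative validator for a single BanglaMultiHate record (not official tooling).
# Label sets below are transcribed from the field descriptions in this card.
HATE_TYPES = {"Abusive", "Sexism", "Religious Hate", "Political Hate", "Profane", "None"}
HATE_SEVERITIES = {"Little to None", "Mild", "Severe"}
HATE_TARGETS = {"Individuals", "Organizations", "Communities", "Society"}
REQUIRED_KEYS = {"id", "comment", "category", "subcategory",
                 "type_of_hate", "severity_of_hate", "target_of_hate"}

def validate_record(record: dict) -> list:
    """Return a list of problems found in one record (empty list = valid)."""
    problems = []
    missing = REQUIRED_KEYS - record.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
        return problems
    if record["type_of_hate"] not in HATE_TYPES:
        problems.append(f"unknown type_of_hate: {record['type_of_hate']!r}")
    if record["severity_of_hate"] not in HATE_SEVERITIES:
        problems.append(f"unknown severity_of_hate: {record['severity_of_hate']!r}")
    if record["target_of_hate"] not in HATE_TARGETS:
        problems.append(f"unknown target_of_hate: {record['target_of_hate']!r}")
    return problems

record = {
    "id": "490273",
    "comment": "...",
    "category": "National",
    "subcategory": "...",
    "type_of_hate": "Political Hate",
    "severity_of_hate": "Little to None",
    "target_of_hate": "Individuals",
}
print(validate_record(record))  # → []
```

Note that the released files may use singular surface forms for some target labels (the example record below says "Organization"), so a real validator should accommodate whichever forms actually appear in the data.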
### Example

```json
{
  "id": "490273",
  "comment": "আওয়ামী লীগের সন্ত্রাসী কবে দরবেন এই সাহস আপনাদের নাই",
  "category": "National",
  "subcategory": "বাংলার সময়",
  "type_of_hate": "Political Hate",
  "severity_of_hate": "Little to None",
  "target_of_hate": "Organization"
}
```
### How to download data

```python
import os
import json

from datasets import load_dataset

output_dir = "./BanglaMultiHate/"

dataset = load_dataset("AridHasan/BanglaMultiHate")

# Save the dataset to the specified directory. This saves all splits.
dataset.save_to_disk(output_dir)

# Iterate over the splits to also save the data in JSON format.
for split in ["train", "dev", "test"]:
    if split not in dataset:
        continue
    data = [item for item in dataset[split]]
    output_file = os.path.join(output_dir, f"{split}.json")
    with open(output_file, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=4)
```
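Once the splits are saved, it is straightforward to inspect label distributions. A minimal sketch; the `label_distribution` helper is illustrative and assumes records follow the JSON structure described above:

```python
from collections import Counter

def label_distribution(records, field="type_of_hate"):
    """Count label frequencies for one annotation field across a list of records."""
    return Counter(r[field] for r in records)

# Tiny in-memory example; in practice, load a saved split,
# e.g. json.load(open("./BanglaMultiHate/train.json", encoding="utf-8")).
records = [
    {"type_of_hate": "Political Hate"},
    {"type_of_hate": "None"},
    {"type_of_hate": "Political Hate"},
]
print(label_distribution(records)["Political Hate"])  # → 2
```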
## Citation

```bibtex
@article{hasan2025llm,
  title   = {LLM-Based Multi-Task Bangla Hate Speech Detection: Type, Severity, and Target},
  author  = {Hasan, Md Arid and Alam, Firoj and Hossain, Md Fahad and Naseem, Usman and Ahmed, Syed Ishtiaque},
  year    = {2025},
  journal = {arXiv preprint arXiv:2510.01995},
  url     = {https://arxiv.org/abs/2510.01995},
}

@inproceedings{blp2025-overview-task1,
  title     = "Overview of BLP 2025 Task 1: Bangla Hate Speech Identification",
  author    = "Hasan, Md Arid and Alam, Firoj and Hossain, Md Fahad and Naseem, Usman and Ahmed, Syed Ishtiaque",
  booktitle = "Proceedings of the Second International Workshop on Bangla Language Processing (BLP-2025)",
  editor    = "Alam, Firoj and Kar, Sudipta and Chowdhury, Shammur Absar and Hassan, Naeemul and Prince, Enamul Hoque and Tasnim, Mohiuddin and Rony, Md Rashad Al Hasan and Rahman, Md Tahmid Rahman",
  month     = dec,
  year      = "2025",
  address   = "India",
  publisher = "Association for Computational Linguistics",
}
```
## License
BanglaMultiHate is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
You should have received a copy of the license along with this work. If not, see http://creativecommons.org/licenses/by-nc-sa/4.0/.