| --- |
| license: cc-by-sa-4.0 |
| task_categories: |
| - text-classification |
| - question-answering |
| - zero-shot-classification |
| language: |
| - bn |
| tags: |
| - communal |
| - violence |
| - dataset |
| - classification |
| - bengali |
| - low-resource |
| - annotator-disagreement |
| - multi-label |
| - bangla |
| - social-media |
| - anonymized |
| - human-annotation |
| pretty_name: DANGA |
| size_categories: |
| - 10K<n<100K |
| --- |
| > [!WARNING] |
| > **Content Warning:** This dataset contains **violent, hateful, and severely offensive language** in Bengali, including communal slurs, dehumanizing rhetoric, threats, and incitement to violence targeting religious, ethnic, and cultural communities. It is intended **solely for research purposes** (hate speech detection, content moderation, NLP). Do not use this dataset to generate, promote, or amplify harmful content. |
|
|
| <!-- > [!NOTE] |
| > This dataset has been released as part of the **Adaption Competition**, with support from **Adaptive Data by Adaption**, whose initiative motivated us to formalize, restructure, and substantially update this dataset prior to its open release. |
| --> |
|
|
| # BanDANGA: A Bangla Dataset on Aggressive Narratives and Group-based Attacks |
|
|
| **দাঙ্গা** (DANGA) is an expert-annotated Bengali dataset of **12,720** social media texts classified for communal and sectarian violence. It captures violence across four identity dimensions: religion, ethnicity, socioculture, and nondenominational. Each dimension is annotated with up to four expression types: derogation, antipathy, prejudication, and repression. The dataset includes full annotator-disagreement metadata: individual votes, resolution strategies, and anonymized annotator pairs. This makes it suitable for multi-label classification, preference learning (DPO), and LLM fine-tuning for low-resource hate speech detection. |
|
|
|
|
|
| ## Authors & Attribution |
|
|
| DANGA was developed by **Istiak Shihab** and **Nazia Tasnim** as part of a broader effort to advance resources for the Bengali language. |
|
|
| The work was carried out in collaboration with **Bengali.AI**, a non-profit focused on building and promoting open technologies for Bengali. |
|
|
| --- |
|
|
| ## Dataset Summary |
|
|
| | | Count | |
| |---|---| |
| | Total samples | 12,720 | |
| | Violent | 4,459 (35.1%) | |
| | Non-violent | 8,261 (64.9%) | |
| | Multi-category violent | 146 | |
| | Annotator pairs | 4 (8 annotators) | |
| | Disputed samples | 4,963 (39.0%) | |
| | Expert-resolved disputes | 1,553 | |
|
|
| ## Schema |
|
|
| Each record is a JSON object with the following structure: |
|
|
| ```json |
| { |
| "text": "হিন্দু মালুরা বাংলাদেশে বসবাস করে ...", |
| "violent": true, |
| "labels": { |
| "religion": ["derogation", "antipathy", "prejudication"], |
| "ethnicity": [], |
| "socioculture": [], |
| "nondenominational": ["prejudication"] |
| }, |
| "annotation": { |
| "disputed": true, |
| "resolution": "third-party", |
| "annotators": ["C", "D"], |
| "votes": { |
| "annot_1": { |
| "religion": ["derogation", "antipathy", "prejudication"], |
| "ethnicity": [], |
| "socioculture": [], |
| "nondenominational": ["derogation", "antipathy", "prejudication"] |
| }, |
| "annot_2": { |
| "religion": ["derogation", "antipathy", "prejudication"], |
| "ethnicity": [], |
| "socioculture": [], |
| "nondenominational": [] |
| } |
| } |
| } |
| } |
| ``` |
|
|
| ### Fields |
|
|
| | Field | Type | Description | |
| |---|---|---| |
| | `text` | string | Bengali social media text (YouTube, Facebook, and newspaper comments) | |
| | `violent` | bool | Whether the text contains any violence expression | |
| | `labels` | object | Gold-standard labels across 4 identity dimensions | |
| | `annotation` | object | Full annotator disagreement metadata | |
|
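The fields above can be consumed with the standard library alone; the sketch below parses a toy record in the card's schema (the JSON is illustrative, not a real dataset row) and shows the invariant that `violent` is true exactly when some dimension carries at least one expression type.

```python
import json

# Toy record following the Schema section above (not a real dataset row).
record = json.loads("""
{
  "text": "example text",
  "violent": true,
  "labels": {"religion": ["derogation"], "ethnicity": [],
             "socioculture": [], "nondenominational": []},
  "annotation": {"disputed": false, "resolution": null,
                 "annotators": ["A", "B"], "votes": {}}
}
""")

# A record is violent when any dimension carries at least one expression type.
is_violent = any(record["labels"][dim]
                 for dim in ("religion", "ethnicity",
                             "socioculture", "nondenominational"))
assert is_violent == record["violent"]
```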
|
| ### Identity Dimensions |
|
|
| | Dimension | Column | Target Communities | Examples | |
| |---|---|---|---| |
| | **Religio-communal** | `religion` | Religious identity groups | Muslim, Hindu, Christian, Ahmadia, Shia, Atheist, Baul | |
| | **Ethno-communal** | `ethnicity` | Ethnic identity groups | Bihari, Rohingya, Chakma, Adibashi | |
| | **Sociocultural** | `socioculture` | Regional/geographic/cultural identity | Sylheti, Kashmiri, Brahmanbaria, Cultural Baul | |
| | **Nondenominational** | `nondenominational` | Individual, gender, political targets | Misogyny, homophobia, political entities, government | |
|
|
| ### Expression Types (Degree of Violence) |
|
|
| Each identity dimension is annotated with zero or more expression types: |
|
|
| | Expression | Description | Count | |
| |---|---|---| |
| | **Derogation** | Communal slurs, incivility, dehumanization, bullying | 2,212 | |
| | **Prejudication** | False accusation, victim blaming, stereotyping, justifying mistreatment | 2,125 | |
| | **Antipathy** | Alienation, deportation, stripping rights, internalized hatred | 827 | |
| | **Repression** | Direct threats, incitement to harm, encouraging violence | 517 | |
|
|
| ### Annotation Metadata |
|
|
| Each sample includes full disagreement provenance: |
|
|
| | Field | Values | Description | |
| |---|---|---| |
| | `disputed` | `true` / `false` | Whether annotators disagreed | |
| | `resolution` | `sided_with_X` / `third-party` / `null` | How the dispute was resolved | |
| | `annotators` | `["X", "Y"]` | Anonymized annotator pair (A–H) | |
| | `votes.annot_1` | labels object | First annotator's original labels | |
| | `votes.annot_2` | labels object | Second annotator's original labels | |
|
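Given the `votes` object, disagreement can be recomputed directly: a record is disputed when the two annotators' label sets differ in any dimension. The sketch below uses the worked example from the Schema section, where the annotators agree on `religion` but differ on `nondenominational`.

```python
def votes_disagree(annotation: dict) -> bool:
    """Return True when the two annotators' label sets differ in any
    dimension; this mirrors the `disputed` flag in the schema."""
    a = annotation["votes"]["annot_1"]
    b = annotation["votes"]["annot_2"]
    return any(sorted(a[dim]) != sorted(b[dim]) for dim in a)

# The example from the Schema section: agreement on `religion`,
# disagreement on `nondenominational`, so the record is disputed.
example = {
    "votes": {
        "annot_1": {"religion": ["derogation", "antipathy", "prejudication"],
                    "ethnicity": [], "socioculture": [],
                    "nondenominational": ["derogation", "antipathy", "prejudication"]},
        "annot_2": {"religion": ["derogation", "antipathy", "prejudication"],
                    "ethnicity": [], "socioculture": [],
                    "nondenominational": []},
    }
}
assert votes_disagree(example)
```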
|
| **Resolution distribution (4,963 disputed samples):** |
|
|
| | Resolution | Count | |
| |---|---| |
| | Sided with first annotator | 2,205 | |
| | Sided with second annotator | 1,205 | |
| | Third-party expert label | 1,553 | |
|
|
| ## Taxonomy |
|
|
| The dataset employs a **4×4 orthogonal taxonomy**: |
|
|
| - **4 Identity dimensions** (WHO is targeted): Religio-communal, Ethno-communal, Sociocultural, Nondenominational |
| - **4 Expression types** (HOW violence is expressed): Derogation, Antipathy, Prejudication, Repression |
| - Posts can have **multiple identity categories** and **multiple expression types** simultaneously (multi-label) |
|
|
| This produces a theoretical space of 16 fine-grained violence classes. |
|
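For multi-label training, the 4×4 taxonomy flattens naturally into a 16-dimensional binary indicator vector, one entry per (dimension, expression) pair. A minimal sketch:

```python
from itertools import product

DIMENSIONS = ["religion", "ethnicity", "socioculture", "nondenominational"]
EXPRESSIONS = ["derogation", "antipathy", "prejudication", "repression"]

def to_multilabel(labels: dict) -> list[int]:
    """Flatten the 4x4 taxonomy into a 16-dim binary vector,
    ordered by (dimension, expression) pairs."""
    return [int(expr in labels.get(dim, []))
            for dim, expr in product(DIMENSIONS, EXPRESSIONS)]

# Example: two religion expressions plus one nondenominational one.
vec = to_multilabel({"religion": ["derogation", "antipathy"],
                     "ethnicity": [], "socioculture": [],
                     "nondenominational": ["prejudication"]})
assert len(vec) == 16 and sum(vec) == 3
```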
|
| ## Source |
|
|
| | Metric | Value | |
| |---|---| |
| | Language | Bengali (বাংলা) | |
| | Source | YouTube, Facebook, Newspaper comments | |
|
|
| ## Intended Uses |
|
|
| - **Violence detection** in Bengali social media |
| - **Multi-label classification** research |
| - **Annotator disagreement modeling** and calibration |
| - **LLM fine-tuning** for hate speech and communal violence tasks |
| - **Preference learning (DPO/RLHF)** using annotator votes as chosen/rejected pairs |
| - **Cross-lingual transfer** for low-resource language hate speech detection |
|
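For the preference-learning use case, disputes resolved by siding with one annotator yield natural chosen/rejected pairs. The sketch below assumes `resolution` values of the form `sided_with_1` / `sided_with_2` naming the winning annotator slot; adapt the parsing if the released values differ.

```python
def to_preference_pair(record: dict):
    """Sketch: turn a resolved dispute into a DPO-style pair.

    Assumption: `resolution` strings like "sided_with_1" name the
    winning annotator slot. Undisputed or expert-relabeled records
    yield no clean pair and return None.
    """
    ann = record["annotation"]
    res = ann.get("resolution") or ""
    if not (ann.get("disputed") and res.startswith("sided_with")):
        return None
    winner = "annot_1" if res.endswith("1") else "annot_2"
    loser = "annot_2" if winner == "annot_1" else "annot_1"
    return {
        "prompt": record["text"],
        "chosen": ann["votes"][winner],
        "rejected": ann["votes"][loser],
    }

# Usage on a toy disputed record resolved in favor of the first annotator.
pair = to_preference_pair({
    "text": "t",
    "annotation": {"disputed": True, "resolution": "sided_with_1",
                   "votes": {"annot_1": {"religion": ["derogation"]},
                             "annot_2": {"religion": []}}},
})
assert pair["chosen"] == {"religion": ["derogation"]}
```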
|
| ## License |
|
|
| This dataset is released under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/). It is intended for research purposes only. |
|
|
| ## Anonymization Policy |
|
|
| To protect the privacy of individuals whose content appears in this dataset, the following anonymization measures were applied: |
|
|
| - **Annotator identities** are fully anonymized. All annotators are referred to only by a randomly assigned letter (A–H). No names, institutional affiliations, or demographic information about annotators are disclosed. |
| - **Author/poster identities** from source platforms (YouTube, Facebook, newspaper comment sections) are not included in the dataset. Usernames and profile references have been removed or replaced. |
| - **Personal mentions** within text (e.g., tagged usernames, phone numbers, identifiable personal details) were removed or masked where detected during preprocessing. |
| - **Source URLs and post IDs** that could be used to re-identify individuals are not released as part of this dataset. |
|
|
| Researchers who discover re-identification risks are encouraged to contact the authors. |
|
|
| ## Ethical Considerations |
|
|
| This content is preserved for research purposes, specifically to build systems that can detect and mitigate such violence. The following guidelines apply: |
|
|
| - The dataset **must not** be used to generate, promote, or amplify hate speech or communal violence |
| - The dataset is intended for **research use only** (NLP, content moderation, computational social science) |
| - All annotators have been **fully anonymized** (see Anonymization Policy above) |
| - The data was collected from **publicly available** social media comments; personal identifiers have been removed |
| - Users of this dataset are expected to adhere to responsible AI and research ethics guidelines |