This repository contains parsed datasets used in the RuleFollower project.
- **HateCoT**
- **Implicit Hate**

Each folder contains a `data.csv` file with the parsed annotations. Each dataset is capped at 5k samples (the full dataset is used if it has fewer than 5k).

Each data file includes standardized columns:
- `Text`: The text input for annotation
- `id`: Unique identifier for the row
- `source`: Name of the original dataset or paper
- `ground_truth`: (Optional) Gold label, if available in the original dataset
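As a quick sanity check, a parsed file can be read with Python's standard library and validated against this schema. The rows below are invented placeholders, not real data:

```python
import csv
import io

# A tiny in-memory stand-in for a parsed data.csv; real files live under
# each dataset's folder, and these example rows are invented.
sample = io.StringIO(
    "Text,id,source,ground_truth\n"
    "an example reply,0,rumoreval,support\n"
    "another example reply,1,rumoreval,deny\n"
)
rows = list(csv.DictReader(sample))

# Every parsed file carries at least Text, id, and source;
# ground_truth appears only when the original dataset has gold labels.
required = {"Text", "id", "source"}
assert required.issubset(rows[0].keys())
print(len(rows), "ground_truth" in rows[0])
```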
### RumorEval
- **Text content**: Replies to source tweets about rumored events
- **Source**: [Gorrell et al. (2019)](https://aclanthology.org/S19-2147/)
- **Dataset**: [Huggingface](https://huggingface.co/datasets/strombergnlp/rumoureval_2019)
- **Annotation goal**: Classify reply stance as `support`, `deny`, `query`, or `comment`
- **Ground truth**: Provided (stance label)
- **Note**: Only reply texts are retained for classification.

---
### HateCoT
- **Text content**: Social media posts from 8 hate/offensive speech datasets
- **Source**: [Nghiem and Daumé III (2024)](https://arxiv.org/abs/2403.11456)
- **Dataset**: [Github](https://github.com/hnghiem-nlp/hatecot?tab=readme-ov-file)
- **Annotation goal**: Classify post as `benign`, `offensive`, or `hateful`
- **Ground truth**: Provided
- **Note**: The original HateCoT dataset combines samples from 8 hate speech datasets with diverse annotation schemes. We standardized its many fine-grained labels into 3 unified classes:
  - `0 = Benign` (e.g., "Not Hate", "Normal", "Neutral")
  - `1 = Offensive` (e.g., "Toxic", "Offensive")
  - `2 = Hateful` (e.g., "Hate", "Dehumanization", "Directed Abuse")

---
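The label standardization above amounts to a lookup table. The sketch below is illustrative only: the label inventory is an assumed partial sample, not the project's actual mapping:

```python
# Illustrative sketch of collapsing fine-grained HateCoT labels into the
# 3 unified classes; the exact source-label inventory here is an assumption.
LABEL_MAP = {
    # 0 = Benign
    "Not Hate": 0, "Normal": 0, "Neutral": 0,
    # 1 = Offensive
    "Toxic": 1, "Offensive": 1,
    # 2 = Hateful
    "Hate": 2, "Dehumanization": 2, "Directed Abuse": 2,
}

def unify_label(raw: str) -> int:
    """Map a source dataset's label string to the unified 3-class scheme."""
    try:
        return LABEL_MAP[raw]
    except KeyError:
        raise ValueError(f"Unmapped label: {raw!r}")

print(unify_label("Normal"), unify_label("Toxic"), unify_label("Hate"))
```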
### Tweets (2023) / Tweets News (2017) / Tweets (2020–2021)
- **Text content**: Tweets from different public Twitter samples spanning 2017 to 2023, focused on content moderation and related debates. Sample sizes vary across years and sources.
- **Source**: [Gilardi et al. (2023)](https://arxiv.org/abs/2303.15056)
- **Dataset**: [Harvard Dataverse](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/PQYF6M)
- **Ground truth**: Not provided
- **Supported tasks** (from the annotation codebook, excluding political content tasks):

| Task ID | Task Name | Task Description Summary | Labels |
|---------|-----------|--------------------------|--------|
| T1 | **Content Moderation Relevance** | Is the tweet about content moderation? | `relevant` (1), `irrelevant` (0) |
| T3 | **Problem/Solution Frame** | Does the tweet portray content moderation as a problem, a solution, or neither? | `problem`, `solution`, `neutral` |
| T4 | **Policy Frame (Moderation)** | What policy dimension frames the content moderation issue? | e.g., `morality`, `fairness`, `security`, `equality`, `health`, ... |
| T6 | **Stance on Section 230** | Does the tweet support, oppose, or remain neutral about Section 230 of U.S. law? | `positive`, `negative`, `neutral` |
| T7 | **Topic Classification** | What is the topic in relation to content moderation? | `section 230`, `trump ban`, `complaints`, `platform policies`, ... |

- **Note**:
  - Each of these tasks has a dedicated `.txt` task description under `prompt/task_descriptions/`.
  - The same dataset can be reused across tasks; the task is selected by the chosen prompt.
  - Tweets by U.S. Congress members (used for political content tasks) are handled separately (see below).

---
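Because the task is chosen purely by the prompt, pairing a dataset with a task reduces to loading the right task description. The sketch below assumes hypothetical file names derived from the task IDs; the real files under `prompt/task_descriptions/` may be named differently:

```python
from pathlib import Path

# Hypothetical task-description file names, derived from the task IDs above;
# the actual files under prompt/task_descriptions/ may use other names.
TASK_FILES = {
    "T1": "content_moderation_relevance.txt",
    "T3": "problem_solution_frame.txt",
    "T4": "policy_frame.txt",
    "T6": "stance_section_230.txt",
    "T7": "topic_classification.txt",
}

def load_task_description(task_id: str, base: str = "prompt/task_descriptions") -> str:
    """Return the task description text used to prompt the annotator."""
    return Path(base, TASK_FILES[task_id]).read_text()

print(TASK_FILES["T6"])
```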
### Tweets Congressional
- **Text content**: Tweets by U.S. Congress members (2017–2022)
- **Source**: [Gilardi et al. (2023)](https://arxiv.org/abs/2303.15056)
- **Dataset**: [Harvard Dataverse](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/PQYF6M)
- **Annotation goal**: Binary classification: is the tweet **political**?
- **Ground truth**: Not provided

---
### Misinfo / Misinfo Cancer
- **Text content**: News headlines. The Misinfo Reaction Frames corpus is a dataset of 25k news headlines from fact-checked articles about COVID-19, cancer, or climate change.
- **Source**: [Gabriel et al. (2022)](https://arxiv.org/abs/2104.08790)
- **Dataset**: [Github](https://github.com/skgabriel/mrf-modeling)
- **Annotation goal**: Classify headline as **misinformation** or **not**
- **Ground truth**: Provided

---
### Implicit Hate
- **Text content**: A dataset of English-language tweets annotated to capture both explicit and implicit hate speech. Implicit hate includes stereotypical, sarcastic, or coded language that may not be overtly hateful. The data was originally collected from Twitter and the Social Bias Inference Corpus.
- **Source**: [ElSherief et al. (2021)](https://aclanthology.org/2021.emnlp-main.29/)
- **Dataset**: [Github](https://github.com/SALT-NLP/implicit-hate)
- **Annotation goal**: Classify post as `explicit_hate`, `implicit_hate`, or `not_hate`
- **Ground truth**: Provided
- We use Stage 1 annotations for parsing:
  - `explicit_hate`: overt hate speech
  - `implicit_hate`: indirect hate speech (e.g., stereotypes, sarcasm)
  - `not_hate`: no hateful content
- **Note**:
  - The dataset originally included multiple annotation stages (e.g., fine-grained subtypes, implied statements).
  - Additional parsing for Stage 2 (fine-grained subtypes) or Stage 3 (target/implied meaning) will be added later.

---
### GWSD (Global Warming Stance Dataset)
- **Text content**: Opinion spans extracted from global warming news articles published between Jan. 1, 2000 and April 12, 2020 by various U.S. news sources
- **Source**: [Luo et al. (2020)](https://aclanthology.org/2020.findings-emnlp.296/)
- **Dataset**: [Github](https://github.com/yiweiluo/GWStance/blob/master/GWSD.tsv)
- **Annotation goal**: Classify stance toward the statement "Climate change is a serious concern"
  - Labels: `agree`, `neutral`, or `disagree`
- **Ground truth**: Provided

---
### Bureaucracies
- **Text content**: Crisis communication texts (bureaucratic cables and memos from international crises)
- **Source**: [Schub (2022)](https://www.cambridge.org/core/journals/american-political-science-review/article/informing-the-leader-bureaucracies-and-international-crises/EE9C4DF3F68A2DA31E753450DA910053)
- **Dataset**: [Data](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/PXXUCO)
- **Annotation goal**:
  - Info Type Task: Classify whether the text conveys `political` or `military` information relevant to Cold War crisis decision-making.
  - Certainty Task: Determine whether the adviser expresses the information with certainty or uncertainty.
- **Ground truth**: Available only for the Info Type Task

---
## Using the Parsing Script

To convert raw datasets into the standardized format used in this folder, use the helper script:

**Script:** `script/script_parse_dataset.py`

This script loads a dataset in CSV, TSV, JSONL, or Excel format and writes a standardized parsed file under `parsed_data/`.

### What the Script Automatically Handles
- Detects common text columns (e.g., `text`, `tweet`, `post`, `content`)
- Detects common label columns (e.g., `label`, `class`, `ground_truth`, `stance`)
- Standardizes the output columns to `Text`, `id`, `source`, and `ground_truth` (only included if the dataset has labels)
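The auto-detection can be pictured roughly as a first-match scan over candidate column names. This is an illustrative sketch, not the script's actual implementation:

```python
# Rough sketch of text/label column auto-detection (illustrative only;
# see script/script_parse_dataset.py for the real logic).
TEXT_CANDIDATES = ["text", "tweet", "post", "content"]
LABEL_CANDIDATES = ["label", "class", "ground_truth", "stance"]

def detect_column(columns, candidates):
    """Return the first candidate found in `columns` (case-insensitive), else None."""
    lowered = {c.lower(): c for c in columns}
    for name in candidates:
        if name in lowered:
            return lowered[name]
    return None

cols = ["ID", "Tweet", "Stance"]
print(detect_column(cols, TEXT_CANDIDATES), detect_column(cols, LABEL_CANDIDATES))
```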
### Example

```bash
python script/script_parse_dataset.py \
  --input_file example.csv \
  --dataset_name example_data \
  --text_col text
```

(Optional) Manually specify unusual column names:

```bash
python script/script_parse_dataset.py \
  --input_file example.tsv \
  --dataset_name example_data \
  --text_col text \
  --label_col category
```