# RuleFollower – Parsed Datasets
This repository contains parsed datasets used in the RuleFollower project.
## Datasets
- GWSD
- Misinfo
- Misinfo Cancer
- Bureaucracies
- Tweets23
- Tweets Congress
- Tweets News
- Tweets
- Rumoureval2019
- Hatecot
- Implicit Hate
Each folder contains a `data.csv` file with the parsed annotations. Each dataset is capped at 5k samples (the full dataset is used if it has fewer than 5k).

Each data file includes standardized columns:

- `Text`: The text input for annotation
- `id`: Unique identifier for the row
- `source`: Name of the original dataset or paper
- `ground_truth`: (Optional) Gold label, if available in the original dataset
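As a quick sanity check after parsing, the standardized schema can be validated with pandas (a minimal sketch; the sample rows and label values below are made up for illustration):

```python
import pandas as pd

# Toy frame mirroring the standardized schema; in practice you would call
# pd.read_csv("<dataset>/data.csv") instead (rows here are illustrative).
df = pd.DataFrame({
    "Text": ["Example headline one", "Example headline two"],
    "id": [0, 1],
    "source": ["misinfo", "misinfo"],
    "ground_truth": ["misinfo", "not_misinfo"],  # optional column
})

# Every parsed file must carry at least Text, id, and source.
required = {"Text", "id", "source"}
missing = required - set(df.columns)
print(sorted(missing))  # → [] when the schema is complete
```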
## RumorEval
- Text content: Replies to source tweets about rumored events
- Source: Gorrell et al (2019)
- Dataset: Huggingface
- Annotation goal: Classify reply stance as `support`, `deny`, `query`, or `comment`
- Ground truth: Provided (stance label)
- Note: Only reply texts are retained for classification.
## HateCoT
- Text content: Social media posts from 8 hate/offensive speech datasets
- Source: Nghiem and Daumé III (2024)
- Dataset: Github
- Annotation goal: Classify post as `benign`, `offensive`, or `hateful`
- Ground truth: Provided
- Note: The original HateCoT dataset combines samples from 8 hate speech datasets with diverse annotation schemes. We standardized its many fine-grained labels into 3 unified classes:
  - `0 = Benign` (e.g., "Not Hate", "Normal", "Neutral")
  - `1 = Offensive` (e.g., "Toxic", "Offensive")
  - `2 = Hateful` (e.g., "Hate", "Dehumanization", "Directed Abuse")
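The 3-class standardization above can be expressed as a simple lookup table (a sketch: only the label strings quoted above come from the source; the actual parsing code may handle more variants):

```python
# Illustrative mapping from fine-grained HateCoT labels to the 3 unified
# classes. Label strings are the examples quoted in this README.
LABEL_MAP = {
    "Not Hate": 0, "Normal": 0, "Neutral": 0,             # Benign
    "Toxic": 1, "Offensive": 1,                           # Offensive
    "Hate": 2, "Dehumanization": 2, "Directed Abuse": 2,  # Hateful
}

def standardize(label: str) -> int:
    """Map a fine-grained label string to the unified 0/1/2 class."""
    return LABEL_MAP[label]

print(standardize("Dehumanization"))  # → 2
```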
## Tweets (2023) / Tweets News (2017) / Tweets (2020–2021)
- Text content: Tweets from different public Twitter samples spanning 2017 to 2023, focused on content moderation and related debates. Sample sizes vary across years and sources.
- Source: Gilardi et al (2023)
- Dataset: Harvard Dataverse
- Ground truth: Not provided
- Supported tasks (from annotation codebook, excluding political content tasks):
| Task ID | Task Name | Task Description Summary | Labels |
|---|---|---|---|
| T1 | Content Moderation Relevance | Is the tweet about content moderation? | relevant (1), irrelevant (0) |
| T3 | Problem/Solution Frame | Does the tweet portray content moderation as a problem, a solution, or neither? | problem, solution, neutral |
| T4 | Policy Frame (Moderation) | What policy dimension frames the content moderation issue? | e.g., morality, fairness, security, equality, health, ... |
| T6 | Stance on Section 230 | Does the tweet support, oppose, or remain neutral about Section 230 of U.S. law? | positive, negative, neutral |
| T7 | Topic Classification | What is the topic in relation to content moderation? | section 230, trump ban, complaints, platform policies, ... |
- Note:
  - Each of these tasks has a dedicated `.txt` task description under `prompt/task_descriptions/`.
  - The same dataset can be reused across tasks; task choice is controlled by the selected prompt.
  - Tweets by U.S. Congress members (used for political content) are handled separately (see below).
## Tweets Congress
- Text content: Tweets by U.S. Congress members (2017–2022)
- Source: Gilardi et al (2023)
- Dataset: Harvard Dataverse
- Annotation goal: Binary classification: whether tweet is political
- Ground truth: Not provided
## Misinfo / Misinfo Cancer
- Text content: News headlines (the Misinfo Reaction Frames corpus is a dataset of 25k news headlines from articles that have been fact-checked; the articles cover COVID-19, cancer, or climate change)
- Source: Gabriel et al. (2022)
- Dataset: Github
- Annotation goal: Classify headline as misinformation or not
- Ground truth: Provided
## Implicit Hate
- Text content: A dataset of English-language tweets annotated to capture both explicit and implicit hate speech. Implicit hate includes stereotypical, sarcastic, or coded language that may not be overtly hateful. The data was originally collected from Twitter and the Social Bias Inference Corpus.
- Source: Elsafoury et al. (2021)
- Dataset: Github
- Annotation goal: Classify post as `explicit_hate`, `implicit_hate`, or `not_hate`
- Ground truth: Provided
- We use Stage 1 annotations for parsing:
  - `explicit_hate`: overt hate speech
  - `implicit_hate`: indirect hate speech (e.g., stereotypes, sarcasm)
  - `not_hate`: no hateful content
- Note:
- The dataset originally included multiple annotation stages (e.g., fine-grained subtypes, implied statements).
- Additional parsing for Stage 2 (fine-grained) or Stage 3 (target/implied meaning) will be added later.
## GWSD (Global Warming Stance Dataset)
- Text content: News spans (opinion spans extracted from global warming news articles published between Jan. 1, 2000 and April 12, 2020 by various U.S. news sources)
- Source: Luo et al. (2020)
- Dataset: Github
- Annotation goal: Classify stance toward the statement “Climate change is a serious concern”
  - Labels: `agree`, `neutral`, or `disagree`
- Ground truth: Provided
## Bureaucracies
- Text content: Crisis communication texts (bureaucratic cables and memos from international crises)
- Source: Schub (2022)
- Dataset: Data
- Annotation goal:
  - Info Type Task: Classify whether the text conveys `political` or `military` information relevant to Cold War crisis decision-making.
  - Certainty Task: Determine whether the adviser expresses the information with certainty or uncertainty.
- Ground truth: Available only for the Info Type Task
## Using the Parsing Script
To convert raw datasets into the standardized format used in this folder, use the helper script:
Script: `script/script_parse_dataset.py`

This script loads a dataset in CSV, TSV, JSONL, or Excel format and outputs a standardized parsed file under `parsed_data/`.
### What the Script Automatically Handles
- Detects common text columns (e.g., `text`, `tweet`, `post`, `content`)
- Detects common label columns (e.g., `label`, `class`, `ground_truth`, `stance`)
- Standardizes output columns to `Text`, `id`, `source`, and `ground_truth` (only included if the dataset has labels)
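The column auto-detection described above can be sketched roughly as follows (the candidate lists come from the bullets; the helper function itself is an illustrative assumption, not the script's actual code):

```python
# Candidate names the script looks for, per the README; matching is assumed
# to be case-insensitive, first match wins.
TEXT_CANDIDATES = ["text", "tweet", "post", "content"]
LABEL_CANDIDATES = ["label", "class", "ground_truth", "stance"]

def detect_column(columns, candidates):
    """Return the first candidate present among `columns` (case-insensitive),
    or None if no candidate matches."""
    lowered = {c.lower(): c for c in columns}
    for cand in candidates:
        if cand in lowered:
            return lowered[cand]
    return None

print(detect_column(["ID", "Tweet", "Stance"], TEXT_CANDIDATES))   # → Tweet
print(detect_column(["ID", "Tweet", "Stance"], LABEL_CANDIDATES))  # → Stance
```

When no label column is detected, the parsed output simply omits `ground_truth`, matching the optional-column behavior described above.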
### Example
```bash
python script/script_parse_dataset.py \
  --input_file example.csv \
  --dataset_name example_data \
  --text_col text
```
(Optional) Manually specifying unusual column names:

```bash
python script/script_parse_dataset.py \
  --input_file example.tsv \
  --dataset_name example_data \
  --text_col text \
  --label_col category
```