PHEME Misinformation Cascades (PyTorch Geometric)
Dataset Summary
This dataset contains mathematical graph representations of Twitter conversation cascades from the PHEME 9-Event rumour dataset. It is designed for Graph-Level Binary Classification tasks to detect misinformation spreading dynamics using Graph Neural Networks (GNNs).
Each graph represents a single breaking-news conversation thread. Nodes are individual tweets, and edges represent the "reply-to" network topology.
- Total Graphs (Cascades): 6,425
- Task: Binary Classification (Rumour vs. Non-Rumour)
- Format: PyTorch Geometric (.pt) and NetworkX (.gpickle)
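As a toy illustration of what a reply-to cascade looks like as a tensor (this is a hand-built sketch, not data from the shipped files; whether edges point reply-to-parent or parent-to-reply in the actual dataset should be verified against the repository code):

```python
import torch

# Toy cascade: tweet 0 is the source; tweets 1 and 2 reply to it,
# and tweet 3 replies to tweet 1. Edges here point reply -> parent.
edge_index = torch.tensor([[1, 2, 3],   # source nodes of each edge
                           [0, 0, 1]])  # target nodes of each edge
num_nodes = int(edge_index.max()) + 1
num_edges = edge_index.shape[1]
print(num_nodes, num_edges)
```

This `[2, num_edges]` layout is the standard PyTorch Geometric edge representation used by the `Data` objects in this dataset.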
GitHub Repository & Pipeline Code
The full Python pipeline used to parse the raw Twitter/X JSONs, extract network metrics (PageRank, Degrees), and generate the PyTorch Geometric bindings is entirely open-source.
If you are interested in auditing the data processing code, running the baseline models, or inspecting the Leave-One-Event-Out cross-validation logic, visit the official repository.
File Structure
There are two files provided in this repository:
- pheme_pyg_dataset.pt: A list of PyTorch Geometric Data objects ready for immediate GNN training.
- pheme_cascades.gpickle: A saved list of NetworkX DiGraph objects containing the raw text strings and hover metadata (useful for rendering visualizations with PyVis).
Feature Matrix breakdown (PyG Data.x)
For the PyTorch tensors, all textual, credibility, and network metrics have been pre-computed and concatenated into a single matrix x of shape [num_nodes, 390] per graph.
Dimension Mapping:
- [0:384] Text Embeddings: Dense NLP vectors generated using sentence-transformers/all-MiniLM-L6-v2.
- [384] Followers (Log-Normalized): Computed as math.log1p(followers_count).
- [385] Account Verified: 1.0 if the user is verified, 0.0 otherwise.
- [386] PageRank: The node's PageRank centrality.
- [387] Degree Centrality: The localized degree centrality.
- [388] In-Degree: The raw count of replies received by this tweet.
- [389] Out-Degree: The raw count of replies this tweet made.
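The dimension mapping above translates directly into tensor slices. A minimal sketch using a dummy matrix with the documented [num_nodes, 390] layout (the variable names are illustrative, not part of the dataset):

```python
import torch

# Dummy stand-in for one graph's Data.x: 5 nodes, 390 features each
x = torch.zeros(5, 390)

text_emb   = x[:, :384]   # MiniLM sentence embeddings
followers  = x[:, 384]    # log1p(followers_count)
verified   = x[:, 385]    # 1.0 / 0.0 verified flag
pagerank   = x[:, 386]    # PageRank centrality
deg_cent   = x[:, 387]    # degree centrality
in_degree  = x[:, 388]    # replies received
out_degree = x[:, 389]    # replies made

print(text_emb.shape, followers.shape)
```

Slicing this way lets you, for example, feed only the embeddings to a GNN or ablate the credibility features without regenerating the dataset.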
Raw Text (Data.text)
To make qualitative analysis and debugging easier, the raw string content of every tweet in the graph is preserved as a standard Python list under Data.text. The index of each string in the list matches the corresponding node index in the x matrix.
Targets (PyG Data.y)
- 1: Rumour (Misinformation)
- 0: Non-Rumour (True/Verified)
Cross-Validation Design (Data.event)
To prevent lexical overfitting, standard practice on this dataset is Leave-One-Event-Out Cross-Validation. Each graph includes a data.event string (e.g., 'charliehebdo-all-rnr-threads'). Models should be trained on 8 events and tested on the unseen 9th event.
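The split described above reduces to filtering on the event string. A toy sketch (SimpleNamespace stands in for PyG Data objects, and the event names are shortened for illustration):

```python
from types import SimpleNamespace

# Hypothetical in-memory dataset: each graph carries an .event attribute
events = ["charliehebdo", "ferguson", "germanwings", "charliehebdo", "ferguson"]
dataset = [SimpleNamespace(event=e) for e in events]

# Leave-One-Event-Out: hold one event out entirely for testing
held_out = "charliehebdo"
train_graphs = [g for g in dataset if g.event != held_out]
test_graphs  = [g for g in dataset if g.event == held_out]
print(len(train_graphs), len(test_graphs))
```

Looping `held_out` over all 9 events yields the full cross-validation, guaranteeing the model never sees vocabulary from the test event during training.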
Usage Snippet
You can download and load the dataset directly into memory with a few lines of Python, bypassing the need to parse raw JSONs.
from huggingface_hub import hf_hub_download
import torch
# Download the dataset directly to cache
file_path = hf_hub_download(
repo_id="NunoBatista/PHEME-Misinformation-Graphs",
filename="pheme_pyg_dataset.pt"
)
# Load into memory (weights_only=False is required on PyTorch >= 2.6
# to unpickle PyG Data objects; only use it for files you trust)
dataset = torch.load(file_path, weights_only=False)
# Example: View the first cascade
first_graph = dataset[0]
print(f"Nodes: {first_graph.num_nodes}")
print(f"Edges: {first_graph.num_edges}")
print(f"Features mapped: {first_graph.x.shape}")
print(f"Is Rumour?: {first_graph.y.item()}")
print(f"Event Cluster: {first_graph.event}")
# View the raw text of the source tweet (Node 0) and the first reply (Node 1)
print(f"\n[Source Tweet]: {first_graph.text[0]}")
print(f"[First Reply]: {first_graph.text[1]}")
Alternative Usage: NetworkX (.gpickle)
If you want to perform topological analysis, motif counting, or interactive visualizations (e.g., using PyVis), you can use the raw NetworkX graphs instead of the PyTorch tensors.
The pheme_cascades.gpickle file contains a standard Python list of networkx.DiGraph objects.
from huggingface_hub import hf_hub_download
import pickle
# Download the NetworkX dataset
file_path_nx = hf_hub_download(
repo_id="NunoBatista/PHEME-Misinformation-Graphs",
filename="pheme_cascades.gpickle"
)
# Load list of networkx.DiGraph objects
with open(file_path_nx, 'rb') as f:
nx_graphs = pickle.load(f)
first_nx_graph = nx_graphs[0]
# Display Graph-level metadata
print(f"Graph Metadata: {first_nx_graph.graph}")
# Example output: {'label': 1, 'thread_id': '500388199064420352', 'event': 'ferguson-all-rnr-threads'}
# Inspect the rich data attached to an individual node (Tweet)
source_tweet_id = list(first_nx_graph.nodes())[0]
node_data = first_nx_graph.nodes[source_tweet_id]
print(f"Text: {node_data['text']}")
print(f"Followers: {node_data['user_followers']}")
print(f"Is Source Tweet?: {node_data['is_source']}")
print(f"PageRank: {node_data['pagerank']:.4f}")
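As an example of the topological analysis mentioned above, cascade depth (how many reply hops the conversation reaches) can be computed from any DiGraph. This is an illustrative sketch on a hand-built toy graph, assuming reply-to edges point reply -> parent as in the dataset description:

```python
import networkx as nx

# Toy cascade: source tweet "t0", two direct replies, one nested reply
G = nx.DiGraph()
G.add_edges_from([("t1", "t0"), ("t2", "t0"), ("t3", "t1")])  # reply -> parent

# Reverse the edges so paths run source -> replies, then take the
# longest shortest-path distance from the source as the cascade depth
depth = max(nx.shortest_path_length(G.reverse(), source="t0").values())
print(depth)
```

Applied to the shipped cascades, metrics like this let you compare the structural signature of rumour vs. non-rumour threads without touching the tensor representation.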
Contact & Support
Please feel free to reach out:
- Email: nunomarquesbatista@gmail.com
- Public Issues: For code or dataset bugs, please open an issue on the GitHub Repository.
Citation & Acknowledgements
This project builds upon the 9-event PHEME dataset. If you utilize this processed dataset or pipeline in your own work, please ensure you cite the original dataset creators:
Zubiaga, A., Kochkina, E., Liakata, M., Procter, R., Lukasik, M., et al. (2016). Fact-checking updates on the rumorous PHEME dataset. Figshare. Dataset DOI: 10.6084/m9.figshare.6392078