# MMM-Fact: A Multimodal, Multi-Domain Fact-Checking Dataset with Multi-Level Retrieval Difficulty

## Overview
MMM-Fact is a real-world, multimodal fact-checking benchmark spanning 1995–2025. Each claim is paired with its full fact-check article and cross-modal evidence (text, images, videos, tables), enabling evaluation of multi-step, cross-modal retrieval and reasoning.

## Scale & Coverage
| Item | Details |
|---|---|
| Instances | **125,449** fact-checked claims |
| Time range | **1995–2025** |
| Sources | **4** fact-checking sites + **1** news outlet *(each sample includes the complete fact-check article and associated evidence)* |

## Annotations & Tasks
- **Veracity labels:** True / False / Not Enough Information  
- **Supported tasks:**  
  - Veracity prediction  
  - Explainable fact-checking  
  - Complex evidence aggregation  
  - Longitudinal analysis  

## Retrieval-Difficulty Tiers
To reflect verification effort, each sample is tagged with a retrieval difficulty based on the number of retrievable evidence sources:

| Tier | Evidence Sources |
|---|---|
| **Basic** | 1–5 sources |
| **Intermediate** | 6–10 sources |
| **Advanced** | >10 sources |
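The tiering rule above can be sketched as a small helper. This is an illustrative sketch only; the function name is not part of the released dataset, which ships the tier labels precomputed:

```python
def retrieval_tier(num_sources: int) -> str:
    """Map a sample's evidence-source count to its retrieval-difficulty tier."""
    if num_sources < 1:
        raise ValueError("each sample has at least one evidence source")
    if num_sources <= 5:
        return "Basic"
    if num_sources <= 10:
        return "Intermediate"
    return "Advanced"


print(retrieval_tier(3))   # Basic
print(retrieval_tier(12))  # Advanced
```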

## Citation
If you find this dataset useful, please cite:

```bibtex
@article{xu2025mmm,
  title={MMM-Fact: A Multimodal, Multi-Domain Fact-Checking Dataset with Multi-Level Retrieval Difficulty},
  author={Xu, Wenyan and Xiang, Dawei and Ding, Tianqi and Lu, Weihai},
  journal={arXiv preprint arXiv:2510.25120},
  year={2025}
}
```