# MMM-Fact: A Multimodal, Multi-Domain Fact-Checking Dataset with Multi-Level Retrieval Difficulty

## Overview

MMM-Fact is a real-world, multimodal benchmark for fact-checking spanning 1995–2025. Each claim is paired with the full fact-check article and cross-modal evidence (text, images, videos, tables) to evaluate multi-step, cross-modal retrieval and reasoning.
## Scale &amp; Coverage

| Item | Details |
|---|---|
| Instances | **125,449** fact-checked claims |
| Time range | **1995–2025** |
| Sources | **4** fact-checking sites + **1** news outlet *(each sample includes the complete fact-check article and associated evidence)* |
## Annotations &amp; Tasks

- **Veracity labels:** True / False / Not Enough Information
- **Supported tasks:**
  - Veracity prediction
  - Explainable fact-checking
  - Complex evidence aggregation
  - Longitudinal analysis
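As a rough illustration of how the labels and multimodal evidence fit together, a single sample could be represented as below. Note that the field names and structure here are assumptions for demonstration only, not the dataset's official schema:

```python
# Illustrative sketch of one MMM-Fact-style sample.
# All field names below are hypothetical, not the official release format.
sample = {
    "claim": "An example claim drawn from the 1995-2025 coverage window.",
    "label": "False",  # one of: "True" / "False" / "Not Enough Information"
    "fact_check_article": "Full text of the fact-check article ...",
    "evidence": [
        {"modality": "text", "content": "Quoted source passage ..."},
        {"modality": "image", "content": "path/to/evidence_image.jpg"},
        {"modality": "table", "content": [["Year", "Value"], ["2020", "42"]]},
    ],
}

VALID_LABELS = {"True", "False", "Not Enough Information"}
assert sample["label"] in VALID_LABELS
```

A record like this supports all four tasks: the label drives veracity prediction, the article text grounds explainable fact-checking, and the evidence list exercises cross-modal aggregation.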
## Retrieval-Difficulty Tiers

To reflect verification effort, each sample is tagged with a retrieval-difficulty tier based on the number of retrievable evidence sources:

| Tier | Evidence Sources |
|---|---|
| **Basic** | 1–5 sources |
| **Intermediate** | 6–10 sources |
| **Advanced** | >10 sources |
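The tier rule above amounts to simple bucketing by source count. A minimal sketch (the function name is illustrative, not part of the dataset's API):

```python
def retrieval_tier(num_sources: int) -> str:
    """Map an evidence-source count to a retrieval-difficulty tier,
    following the table: Basic (1-5), Intermediate (6-10), Advanced (>10)."""
    if num_sources < 1:
        raise ValueError("each sample has at least one evidence source")
    if num_sources <= 5:
        return "Basic"
    if num_sources <= 10:
        return "Intermediate"
    return "Advanced"
```

For example, a claim backed by 7 retrievable sources falls in the Intermediate tier.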
## Citation

If you find this dataset useful, please cite:

```bibtex
@article{xu2025mmm,
  title={MMM-Fact: A Multimodal, Multi-Domain Fact-Checking Dataset with Multi-Level Retrieval Difficulty},
  author={Xu, Wenyan and Xiang, Dawei and Ding, Tianqi and Lu, Weihai},
  journal={arXiv preprint arXiv:2510.25120},
  year={2025}
}
```