---
language:
- en
license: apache-2.0
size_categories:
- 100K<n<1M
task_categories:
- token-classification
- text-classification
- image-text-to-text
- object-detection
tags:
- multimodal
- manipulation-detection
- media-forensics
- deepfake-detection
---

# SAMM: Semantic-Aligned Multimodal Manipulation Dataset

## Introduction
The detection and grounding of manipulated content in multimodal data has emerged as a critical challenge in media forensics. While existing benchmarks demonstrate technical progress, they suffer from misalignment artifacts that poorly reflect real-world manipulation patterns: practical attacks typically maintain semantic consistency across modalities, whereas current datasets artificially disrupt cross-modal alignment, creating easily detectable anomalies. To bridge this gap, we pioneer the detection of semantically-coordinated manipulations where visual edits are systematically paired with semantically consistent textual descriptions. Our approach begins with constructing the first Semantic-Aligned Multimodal Manipulation (SAMM) dataset.
We present SAMM, a large-scale dataset for detecting and grounding semantic-coordinated multimodal manipulation; this repository is also the official implementation of SAMM and RamDG. To address this new scenario, we design the RamDG framework, a novel approach for detecting fake news by leveraging external knowledge.
The framework of the proposed RamDG:
## Notes ⚠️
- If you want to import the CAP data into your own dataset, please refer to this.
- If you want to run RamDG on datasets other than SAMM and use CNCL to incorporate external knowledge, make sure to configure `idx_cap_texts` and `idx_cap_images` in the dataset JSONs.
- We have upgraded the SAMM JSON files. The latest versions (SAMM with CAP or without CAP) were released on July 24, 2025. Please download the newest version.
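If you need to add these two fields to an existing dataset JSON, the sketch below shows one way to do it. It assumes the JSON is a list of record objects (as in the annotation example in this card) and defaults every entry to 0 (not tampered); `add_default_cap_indices` is a hypothetical helper, not part of the official repo.

```python
import json

def add_default_cap_indices(in_path: str, out_path: str) -> None:
    """Add all-zero idx_cap_texts / idx_cap_images arrays (0 = not tampered)
    to every record that lacks them.

    Assumption: the JSON file holds a list of record dicts, and the index
    arrays are aligned with the keys of cap_texts / cap_images.
    """
    with open(in_path) as f:
        records = json.load(f)
    for rec in records:
        rec.setdefault("idx_cap_texts", [0] * len(rec.get("cap_texts", {})))
        rec.setdefault("idx_cap_images", [0] * len(rec.get("cap_images", {})))
    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)
```

Records that already carry the fields are left untouched, so the helper is safe to run on a mix of annotated and unannotated entries.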
## Dataset Statistics

### Annotations
```json
{
  "text": "Lachrymose Terri Butler, whose letter prompted Peter Dutton to cancel Troy Newman's visa, was clearly upset.",
  "fake_cls": "attribute_manipulation",
  "image": "emotion_jpg/65039.jpg",
  "id": 13,
  "fake_image_box": [665, 249, 999, 671],
  "cap_texts": {
    "Terri Butler": "Terri Butler Gender: Female, Occupation: Politician, Birth year: 1977, Main achievement: Member of Australian Parliament.",
    "Peter Dutton": "Peter Dutton Gender: Male, Occupation: Politician, Birth year: 1970, Main achievement: Australian Minister for Defence."
  },
  "cap_images": {
    "Terri Butler": "Terri Butler",
    "Peter Dutton": "Peter Dutton"
  },
  "idx_cap_texts": [1, 0],
  "idx_cap_images": [1, 0],
  "fake_text_pos": [0, 11, 13, 14, 15]
}
```
- `image`: The relative path to the original or manipulated image.
- `text`: The original or manipulated text caption.
- `fake_cls`: The type of manipulation (e.g., forgery, editing).
- `fake_image_box`: The bounding-box coordinates of the manipulated region in the image.
- `fake_text_pos`: A list of indices specifying the positions of manipulated tokens within the `text` string.
- `cap_texts`: Textual information extracted from CAP (Contextual Auxiliary Prompt) annotations.
- `cap_images`: Relative paths to visual information from CAP annotations.
- `idx_cap_texts`: A binary array whose i-th element indicates whether the i-th celebrity in `cap_texts` is tampered (1 = tampered, 0 = not tampered).
- `idx_cap_images`: A binary array whose i-th element indicates whether the i-th celebrity in `cap_images` is tampered (1 = tampered, 0 = not tampered).
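As an illustration of the field semantics, the sketch below parses the example record above. Two details are assumptions on my part rather than guarantees from the dataset: that `fake_text_pos` indexes whitespace-split tokens (which matches the example), and that `fake_image_box` stores `(x1, y1, x2, y2)` corner coordinates.

```python
# A minimal reading of one SAMM annotation record (values copied from the
# example above).
annotation = {
    "text": "Lachrymose Terri Butler, whose letter prompted Peter Dutton "
            "to cancel Troy Newman's visa, was clearly upset.",
    "fake_cls": "attribute_manipulation",
    "fake_image_box": [665, 249, 999, 671],
    "fake_text_pos": [0, 11, 13, 14, 15],
}

# Assumption: fake_text_pos indexes into a whitespace tokenization of `text`.
tokens = annotation["text"].split()
manipulated = [tokens[i] for i in annotation["fake_text_pos"]]

# Assumption: the box is (x1, y1, x2, y2) corners of the edited region.
x1, y1, x2, y2 = annotation["fake_image_box"]

print(manipulated)
print("box size:", x2 - x1, "x", y2 - y1)
```

Under the whitespace assumption, the flagged tokens in this record are the emotion-related words ("Lachrymose", "was clearly upset.") plus "Newman's", which is consistent with the `attribute_manipulation` label.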
## Sample Usage (Training and Testing RamDG)
The following snippets are taken from the official GitHub repository to demonstrate how to train and test the RamDG framework using this dataset.
### Dependencies and Installation
```shell
mkdir code
cd code
git clone https://github.com/shen8424/SAMM-RamDG-CAP.git
cd SAMM-RamDG-CAP
conda create -n RamDG python=3.8
conda activate RamDG
conda install --yes -c pytorch pytorch=1.10.0 torchvision==0.11.1 cudatoolkit=11.3
pip install -r requirements.txt
conda install -c conda-forge ruamel_yaml
```
### Prepare Checkpoint
Download the pre-trained models through these links: ALBEF_4M.pth and pytorch_model.bin [Google Drive].
Then put ALBEF_4M.pth and pytorch_model.bin into `./code/SAMM-RamDG-CAP/`:
```
./
└── code
    └── SAMM-RamDG-CAP (this github repo)
        ├── configs
        │   └── ...
        ├── dataset
        │   └── ...
        ├── models
        │   └── ...
        ├── ...
        ├── ALBEF_4M.pth
        └── pytorch_model.bin
```
### Prepare Data
We provide two versions: SAMM with CAP information and SAMM without CAP information. If you choose SAMM with CAP information, download people_imgs1 and people_imgs2, then move the data from both folders to `./code/SAMM-RamDG-CAP/SAMM_datasets/people_imgs`.
Then place `train.json`, `val.json`, and `test.json` into `./code/SAMM-RamDG-CAP/SAMM_datasets/jsons`, and place `emotion_jpg`, `orig_output`, and `swap_jpg` into `./code/SAMM-RamDG-CAP/SAMM_datasets`:
```
./
└── code
    └── SAMM-RamDG-CAP (this github repo)
        ├── configs
        │   └── ...
        ├── dataset
        │   └── ...
        ├── models
        │   └── ...
        ├── ...
        ├── SAMM_datasets
        │   ├── jsons
        │   │   ├── train.json
        │   │   ├── test.json
        │   │   └── val.json
        │   ├── people_imgs
        │   │   ├── Messi (from people_imgs1)
        │   │   ├── Trump (from people_imgs2)
        │   │   └── ...
        │   ├── emotion_jpg
        │   ├── orig_output
        │   └── swap_jpg
        └── pytorch_model.bin
```
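Before launching training, it can be worth confirming that the layout above is actually in place. The checker below is a sketch, not part of the official repo; the expected paths are taken from the directory tree shown above, relative to `./code/SAMM-RamDG-CAP`.

```python
from pathlib import Path

# Paths expected under the repo root (./code/SAMM-RamDG-CAP), per the
# directory tree in this card. Returns the list of missing entries.
def check_samm_layout(root: str) -> list:
    root = Path(root)
    expected = [
        "SAMM_datasets/jsons/train.json",
        "SAMM_datasets/jsons/val.json",
        "SAMM_datasets/jsons/test.json",
        "SAMM_datasets/people_imgs",
        "SAMM_datasets/emotion_jpg",
        "SAMM_datasets/orig_output",
        "SAMM_datasets/swap_jpg",
        "ALBEF_4M.pth",
        "pytorch_model.bin",
    ]
    return [p for p in expected if not (root / p).exists()]

if __name__ == "__main__":
    missing = check_samm_layout("./code/SAMM-RamDG-CAP")
    print("missing:", missing or "nothing, layout looks complete")
```

An empty return value means every expected file and folder was found; otherwise the returned paths tell you exactly what still needs to be downloaded or moved.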
### Training RamDG
To train RamDG on the SAMM dataset, modify `train.sh` as needed and then run:
```shell
bash train.sh
```
### Testing RamDG
To test RamDG on the SAMM dataset, modify `test.sh` as needed and then run:
```shell
bash test.sh
```
## Citation
If you find this work useful for your research, please cite our paper:
```bibtex
@inproceedings{shen2025beyond,
  title={Beyond Artificial Misalignment: Detecting and Grounding Semantic-Coordinated Multimodal Manipulations},
  author={Shen, Jinjie and Wang, Yaxiong and Chen, Lechao and Nan, Pu and Zhong, Zhun},
  booktitle={ACM Multimedia},
  year={2025}
}
```