---
language:
  - en
license: apache-2.0
size_categories:
  - 100K<n<1M
task_categories:
  - token-classification
  - text-classification
  - image-text-to-text
  - object-detection
tags:
  - multimodal
  - manipulation-detection
  - media-forensics
  - deepfake-detection
---

# SAMM: Semantic-Aligned Multimodal Manipulation Dataset

Paper | Code

## Introduction

The detection and grounding of manipulated content in multimodal data has emerged as a critical challenge in media forensics. While existing benchmarks demonstrate technical progress, they suffer from misalignment artifacts that poorly reflect real-world manipulation patterns: practical attacks typically maintain semantic consistency across modalities, whereas current datasets artificially disrupt cross-modal alignment, creating easily detectable anomalies. To bridge this gap, we pioneer the detection of semantically coordinated manipulations, where visual edits are systematically paired with semantically consistent textual descriptions. Our approach begins with constructing the first Semantic-Aligned Multimodal Manipulation (SAMM) dataset.

This repository is the official release of SAMM, a large-scale dataset for detecting and grounding semantic-coordinated multimodal manipulation, and of the accompanying RamDG framework. We propose a realistic research scenario, detecting and grounding semantic-coordinated multimodal manipulations, and introduce the SAMM dataset for it. To address this challenge, we design RamDG, a novel approach that detects fake news by leveraging external knowledge.

(Figure: the framework of the proposed RamDG.)

## Notes ⚠️

- If you want to import the CAP data into your own dataset, please refer to this.
- If you want to run RamDG on datasets other than SAMM and use CNCL to incorporate external knowledge, make sure to configure `idx_cap_texts` and `idx_cap_images` in the dataset JSONs.
- We have upgraded the SAMM JSON files. The latest versions (SAMM with CAP or without CAP) were released on July 24, 2025; please download the newest version.
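
For a custom dataset, that means each record carries the CAP entries together with their per-celebrity index arrays. A minimal hedged sketch of such a record (all field values here are placeholders; the real field semantics are documented in the Annotations section below):

```json
{
    "text": "...",
    "image": "...",
    "cap_texts": {"Person A": "...", "Person B": "..."},
    "cap_images": {"Person A": "Person A", "Person B": "Person B"},
    "idx_cap_texts": [0, 1],
    "idx_cap_images": [0, 1]
}
```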

## Dataset Statistics

## Annotations

```json
{
    "text": "Lachrymose Terri Butler, whose letter prompted Peter Dutton to cancel Troy Newman's visa, was clearly upset.",
    "fake_cls": "attribute_manipulation",
    "image": "emotion_jpg/65039.jpg",
    "id": 13,
    "fake_image_box": [665, 249, 999, 671],
    "cap_texts": {
        "Terri Butler": "Terri Butler Gender: Female, Occupation: Politician, Birth year: 1977, Main achievement: Member of Australian Parliament.",
        "Peter Dutton": "Peter Dutton Gender: Male, Occupation: Politician, Birth year: 1970, Main achievement: Australian Minister for Defence."
    },
    "cap_images": {
        "Terri Butler": "Terri Butler",
        "Peter Dutton": "Peter Dutton"
    },
    "idx_cap_texts": [1, 0],
    "idx_cap_images": [1, 0],
    "fake_text_pos": [0, 11, 13, 14, 15]
}
```
- `image`: Relative path to the original or manipulated image.
- `text`: The original or manipulated text caption.
- `fake_cls`: The manipulation type label (e.g., `attribute_manipulation` in the sample above).
- `fake_image_box`: Bounding-box coordinates of the manipulated region in the image.
- `fake_text_pos`: Indices of the manipulated tokens within the text.
- `cap_texts`: Textual information extracted from CAP (Contextual Auxiliary Prompt) annotations.
- `cap_images`: Relative paths to visual information from CAP annotations.
- `idx_cap_texts`: A binary array whose i-th element indicates whether the i-th celebrity in `cap_texts` is tampered (1 = tampered, 0 = not tampered).
- `idx_cap_images`: Same as above, but for the celebrities in `cap_images`.
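
As a sketch of how these fields fit together, the snippet below reads the sample record from above. One assumption to verify against the official data loader: it treats `fake_text_pos` as indexing whitespace-separated tokens of `text`.

```python
# Minimal sketch of consuming one SAMM annotation record (sample from above).
ann = {
    "text": ("Lachrymose Terri Butler, whose letter prompted Peter Dutton "
             "to cancel Troy Newman's visa, was clearly upset."),
    "fake_cls": "attribute_manipulation",
    "fake_image_box": [665, 249, 999, 671],
    "fake_text_pos": [0, 11, 13, 14, 15],
    "cap_texts": {"Terri Butler": "...", "Peter Dutton": "..."},
    "idx_cap_texts": [1, 0],
}

# Assumption: fake_text_pos indexes whitespace-separated tokens of `text`.
tokens = ann["text"].split()
manipulated = [tokens[i] for i in ann["fake_text_pos"]]

# Pair each celebrity in cap_texts with its tamper flag from idx_cap_texts
# (the i-th flag corresponds to the i-th celebrity key).
flags = dict(zip(ann["cap_texts"], ann["idx_cap_texts"]))

print("manipulated tokens:", manipulated)
print("tamper flags:", flags)  # {'Terri Butler': 1, 'Peter Dutton': 0}
```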

## Sample Usage (Training and Testing RamDG)

The following snippets are taken from the official GitHub repository to demonstrate how to train and test the RamDG framework using this dataset.

### Dependencies and Installation

```bash
mkdir code
cd code
git clone https://github.com/shen8424/SAMM-RamDG-CAP.git
cd SAMM-RamDG-CAP
conda create -n RamDG python=3.8
conda activate RamDG
conda install --yes -c pytorch pytorch=1.10.0 torchvision==0.11.1 cudatoolkit=11.3
pip install -r requirements.txt
conda install -c conda-forge ruamel_yaml
```

### Prepare Checkpoint

Download the pre-trained models through these links: ALBEF_4M.pth and pytorch_model.bin [Google Drive].

Then put `ALBEF_4M.pth` and `pytorch_model.bin` into `./code/SAMM-RamDG-CAP/`.

```text
./
├── code
    └── SAMM-RamDG-CAP (this github repo)
        ├── configs
        │   └── ...
        ├── dataset
        │   └── ...
        ├── models
        │   └── ...
        ...
        ├── ALBEF_4M.pth
        └── pytorch_model.bin
```

### Prepare Data

We provide two versions: SAMM with CAP information and SAMM without CAP information. If you choose SAMM with CAP information, download `people_imgs1` and `people_imgs2`, then move the data from both folders to `./code/SAMM-RamDG-CAP/SAMM_datasets/people_imgs`.

Then place `train.json`, `val.json`, and `test.json` into `./code/SAMM-RamDG-CAP/SAMM_datasets/jsons`, and place `emotion_jpg`, `orig_output`, and `swap_jpg` into `./code/SAMM-RamDG-CAP/SAMM_datasets`.

```text
./
├── code
    └── SAMM-RamDG-CAP (this github repo)
        ├── configs
        │   └── ...
        ├── dataset
        │   └── ...
        ├── models
        │   └── ...
        ├── SAMM_datasets
        │   ├── jsons
        │   │   ├── train.json
        │   │   ├── test.json
        │   │   └── val.json
        │   ├── people_imgs
        │   │   ├── Messi (from people_imgs1)
        │   │   ├── Trump (from people_imgs2)
        │   │   └── ...
        │   ├── emotion_jpg
        │   ├── orig_output
        │   └── swap_jpg
        └── pytorch_model.bin
```
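
The moves described above could be scripted roughly as follows. This is a hedged sketch, assuming the downloaded files and folders sit in the current directory under the names used in the tree; it skips anything not yet downloaded.

```shell
# Sketch only: arrange downloaded SAMM files into the expected layout.
BASE=./code/SAMM-RamDG-CAP/SAMM_datasets
mkdir -p "$BASE/jsons" "$BASE/people_imgs"

# JSON splits go under jsons/
for f in train.json val.json test.json; do
    if [ -f "$f" ]; then mv "$f" "$BASE/jsons/"; fi
done

# Merge both people_imgs downloads into one folder (CAP version only)
for d in people_imgs1 people_imgs2; do
    if [ -d "$d" ]; then mv "$d"/* "$BASE/people_imgs/"; fi
done

# Image folders sit directly under SAMM_datasets/
for d in emotion_jpg orig_output swap_jpg; do
    if [ -d "$d" ]; then mv "$d" "$BASE/"; fi
done
```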

### Training RamDG

To train RamDG on the SAMM dataset, modify `train.sh` as needed and then run:

```bash
bash train.sh
```

### Testing RamDG

To test RamDG on the SAMM dataset, modify `test.sh` as needed and then run:

```bash
bash test.sh
```

## Citation

If you find this work useful for your research, please kindly cite our paper:

```bibtex
@inproceedings{shen2025beyond,
  title={Beyond Artificial Misalignment: Detecting and Grounding Semantic-Coordinated Multimodal Manipulations},
  author={Shen, Jinjie and Wang, Yaxiong and Chen, Lechao and Nan, Pu and Zhong, Zhun},
  booktitle={ACM Multimedia},
  year={2025}
}
```