Improve dataset card: Add paper/code links, update task categories, add sample usage and citation

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +133 -15
README.md CHANGED
@@ -1,36 +1,54 @@
  ---
- license: apache-2.0
- task_categories:
- - token-classification
- - text-classification
  language:
  - en
  size_categories:
  - 100K<n<1M
  ---
11
 
- # Notes ⚠️
-
- - If you want to import the CAP data into your own dataset, please refer to [this](https://github.com/shen8424/CAP).
- - If you want to run RamDG on datasets other than SAMM and use CNCL to incorporate external knowledge, please ensure to configure ```idx_cap_texts``` and ```idx_cap_images``` in the dataset jsons.
- - We have upgraded the SAMM JSON files. The latest versions (SAMM with CAP or without CAP) are available on July 24, 2025. Please download the newest version.
- # Brief introduction

  <div align="center">
  <img src='./figures/teaser.png' width='90%'>
  </div>

- We present <b>SAMM</b>, a large-scale dataset for Detecting and Grounding Semantic-Coordinated Multimodal Manipulation. The official code has been released at [this](https://github.com/shen8424/SAMM-RamDG-CAP).

- **Dataset Statistics:**

  <div align="center">
  <img src='./figures/samm_statistics.png' width='90%'>
  </div>

-
- # Annotations
  ```
  {
  "text": "Lachrymose Terri Butler, whose letter prompted Peter Dutton to cancel Troy Newman's visa, was clearly upset.",
@@ -77,4 +95,104 @@ We present <b>SAMM</b>, a large-scale dataset for Detecting and Grounding Semant
  - `cap_texts`: Textual information extracted from CAP (Contextual Auxiliary Prompt) annotations.
  - `cap_images`: Relative paths to visual information from CAP annotations.
  - `idx_cap_texts`: A binary array where the i-th element indicates whether the i-th celebrity in `cap_texts` is tampered (1 = tampered, 0 = not tampered).
- - `idx_cap_images`: A binary array where the i-th element indicates whether the i-th celebrity in `cap_images` is tampered (1 = tampered, 0 = not tampered).

  ---
  language:
  - en
+ license: apache-2.0
  size_categories:
  - 100K<n<1M
+ task_categories:
+ - token-classification
+ - text-classification
+ - image-text-to-text
+ - object-detection
+ tags:
+ - multimodal
+ - manipulation-detection
+ - media-forensics
+ - deepfake-detection
  ---

+ # SAMM: Semantic-Aligned Multimodal Manipulation Dataset

+ [Paper](https://huggingface.co/papers/2509.12653) | [Code](https://github.com/shen8424/SAMM-RamDG-CAP)
+
+ ## Introduction
+
+ The detection and grounding of manipulated content in multimodal data has emerged as a critical challenge in media forensics. While existing benchmarks demonstrate technical progress, they suffer from misalignment artifacts that poorly reflect real-world manipulation patterns: practical attacks typically maintain semantic consistency across modalities, whereas current datasets artificially disrupt cross-modal alignment, creating easily detectable anomalies. To bridge this gap, we pioneer the detection of semantically-coordinated manipulations, where visual edits are systematically paired with semantically consistent textual descriptions. Our approach begins with constructing the first Semantic-Aligned Multimodal Manipulation (SAMM) dataset.
+
+ We present **SAMM**, a large-scale dataset for detecting and grounding semantic-coordinated multimodal manipulations; this is the official release of *SAMM* and *RamDG*. To address this challenge, we design the **RamDG** framework, a novel approach that leverages external knowledge to detect and ground manipulated content.

  <div align="center">
  <img src='./figures/teaser.png' width='90%'>
  </div>

+ The framework of the proposed RamDG:
+
+ <div align="center">
+ <img src='https://github.com/shen8424/SAMM-RamDG-CAP/blob/main/figures/RamDG.png?raw=true' width='90%'>
+ </div>
+
+ ## Notes ⚠️
+
+ - If you want to import the CAP data into your own dataset, please refer to [the CAP repository](https://github.com/shen8424/CAP).
+ - If you want to run RamDG on datasets other than SAMM and use CNCL to incorporate external knowledge, make sure to configure `idx_cap_texts` and `idx_cap_images` in the dataset JSONs.
+ - We have upgraded the SAMM JSON files. The latest versions (SAMM with or without CAP) were released on July 24, 2025; please download the newest version.
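For the second note, a minimal sketch of what configuring those fields for a custom dataset JSON might look like. This is not from the official repo; the record content is a placeholder, and only the field names (`cap_texts`, `cap_images`, `idx_cap_texts`, `idx_cap_images`) follow the SAMM schema:

```python
import json

# Hypothetical record from a non-SAMM dataset JSON (placeholder content).
record = {
    "text": "example caption",
    "cap_texts": ["bio of celebrity A", "bio of celebrity B"],
    "cap_images": ["people_imgs/A/0.jpg", "people_imgs/B/0.jpg"],
}

# CNCL expects one flag per CAP entry; 0 marks entries untouched by any
# manipulation, 1 marks tampered ones.
record["idx_cap_texts"] = [0] * len(record["cap_texts"])
record["idx_cap_images"] = [0] * len(record["cap_images"])

print(json.dumps(record, indent=2))
```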

+ ## Dataset Statistics

  <div align="center">
  <img src='./figures/samm_statistics.png' width='90%'>
  </div>

+ ## Annotations
  ```
  {
  "text": "Lachrymose Terri Butler, whose letter prompted Peter Dutton to cancel Troy Newman's visa, was clearly upset.",

  - `cap_texts`: Textual information extracted from CAP (Contextual Auxiliary Prompt) annotations.
  - `cap_images`: Relative paths to visual information from CAP annotations.
  - `idx_cap_texts`: A binary array where the i-th element indicates whether the i-th celebrity in `cap_texts` is tampered (1 = tampered, 0 = not tampered).
+ - `idx_cap_images`: A binary array where the i-th element indicates whether the i-th celebrity in `cap_images` is tampered (1 = tampered, 0 = not tampered).
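To illustrate how the binary arrays line up with the CAP lists, here is a small sketch; the `"text"` value is the sample above, while the `cap_texts` entries are placeholders of our own:

```python
# Hedged sketch: interpreting the binary index arrays of a SAMM record.
record = {
    "text": ("Lachrymose Terri Butler, whose letter prompted Peter Dutton "
             "to cancel Troy Newman's visa, was clearly upset."),
    "cap_texts": ["placeholder bio 1", "placeholder bio 2"],
    "idx_cap_texts": [1, 0],  # 1 = tampered, 0 = not tampered
}

# The i-th flag refers to the i-th CAP entry, so zip pairs them directly.
tampered = [t for t, flag in zip(record["cap_texts"],
                                 record["idx_cap_texts"]) if flag == 1]
print(tampered)  # → ['placeholder bio 1']
```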
+
+ ## Sample Usage (Training and Testing RamDG)
+
+ The following snippets, taken from the official GitHub repository, demonstrate how to train and test the RamDG framework on this dataset.
+
+ ### Dependencies and Installation
+ ```bash
+ mkdir code
+ cd code
+ git clone https://github.com/shen8424/SAMM-RamDG-CAP.git
+ cd SAMM-RamDG-CAP
+ conda create -n RamDG python=3.8
+ conda activate RamDG
+ conda install --yes -c pytorch pytorch=1.10.0 torchvision==0.11.1 cudatoolkit=11.3
+ pip install -r requirements.txt
+ conda install -c conda-forge ruamel_yaml
+ ```
+
+ ### Prepare Checkpoint
+
+ Download the pre-trained checkpoints [ALBEF_4M.pth](https://storage.googleapis.com/sfr-pcl-data-research/ALBEF/ALBEF_4M.pth) and [pytorch_model.bin](https://drive.google.com/file/d/15qfsTHPB-CkEVreOyf-056JWDAVjWK3w/view?usp=sharing) (Google Drive).
+
+ Then put `ALBEF_4M.pth` and `pytorch_model.bin` into `./code/SAMM-RamDG-CAP/`.
123
+ ```
124
+ ./
125
+ β”œβ”€β”€ code
126
+ └── SAMM-RamDG-CAP (this github repo)
127
+ β”œβ”€β”€ configs
128
+ β”‚ └──...
129
+ β”œβ”€β”€ dataset
130
+ β”‚ └──...
131
+ β”œβ”€β”€ models
132
+ β”‚ └──...
133
+ ...
134
+ └── ALBEF_4M.pth
135
+ └── pytorch_model.bin
136
+ ```
+
+ ### Prepare Data
+
+ We provide two versions: SAMM with CAP information and SAMM without it. If you choose SAMM with CAP information, download `people_imgs1` and `people_imgs2`, then move the data from both folders into `./code/SAMM-RamDG-CAP/SAMM_datasets/people_imgs`.
+
+ Then place `train.json`, `val.json`, and `test.json` into `./code/SAMM-RamDG-CAP/SAMM_datasets/jsons`, and place `emotion_jpg`, `orig_output`, and `swap_jpg` into `./code/SAMM-RamDG-CAP/SAMM_datasets`.
+
+ ```
+ ./
+ ├── code
+     └── SAMM-RamDG-CAP (this github repo)
+         ├── configs
+         │   └── ...
+         ├── dataset
+         │   └── ...
+         ├── models
+         │   └── ...
+         ...
+         ├── SAMM_datasets
+         │   ├── jsons
+         │   │   ├── train.json
+         │   │   ├── test.json
+         │   │   └── val.json
+         │   ├── people_imgs
+         │   │   ├── Messi (from people_imgs1)
+         │   │   ├── Trump (from people_imgs2)
+         │   │   └── ...
+         │   ├── emotion_jpg
+         │   ├── orig_output
+         │   └── swap_jpg
+         └── pytorch_model.bin
+ ```
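The file moves described in "Prepare Data" can be sketched as a short shell script. The placeholder `mkdir`/`touch` lines at the top stand in for the downloaded archives (an assumption for illustration, not part of the release):

```shell
# Placeholder source folders standing in for the downloaded SAMM archives.
mkdir -p people_imgs1/Messi people_imgs2/Trump emotion_jpg orig_output swap_jpg
touch train.json val.json test.json

REPO=./code/SAMM-RamDG-CAP
mkdir -p "$REPO/SAMM_datasets/people_imgs" "$REPO/SAMM_datasets/jsons"

# Merge both people_imgs archives into one folder.
cp -r people_imgs1/. people_imgs2/. "$REPO/SAMM_datasets/people_imgs/"

# Place the annotation JSONs and the image folders.
mv train.json val.json test.json "$REPO/SAMM_datasets/jsons/"
mv emotion_jpg orig_output swap_jpg "$REPO/SAMM_datasets/"
```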
+
+ ### Training RamDG
+ To train RamDG on the SAMM dataset, modify `train.sh` as needed and then run:
+ ```bash
+ bash train.sh
+ ```
+
+ ### Testing RamDG
+ To test RamDG on the SAMM dataset, modify `test.sh` as needed and then run:
+ ```bash
+ bash test.sh
+ ```
+
+ ## Citation
+ If you find this work useful for your research, please cite our paper:
+ ```bibtex
+ @inproceedings{shen2025beyond,
+   title={Beyond Artificial Misalignment: Detecting and Grounding Semantic-Coordinated Multimodal Manipulations},
+   author={Shen, Jinjie and Wang, Yaxiong and Chen, Lechao and Nan, Pu and Zhong, Zhun},
+   booktitle={ACM Multimedia},
+   year={2025}
+ }
+ ```