Enhance CAP dataset card with metadata, paper, and code links

#1 by nielsr HF Staff - opened
Files changed (1)
  1. README.md +28 -2
README.md CHANGED
@@ -1,10 +1,26 @@
 ---
 license: apache-2.0
+task_categories:
+- image-feature-extraction
+- feature-extraction
+language:
+- en
+tags:
+- multimodal
+- celebrity
+- attribute
+- image-text
 ---
 
+This repository contains the **Celeb Attributes Portfolio (CAP)** dataset, which is part of the research presented in the paper [Beyond Artificial Misalignment: Detecting and Grounding Semantic-Coordinated Multimodal Manipulations](https://huggingface.co/papers/2509.12653).
+
+Code: [https://github.com/shen8424/SAMM-RamDG-CAP](https://github.com/shen8424/SAMM-RamDG-CAP)
+
 # Introduction
 
-We present <b>CAP</b>, a large-scale database including over 70k celebrities. Each celebrity in the CAP has at least three associated images along with their gender, birth year, occupation, and main achievements.
+We present **CAP**, a large-scale database including over 70k celebrities. Each celebrity in the CAP has at least three associated images along with their gender, birth year, occupation, and main achievements.
+
+CAP serves as an auxiliary dataset for the Semantic-Aligned Multimodal Manipulation (SAMM) dataset, providing contextual evidence for the Retrieval-Augmented Manipulation Detection and Grounding (RamDG) framework, as described in our paper. It enhances the realism of multimodal manipulation detection by offering semantically consistent visual and textual attributes for grounding manipulations.
 
 Two examples from CAP:
 
@@ -20,4 +36,14 @@ If you want to import the CAP data into your own dataset, please refer to [this]
 
 # 🤗🤗🤗 Citation
 
-If you find this work useful for your research, please kindly cite our paper:
+If you find this work useful for your research, please kindly cite our paper:
+
+```bibtex
+@inproceedings{shen2025beyond,
+  title={Beyond Artificial Misalignment: Detecting and Grounding Semantic-Coordinated Multimodal Manipulations},
+  author={Shen, Jinjie and Wang, Yaxiong and Chen, Lechao and Nan, Pu and Zhong, Zhun},
+  booktitle={Proceedings of the ACM International Conference on Multimedia (MM)},
+  year={2025},
+  url={https://huggingface.co/papers/2509.12653}
+}
+```
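For anyone extending the card further, the YAML front matter this PR adds is simple enough to sanity-check programmatically. Below is a minimal sketch with a hand-rolled parser for this card's flat key/list shape only (for illustration; a real validation should use a proper YAML library such as PyYAML, which handles the full syntax):

```python
def parse_front_matter(card_text):
    """Parse the flat `key: value` / `key:` + `- item` YAML front matter
    of a dataset card. Illustration only -- not a general YAML parser."""
    lines = card_text.strip().splitlines()
    assert lines[0] == "---", "card must start with a front-matter marker"
    end = lines.index("---", 1)  # closing marker of the front matter
    meta, current_key = {}, None
    for line in lines[1:end]:
        if line.startswith("- ") and current_key is not None:
            # list item belonging to the most recent empty-valued key
            meta[current_key].append(line[2:].strip())
        elif ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if value:
                meta[key] = value
                current_key = None
            else:
                meta[key] = []       # list follows on subsequent lines
                current_key = key
    return meta

# The front matter as added in this PR
card = """---
license: apache-2.0
task_categories:
- image-feature-extraction
- feature-extraction
language:
- en
tags:
- multimodal
- celebrity
- attribute
- image-text
---
"""

meta = parse_front_matter(card)
print(meta["license"])          # apache-2.0
print(meta["task_categories"])  # ['image-feature-extraction', 'feature-extraction']
```

A check like this catches the most common card-metadata mistakes (a missing closing `---`, or a tag accidentally left on the `tags:` line itself) before the Hub's validator does.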