nielsr HF Staff committed on

Commit a1cebc1 · verified · 1 Parent(s): dce9e88

Add comprehensive model card for FakeVLM


This PR adds a comprehensive model card for the FakeVLM model, linking it to the paper [Spot the Fake: Large Multimodal Model-Based Synthetic Image Detection with Artifact Explanation](https://arxiv.org/pdf/2503.14905).

The model card includes:
- Essential metadata such as `license`, `pipeline_tag`, and `library_name`, as well as `tags`, `datasets`, and `base_model` for improved discoverability.
- A detailed description, overview of the framework, key contributions, installation instructions, evaluation results, acknowledgements, contact information, and citation, all extracted directly from the original GitHub repository.
- Links to the official GitHub repository and the associated FakeClue dataset.

This update improves the discoverability and usability of the FakeVLM model on the Hugging Face Hub, making it easy for users to understand the model's purpose, how to use it, and how it performs.

Please review and merge this PR if everything looks good.

Files changed (1)
  1. README.md +154 -0
README.md ADDED
@@ -0,0 +1,154 @@
---
license: cc-by-nc-4.0
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- multimodal
- synthetic-image-detection
- deepfake-detection
- explainable-ai
- llava
datasets:
- lingcco/FakeClue
base_model:
- llava-hf/llava-1.5-7b-hf
---

<div align="center">
<h2> <img src="https://github.com/opendatalab/FakeVLM/raw/main/imgs/logo.jpg" alt="FakeVLM logo" width="50" height="50" align="absmiddle"> Spot the Fake: Large Multimodal Model-Based Synthetic Image Detection with Artifact Explanation
</h2>
</div>
<div align="center">

[Siwei Wen](https://scholar.google.com/citations?user=kJRiUYwAAAAJ&hl=zh-CN)<sup>1,3*</sup>,
[Junyan Ye](https://yejy53.github.io/)<sup>2,1*</sup>,
[Peilin Feng](https://peilin-ff.github.io/)<sup>1,3</sup>,
[Hengrui Kang](https://scholar.google.com/citations?user=kVbzWCAAAAAJ&hl=zh-CN)<sup>4,1</sup>, <br>
[Zichen Wen](https://scholar.google.com/citations?user=N-aPFvEAAAAJ&hl=zh-CN)<sup>4,1</sup>,
[Yize Chen](https://openreview.net/profile?id=~Yize_Chen2)<sup>5</sup>,
[Jiang Wu](https://scholar.google.com/citations?user=LHiiL7AAAAAJ&hl=zh-CN)<sup>1</sup>,
[Wenjun Wu](https://openreview.net/profile?id=~wenjun_wu3)<sup>3</sup>,
[Conghui He](https://conghui.github.io/)<sup>1</sup>,
[Weijia Li](https://liweijia.github.io/)<sup>2,1†</sup>

<sup>1</sup>Shanghai Artificial Intelligence Laboratory, <sup>2</sup>Sun Yat-sen University<br>
<sup>3</sup>Beihang University, <sup>4</sup>Shanghai Jiao Tong University, <sup>5</sup>The Chinese University of Hong Kong, Shenzhen

</div>

This model was presented in the paper [Spot the Fake: Large Multimodal Model-Based Synthetic Image Detection with Artifact Explanation](https://arxiv.org/pdf/2503.14905).

The code and further details can be found in the [GitHub repository](https://github.com/opendatalab/FakeVLM).

<div align="center">

[![arXiv](https://img.shields.io/badge/Arxiv-2503.14905-AD1C18.svg?logo=arXiv)](https://arxiv.org/pdf/2503.14905)
[![](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2Fopendatalab%2FFakeVLM&count_bg=%23C25AE6&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=Visitor&edge_flat=false)](https://hits.seeyoufarm.com)
[![GitHub issues](https://img.shields.io/github/issues/opendatalab/FakeVLM?color=critical&label=Issues)](https://github.com/opendatalab/FakeVLM/issues)
[![GitHub Stars](https://img.shields.io/github/stars/opendatalab/FakeVLM?style=social)](https://github.com/opendatalab/FakeVLM/stargazers)
[![Dataset](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Dataset-yellow)](https://huggingface.co/datasets/lingcco/FakeClue)
[![Model](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-yellow)](https://huggingface.co/lingcco/fakeVLM)
</div>

## 📰 News
- **[2025.9.24]**: 🎉 FakeVLM was accepted to NeurIPS 2025!
- **[2025.4.15]**: 🤗 We are excited to release the FakeClue dataset. Check it out [here](https://huggingface.co/datasets/lingcco/FakeClue).
- **[2025.3.20]**: 🔥 We have released **Spot the Fake: Large Multimodal Model-Based Synthetic Image Detection with Artifact Explanation**. Check out the [paper](https://arxiv.org/abs/2503.14905). We present the FakeClue dataset and the FakeVLM model.

## <img id="painting_icon" width="3%" src="https://cdn-icons-png.flaticon.com/256/599/599205.png"> FakeVLM Overview

With the rapid advancement of Artificial Intelligence Generated Content (AIGC) technologies, synthetic images have become increasingly prevalent in everyday life, posing new challenges for authenticity assessment and detection. Despite the effectiveness of existing methods in evaluating image authenticity and locating forgeries, these approaches often lack human interpretability and do not fully address the growing complexity of synthetic data. To tackle these challenges, we introduce FakeVLM, a specialized large multimodal model designed for both general synthetic image and DeepFake detection tasks. FakeVLM not only excels in distinguishing real from fake images but also provides clear, natural language explanations for image artifacts, enhancing interpretability. Additionally, we present FakeClue, a comprehensive dataset containing over 100,000 images across seven categories, annotated with fine-grained artifact clues in natural language. FakeVLM demonstrates performance comparable to expert models while eliminating the need for additional classifiers, making it a robust solution for synthetic data detection. Extensive evaluations across multiple datasets confirm the superiority of FakeVLM in both authenticity classification and artifact explanation tasks, setting a new benchmark for synthetic image detection.

<div align="center">
<img src="https://github.com/opendatalab/FakeVLM/raw/main/imgs/framework.jpg" alt="framework" width="90%" height="auto">
</div>

## <img id="painting_icon" width="3%" src="https://cdn-icons-png.flaticon.com/256/2435/2435606.png"> Contributions

- We propose FakeVLM, a large multimodal model designed for both general synthetic and deepfake image detection tasks. It excels at distinguishing real from fake images while also providing excellent interpretability for artifact details in synthetic images.
- We introduce the FakeClue dataset, which includes a rich variety of image categories and fine-grained artifact annotations in natural language.
- Our method has been extensively evaluated on multiple datasets, achieving outstanding performance in both synthetic detection and abnormal artifact explanation tasks.

## 🛠️ Installation
Please clone our repository and change into that folder:
```bash
git clone git@github.com:opendatalab/FakeVLM.git
cd FakeVLM
```

Our model is based on the [lmms-finetune](https://github.com/zjysteven/lmms-finetune) environment. Please follow the steps below to configure it.
```bash
conda create -n fakevlm python=3.10 -y
conda activate fakevlm

python -m pip install -r requirements.txt

python -m pip install --no-cache-dir --no-build-isolation flash-attn
```

## 📦 Dataset
The training data can be downloaded from [here](https://huggingface.co/datasets/lingcco/FakeClue).

The directory containing the images should have the following structure:

```
playground
└──data
   └──train
      |--doc
      |--fake
      |--real
      .
      .
      |--satellite
   └──test
      .
      .
      .
```

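As a quick sanity check after downloading, you can verify the layout programmatically. The sketch below walks one split directory and counts files per category; the paths and directory names (`playground/data/train`, `doc`, `satellite`, ...) are taken from the tree above and should be adjusted to your actual download.

```python
# Sanity-check sketch (assumed layout from the tree above): count the
# files under each category directory of one FakeClue split.
from pathlib import Path


def count_images(split_dir: str) -> dict:
    """Return {category_name: file_count} for one split directory."""
    counts = {}
    for category in sorted(Path(split_dir).iterdir()):
        if category.is_dir():
            # rglob covers any nested subfolders inside a category.
            counts[category.name] = sum(1 for p in category.rglob("*") if p.is_file())
    return counts


# usage: print(count_images("playground/data/train"))
```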
## 📌 Usage
For detailed instructions on data preparation, training, and evaluation, please refer to the [official GitHub repository](https://github.com/opendatalab/FakeVLM).

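Since FakeVLM builds on LLaVA-1.5-7B (see `base_model` in the metadata), inference with the standard `transformers` LLaVA pipeline should work along these lines. This is a sketch, not the official inference script: the repo id `lingcco/fakeVLM` comes from the model badge above, while the prompt wording and generation settings are assumptions; consult the GitHub repository for the exact procedure.

```python
# Hedged inference sketch for FakeVLM (LLaVA-1.5-7B based).
# Assumptions: repo id from the model badge; prompt text is illustrative.
MODEL_ID = "lingcco/fakeVLM"


def build_prompt(question: str) -> str:
    # LLaVA-1.5 conversation template; FakeVLM's exact template may differ.
    return f"USER: <image>\n{question} ASSISTANT:"


def detect(image_path: str) -> str:
    # Heavy imports are local so build_prompt stays importable without
    # torch/transformers installed.
    import torch
    from PIL import Image
    from transformers import AutoProcessor, LlavaForConditionalGeneration

    processor = AutoProcessor.from_pretrained(MODEL_ID)
    model = LlavaForConditionalGeneration.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )
    image = Image.open(image_path).convert("RGB")
    prompt = build_prompt("Is this image real or fake? Explain any artifacts you see.")
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    return processor.decode(output[0], skip_special_tokens=True)


# usage: print(detect("suspect.jpg"))
```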
## 📊 Results
Performance of seven leading LMMs and FakeVLM on DD-VQA, FakeClue, and LOKI.

- **FakeClue**
Our dataset.
- **LOKI**
A new benchmark for evaluating multimodal models in synthetic detection tasks. It includes **human-annotated fine-grained image artifacts**, enabling deeper analysis of artifact explanations. We used its image modality, covering categories like Animals, Humans, Scenery, and Documents.

<img src="https://github.com/opendatalab/FakeVLM/raw/main/imgs/fakeclue_loki_result.jpg" alt="FakeClue and LOKI results" width="auto" height="auto">

- **DD-VQA**
A dataset for explaining facial artifacts, using **manual annotations** in a VQA format. Artifacts include blurred hairlines, mismatched eyebrows, rigid pupils, and unnatural shadows. It builds on FF++ data and emphasizes common-sense reasoning.

<div align="center">
<img src="https://github.com/opendatalab/FakeVLM/raw/main/imgs/ddvqa.jpg" alt="DD-VQA results" width="500" height="auto">
</div>

To provide a comprehensive comparison of model performance across FakeClue, LOKI, and DD-VQA, we present the following radar chart. It highlights the strengths and weaknesses of the seven leading LMMs and FakeVLM, offering a clear depiction of their results in synthetic detection and artifact explanation tasks.

<div align="center">
<img src="https://github.com/opendatalab/FakeVLM/raw/main/imgs/result.jpg" alt="result" width="400" height="auto">
</div>

## 😄 Acknowledgement

This repository is built upon the work of [LLaVA](https://github.com/haotian-liu/LLaVA/tree/main), and our codebase is built upon [lmms-finetune](https://github.com/zjysteven/lmms-finetune). We appreciate their contributions and insights that have provided a strong foundation for our research.

## 📨 Contact

If you have any questions or suggestions, please feel free to contact us at [466439420gh@gmail.com](mailto:466439420gh@gmail.com).

## 📝 Citation
If you find our work interesting and helpful, please consider giving our repo a star. Additionally, if you would like to cite our work, please use the following format:
```bibtex
@article{wen2025spot,
  title={Spot the fake: Large multimodal model-based synthetic image detection with artifact explanation},
  author={Wen, Siwei and Ye, Junyan and Feng, Peilin and Kang, Hengrui and Wen, Zichen and Chen, Yize and Wu, Jiang and Wu, Wenjun and He, Conghui and Li, Weijia},
  journal={arXiv preprint arXiv:2503.14905},
  year={2025}
}
```