---
license: bsd-3-clause
library_name: pytorch
pipeline_tag: image-classification
tags:
- facial-forgery-detection
- multi-label-classification
- vit
- deepfake
- acl-2026
---

# Face-ViT: Multi-Label Facial Forgery Region Classifier

## 📖 Model Description
This is the **Face-ViT** auxiliary perception module proposed in the ACL 2026 paper: 
*"Generating Attribution Reports for Manipulated Facial Images: A Dataset and Baseline"*.

Face-ViT is a multi-label classifier based on the **ViT-H/14** architecture. It is specifically trained to recognize 21 different types of facial manipulations (e.g., eye modification, skin smoothing, mouth tampering). In the DFF framework, it provides fine-grained visual cues that guide the large language model to generate accurate forensic explanations.
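
No loading snippet is released with this card, so the following is only a minimal PyTorch inference sketch: the checkpoint path `face_vit.pth`, the `torch.load` call, the ImageNet normalization statistics, and the 0.5 decision threshold are all assumptions, not documented choices — see the official repo for the released weights and exact preprocessing.

```python
import torch
from PIL import Image
from torchvision import transforms

# Standard ViT-style preprocessing; the normalization statistics are the
# usual ImageNet defaults (an assumption, not a documented choice).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Placeholder load; substitute however the official repo restores the checkpoint.
model = torch.load("face_vit.pth", map_location="cpu", weights_only=False)
model.eval()

image = Image.open("suspect_face.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # shape (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)          # shape (1, 21): one logit per manipulation type
    probs = torch.sigmoid(logits)  # independent per-class probabilities (multi-label)

# Flag every manipulation type whose probability clears the threshold.
flagged = (probs.squeeze(0) > 0.5).nonzero(as_tuple=True)[0].tolist()
print("Predicted manipulation class indices:", flagged)
```

Note that a sigmoid (not softmax) is applied, since multiple manipulation regions can be present in the same image.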

## 🛠️ Model Details
- **Architecture**: ViT-H/14 with an additional CNN branch and max-pooling for multi-label support.
- **Input Size**: 224x224 RGB images.
- **Number of Classes**: 21 (Facial attributes/manipulation types).
- **Training Objective**: Joint loss combining BCE, focal, Dice, and Jaccard terms (see the sketch below).
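
The four terms can be combined as in the sketch below. This is illustrative only: equal term weighting and these particular soft formulations of the Dice and Jaccard losses are assumptions, and the paper's exact variants may differ.

```python
import torch
import torch.nn.functional as F

def joint_forgery_loss(logits, targets, gamma=2.0, eps=1e-6):
    """Illustrative joint loss over multi-label logits of shape (B, 21).

    `targets` is a float multi-hot tensor of the same shape. Equal term
    weights and these exact soft formulations are assumptions.
    """
    probs = torch.sigmoid(logits)

    # Binary cross-entropy, averaged over all batch x class entries.
    bce = F.binary_cross_entropy_with_logits(logits, targets)

    # Focal term: down-weight well-classified entries by (1 - p_t)^gamma.
    pt = probs * targets + (1 - probs) * (1 - targets)
    per_entry_bce = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")
    focal = ((1 - pt) ** gamma * per_entry_bce).mean()

    # Soft Dice and Jaccard treat predictions and targets as fuzzy sets.
    inter = (probs * targets).sum()
    dice = 1 - (2 * inter + eps) / (probs.sum() + targets.sum() + eps)
    union = probs.sum() + targets.sum() - inter
    jaccard = 1 - (inter + eps) / (union + eps)

    return bce + focal + dice + jaccard
```

Usage would be e.g. `loss = joint_forgery_loss(model(batch), multi_hot_labels.float())`; the overlap-based Dice and Jaccard terms help with the class imbalance typical of sparse multi-label targets.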

## 🚀 Links
- **Official Code**: [Generating-Attribution-Reports](https://github.com/NattyLianJc/Generating-Attribution-Reports)
- **Main Framework (DFF)**: [LianJC/DFF-InstructBLIP-Detection](https://huggingface.co/LianJC/DFF-InstructBLIP-Detection)
- **Dataset (MMTT)**: [LianJC/MMTT-Dataset](https://huggingface.co/datasets/LianJC/MMTT-Dataset)

## 📜 Citation
If you find this model useful, please cite:
```bibtex
@inproceedings{lian2026generating,
  title={Generating Attribution Reports for Manipulated Facial Images: A Dataset and Baseline},
  author={Lian, Jingchun and others},
  booktitle={Proceedings of ACL},
  year={2026},
  note={To appear}
}
```