Improve model card: Add metadata and links to paper, project, and code
#1
opened by nielsr (HF Staff)
README.md CHANGED

```diff
@@ -1,3 +1,13 @@
----
-license: apache-2.0
----
+---
+license: apache-2.0
+pipeline_tag: video-text-to-text
+library_name: transformers
+---
+
+This repository contains the `Videollama3Qwen2ForCausalLM` model, a reward model presented in the paper [Learning Human-Perceived Fakeness in AI-Generated Videos via Multimodal LLMs](https://huggingface.co/papers/2509.22646).
+
+The model is designed to detect human-perceived deepfake traces in AI-generated videos. It takes multimodal input and provides natural-language explanations, bounding-box regions for spatial grounding, and precise onset/offset timestamps for temporal labeling. It was trained on the DeeptraceReward benchmark, the first fine-grained, spatially and temporally aware dataset for annotating human-perceived fake traces.
+
+* **Paper**: [Learning Human-Perceived Fakeness in AI-Generated Videos via Multimodal LLMs](https://huggingface.co/papers/2509.22646)
+* **Project Page**: https://deeptracereward.github.io/
+* **Code**: https://github.com/deeptracereward/deeptracereward
```
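For reviewers who want to sanity-check the new `library_name: transformers` metadata, here is a minimal loading sketch. It assumes the repository ships VideoLLaMA3-style remote code, so that `trust_remote_code=True` resolves the custom `Videollama3Qwen2ForCausalLM` class; the repo id and the conversation/video input schema below are illustrative assumptions, not something this PR confirms.

```python
# Minimal loading sketch (not part of this PR). Assumes the repository ships
# VideoLLaMA3-style remote code so AutoModelForCausalLM can resolve the custom
# Videollama3Qwen2ForCausalLM class; the repo id below is a placeholder.
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "<this-repo-id>"  # placeholder: use this repository's Hub id

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # needed for the custom model/processor classes
    torch_dtype="auto",
    device_map="auto",       # requires the `accelerate` package
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# The conversation schema is assumed to follow VideoLLaMA3's remote-code
# processor; consult the repository README for the exact format.
conversation = [
    {"role": "user", "content": [
        {"type": "video", "video": {"video_path": "clip.mp4", "fps": 1, "max_frames": 64}},
        {"type": "text", "text": "Describe any fake traces: explain them, give "
                                 "bounding boxes, and report onset/offset timestamps."},
    ]},
]
inputs = processor(conversation=conversation, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

If the shipped processor does not accept a `conversation` argument, fall back to the preprocessing shown in the repository's own README; only the `AutoModelForCausalLM`/`AutoProcessor` loading pattern is standard Transformers usage.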