Update README.md
README.md (changed)
```diff
@@ -25,9 +25,8 @@ configs:
   path: questions.json
 ---
 
-# PerceptionComp
+# PerceptionComp: A Benchmark for Complex Perception-Centric Video Reasoning
 
-<p align="center">
   <a href="https://huggingface.co/datasets/hrinnnn/PerceptionComp">
     <img src="https://img.shields.io/badge/Dataset-Hugging%20Face-FFD21E?logo=huggingface&logoColor=black" alt="Dataset">
   </a>
@@ -40,7 +39,6 @@ configs:
   <a href="https://github.com/hrinnnn/PerceptionComp">
     <img src="https://img.shields.io/badge/GitHub-Repository-181717?logo=github&logoColor=white" alt="GitHub">
   </a>
-</p>
 
 PerceptionComp is a benchmark for complex perception-centric video reasoning. It focuses on questions that cannot be solved from a single frame, a short clip, or a shallow caption. Models must revisit visually complex videos, gather evidence across temporally separated segments, and combine multiple perceptual cues before answering.
 
```