Update README.md
README.md
CHANGED
@@ -9,7 +9,8 @@ size_categories:
 - 1K<n<10K
 ---
 
-#
+# DeepPerception: Advancing Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding
+Xinyu Ma, Ziyang Ding, Zhicong Luo, Chi Chen, Zonghao Guo, Derek F. Wong, Xiaoyi Feng, Maosong Sun
 
 <a href='https://deepperception-kvg.github.io/'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
 <a href='https://github.com/MaxyLee/DeepPerception'><img src='https://img.shields.io/badge/Github-Page-Green'></a>
@@ -19,3 +20,5 @@ size_categories:
 
 This is the official repository of **KVG-Bench**, a comprehensive benchmark of Knowledge-intensive Visual Grounding (KVG) spanning 10 categories with 1.3K manually curated test cases.
 
+
+