|
|
--- |
|
|
license: cc-by-nc-4.0 |
|
|
task_categories: |
|
|
- text-generation |
|
|
- image-to-text |
|
|
- summarization |
|
|
- question-answering |
|
|
language: |
|
|
- en |
|
|
--- |
|
|
|
|
|
# 🎨 Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want |
|
|
|
|
|
The interaction between humans and artificial intelligence (AI) is a crucial measure of the effectiveness of multimodal large language models (MLLMs). However, current MLLMs primarily focus on image-level comprehension and limit interaction to textual instructions, which constrains their flexibility of use and depth of response. We therefore introduce the **Draw-and-Understand project**: a new model, a multi-domain dataset, and a challenging benchmark for visual prompting.
|
|
|
|
|
|
|
|
## Training and Evaluation Dataset Card |
|
|
|
|
|
- MDVP-Data is a comprehensive dataset for multi-domain visual-prompt instruction tuning. It covers both point-level and region-level understanding and is designed to enhance a model's comprehension ability and robustness (see the loading sketch after this list).
|
|
|
|
|
- We also introduce MDVP-Bench, a challenging benchmark designed to evaluate tasks that require detailed referring descriptions, inter-relationship analysis, and complex reasoning.
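
For a quick look at the data, the sketch below shows how MDVP-Data might be loaded and inspected with the Hugging Face `datasets` library. The repository id and the field names (`image`, `visual_prompts`, `conversations`) are assumptions for illustration only; check the files in this repository for the actual identifier and schema.

```python
# Minimal sketch, assuming MDVP-Data is loadable via the `datasets` library.
# The repo id and field names below are illustrative assumptions, not the
# confirmed schema of this dataset.
from datasets import load_dataset

# Hypothetical repository id -- replace with this dataset's actual id.
ds = load_dataset("AFeng-x/Draw-and-Understand", split="train")

sample = ds[0]
print(sample.keys())  # always inspect the real fields first

# A visual-prompt instruction record would typically pair an image with
# point- or region-level prompts and an instruction/response dialogue, e.g.:
# sample["image"]           -> image or path to one
# sample["visual_prompts"]  -> [{"type": "point", "coords": [x, y]}, ...]
#                              or [{"type": "box", "coords": [x1, y1, x2, y2]}]
# sample["conversations"]   -> [{"from": "human", "value": "..."}, ...]
```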
|
|
|
|
|
|
|
|
## Paper and Code |
|
|
Project Page: [Draw-and-Understand](https://draw-and-understand.github.io/) \ |
|
|
Paper: [https://arxiv.org/abs/2403.20271](https://arxiv.org/abs/2403.20271) \ |
|
|
Code: [https://github.com/AFeng-x/Draw-and-Understand](https://github.com/AFeng-x/Draw-and-Understand) |
|
|
|
|
|
|
|
|
## License |
|
|
Attribution-NonCommercial 4.0 International \ |
|
|
Use of this dataset should also abide by OpenAI's Terms of Use: https://openai.com/policies/terms-of-use.
|
|
|
|
|
|
|
|
## Citation
|
|
``` |
|
|
@article{lin2024draw, |
|
|
  title={Draw-and-Understand: Leveraging Visual Prompts to Enable {MLLMs} to Comprehend What You Want},
|
|
author={Lin, Weifeng and Wei, Xinyu and An, Ruichuan and Gao, Peng and Zou, Bocheng and Luo, Yulin and Huang, Siyuan and Zhang, Shanghang and Li, Hongsheng}, |
|
|
journal={arXiv preprint arXiv:2403.20271}, |
|
|
year={2024} |
|
|
} |
|
|
``` |