---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: conversations
    list:
    - name: role
      dtype: string
    - name: content
      dtype: string
  splits:
  - name: train
    num_bytes: 277884785
    num_examples: 160000
  download_size: 126665150
  dataset_size: 277884785
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- image-to-text
---
<h1 align="center"> Text-Based Reasoning About Vector Graphics </h1>
<p align="center">
<a href="https://mikewangwzhl.github.io/VDLM">🌐 Homepage</a>
•
<a href="https://arxiv.org/abs/2404.06479">📃 Paper</a>
•
<a href="https://huggingface.co/datasets/mikewang/PVD-160K">🤗 Data (PVD-160k)</a>
•
<a href="https://huggingface.co/mikewang/PVD-160k-Mistral-7b">🤗 Model (PVD-160k-Mistral-7b)</a>
•
<a href="https://github.com/MikeWangWZHL/VDLM">💻 Code</a>
</p>
We observe that current *large multimodal models (LMMs)* still struggle with seemingly straightforward reasoning tasks that require precise perception of low-level visual details, such as identifying spatial relations or solving simple mazes. In particular, this failure mode persists in question-answering tasks about vector graphics, i.e., images composed purely of 2D objects and shapes.

To address this challenge, we propose the **Visually Descriptive Language Model (VDLM)**, a visual reasoning framework that operates on intermediate text-based visual descriptions (SVG representations and a learned Primal Visual Description), which can be directly integrated into existing LLMs and LMMs. We demonstrate that VDLM outperforms state-of-the-art large multimodal models, such as GPT-4V, across various multimodal reasoning tasks involving vector graphics. See our [paper](https://arxiv.org/abs/2404.06479) for more details.
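As a quick sketch of how the schema in the YAML header maps to usage, the snippet below loads the training split with the Hugging Face `datasets` library and walks through one example's `conversations` list. Field names follow the `dataset_info` schema above; the exact role strings and the display truncation are illustrative assumptions, not part of the card.

```python
# Minimal sketch: load PVD-160K and inspect one example.
# Fields (`id`, `conversations` with `role`/`content`) follow the dataset_info
# schema in the header; truncating content to 80 chars is only for readable output.
from datasets import load_dataset

dataset = load_dataset("mikewang/PVD-160K", split="train")  # single "train" split

example = dataset[0]
print(example["id"])
for turn in example["conversations"]:
    # Each turn is a dict with a "role" and a free-form "content" string.
    print(f'{turn["role"]}: {turn["content"][:80]}')
```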