---
base_model:
- llava-hf/llama3-llava-next-8b-hf
- openbmb/MiniCPM-V-2_6
- microsoft/Phi-3-vision-128k-instruct
- Qwen/Qwen2.5-VL-7B-Instruct
license: mit
metrics:
- accuracy
pipeline_tag: image-text-to-text
library_name: transformers
---

**These models were obtained via supervised fine-tuning (SFT) on the [ECD-10k-Images dataset](https://huggingface.co/datasets/ChartFoundation/ECD-10k-Images) proposed in our ICCV 2025 paper, "[Effective Training Data Synthesis for Improving MLLM Chart Understanding](https://huggingface.co/papers/2508.06492)" ([code](https://github.com/yuweiyang-anu/ECD)).**

**ECD Dataset Overview**:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6666a432b78b8b6b34a816e9/kKQmAbuLKB7zOmOVegaOe.png)

**Comparison of the four fine-tuned MLLMs on six test sets (CharXiv, ChartQA, ReachQA, ChartBench, ChartX, and ECDBench)**:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6666a432b78b8b6b34a816e9/z_kGluaiPHKsXQcwWLYfR.png)
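**Usage**:

A minimal inference sketch with `transformers`, matching the card's `image-text-to-text` pipeline tag. The repo id `your-org/your-sft-checkpoint` is a placeholder (substitute the id of the released SFT checkpoint you want to use), and the `AutoProcessor`/`AutoModelForImageTextToText` loading path assumes a reasonably recent `transformers` release; chart-model specifics may differ per base model.

```python
def build_messages(image_url: str, question: str) -> list:
    """Build a chat-style prompt pairing a chart image with a question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": question},
            ],
        }
    ]


def answer_chart_question(model_id: str, image_url: str, question: str) -> str:
    """Load an SFT checkpoint and answer a question about a chart image.

    NOTE: this is an illustrative sketch, not the authors' released
    inference script. Imports are deferred so the prompt helper above
    stays usable without transformers installed.
    """
    from transformers import AutoModelForImageTextToText, AutoProcessor

    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")

    inputs = processor.apply_chat_template(
        build_messages(image_url, question),
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    ).to(model.device)

    output_ids = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the prompt.
    answer = processor.batch_decode(
        output_ids[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )[0]
    return answer


# Example call (downloads model weights; placeholder repo id):
# print(answer_chart_question(
#     "your-org/your-sft-checkpoint",
#     "https://example.com/chart.png",
#     "What is the highest value shown in the bar chart?",
# ))
```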

**Citation**:

If this work is helpful to your research, please cite our paper as follows:

```bibtex
@inproceedings{yang2025effective,
  title={Effective Training Data Synthesis for Improving MLLM Chart Understanding},
  author={Yang, Yuwei and Zhang, Zeyu and Hou, Yunzhong and Li, Zhuowan and Liu, Gaowen and Payani, Ali and Ting, Yuan-Sen and Zheng, Liang},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025}
}
```