---
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
paper: https://arxiv.org/abs/2409.03277
---

# ChartMoE

**ICLR 2025 Oral**

[![arXiv](https://img.shields.io/badge/ArXiv-Preprint-red)](https://arxiv.org/abs/2409.03277)
[![Project Page](https://img.shields.io/badge/Project-Page-brightgreen)](https://chartmoe.github.io/)
[![Github Repo](https://img.shields.io/badge/Github-Repo-blue)](https://github.com/IDEA-FinAI/ChartMoE)
[![Hugging Face Model](https://img.shields.io/badge/Hugging%20Face-Model-8A2BE2)](https://huggingface.co/IDEA-FinAI/chartmoe)
**ChartMoE** is a multimodal large language model with a Mixture-of-Experts connector, built on [InternLM-XComposer2](https://github.com/InternLM/InternLM-XComposer/tree/main/InternLM-XComposer-2.0) for advanced chart 1) understanding, 2) replotting, 3) editing, 4) highlighting, and 5) transformation.

**This is a reproduction of the diversely-aligned MoE connector. Feel free to use it for continued SFT training!**

## Open Source License

The data is licensed under Apache-2.0.
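
## Quick Start

Since the card declares `library_name: transformers`, a minimal loading sketch is given below. It assumes the repository ships custom modeling code loadable via `trust_remote_code`, in the style of its InternLM-XComposer2 base; the exact chat/generation interface is not specified here, so refer to the [GitHub repo](https://github.com/IDEA-FinAI/ChartMoE) for the full inference and SFT examples.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: ChartMoE uses custom remote code (as InternLM-XComposer2 does),
# so trust_remote_code=True is required. The repo id mirrors the badge above.
model_path = "IDEA-FinAI/chartmoe"

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # half precision to fit a single GPU
    trust_remote_code=True,
).cuda().eval()

# Chart-image question answering and continued SFT are driven by the custom
# remote code; see https://github.com/IDEA-FinAI/ChartMoE for worked examples.
```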