---
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
paper: https://arxiv.org/abs/2409.03277
---
<p align="center">
<b><font size="6">ChartMoE</font></b>
</p>
<p align="center">
<b><font size="4">ICLR2025 Oral</font></b>
</p>
<div align="center">
<div style="display: inline-block; margin-right: 30px;">
[![arXiv](https://img.shields.io/badge/ArXiv-Preprint-red)](https://arxiv.org/abs/2409.03277)
</div>
<div style="display: inline-block; margin-right: 30px;">
[![Project Page](https://img.shields.io/badge/Project-Page-brightgreen)](https://chartmoe.github.io/)
</div>
<div style="display: inline-block; margin-right: 30px;">
[![Github Repo](https://img.shields.io/badge/Github-Repo-blue)](https://github.com/IDEA-FinAI/ChartMoE)
</div>
<div style="display: inline-block; margin-right: 30px;">
[![Hugging Face Model](https://img.shields.io/badge/Hugging%20Face-Model-8A2BE2)](https://huggingface.co/IDEA-FinAI/chartmoe)
</div>
</div>
**ChartMoE** is a multimodal large language model with a Mixture-of-Experts connector, built on [InternLM-XComposer2](https://github.com/InternLM/InternLM-XComposer/tree/main/InternLM-XComposer-2.0) for advanced chart 1) understanding, 2) replotting, 3) editing, 4) highlighting, and 5) transformation.
**This checkpoint is a reproduction of the diversely-aligned MoE connector; feel free to use it for continued SFT training!**
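For reference, below is a minimal loading sketch with 🤗 Transformers. It assumes the repository ships custom modeling code (loaded via `trust_remote_code=True`) and exposes an InternLM-XComposer2-style `chat` interface; the query string, image path, and generation arguments are illustrative placeholders, so adapt them to the actual API if it differs.

```python
# Hypothetical usage sketch for ChartMoE; not the official inference script.
import torch
from transformers import AutoModel, AutoTokenizer

model_path = "IDEA-FinAI/chartmoe"

# Load tokenizer and model with the repo's custom code (assumption).
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).cuda().eval()

# Placeholder chart image and question.
image_path = "example_chart.png"
query = "<ImageHere>Describe the trend shown in this chart."

# Assumes an InternLM-XComposer2-style chat() method is provided by the custom code.
with torch.no_grad():
    response, _ = model.chat(
        tokenizer,
        query=query,
        image=image_path,
        history=[],
        do_sample=False,
    )
print(response)
```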
## Open Source License
The data is licensed under Apache-2.0.