---
base_model:
  - Qwen/Qwen2.5-Coder-7B-Instruct
datasets:
  - TIGER-Lab/VisCode-Multi-679K
language:
  - en
license: apache-2.0
tags:
  - code
pipeline_tag: image-text-to-text
library_name: transformers
---

# VisCoder2-7B

🏠 Project Page | πŸ“– Paper | πŸ’» GitHub | πŸ€— VisCode2

VisCoder2-7B is a lightweight multi-language visualization coding model trained for executable code generation, rendering, and iterative self-debugging.


## 🧠 Model Description

VisCoder2-7B is trained on the VisCode-Multi-679K dataset, a large-scale instruction-tuning dataset for executable visualization tasks across 12 programming languages. It addresses a core challenge in multi-language visualization: generating code that not only executes successfully but also produces visual outputs semantically consistent with both the natural-language instruction and the rendered result.


## πŸ“Š Main Results on VisPlotBench

We evaluate VisCoder2-7B on VisPlotBench, a benchmark of 888 executable visualization tasks spanning 8 languages that supports both standard generation and multi-round self-debugging.

[Figure: main results on VisPlotBench]

VisCoder2-7B shows consistent performance across multiple languages and achieves notable improvements under the multi-round self-debug setting.
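The multi-round self-debug setting can be thought of as an execute-and-feedback loop: run the generated code, and if it fails, return the traceback to the model for a repair attempt. The snippet below is a minimal illustrative sketch of such a loop, not the actual evaluation harness; `generate_fn` is a hypothetical stand-in for any call to the model.

```python
import os
import subprocess
import sys
import tempfile


def run_code(code: str) -> tuple[bool, str]:
    """Execute a Python snippet in a subprocess; return (success, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=60
        )
        return proc.returncode == 0, proc.stderr
    finally:
        os.unlink(path)


def self_debug(generate_fn, instruction: str, max_rounds: int = 3) -> str:
    """Generate code, then iteratively feed execution errors back to the model."""
    code = generate_fn(instruction)
    for _ in range(max_rounds):
        ok, err = run_code(code)
        if ok:
            return code
        # Ask the model to repair its own output given the traceback.
        code = generate_fn(
            f"{instruction}\n\nYour previous code failed with:\n{err}\nPlease fix it."
        )
    return code
```

In the benchmark, each debug round gives the model the runtime error from the previous attempt, which is where the reported gains over single-shot generation come from.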


πŸ“ Training Details

  • Base model: Qwen2.5-Coder-7B-Instruct
  • Framework: ms-swift
  • Tuning method: Full-parameter supervised fine-tuning (SFT)
  • Dataset: VisCode-Multi-679K
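Since the model is a full-parameter SFT of Qwen2.5-Coder-7B-Instruct, inference should follow the standard Qwen2.5 chat interface in transformers. The sketch below is an assumption based on the base model, not an official usage snippet, and the repo id `TIGER-Lab/VisCoder2-7B` is inferred from the dataset organization.

```python
def build_messages(instruction: str) -> list[dict]:
    """Wrap a visualization request in the chat format expected by Qwen2.5-Coder."""
    return [
        {"role": "system", "content": "You are a helpful visualization coding assistant."},
        {"role": "user", "content": instruction},
    ]


def generate(instruction: str, max_new_tokens: int = 1024) -> str:
    # Deferred import so the prompt helper above stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TIGER-Lab/VisCoder2-7B"  # assumed repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    prompt = tokenizer.apply_chat_template(
        build_messages(instruction), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens; keep only the newly generated completion.
    return tokenizer.decode(
        output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
    )


# Example call (downloads the model; a GPU is recommended):
# print(generate("Write matplotlib code that plots y = sin(x) for x in [0, 2*pi]."))
```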

## πŸ“– Citation

If you use VisCoder2-7B or related datasets in your research, please cite:

@misc{ni2025viscoder2buildingmultilanguagevisualization,
  title={VisCoder2: Building Multi-Language Visualization Coding Agents},
  author={Yuansheng Ni and Songcheng Cai and Xiangchao Chen and Jiarong Liang and Zhiheng Lyu and Jiaqi Deng and Kai Zou and Ping Nie and Fei Yuan and Xiang Yue and Wenhu Chen},
  year={2025},
  eprint={2510.23642},
  archivePrefix={arXiv},
  primaryClass={cs.SE},
  url={https://arxiv.org/abs/2510.23642},
}

@article{ni2025viscoder,
  title={VisCoder: Fine-Tuning LLMs for Executable Python Visualization Code Generation},
  author={Ni, Yuansheng and Nie, Ping and Zou, Kai and Yue, Xiang and Chen, Wenhu},
  journal={arXiv preprint arXiv:2506.03930},
  year={2025}
}

For evaluation scripts and more information, see our GitHub repository.