---
license: apache-2.0
datasets:
- TIGER-Lab/VisCode-Multi-679K
base_model:
- Qwen/Qwen2.5-Coder-3B-Instruct
library_name: transformers
language:
- en
tags:
- code
---

# VisCoder2-3B

[🏠 Project Page](https://tiger-ai-lab.github.io/VisCoder2) | [📖 Paper](https://arxiv.org/abs/2510.23642) | [💻 GitHub](https://github.com/TIGER-AI-Lab/VisCoder2) | [🤗 VisCode2](https://hf.co/collections/TIGER-Lab/viscoder2)

**VisCoder2-3B** is a lightweight multilingual visualization coding model trained for **executable code generation, rendering, and iterative self-debugging**.
It is fine-tuned on **VisCode-Multi-679K**, a large-scale cross-language dataset that unifies natural-language instructions, executable visualization code, and feedback-guided correction dialogues.

---

## 🧠 Model Description

**VisCoder2-3B** is trained on the **VisCode-Multi-679K** dataset, a large-scale instruction-tuning dataset for executable visualization tasks across **12 programming languages**. It addresses a core challenge in multi-language visualization: generating code that not only executes successfully but also produces semantically consistent visual outputs, aligning natural-language instructions with rendered results.
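
The execute-and-debug loop this describes can be sketched in plain Python. Here `generate` is a hypothetical stand-in for a call to VisCoder2-3B, and the message format is illustrative, not the model's actual training schema:

```python
import subprocess
import sys
import tempfile
from pathlib import Path


def run_code(code: str, timeout: int = 30):
    """Execute a generated script in a subprocess; return (ok, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.returncode == 0, proc.stderr
    finally:
        Path(path).unlink()


def self_debug(generate, instruction: str, max_rounds: int = 3):
    """Feedback-guided correction loop: when the generated script fails,
    the traceback is appended to the dialogue and the model is asked to
    revise its code. `generate` is a hypothetical model-call stub."""
    messages = [{"role": "user", "content": instruction}]
    for _ in range(max_rounds):
        code = generate(messages)
        ok, err = run_code(code)
        if ok:
            return code
        messages.append({"role": "assistant", "content": code})
        messages.append({
            "role": "user",
            "content": f"The code failed with:\n{err}\nPlease fix it.",
        })
    return None
```

With a real model behind `generate`, each failed execution feeds its traceback back into the dialogue until the script runs cleanly or the round budget is exhausted.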

---

## 📊 Main Results on VisPlotBench

We evaluate VisCoder2-3B on [**VisPlotBench**](https://huggingface.co/datasets/TIGER-Lab/VisPlotBench), which includes 888 executable visualization tasks spanning 8 languages and supports both standard generation and multi-turn self-debugging.

![main_results](https://cdn-uploads.huggingface.co/production/uploads/64de37ee5e192985054be575/DRR3Y5vVS-KbniGJ3wmTi.png)

> **VisCoder2-3B** shows consistent performance across multiple languages and achieves notable improvements under the multi-round self-debug setting.

---

## 📝 Training Details

- **Base model**: Qwen2.5-Coder-3B-Instruct
- **Framework**: [ms-swift](https://github.com/modelscope/swift)
- **Tuning method**: Full-parameter supervised fine-tuning (SFT)
- **Dataset**: [VisCode-Multi-679K](https://huggingface.co/datasets/TIGER-Lab/VisCode-Multi-679K)

---

## 📖 Citation

If you use VisCoder2-3B or related datasets in your research, please cite:

```bibtex
@misc{ni2025viscoder2buildingmultilanguagevisualization,
      title={VisCoder2: Building Multi-Language Visualization Coding Agents},
      author={Yuansheng Ni and Songcheng Cai and Xiangchao Chen and Jiarong Liang and Zhiheng Lyu and Jiaqi Deng and Kai Zou and Ping Nie and Fei Yuan and Xiang Yue and Wenhu Chen},
      year={2025},
      eprint={2510.23642},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2510.23642},
}

@article{ni2025viscoder,
      title={VisCoder: Fine-Tuning LLMs for Executable Python Visualization Code Generation},
      author={Ni, Yuansheng and Nie, Ping and Zou, Kai and Yue, Xiang and Chen, Wenhu},
      journal={arXiv preprint arXiv:2506.03930},
      year={2025}
}
```

For evaluation scripts and more information, see our [GitHub repository](https://github.com/TIGER-AI-Lab/VisCoder2).