Tags: Image-Text-to-Text, Transformers, Safetensors, English, qwen2, text-generation, code, conversational, text-generation-inference
Instructions for using TIGER-Lab/VisCoder2-7B with libraries, inference providers, notebooks, and local apps. Use the sections below to get started.
- Libraries
- Transformers
How to use TIGER-Lab/VisCoder2-7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="TIGER-Lab/VisCoder2-7B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TIGER-Lab/VisCoder2-7B")
model = AutoModelForCausalLM.from_pretrained("TIGER-Lab/VisCoder2-7B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use TIGER-Lab/VisCoder2-7B with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "TIGER-Lab/VisCoder2-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TIGER-Lab/VisCoder2-7B",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```
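The same server can also be queried from Python with the OpenAI client instead of curl. This is only a sketch of that pattern: the base URL and placeholder API key assume the default local `vllm serve` setup above, and the prompt text is illustrative.

```python
# Query the local vLLM OpenAI-compatible server (assumes it is running on localhost:8000).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="TIGER-Lab/VisCoder2-7B",
    messages=[{"role": "user", "content": "Write a short matplotlib script that plots y = x**2."}],
)
print(response.choices[0].message.content)
```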
Use Docker
```shell
docker model run hf.co/TIGER-Lab/VisCoder2-7B
```
- SGLang
How to use TIGER-Lab/VisCoder2-7B with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "TIGER-Lab/VisCoder2-7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TIGER-Lab/VisCoder2-7B",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "TIGER-Lab/VisCoder2-7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TIGER-Lab/VisCoder2-7B",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```
- Docker Model Runner
How to use TIGER-Lab/VisCoder2-7B with Docker Model Runner:
```shell
docker model run hf.co/TIGER-Lab/VisCoder2-7B
```
README.md
---
license: apache-2.0
datasets:
- TIGER-Lab/VisCode-Multi-679K
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- code
---

# VisCoder2-7B

[Project Page](https://tiger-ai-lab.github.io/VisCoder2) | [Paper](https://arxiv.org/abs/2510.23642) | [GitHub](https://github.com/TIGER-AI-Lab/VisCoder2) | [VisCode2](https://hf.co/collections/TIGER-Lab/viscoder2)

**VisCoder2-7B** is a lightweight multi-language visualization coding model trained for **executable code generation, rendering, and iterative self-debugging**.

---

## Model Description

**VisCoder2-7B** is trained on the **VisCode-Multi-679K** dataset, a large-scale instruction-tuning dataset for executable visualization tasks across **12 programming languages**. It addresses a core challenge in multi-language visualization: generating code that not only executes successfully but also produces semantically consistent visual outputs, by aligning natural-language instructions with rendered results.
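As a quick illustration of that use case, the minimal sketch below asks the model for an executable plotting script through the standard `transformers` chat interface. It is not an official quick-start: the prompt, the generation settings, and the assumption that the checkpoint loads like its Qwen2.5-Coder-7B-Instruct base (causal LM plus chat template) are ours.

```python
# Minimal sketch: prompt VisCoder2-7B for executable visualization code.
# Loading as a causal LM with a chat template is assumed from the Qwen2.5-Coder base;
# the prompt and generation settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TIGER-Lab/VisCoder2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{
    "role": "user",
    "content": "Write a complete matplotlib script that draws a bar chart of "
               "monthly sales [120, 150, 90, 180] and saves it as sales.png.",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```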
---

## Main Results on VisPlotBench

We evaluate VisCoder2-7B on [**VisPlotBench**](https://huggingface.co/datasets/TIGER-Lab/VisPlotBench), which includes 888 executable visualization tasks spanning 8 languages, supporting both standard generation and multi-turn self-debugging.

> **VisCoder2-7B** shows consistent performance across multiple languages and achieves notable improvements under the multi-round self-debug setting.
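The multi-round self-debug setting can be pictured as an execute-and-retry loop: run the generated script, and if it fails, feed the error back to the model as a new turn. The sketch below only illustrates that idea and is not the VisPlotBench evaluation harness; the `generate_fn` callback stands in for any chat call to VisCoder2-7B (for example one of the servers above), and the retry budget and feedback prompt are assumptions.

```python
# Illustrative self-debug loop (not the official VisPlotBench harness).
import subprocess
import sys
import tempfile

def run_python(code: str) -> tuple[bool, str]:
    """Execute a candidate script and return (success, combined stdout/stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=60)
    return proc.returncode == 0, proc.stdout + proc.stderr

def self_debug(generate_fn, task: str, max_rounds: int = 3) -> str:
    """generate_fn(messages) -> code string; any chat call to VisCoder2-7B (assumed helper)."""
    messages = [{"role": "user", "content": task}]
    code = generate_fn(messages)
    for _ in range(max_rounds):
        ok, log = run_python(code)
        if ok:
            break
        # Feed the execution error back as a new user turn and ask for a corrected script.
        messages += [
            {"role": "assistant", "content": code},
            {"role": "user", "content": f"The script failed with:\n{log}\nReturn a corrected script."},
        ]
        code = generate_fn(messages)
    return code
```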

---

## Training Details

- **Base model**: Qwen2.5-Coder-7B-Instruct
- **Framework**: [ms-swift](https://github.com/modelscope/swift)
- **Tuning method**: Full-parameter supervised fine-tuning (SFT)
- **Dataset**: [VisCode-Multi-679K](https://huggingface.co/datasets/TIGER-Lab/VisCode-Multi-679K)

---

## Citation

If you use VisCoder2-7B or related datasets in your research, please cite:

```bibtex
@misc{ni2025viscoder2buildingmultilanguagevisualization,
  title={VisCoder2: Building Multi-Language Visualization Coding Agents},
  author={Yuansheng Ni and Songcheng Cai and Xiangchao Chen and Jiarong Liang and Zhiheng Lyu and Jiaqi Deng and Kai Zou and Ping Nie and Fei Yuan and Xiang Yue and Wenhu Chen},
  year={2025},
  eprint={2510.23642},
  archivePrefix={arXiv},
  primaryClass={cs.SE},
  url={https://arxiv.org/abs/2510.23642},
}

@article{ni2025viscoder,
  title={VisCoder: Fine-Tuning LLMs for Executable Python Visualization Code Generation},
  author={Ni, Yuansheng and Nie, Ping and Zou, Kai and Yue, Xiang and Chen, Wenhu},
  journal={arXiv preprint arXiv:2506.03930},
  year={2025}
}
```

For evaluation scripts and more information, see our [GitHub repository](https://github.com/TIGER-AI-Lab/VisCoder2).