Instructions to use Surpem/Supertron-VL-4B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Surpem/Supertron-VL-4B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Surpem/Supertron-VL-4B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Surpem/Supertron-VL-4B")
model = AutoModelForImageTextToText.from_pretrained("Surpem/Supertron-VL-4B")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
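To speed this up on a GPU, the pipeline accepts a dtype and device map at construction. A minimal sketch, assuming a CUDA device with enough memory for a 4B-class multimodal model:

```python
import torch
from transformers import pipeline

# Sketch: same pipeline, loaded in bfloat16 and placed on available GPUs.
pipe = pipeline(
    "image-text-to-text",
    model="Surpem/Supertron-VL-4B",
    torch_dtype=torch.bfloat16,  # halves memory vs. float32
    device_map="auto",           # place weights on available devices
)
```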
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Surpem/Supertron-VL-4B with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Surpem/Supertron-VL-4B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Surpem/Supertron-VL-4B",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```
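Since the server exposes an OpenAI-compatible API, the official `openai` Python client works in place of curl. A minimal sketch, assuming a default local install on port 8000 (the dummy API key is a placeholder; a local vLLM server does not require one unless configured):

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Surpem/Supertron-VL-4B",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```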
Use Docker
```shell
docker model run hf.co/Surpem/Supertron-VL-4B
```
- SGLang
How to use Surpem/Supertron-VL-4B with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Surpem/Supertron-VL-4B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Surpem/Supertron-VL-4B",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```
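The SGLang endpoint is also OpenAI-compatible, so responses can be streamed token by token. A minimal sketch with the `openai` client, assuming the server from the launch command above on port 30000:

```python
from openai import OpenAI

# Stream a chat completion from the local SGLang server.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="Surpem/Supertron-VL-4B",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ],
    stream=True,  # receive incremental deltas instead of one final message
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```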
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Surpem/Supertron-VL-4B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Surpem/Supertron-VL-4B",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```
- Docker Model Runner
How to use Surpem/Supertron-VL-4B with Docker Model Runner:
```shell
docker model run hf.co/Surpem/Supertron-VL-4B
```
# Supertron-VL-4B: A Chart-Focused Vision-Language Model
## Model Description
Supertron-VL-4B is a vision-language model fine-tuned from Qwen/Qwen3-VL-4B-Thinking for chart understanding and chart question answering. It reads chart images, extracts values, compares visual elements, and gives concise answers to questions about plotted data.
- Developed by: Surpem
- Model type: Vision-Language Model
- Architecture: Qwen3-VL dense multimodal transformer, 4B class
- Fine-tuned from: Qwen/Qwen3-VL-4B-Thinking
- License: Apache 2.0
## Evaluation
Benchmarked offline on an H100 GPU (via Modal) using the Hugging Face transformers image-text-to-text pipeline:
| Benchmark | Split | Samples | Exact Accuracy | Relaxed ChartQA Accuracy |
|---|---|---|---|---|
| ChartQA | test | 256 | 0.7109 | 0.7891 |
Note: This is an offline local benchmark, not an official Hugging Face leaderboard verification.
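For context, "relaxed accuracy" is the standard ChartQA metric: numeric answers count as correct within a small relative tolerance, while other answers need an exact match. A minimal sketch of the usual formulation (the 5% tolerance follows the ChartQA paper's convention; whether this exact scorer was used for the numbers above is an assumption):

```python
def relaxed_match(prediction: str, target: str, tolerance: float = 0.05) -> bool:
    """ChartQA-style relaxed match: numeric answers pass within a relative
    tolerance; non-numeric answers require an exact, case-insensitive match."""
    try:
        pred, tgt = float(prediction), float(target)
    except ValueError:
        return prediction.strip().lower() == target.strip().lower()
    if tgt == 0:
        return pred == tgt
    return abs(pred - tgt) / abs(tgt) <= tolerance

print(relaxed_match("41", "40"))  # True: off by 2.5%, within tolerance
print(relaxed_match("45", "40"))  # False: off by 12.5%
```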
## Get Started

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "Surpem/Supertron-VL-4B"

# Load the processor and model; bfloat16 plus device_map="auto" keeps the
# 4B-class model comfortably on a single modern GPU.
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

image = Image.open("chart.png").convert("RGB")
question = "What is the highest value shown in the chart?"

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {
                "type": "text",
                "text": (
                    "Read the chart image and answer the question concisely. "
                    "Return only the final answer, without chain-of-thought.\n"
                    f"Question: {question}"
                ),
            },
        ],
    }
]

# Build the chat prompt, process it together with the image, and generate
# greedily; then decode only the newly generated tokens (not the prompt).
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], padding=True, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=48, do_sample=False)
generated = outputs[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(generated, skip_special_tokens=True)[0].strip())
```
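Because the base model is a "Thinking" variant, the decoded text can still open with a reasoning block even when the prompt asks for a direct answer. A minimal post-processing sketch, assuming Qwen-style `<think>...</think>` tags appear in the output (inspect your own decoded text to confirm the exact format):

```python
import re

def strip_think(text: str) -> str:
    # Drop a <think>...</think> reasoning block, if present, and return
    # only the text that follows it (the final answer).
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

print(strip_think("<think>The tallest bar is 2021, at 42.</think>42"))  # -> 42
```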
## Limitations
Supertron-VL-4B is specialized for chart question answering. It can still make mistakes on crowded charts, ambiguous labels, questions that hinge purely on color, arithmetic-heavy questions, and charts with very small text.
## Evaluation results
- HuggingFaceM4/ChartQA: relaxed accuracy 0.79 (offline run with the transformers image-text-to-text pipeline; not published to a leaderboard)
- jrc/data-viz-qa: DataVizQA relaxed accuracy 0.71
- ChartQA test set (self-reported): relaxed accuracy 0.789, exact match 0.711

All results are self-reported and not independently verified.