Supertron-VL-4B: A Chart-Focused Vision-Language Model

Model Description

Supertron-VL-4B is a vision-language model fine-tuned from Qwen/Qwen3-VL-4B-Thinking for chart understanding and chart question answering. It reads chart images, extracts values, compares visual elements, and answers concise questions about plotted data.

  • Developed by: Surpem
  • Model type: Vision-Language Model
  • Architecture: Qwen3-VL dense multimodal transformer, 4B class
  • Fine-tuned from: Qwen/Qwen3-VL-4B-Thinking
  • License: Apache 2.0

Evaluation

Benchmarked locally on a Modal H100 GPU, using the Hugging Face transformers image-text-to-text pipeline:

Benchmark  Split  Samples  Exact Accuracy  Relaxed ChartQA Accuracy
ChartQA    test   256      0.7109          0.7891

Note: This is an offline local benchmark, not an official Hugging Face leaderboard verification.
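The relaxed metric follows the common ChartQA convention: a numeric prediction counts as correct if it is within 5% relative error of the gold answer, while non-numeric answers still require an exact (case-insensitive) match. A minimal sketch of that scoring rule, for illustration only; it is not the exact evaluation script used for the numbers above:

```python
def relaxed_chartqa_match(prediction: str, target: str, tolerance: float = 0.05) -> bool:
    """Relaxed ChartQA accuracy: numeric answers may deviate by up to
    `tolerance` relative error; everything else needs an exact match."""
    pred, gold = prediction.strip(), target.strip()
    try:
        # Treat "37.5" and "37.5%" alike by dropping a trailing percent sign.
        p, g = float(pred.rstrip("%")), float(gold.rstrip("%"))
    except ValueError:
        # Non-numeric answers: case-insensitive exact match.
        return pred.lower() == gold.lower()
    if g == 0.0:
        return p == g
    return abs(p - g) / abs(g) <= tolerance


print(relaxed_chartqa_match("42", "40"))        # 5% relative error -> True
print(relaxed_chartqa_match("45", "40"))        # 12.5% relative error -> False
print(relaxed_chartqa_match("Paris", "paris"))  # exact match ignoring case -> True
```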


Get Started

from transformers import AutoProcessor, AutoModelForImageTextToText
from PIL import Image
import torch

model_id = "Surpem/Supertron-VL-4B"

# Load the processor and model; bfloat16 keeps memory usage low on modern GPUs.
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

image = Image.open("chart.png").convert("RGB")
question = "What is the highest value shown in the chart?"
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {
                "type": "text",
                "text": (
                    "Read the chart image and answer the question concisely. "
                    "Return only the final answer, without chain-of-thought.\n"
                    f"Question: {question}"
                ),
            },
        ],
    }
]

# Render the chat template, then tokenize the text and image together.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], padding=True, return_tensors="pt").to(model.device)

# Greedy decoding; slice off the prompt so only new tokens are decoded.
outputs = model.generate(**inputs, max_new_tokens=48, do_sample=False)
generated = outputs[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(generated, skip_special_tokens=True)[0].strip())
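Because the base checkpoint is a Thinking variant, the model may still wrap its reasoning in <think>...</think> tags even when the prompt asks for the answer only. A small post-processing helper to keep just the final answer; the tag format is an assumption based on the Qwen3 family, so verify it against your actual outputs:

```python
import re

# Matches a complete <think>...</think> reasoning block, including newlines.
THINK_BLOCK = re.compile(r"<think>.*?</think>", flags=re.DOTALL)

def strip_thinking(text: str) -> str:
    """Remove <think>...</think> reasoning blocks and return the trimmed answer."""
    return THINK_BLOCK.sub("", text).strip()


raw = "<think>The tallest bar is 2021 at 37.5.</think>\n37.5"
print(strip_thinking(raw))  # -> 37.5
```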

Limitations

Supertron-VL-4B is specialized for chart question answering. It may make mistakes on crowded charts, ambiguous labels, questions that hinge on color alone, arithmetic-heavy questions, or charts with very small text.
