This is the text-only decoder component of the Qwen3-VL-4B-Instruct model. For more details, please visit the original model page or refer to the Qwen-VL technical reports published by Qwen.

Qwen3-VL-4B Language Model: strangeropshf/qwen3-vl-4b-language_model is the extracted text-only decoder component of Qwen3-VL-4B-Instruct. It uses Qwen3's 28-layer transformer architecture with Grouped Query Attention (GQA) for efficient long-context processing up to 128K tokens, delivering near-lossless text understanding comparable to pure LLMs while remaining fusion-compatible with its DeepStack vision encoder. The model is optimized for multilingual instruction following across 100+ languages and dialects, with strong translation capability. It retains the interleaved rotary positional embeddings (Interleaved-MRoPE) of the VL variant for robust temporal/spatial reasoning even in text-only mode, and supports agentic workflows, visual coding, and STEM reasoning tasks.
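To make the GQA point above concrete, here is an illustrative sketch of how query heads share key/value heads (the head counts below are hypothetical examples, not values taken from this model's config): several query heads attend through one shared KV head, which shrinks the KV cache and speeds up long-context decoding.

```python
def kv_head_for(query_head: int, num_q_heads: int, num_kv_heads: int) -> int:
    """Map a query head to the KV head it shares under Grouped Query Attention."""
    group_size = num_q_heads // num_kv_heads  # query heads per KV head
    return query_head // group_size

# Example: 16 query heads sharing 4 KV heads -> contiguous groups of 4.
mapping = [kv_head_for(q, num_q_heads=16, num_kv_heads=4) for q in range(16)]
print(mapping)  # [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
```

With only 4 KV heads instead of 16, the KV cache is a quarter of the size of full multi-head attention at the same query-head count.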

Quick Start with Transformers

Install the required packages

torch==2.8.0
torchvision
transformers==4.57.6
accelerate
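The packages above can be installed in one step with pip, using the pinned versions from the list:

```shell
pip install torch==2.8.0 torchvision transformers==4.57.6 accelerate
```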

Usage

import torch
from transformers import Qwen3VLForConditionalGeneration, AutoTokenizer

MODEL_PATH = "strangeropshf/qwen3-vl-4b-language_model"

def run_text_only_inference(model_path):

    device = "cuda" if torch.cuda.is_available() else "cpu"
    print(f"Using device: {device.upper()}")

    print("Loading tokenizer...")
    tokenizer = AutoTokenizer.from_pretrained(
        model_path,
        trust_remote_code=True
    )

    print("Loading Qwen3-VL language model...")
    model = Qwen3VLForConditionalGeneration.from_pretrained(
        model_path,
        torch_dtype=torch.bfloat16 if device == "cuda" else torch.float32,
        device_map="auto",
        trust_remote_code=True,
        use_safetensors=True
    )

    model.eval()
    print("Model loaded successfully.\n")

    prompt = "Explain what multimodal AI is in simple terms."

    inputs = tokenizer(
        prompt,
        return_tensors="pt"
    ).to(model.device)

    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=1024,
            temperature=0.7,
            do_sample=True,
            top_p=0.9
        )

    # Strip the prompt tokens so only the newly generated text is decoded
    generated = outputs[0][inputs["input_ids"].shape[1]:]
    response = tokenizer.decode(generated, skip_special_tokens=True)

    print("----- Response -----")
    print(response)


if __name__ == "__main__":
    run_text_only_inference(MODEL_PATH)
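One caveat: instruct-tuned checkpoints are normally prompted through the tokenizer's chat template rather than a bare string, which usually improves response quality. A minimal sketch of building the message list is below; the `apply_chat_template` call is shown as a comment because it requires the downloaded checkpoint from the script above.

```python
# Chat-format message list; instruct models are trained on templated
# conversations, so raw prompt strings can degrade output quality.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what multimodal AI is in simple terms."},
]

# With a loaded tokenizer (see the script above), the template is applied as:
# text = tokenizer.apply_chat_template(
#     messages, tokenize=False, add_generation_prompt=True
# )
# inputs = tokenizer(text, return_tensors="pt").to(model.device)
```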
Model size: 4B params · Tensor type: BF16 (safetensors)
