
Polaris-VGA-2B-Post1.0

Polaris-VGA-2B-Post1.0 is a post-optimized model built on top of Qwen/Qwen3.5-2B, extending compact language modeling into the domain of VGA (Visual Grounding Anything). It combines visual understanding with strong instruction following, enabling it to interpret complex scenes, explain visual content in depth, and perform grounding across diverse inputs. Targeted post-training optimizations improve multimodal reasoning, allowing precise alignment between textual instructions and visual elements for detection, explanation, and structured interpretation tasks, while the 2B-parameter architecture provides additional capacity for deeper reasoning.

Visual-Grounding-Anything (code) - https://huggingface.co/prithivMLmods/Polaris-VGA-2B-Post1.0/tree/main/Visual-Grounding-Anything

Key Highlights

  • VGA (Visual Grounding Anything) Specialization: Designed to associate textual queries with visual elements across a wide range of scenes and contexts.
  • Post-Optimized Training Pipeline: Refined on top of the base model to improve multimodal alignment, reasoning, and response quality.
  • Enhanced Visual Understanding: Interprets complex scenes, object relationships, and contextual cues with improved depth over smaller variants.
  • Scene Explanation & Reasoning: Produces detailed, structured explanations grounded in visual inputs.
  • Object & Point Tracking Optimization: Adapted for video-based tasks including object tracking and point-level tracking across frames.
  • Efficient 2B Architecture: Balances stronger reasoning and multimodal capabilities with relatively low computational requirements.
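For the video-based tracking tasks mentioned above, frames are typically subsampled before being fed to a multimodal model. A minimal sketch of uniform frame-index sampling (a generic preprocessing step, not an API of this model):

```python
def sample_frame_indices(total_frames: int, num_samples: int) -> list[int]:
    """Return num_samples evenly spaced frame indices from a video."""
    step = total_frames / num_samples
    return [int(i * step) for i in range(num_samples)]

# e.g. pick 4 frames from a 100-frame clip
print(sample_frame_indices(100, 4))  # [0, 25, 50, 75]
```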
Get GGUF

| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| Polaris-VGA-2B-Post1.0.BF16.gguf | BF16 | 3.78 GB | Download |
| Polaris-VGA-2B-Post1.0.F16.gguf | F16 | 3.78 GB | Download |
| Polaris-VGA-2B-Post1.0.F32.gguf | F32 | 7.54 GB | Download |
| Polaris-VGA-2B-Post1.0.Q8_0.gguf | Q8_0 | 2.01 GB | Download |
| Polaris-VGA-2B-Post1.0.mmproj-bf16.gguf | mmproj-bf16 | 671 MB | Download |
| Polaris-VGA-2B-Post1.0.mmproj-f16.gguf | mmproj-f16 | 671 MB | Download |
| Polaris-VGA-2B-Post1.0.mmproj-f32.gguf | mmproj-f32 | 1.33 GB | Download |
| Polaris-VGA-2B-Post1.0.mmproj-q8_0.gguf | mmproj-q8_0 | 365 MB | Download |
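The GGUF weights pair a quantized language model with an mmproj (vision projector) file. A hedged sketch of running them with llama.cpp's multimodal CLI (flag names reflect recent llama.cpp builds; adjust paths and the image file to your setup):

```shell
# Run the Q8_0 quant with the matching q8_0 vision projector (hypothetical local paths)
llama-mtmd-cli \
  -m Polaris-VGA-2B-Post1.0.Q8_0.gguf \
  --mmproj Polaris-VGA-2B-Post1.0.mmproj-q8_0.gguf \
  --image example.jpg \
  -p "Describe this image in extreme detail."
```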

Recommended (chat_template.jinja) - https://huggingface.co/prithivMLmods/Polaris-VGA-2B-Post1.0/blob/main/chat_template.jinja

Standard or Default (chat_template.jinja) - https://huggingface.co/prithivMLmods/Polaris-VGA-2B-Post1.0/blob/main/standard-chat_template/chat_template.jinja

Download the model

hf auth login --token <YOUR_HF_TOKEN>

hf download prithivMLmods/Polaris-VGA-2B-Post1.0

Quick Start with Transformers

pip install transformers==5.3.0
# or
pip install git+https://github.com/huggingface/transformers.git

from transformers import Qwen3_5ForConditionalGeneration, AutoProcessor
from PIL import Image
import torch

model = Qwen3_5ForConditionalGeneration.from_pretrained(
    "prithivMLmods/Polaris-VGA-2B-Post1.0",
    torch_dtype="auto",
    device_map="auto"
)

processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Polaris-VGA-2B-Post1.0"
)

# Load the image the prompt refers to (replace with your own file)
image = Image.open("example.jpg")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image in extreme detail."}
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = processor(
    text=[text],
    images=[image],
    padding=True,
    return_tensors="pt"
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=512)

generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]

output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)

print(output_text)
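The list comprehension after `model.generate` strips the echoed prompt tokens so that only newly generated tokens are decoded. The same logic on hypothetical toy data:

```python
# generate() returns prompt tokens followed by new tokens;
# slicing off len(prompt) leaves only the model's continuation.
input_ids = [[101, 7592, 2088]]           # tokens fed to generate()
generated = [[101, 7592, 2088, 42, 43]]   # prompt echoed, then 2 new tokens
trimmed = [out[len(inp):] for inp, out in zip(input_ids, generated)]
print(trimmed)  # [[42, 43]]
```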

Intended Use

  • Visual Grounding Research: Studying alignment between language and visual elements across diverse scenarios.
  • Scene Understanding Applications: Analyzing and explaining visual data for downstream tasks.
  • Video Analysis Prototyping: Supporting object tracking and point tracking experiments in video workflows.
  • Multimodal AI Systems: Deploying visual reasoning capabilities in practical applications.
  • Research & Experimentation: Prototyping with compact yet capable multimodal transformer architectures.

Capabilities

  • Visual Scene Understanding: Interprets any scene for reasoning, detection, and descriptive tasks.
  • Cross-Modal Reasoning: Connects visual inputs with textual instructions for grounded outputs.
  • Detection-Oriented Tasks: Identifies and contextualizes objects and regions within visual data.
  • Tracking-Oriented Tasks: Supports object continuity and point tracking across sequential frames.
  • General Visual Explanation: Explains “anything” visible in an input with coherent and structured responses.
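For detection-oriented tasks, grounding models often emit coordinates inline in their text output. The exact format this model uses is not documented here, so the following parser is a hypothetical sketch assuming boxes are written as "(x1,y1),(x2,y2)"; adapt the pattern to the actual output:

```python
import re

def parse_boxes(text: str) -> list[tuple[int, int, int, int]]:
    """Extract (x1, y1, x2, y2) boxes written as "(x1,y1),(x2,y2)" in text."""
    pattern = r"\((\d+),\s*(\d+)\),\s*\((\d+),\s*(\d+)\)"
    return [tuple(map(int, m)) for m in re.findall(pattern, text)]

sample = "The dog is at (34,50),(210,340)."
print(parse_boxes(sample))  # [(34, 50, 210, 340)]
```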

Limitations

Important Note: This model emphasizes broad visual grounding and reasoning within a compact architecture.

  • Moderate Scale Constraints: While larger than 0.8B models, it may still underperform compared to significantly larger multimodal systems in highly complex reasoning tasks.
  • Visual Ambiguity Sensitivity: Performance depends on input quality, scene clarity, and complexity.
  • User Responsibility: Outputs should be used responsibly and within appropriate ethical and legal boundaries.
  • Experimental Multimodal Behavior: Certain edge cases in grounding and tracking may require further refinement depending on usage scenarios.
