
Polaris-VGA-4B-Post1.0e

Polaris-VGA-4B-Post1.0e is an experimental, post-optimized model built on top of Qwen/Qwen3.5-4B, designed to extend compact-to-mid-scale language modeling into the domain of VGA (Visual Grounding Anything). This variant introduces enhanced multimodal alignment and deeper visual reasoning, enabling the model to interpret complex scenes, explain visual content with greater contextual awareness, and perform precise grounding across diverse inputs. As an experimental release, it explores advanced post-training strategies that strengthen the connection between textual instructions and visual elements for detection, explanation, and structured interpretation tasks, while leveraging the expanded capacity of a 4B-scale backbone.

Visual-Grounding-Anything (code) - https://huggingface.co/prithivMLmods/Polaris-VGA-4B-Post1.0e/tree/main/Visual-Grounding-Anything

Key Highlights

  • Experimental VGA Optimization (e Variant): Introduces exploratory training and post-optimization strategies focused on improving grounding fidelity and reasoning depth.
  • VGA (Visual Grounding Anything) Specialization: Aligns textual queries with visual elements across diverse and complex environments.
  • Enhanced Multimodal Reasoning: Improved capability to connect scene understanding with instruction-based outputs.
  • Advanced Scene Interpretation: Better handling of object relationships, spatial awareness, and contextual reasoning.
  • Object & Point Tracking Optimization: Supports video-based workflows including object tracking and fine-grained point tracking across frames.
  • 4B-Based Backbone Efficiency: Built on a stronger base model to improve performance while maintaining practical deployment flexibility.

Get GGUF

| File Name | Quant Type | File Size | File Link |
| --- | --- | --- | --- |
| Polaris-VGA-4B-Post1.0e.BF16.gguf | BF16 | 8.42 GB | Download |
| Polaris-VGA-4B-Post1.0e.F16.gguf | F16 | 8.42 GB | Download |
| Polaris-VGA-4B-Post1.0e.F32.gguf | F32 | 16.8 GB | Download |
| Polaris-VGA-4B-Post1.0e.Q8_0.gguf | Q8_0 | 4.48 GB | Download |
| Polaris-VGA-4B-Post1.0e.mmproj-bf16.gguf | mmproj-bf16 | 676 MB | Download |
| Polaris-VGA-4B-Post1.0e.mmproj-f16.gguf | mmproj-f16 | 676 MB | Download |
| Polaris-VGA-4B-Post1.0e.mmproj-f32.gguf | mmproj-f32 | 1.33 GB | Download |
| Polaris-VGA-4B-Post1.0e.mmproj-q8_0.gguf | mmproj-q8_0 | 367 MB | Download |
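The table above pairs each language-model quant with a matching `mmproj` vision-projector file. A minimal sketch of running the Q8_0 pair locally, assuming a recent llama.cpp build with multimodal support (the `llama-mtmd-cli` tool name and flags reflect current llama.cpp releases; adjust them to your version):

```shell
# Sketch: run the Q8_0 quant with its matching mmproj vision projector.
# File names are from the table above; the image path is a placeholder.
llama-mtmd-cli \
  -m Polaris-VGA-4B-Post1.0e.Q8_0.gguf \
  --mmproj Polaris-VGA-4B-Post1.0e.mmproj-q8_0.gguf \
  --image your_image.jpg \
  -p "Describe this image in extreme detail."
```

The `mmproj` file must match the base quant's precision family; mixing, say, an f32 projector with a Q8_0 model works but wastes memory relative to the q8_0 projector.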

Recommended (chat_template.jinja) - https://huggingface.co/prithivMLmods/Polaris-VGA-4B-Post1.0e/blob/main/chat_template.jinja

Standard or Default (chat_template.jinja) - https://huggingface.co/prithivMLmods/Polaris-VGA-4B-Post1.0e/blob/main/standard-chat_template/chat_template.jinja

Download the model

hf auth login --token <YOUR_HF_TOKEN>

hf download prithivMLmods/Polaris-VGA-4B-Post1.0e

Quick Start with Transformers

pip install transformers==5.3.0
# or
pip install git+https://github.com/huggingface/transformers.git

from transformers import Qwen3_5ForConditionalGeneration, AutoProcessor
import torch

model = Qwen3_5ForConditionalGeneration.from_pretrained(
    "prithivMLmods/Polaris-VGA-4B-Post1.0e",
    torch_dtype="auto",
    device_map="auto"
)

processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Polaris-VGA-4B-Post1.0e"
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},  # placeholder; the actual image is passed to the processor below
            {"type": "text", "text": "Describe this image in extreme detail."}
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Load the image the prompt refers to (any local path or PIL image)
from PIL import Image
image = Image.open("your_image.jpg")

inputs = processor(
    text=[text],
    images=[image],
    padding=True,
    return_tensors="pt"
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=512)

generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]

output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)

print(output_text)
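For grounding prompts, the decoded text may carry box annotations rather than plain prose. A minimal sketch of pulling labeled boxes out of such output, assuming Qwen-VL-style box tokens (`<|object_ref_start|>`, `<|box_start|>`, etc. are an assumption here; verify against this model's actual output format before relying on it):

```python
import re

# Hypothetical grounding output in Qwen-VL-style box-token format.
sample = ('The <|object_ref_start|>red car<|object_ref_end|>'
          '<|box_start|>(120,80),(560,400)<|box_end|> is parked.')

def extract_boxes(text):
    """Return (label, (x1, y1, x2, y2)) pairs from box-token annotated text."""
    pattern = re.compile(
        r'<\|object_ref_start\|>(.*?)<\|object_ref_end\|>'
        r'<\|box_start\|>\((\d+),(\d+)\),\((\d+),(\d+)\)<\|box_end\|>'
    )
    return [(m.group(1), tuple(int(g) for g in m.groups()[1:]))
            for m in pattern.finditer(text)]

print(extract_boxes(sample))
# [('red car', (120, 80, 560, 400))]
```

Structured extraction like this is what makes the model's detection output usable downstream, e.g. for drawing overlays or feeding a tracker.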

Intended Use

  • Experimental Multimodal Research: Exploring advanced visual grounding and reasoning behaviors.
  • Scene Understanding Systems: Interpreting and explaining complex visual environments.
  • Video Analysis & Tracking Research: Prototyping object tracking and point tracking pipelines.
  • Multimodal Alignment Studies: Investigating how language models interact with visual representations.
  • Rapid Prototyping: Testing new ideas on a moderately scaled multimodal architecture.

Capabilities

  • Visual Scene Understanding: Interprets diverse scenes for reasoning, detection, and descriptive tasks.
  • Cross-Modal Reasoning: Bridges textual instructions with visual data for grounded outputs.
  • Detection-Oriented Tasks: Identifies, localizes, and contextualizes visual elements.
  • Tracking-Oriented Tasks: Maintains object and point consistency across sequential frames.
  • General Visual Explanation: Explains “anything” visible in an input with structured and coherent responses.
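
Detection- and tracking-oriented outputs typically need to be mapped back onto the original image. A small sketch of that conversion, assuming the model emits coordinates normalized to a 0-1000 grid (a common Qwen-VL convention, but an assumption here; check this model's actual coordinate space):

```python
def scale_box(box, width, height):
    """Map a box from an assumed 0-1000 normalized grid to pixel coordinates."""
    x1, y1, x2, y2 = box
    return (x1 * width / 1000, y1 * height / 1000,
            x2 * width / 1000, y2 * height / 1000)

# A normalized box on a 1280x720 frame:
print(scale_box((250, 100, 750, 900), 1280, 720))
# (320.0, 72.0, 960.0, 648.0)
```

The same conversion applies per-frame in tracking workflows, where each frame's boxes or points are scaled to that frame's resolution.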

Limitations

Important Note: This is an experimental variant focused on expanding multimodal grounding capabilities.

  • Experimental Stability: As an experimental release, outputs may vary across edge cases and complex scenarios.
  • Moderate Scale Trade-offs: While based on a 4B backbone, it may still fall short of larger systems in highly demanding reasoning tasks.
  • Visual Ambiguity Sensitivity: Performance depends on clarity and complexity of visual inputs.
  • User Responsibility: Outputs should be used responsibly, especially in sensitive or high-impact applications.
