KAIØ-SIGHT

Multi-View Vision-Language Reasoning for Autonomous Robotics


Model Description

KAIØ-SIGHT is a fine-tuned Vision-Language Model (VLM) designed for multi-view spatial-temporal reasoning in autonomous robotics and driving scenarios. Built on top of Qwen2.5-VL-7B-Instruct, the model learns to fuse multi-camera video feeds into a coherent understanding of 360° environments. This repository contains only the fine-tuned LoRA adapters; pull the base model separately.

Key Capabilities

  • 🎥 Multi-View Fusion: Processes synchronized feeds from up to 7 cameras (Front Wide, Front Tele, Cross Left/Right, Rear Left/Right, Rear Tele)
  • 🧠 Spatial Reasoning: Understands object positions, motion trajectories, and scene dynamics across camera views
  • 🚗 Egomotion Prediction: Predicts vehicle state including position, velocity, and rotation
  • ⏱️ Temporal Context: Analyzes 16-frame sliding windows to capture motion and causality
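The 16-frame sliding window described above can be sketched as follows; this is a minimal illustration of the windowing scheme, not the model's actual data loader, and the stride value is an assumption:

```python
def sliding_windows(frames, window=16, stride=4):
    """Yield overlapping windows of `window` frames from a video stream."""
    for start in range(0, len(frames) - window + 1, stride):
        yield frames[start:start + window]

# 20 dummy frame indices -> two 16-frame windows starting at frames 0 and 4
clips = list(sliding_windows(list(range(20)), window=16, stride=4))
```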

Training Details

Base Model

  • Architecture: Qwen2.5-VL-7B-Instruct
  • Training Method: LoRA (Low-Rank Adaptation) with Unsloth optimization
  • Precision: BFloat16

LoRA Configuration

Parameter            Value
Rank                 128
Alpha                256
Target Modules       q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
Max Sequence Length  65,536 tokens

Training Hyperparameters

Parameter             Value
Learning Rate         1e-4
Optimizer             Paged AdamW 8-bit
Effective Batch Size  144 (48 × 3 gradient accumulation)
Weight Decay          0.01
LR Scheduler          Cosine with 10% warmup
Epochs                1
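Expressed as a plain config dict (key names follow common trainer conventions, not necessarily the exact training script), the hyperparameters above give an effective batch size of 48 × 3 = 144:

```python
train_config = {
    "learning_rate": 1e-4,
    "optim": "paged_adamw_8bit",        # Paged AdamW 8-bit
    "per_device_train_batch_size": 48,
    "gradient_accumulation_steps": 3,
    "weight_decay": 0.01,
    "lr_scheduler_type": "cosine",
    "warmup_ratio": 0.1,                # 10% warmup
    "num_train_epochs": 1,
}

effective_batch = (train_config["per_device_train_batch_size"]
                   * train_config["gradient_accumulation_steps"])  # 144
```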

Hardware

  • GPU: AMD Instinct MI300X (192GB VRAM)
  • Framework: ROCm 6.4 with custom kernel optimizations

Dataset

Trained on the NVIDIA PhysicalAI Autonomous Vehicles dataset featuring:

  • Multi-camera video streams from 7 synchronized cameras
  • Egomotion labels (position, velocity, rotation)
  • High-quality urban driving scenarios

Camera Configuration (7-cam Setup)

┌─────────────┬─────────────┬─────────────┐
│ Front Wide  │ Front Tele  │   (empty)   │
│   120° FOV  │   30° FOV   │             │
├─────────────┼─────────────┼─────────────┤
│ Cross Left  │   (ego)     │ Cross Right │
│   120° FOV  │             │   120° FOV  │
├─────────────┼─────────────┼─────────────┤
│ Rear Left   │ Rear Tele   │ Rear Right  │
│   70° FOV   │   30° FOV   │   70° FOV   │
└─────────────┴─────────────┴─────────────┘
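The grid layout above can be composed into a single frame along these lines; a sketch with numpy, where the per-view resolution and the view key names are illustrative assumptions:

```python
import numpy as np

def tile_views(views, h=240, w=320):
    """Place the 7 camera views into a 3x3 composite frame per the layout
    above; unfilled cells (top-right, center ego slot) stay black."""
    layout = [
        ["front_wide", "front_tele", None],
        ["cross_left", None,         "cross_right"],
        ["rear_left",  "rear_tele",  "rear_right"],
    ]
    grid = np.zeros((3 * h, 3 * w, 3), dtype=np.uint8)
    for r, row in enumerate(layout):
        for c, name in enumerate(row):
            if name is not None and name in views:
                grid[r * h:(r + 1) * h, c * w:(c + 1) * w] = views[name]
    return grid
```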

Intended Use

Primary Use Cases

  • 🤖 Autonomous robotics research and development
  • 🚙 Driving scenario understanding and prediction
  • 📊 Multi-view video understanding research
  • 🔬 Vision-language model experimentation

Out-of-Scope Uses

  • ⚠️ Production autonomous vehicle deployment (experimental research only)
  • ⚠️ Safety-critical applications without additional validation
  • ⚠️ Real-time inference without hardware-specific optimization

Usage

Quick Start with Transformers

from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from peft import PeftModel
import torch

# Load base model
base_model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "Thunderbird2410/KAIO-SIGHT")
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")

# Load and prepare your multi-view image
from PIL import Image
image = Image.open("path/to/multi_view_image.jpg")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Analyze this multi-camera driving scene. Describe the surroundings and predict the vehicle's motion."}
        ]
    }
]

# Generate response
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=text, images=[image], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
response = processor.decode(outputs[0], skip_special_tokens=True)

With Unsloth (Recommended for Training)

import torch
from unsloth import FastVisionModel

model, tokenizer = FastVisionModel.from_pretrained(
    "Thunderbird2410/KAIO-SIGHT",
    max_seq_length=65536,
    dtype=torch.bfloat16,
    load_in_4bit=True  # Optional: for lower VRAM
)

Limitations

  • Experimental Status: This model is a research prototype and not production-ready
  • Hardware Dependency: Optimized for AMD MI300X; performance on other GPUs may vary
  • Domain Specificity: Trained primarily on urban driving scenarios
  • Temporal Windows: Performs best on 4-frame sequences that match the training distribution and fit within the model's context window

Model Architecture

graph LR
    A[7-Camera Video] -->|Tile to Grid| B[3×3 Composite Frame]
    B -->|16-Frame Window| C[Temporal Sequence]
    C -->|Vision Encoder| D[Qwen2.5-VL-7B]
    D -->|LoRA Adapters| E[Fine-tuned Model]
    E -->|Generate| F[Egomotion + Reasoning]

Citation

If you use this model in your research, please cite:

@misc{kaio-sight-2024,
  author = {Poornachandra},
  title = {KAIØ-SIGHT: Multi-View Vision-Language Reasoning for Autonomous Robotics},
  year = {2024},
  publisher = {Hugging Face},
  url = {https://huggingface.co/Thunderbird2410/KAIO-SIGHT}
}

Acknowledgments

  • Qwen Team for the Qwen2.5-VL foundation model
  • Unsloth for efficient fine-tuning optimizations
  • NVIDIA for the PhysicalAI dataset
  • AMD for ROCm and MI300X hardware support

License

This model is released under the Apache 2.0 License.


⚠️ Experimental Research Model - Use at Your Own Risk ⚠️

This qwen2_5_vl_text model was trained 2x faster with Unsloth
