Qwen3-VL-8B-Instruct-Unredacted-MAX

Qwen3-VL-8B-Instruct-Unredacted-MAX is an unredacted variant built on top of Qwen3-VL-8B-Instruct. It applies abliteration-based fine-tuning designed to minimize internal refusal behaviors while preserving the core multimodal reasoning strengths of the original architecture. The result is a highly capable 8B vision-language model optimized for unrestricted, detailed reasoning and captioning across complex visual inputs.

Key Highlights

  • Unredacted MAX Training: Fine-tuned to significantly reduce refusal patterns and improve instruction adherence across diverse prompts.
  • 8B Parameter Architecture: Built on top of Qwen3-VL-8B-Instruct, leveraging stronger reasoning capacity and deeper multimodal alignment compared to 4B variants.
  • Unrestricted Multimodal Reasoning: Designed for deep analysis of artistic, forensic, technical, or abstract visual content without standard safety-driven refusals.
  • High-Fidelity Captions: Produces dense, descriptive outputs suitable for dataset generation, metadata enrichment, or accessibility use cases.
  • Dynamic Resolution Support: Retains Qwen3-VL’s ability to process varying image resolutions and aspect ratios effectively (a pixel-budget sketch follows this list).
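
Because the number of visual tokens scales with image area, capping the pixel budget is a practical way to trade detail for memory. The following is a minimal sketch assuming the Qwen3-VL processor still accepts the min_pixels / max_pixels arguments used by earlier Qwen-VL releases; the Quick Start below uses the defaults, and the argument names should be checked against the upstream Qwen3-VL documentation.

from transformers import AutoProcessor

# Assumed knobs: min_pixels / max_pixels bound the image area seen by the
# vision encoder; lowering max_pixels reduces visual tokens and VRAM use.
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28

processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Qwen3-VL-8B-Instruct-Unredacted-MAX",
    min_pixels=min_pixels,
    max_pixels=max_pixels,
)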

Quick Start with Transformers

from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch

# Load the 8B Instruct Unredacted MAX model
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen3-VL-8B-Instruct-Unredacted-MAX",
    torch_dtype="auto",
    device_map="auto"
)

processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Qwen3-VL-8B-Instruct-Unredacted-MAX"
)

# Build a multimodal chat message: one image (URL or local path) plus a text instruction
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Provide a detailed caption and reasoning for this image."},
        ],
    }
]

# Render the chat template to a prompt string with the generation prompt appended
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Extract and preprocess the image/video inputs referenced in the messages
image_inputs, video_inputs = process_vision_info(messages)

inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

# Generate a response, then trim the prompt tokens so only the completion is decoded
generated_ids = model.generate(**inputs, max_new_tokens=256)

generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]

output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)

print(output_text)
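
The same pipeline also accepts video inputs: qwen_vl_utils samples frames from the video entry and passes them through the videos argument already shown above. Below is a minimal sketch of the message structure, using a placeholder local path; after building video_messages, proceed exactly as above (apply_chat_template, process_vision_info, processor, generate).

# Placeholder path: substitute any local video file readable by qwen_vl_utils
video_messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "file:///path/to/clip.mp4"},
            {"type": "text", "text": "Describe what happens in this clip."},
        ],
    }
]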

Intended Use

  • Advanced Red-Teaming: Evaluating multimodal robustness and probing behavioral edge cases.
  • Complex Data Archiving: Generating detailed captions for medical, artistic, historical, or research datasets (see the batch-captioning sketch after this list).
  • Refusal Mechanism Research: Studying behavioral shifts in vision-language models after abliterated fine-tuning.
  • Creative Storytelling: Producing detailed visual descriptions for narrative and world-building projects.
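
For the dataset-captioning use cases above, a simple way to scale the Quick Start is to loop over local files. The sketch below reuses the model and processor objects loaded earlier; the images folder, file pattern, and prompt are placeholders to adapt.

from pathlib import Path
from qwen_vl_utils import process_vision_info

def caption_image(path, prompt="Provide a detailed, dense caption for this image."):
    # Local files are passed with the file:// scheme understood by qwen_vl_utils
    messages = [{"role": "user", "content": [
        {"type": "image", "image": f"file://{path.resolve()}"},
        {"type": "text", "text": prompt},
    ]}]
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                       padding=True, return_tensors="pt").to(model.device)
    generated = model.generate(**inputs, max_new_tokens=256)
    trimmed = generated[:, inputs.input_ids.shape[1]:]
    return processor.batch_decode(trimmed, skip_special_tokens=True)[0]

# Placeholder folder: one caption per JPEG, keyed by file name
captions = {p.name: caption_image(p) for p in sorted(Path("images").glob("*.jpg"))}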

Limitations & Risks

Critical Note: This model is designed to minimize built-in refusal mechanisms.

  • Sensitive Content Exposure: The model may generate explicit or controversial descriptions if prompted accordingly.
  • User Responsibility: Generated outputs must be handled responsibly and used within ethical and legal boundaries.
  • Hardware Requirements: As an 8B model, it requires substantial VRAM for high-resolution image processing and long generations (a quantized-loading sketch follows this list).
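
If VRAM is the limiting factor, one option is to load the weights in 4-bit with bitsandbytes. This is a sketch under two assumptions: bitsandbytes is installed, and Qwen3VLForConditionalGeneration accepts quantization_config like other Transformers model classes. Expect some quality loss relative to BF16.

import torch
from transformers import Qwen3VLForConditionalGeneration, BitsAndBytesConfig

# NF4 weight quantization with BF16 compute; roughly quarters the weight memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = Qwen3VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen3-VL-8B-Instruct-Unredacted-MAX",
    quantization_config=bnb_config,
    device_map="auto",
)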

Acknowledgements

I would like to thank the authors of the following works:

  • Uncensor any LLM with abliteration – Maxime Labonne
  • Using FP8 and FP4 with Transformer Engine – NVIDIA docs
  • Remove Refusals with Transformers – Sumandora
  • LLM Compressor – vllm-project
  • FP8 Floating-Point 8: An Introduction to Efficient, Lower-Precision AI Training – NVIDIA