
Qwen3-VL-8B-Instruct-Unredacted-MAX-FP8

Qwen3-VL-8B-Instruct-Unredacted-MAX-FP8 is an FP8-compressed variant of Qwen3-VL-8B-Instruct-Unredacted-MAX. It uses mixed BF16 · FP8 (F8_E4M3) precision to substantially reduce memory footprint and improve inference efficiency while preserving the unredacted multimodal reasoning strengths of the base model. The result is a highly capable 8B vision-language model optimized for unrestricted, detailed reasoning and captioning across complex visual inputs, with improved hardware efficiency.

The checkpoint applies FP8 (8-bit floating point) quantization to both weights and activations (W8A8), using a dynamic-activation FP8 recipe that takes advantage of native FP8 hardware acceleration on supported GPUs.
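
For orientation, checkpoints of this kind are typically produced with LLM Compressor (vllm-project, credited in the acknowledgements below). The following is a minimal sketch of a dynamic FP8 W8A8 recipe; the exact recipe for this model is unpublished, so the base checkpoint, the ignore list, and the save arguments are assumptions.

from transformers import Qwen3VLForConditionalGeneration
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Assumed starting point; in practice the Unredacted-MAX weights would be loaded.
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen3-VL-8B-Instruct", torch_dtype="auto", device_map="auto"
)

# FP8_DYNAMIC: static per-channel FP8 weights, dynamic per-token FP8 activations.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["lm_head"],  # assumed: keep the LM head in higher precision
)

# Dynamic FP8 needs no calibration data, so oneshot runs without a dataset.
oneshot(model=model, recipe=recipe)
model.save_pretrained("Qwen3-VL-8B-Instruct-Unredacted-MAX-FP8", save_compressed=True)

This mirrors how the BF16 · F8_E4M3 mix in the published Safetensors can arise: quantized Linear weights stored as F8_E4M3, with ignored modules left in BF16.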

Key Highlights

  • BF16 · FP8 (F8_E4M3) Compression: Transformer Engine–based FP8 quantization reduces VRAM usage and improves throughput while maintaining strong multimodal reasoning fidelity.
  • Unredacted MAX Training: Retains the abliterated fine-tuning strategy designed to minimize internal refusal behaviors and improve instruction adherence.
  • 8B Parameter Architecture: Built on top of Qwen3-VL-8B-Instruct-Unredacted-MAX, retaining its reasoning capacity while gaining FP8 efficiency.
  • Unrestricted Multimodal Reasoning: Designed for deep analysis of artistic, forensic, technical, or abstract visual content without standard safety-driven refusals.
  • High-Fidelity Captions: Produces dense, descriptive outputs suitable for dataset generation, metadata enrichment, or accessibility use cases.
  • Dynamic Resolution Support: Retains Qwen3-VL’s ability to process varying image resolutions and aspect ratios effectively (see the pixel-budget sketch after this list).
  • Optimized Deployment: FP8 compression enables smoother deployment on Hopper and compatible GPU architectures.
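
Because dynamic resolution directly drives VRAM use, it can help to bound the pixel budget at load time. A minimal sketch, assuming the min_pixels/max_pixels processor arguments documented for the Qwen-VL family also apply to this checkpoint:

from transformers import AutoProcessor

# Hedged sketch: cap the dynamic-resolution pixel budget. The 28x28 factor
# reflects the Qwen-VL visual patch size; both bounds are tunable assumptions.
min_pixels = 256 * 28 * 28    # floor on image tokens
max_pixels = 1280 * 28 * 28   # ceiling; lower it to save memory on large images
processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Qwen3-VL-8B-Instruct-Unredacted-MAX-FP8",
    min_pixels=min_pixels,
    max_pixels=max_pixels,
)

A processor created this way drops into the Quick Start example below unchanged.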

Quick Start with Transformers

from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the 8B Instruct Unredacted MAX FP8 model
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen3-VL-8B-Instruct-Unredacted-MAX-FP8",
    torch_dtype="auto",
    device_map="auto"
)

processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Qwen3-VL-8B-Instruct-Unredacted-MAX-FP8"
)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Provide a detailed caption and reasoning for this image."},
        ],
    }
]

# Render the chat messages into the model's prompt template
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Collect the image/video inputs referenced in the messages
image_inputs, video_inputs = process_vision_info(messages)

# Tokenize the prompt and pack vision features into model-ready tensors
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

# Generate a response (up to 256 new tokens)
generated_ids = model.generate(**inputs, max_new_tokens=256)

# Trim the prompt tokens so only the newly generated tokens are decoded
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]

output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)

print(output_text)
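
The same pipeline also covers video. A minimal sketch of a video message, assuming the qwen_vl_utils video handling used across the Qwen-VL family (the file path is a placeholder):

# Hedged sketch: a video message; process_vision_info returns sampled frames
# as video_inputs, which the processor packs alongside the text exactly as above.
video_messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "file:///path/to/clip.mp4"},  # placeholder path
            {"type": "text", "text": "Summarize the key events in this clip."},
        ],
    }
]
# Reuse the apply_chat_template -> process_vision_info -> processor -> generate
# steps from the Quick Start above with video_messages in place of messages.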

Intended Use

  • Advanced Red-Teaming: Evaluating multimodal robustness and probing behavioral edge cases.
  • Complex Data Archiving: Generating detailed captions for medical, artistic, historical, or research datasets.
  • Refusal Mechanism Research: Studying behavioral shifts in vision-language models after abliterated fine-tuning.
  • Creative Storytelling: Producing detailed visual descriptions for narrative and world-building projects.

Limitations & Risks

Critical Note: This model is designed to minimize built-in refusal mechanisms.

  • Sensitive Content Exposure: The model may generate explicit or controversial descriptions if prompted accordingly.
  • User Responsibility: Generated outputs must be handled responsibly and used within ethical and legal boundaries.
  • Hardware Requirements: While lighter than larger full-precision variants, the 8B FP8 architecture still requires a compatible GPU and sufficient VRAM for high-resolution image processing and extended generations (a rough estimate follows below).
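
As a rough sizing aid, the weight footprint can be estimated from the published tensor mix. A back-of-envelope sketch; the FP8/BF16 split is an assumption, not a published figure:

# Hedged estimate of weight VRAM: ~9B params, mostly 1-byte FP8 (F8_E4M3),
# with an assumed ~10% remaining in 2-byte BF16 (e.g., unquantized modules).
params = 9e9
fp8_fraction = 0.9  # assumption; the true split is not published
weight_bytes = params * (fp8_fraction * 1 + (1 - fp8_fraction) * 2)
print(f"~{weight_bytes / 1e9:.1f} GB for weights alone")  # ~9.9 GB
# Activations, the KV cache, and vision features add headroom on top of this,
# so plan for noticeably more than the raw weight footprint.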

Acknowledgements

Thanks to the authors of the following works and resources:

  • Uncensor any LLM with abliteration – Maxime Labonne
  • Using FP8 and FP4 with Transformer Engine – NVIDIA documentation
  • Remove Refusals with Transformers – Sumandora
  • LLM Compressor – vllm-project
  • FP8 (Floating-Point 8): An Introduction to Efficient, Lower-Precision AI Training – NVIDIA

Model Specifications

  • Format: Safetensors
  • Model size: 9B params
  • Tensor types: BF16 · F8_E4M3