---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
tags:
- text-generation-inference
- uncensored
- abliterated
- unfiltered
- unredacted
- vllm
- pytorch
- BF16
- max
- legal
language:
- en
pipeline_tag: image-text-to-text
library_name: transformers
---
# Qwen2.5-VL-3B-Instruct-Unredacted-MAX
**Qwen2.5-VL-3B-Instruct-Unredacted-MAX** is an unredacted evolution of Qwen2.5-VL-3B-Instruct. It applies abliteration-based fine-tuning designed to minimize internal refusal behaviors while preserving the core multimodal reasoning strengths of the original architecture. The result is a compact yet highly capable 3B vision-language model optimized for unrestricted, detailed reasoning and captioning across complex visual inputs.
## Key Highlights
- Unredacted MAX Training: Fine-tuned to significantly reduce refusal patterns and improve instruction adherence across diverse prompts.
- 3B Parameter Architecture: Built on top of Qwen2.5-VL-3B-Instruct, offering efficient multimodal reasoning with significantly lower hardware requirements compared to 7B and 32B variants.
- Unrestricted Multimodal Reasoning: Designed for deep analysis of artistic, forensic, technical, or abstract visual content without standard safety-driven refusals.
- High-Fidelity Captions: Produces dense, descriptive outputs suitable for dataset generation, metadata enrichment, or accessibility use cases.
- Dynamic Resolution Support: Retains Qwen2.5-VL’s ability to process varying image resolutions and aspect ratios effectively; the sketch below shows how to bound the visual token budget.
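Because dynamic resolution maps image area to visual tokens (each token covers a 28×28 pixel patch in Qwen2.5-VL), the processor accepts `min_pixels`/`max_pixels` bounds on the per-image token budget. A minimal sketch using the bounds from the upstream Qwen2.5-VL card; tune them to trade caption detail against VRAM:

```python
from transformers import AutoProcessor

# Each visual token corresponds to a 28x28 pixel patch; these bounds cap
# the per-image token budget between 256 and 1280 tokens.
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Qwen2.5-VL-3B-Instruct-Unredacted-MAX",
    min_pixels=min_pixels,
    max_pixels=max_pixels,
)
```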
## Quick Start with Transformers
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the 3B Unredacted MAX model and its processor
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen2.5-VL-3B-Instruct-Unredacted-MAX",
    torch_dtype="auto",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Qwen2.5-VL-3B-Instruct-Unredacted-MAX"
)

# Build a multimodal chat message: one image plus a text instruction
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Provide a detailed caption and reasoning for this image."},
        ],
    }
]

# Render the chat template and extract the image/video inputs
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generate, then strip the prompt tokens so only the response is decoded
generated_ids = model.generate(**inputs, max_new_tokens=256)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False,
)
print(output_text)
```
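The `vllm` tag indicates the checkpoint is meant to run under vLLM as well. A minimal offline-inference sketch, assuming a vLLM build with Qwen2.5-VL support (roughly v0.7+); the `max_model_len`, image cap, and sampling values are illustrative, not tuned recommendations:

```python
from vllm import LLM, SamplingParams

# Illustrative settings; assumes vLLM recognizes the Qwen2.5-VL architecture.
llm = LLM(
    model="prithivMLmods/Qwen2.5-VL-3B-Instruct-Unredacted-MAX",
    max_model_len=8192,                # reduce if VRAM is tight
    limit_mm_per_prompt={"image": 1},  # cap images per request
)
sampling = SamplingParams(temperature=0.7, max_tokens=256)

# LLM.chat() accepts OpenAI-style messages, including image_url parts
messages = [{
    "role": "user",
    "content": [
        {"type": "image_url",
         "image_url": {"url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"}},
        {"type": "text", "text": "Provide a detailed caption for this image."},
    ],
}]
outputs = llm.chat(messages, sampling)
print(outputs[0].outputs[0].text)
```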
## Intended Use
- Advanced Red-Teaming: Evaluating multimodal robustness and probing behavioral edge cases.
- Complex Data Archiving: Generating detailed captions for medical, artistic, historical, or research datasets (see the captioning sketch after this list).
- Refusal Mechanism Research: Studying behavioral shifts in vision-language models after abliterated fine-tuning.
- Creative Storytelling: Producing detailed visual descriptions for narrative and world-building projects.
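A minimal sketch of the dataset-captioning workflow above, reusing the `model` and `processor` objects from the Quick Start; the file paths and prompt are hypothetical placeholders:

```python
from qwen_vl_utils import process_vision_info

def caption(image_path: str, prompt: str = "Write a dense, detailed caption.") -> str:
    # Wrap one image and an instruction in the chat format, run generation,
    # and return only the newly generated text.
    messages = [{
        "role": "user",
        "content": [
            {"type": "image", "image": image_path},
            {"type": "text", "text": prompt},
        ],
    }]
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    image_inputs, _ = process_vision_info(messages)
    inputs = processor(text=[text], images=image_inputs, padding=True, return_tensors="pt").to(model.device)
    ids = model.generate(**inputs, max_new_tokens=256)
    trimmed = [o[len(i):] for i, o in zip(inputs.input_ids, ids)]
    return processor.batch_decode(trimmed, skip_special_tokens=True)[0]

# Hypothetical file paths; iterate over your own dataset here.
captions = {p: caption(p) for p in ["samples/art_001.png", "samples/scan_002.jpg"]}
```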
## Limitations & Risks

**Critical Note:** This model is designed to minimize built-in refusal mechanisms.
- Sensitive Content Exposure: The model may generate explicit or controversial descriptions if prompted accordingly.
- User Responsibility: Generated outputs must be handled responsibly and used within ethical and legal boundaries.
- Hardware Requirements: The 3B architecture is comparatively lightweight, but the BF16 weights alone occupy several gigabytes of VRAM, and memory use grows further with high-resolution image inputs and long generations.
