
Q3.5-9B-DS-v4-Flash-DA

Q3.5-9B-DS-v4-Flash-DA (Qwen3.5 DeepSeek Distilled-Abliterated) is a reasoning-focused model built on Qwen/Qwen3.5-9B via the intermediate checkpoint prithivMLmods/Qwen3.5-9B-Unredacted-MAX. It is optimized for rich, detailed, and context-aware reasoning: multi-stage distillation on DeepSeek V4 reasoning traces is combined with refusal direction analysis and ablation-based training strategies to reduce internal refusal behaviors while preserving strong reasoning and instruction-following performance.
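The ablation step follows the general "abliteration" recipe known from the open-source community. Below is a minimal sketch of that idea, assuming the common mean-difference formulation; the function names, shapes, and the rank-1 projection are illustrative, not the exact procedure used for this model.

import torch

def refusal_direction(harmful_acts, harmless_acts):
    # Estimate a refusal direction as the normalized difference of mean
    # residual-stream activations on refusal-inducing vs. benign prompts.
    direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return direction / direction.norm()

def ablate_direction(weight, direction):
    # Rank-1 projection W <- W - r (r^T W): the layer can no longer
    # write along the refusal direction r. (Illustrative formulation.)
    r = direction.view(-1, 1)          # (d_model, 1)
    return weight - r @ (r.T @ weight)

# Toy usage with random stand-ins for cached activations and a weight matrix.
d_model = 64
r_hat = refusal_direction(torch.randn(128, d_model), torch.randn(128, d_model))
W = torch.randn(d_model, d_model)
W_ablated = ablate_direction(W, r_hat)
print(torch.allclose(r_hat @ W_ablated, torch.zeros(d_model), atol=1e-5))  # True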

This model is intended strictly for research and learning purposes. Due to reduced internal refusal mechanisms, it may generate sensitive or unrestricted content. Users assume full responsibility for how the model is used. The authors and hosting platform disclaim any liability for generated outputs.

Note: This model is experimental and may generate artifacts.

Key Highlights

  • DeepSeek V4 Distillation: Fine-tuned using curated reasoning traces distilled from DeepSeek V4 Flash for improved multi-step reasoning capabilities.
  • Distilled-Abliterated (DA): Applies advanced refusal direction analysis and ablation-based strategies to reduce internal refusal behaviors while preserving reasoning quality.
  • Qwen3.5 Backbone: Built on top of Qwen/Qwen3.5-9B through prithivMLmods/Qwen3.5-9B-Unredacted-MAX for strong reasoning and text generation performance.
  • Instruction + Reasoning Fusion: Handles both instruction-following and complex reasoning tasks seamlessly.
  • High-Coherence Outputs: Maintains consistency across long generations with improved contextual grounding.

Datasets Used and Training Details

  • Base Model: Qwen/Qwen3.5-9B
  • Intermediate Model: prithivMLmods/Qwen3.5-9B-Unredacted-MAX
  • Final Model Size: 9B parameters
  • Training Type: Multi-stage distillation + abliteration
  • Training Pipeline: TRL (Transformer Reinforcement Learning)
  • Objective: Preserve reasoning quality from larger models; reduce refusal behaviors via ablation strategies; improve instruction-following reliability
  • Reasoning Dataset: Jackrong/DeepSeek-V4-Distill-8000x (4,000 random samples used)
  • Alignment / Evaluation Dataset: prithivMLmods/harm_bench
  • Training Focus: Structured reasoning, long-chain thinking, robustness across diverse prompts
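
For the distillation stage, a minimal sketch with TRL's SFTTrainer is shown below. The hyperparameters, output path, and the assumption that the dataset is already chat-formatted are placeholders, not the actual training configuration.

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# 4,000 random samples from the distillation corpus, as noted above.
# Assumes the dataset is in a chat/text format SFTTrainer can consume.
dataset = load_dataset("Jackrong/DeepSeek-V4-Distill-8000x", split="train")
dataset = dataset.shuffle(seed=42).select(range(4000))

config = SFTConfig(
    output_dir="q3.5-9b-ds-v4-flash-da-sft",  # placeholder path
    per_device_train_batch_size=1,            # illustrative hyperparameters
    gradient_accumulation_steps=16,
    learning_rate=1e-5,
    num_train_epochs=1,
    bf16=True,
)

trainer = SFTTrainer(
    model="prithivMLmods/Qwen3.5-9B-Unredacted-MAX",  # intermediate checkpoint
    args=config,
    train_dataset=dataset,
)
trainer.train()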

Quick Start with Transformers

pip install transformers==5.8.0
# or install the latest development build
pip install git+https://github.com/huggingface/transformers.git

from transformers import Qwen3_5ForConditionalGeneration, AutoProcessor
import torch

# Load the checkpoint with automatic dtype selection and device placement
model = Qwen3_5ForConditionalGeneration.from_pretrained(
    "prithivMLmods/Q3.5-9B-DS-v4-Flash-DA",
    torch_dtype="auto",
    device_map="auto"
)

# The processor provides the tokenizer and chat template
processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Q3.5-9B-DS-v4-Flash-DA"
)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Generate a highly detailed caption of a futuristic city skyline at sunset."
            }
        ],
    }
]

# Render the chat messages into the model's expected prompt format
text = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Tokenize the rendered prompt and move tensors to the GPU
inputs = processor(
    text=[text],
    padding=True,
    return_tensors="pt"
).to("cuda")

# Generate the completion
generated_ids = model.generate(
    **inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the new completion is decoded
generated_ids_trimmed = [
    out_ids[len(in_ids):]
    for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]

output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)

print(output_text)
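
To stream tokens as they are generated instead of waiting for the full completion, transformers' TextStreamer can be attached to generate. This reuses model, processor, and inputs from above and assumes the processor exposes its underlying tokenizer.

from transformers import TextStreamer

streamer = TextStreamer(
    processor.tokenizer,   # assumes the processor wraps a tokenizer
    skip_prompt=True,
    skip_special_tokens=True,
)
model.generate(**inputs, max_new_tokens=512, streamer=streamer)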

Intended Use

  • Reasoning & Chain-of-Thought Tasks: Deep multi-step reasoning powered by DeepSeek V4 distilled traces
  • Instruction Following: Hybrid prompts requiring both instruction adherence and reasoning
  • Red-Teaming & Alignment Research: Evaluating reduced-refusal systems and refusal direction analysis
  • Local High-Performance Deployment: Multi-GPU or quantized inference setups (see the quantized-loading sketch after this list)
  • Research on Abliteration: Studying the effects of ablation-based training on reasoning preservation

Limitations & Risks

Important Note: This model intentionally minimizes built-in safety refusals.

  • Sensitive Content Risk: May produce unrestricted or controversial outputs
  • User Responsibility: Requires careful and ethical usage
  • High Compute Demand: Large models need significant VRAM or optimized inference
  • Abliteration Trade-offs: Reduced refusal may impact safety alignment and output filtering