---
license: apache-2.0
base_model: huihui-ai/Qwen3-4B-abliterated
tags:
- qwen3
- text-generation
- conversational
- abliterated
- uncensored
- safetensors
- fp16
---

Qwen3-4B-abliterated-fp16

This is a single-file FP16 safetensors version of huihui-ai/Qwen3-4B-abliterated, merged and converted from the sharded checkpoint at huihui-ai/Huihui-Qwen3-4B-abliterated-v2.

Model Details

Conversion Process

This model was created by:

  1. Downloading the sharded safetensors files from huihui-ai/Huihui-Qwen3-4B-abliterated-v2
  2. Merging model-00001-of-00002.safetensors and model-00002-of-00002.safetensors into a single file
  3. Converting all parameters to FP16 format
  4. Saving as a single qwen3_4b_abliterated_fp16.safetensors file

Usage

With Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "your-username/Qwen3-4B-abliterated-fp16"

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Generate text
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)

Model Information

This is an uncensored version of Qwen/Qwen3-4B created with abliteration, a technique that identifies the model's internal "refusal direction" and projects it out of the weights. With safety filtering significantly reduced, the model may generate sensitive, controversial, or inappropriate content.
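The core projection behind abliteration can be sketched in a few lines. This is a simplified illustration of removing a single direction from a weight matrix's output space, assuming the refusal direction has already been extracted; it is not huihui-ai's exact pipeline, and the function name ablate_direction is hypothetical:

```python
import torch

def ablate_direction(weight, direction):
    """Remove a direction from a weight matrix's output space.

    weight: (out_features, in_features) matrix W
    direction: (out_features,) refusal direction v
    Returns W' = W - v v^T W, so v^T (W' x) = 0 for every input x.
    """
    v = direction / direction.norm()          # normalize to a unit vector
    return weight - torch.outer(v, v) @ weight  # subtract the projection onto v
```

After this edit, the layer's output has no component along the ablated direction, which is why the model stops expressing the behavior that direction encoded.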

Important Notes:

  • This model has reduced safety filtering
  • Users should exercise caution and review generated outputs
  • Not suitable for all audiences or production use
  • Users are responsible for ensuring compliance with local laws and ethical standards

License

This model is licensed under Apache 2.0, following the base model's license.

Acknowledgments

Thanks to huihui-ai for the abliterated model and to the Qwen team for the original Qwen3-4B.

Citation

If you use this model, please cite the original Qwen3 model:

@misc{qwen3,
  title={Qwen3: A Large-Scale Multilingual Language Model},
  author={Qwen Team},
  year={2025},
  howpublished={\url{https://github.com/QwenLM/Qwen3}}
}