---
license: apache-2.0
base_model: huihui-ai/Qwen3-4B-abliterated
tags:
- qwen3
- text-generation
- conversational
- abliterated
- uncensored
- safetensors
- fp16
---
# Qwen3-4B-abliterated-fp16
This is a single-file FP16 safetensors version of huihui-ai/Qwen3-4B-abliterated, merged and converted from huihui-ai/Huihui-Qwen3-4B-abliterated-v2.
## Model Details
- Base Model: huihui-ai/Qwen3-4B-abliterated
- Source Repository: huihui-ai/Huihui-Qwen3-4B-abliterated-v2
- Format: Single safetensors file (FP16)
- Model Size: ~8.04 GB (8,044,981,648 bytes)
- Parameters: 4B
- Tensor Type: FP16 (converted from BF16/FP32)
## Conversion Process
This model was created by:

1. Downloading the sharded safetensors files from `huihui-ai/Huihui-Qwen3-4B-abliterated-v2`
2. Merging `model-00001-of-00002.safetensors` and `model-00002-of-00002.safetensors` into a single file
3. Converting all parameters to FP16 format
4. Saving as a single `qwen3_4b_abliterated_fp16.safetensors` file
## Usage

### With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "your-username/Qwen3-4B-abliterated-fp16"

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Generate text
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Model Information

This is an uncensored version of Qwen/Qwen3-4B created with abliteration. Its safety filtering has been significantly reduced, so it may generate sensitive, controversial, or inappropriate content.

**Important Notes:**
- This model has reduced safety filtering
- Users should exercise caution and review generated outputs
- Not suitable for all audiences or production use
- Users are responsible for ensuring compliance with local laws and ethical standards
## License
This model is licensed under Apache 2.0, following the base model's license.
## Acknowledgments
- Base model: Qwen/Qwen3-4B
- Abliterated version: huihui-ai/Qwen3-4B-abliterated
- Source repository: huihui-ai/Huihui-Qwen3-4B-abliterated-v2
## Citation
If you use this model, please cite the original Qwen3 model:
```bibtex
@misc{qwen3,
  title={Qwen3: A Large-Scale Multilingual Language Model},
  author={Qwen Team},
  year={2024},
  howpublished={\url{https://github.com/QwenLM/Qwen3}}
}
```