Gemma 3 12B IT - Heretic v2 (Abliterated)
An abliterated version of Google's Gemma 3 12B IT created using Heretic v1.2.0. This model has reduced refusals while maintaining model quality, making it suitable as an uncensored text encoder for video generation models like LTX-2.
You can see the Docker setup, scripts, and configurations used to make these files in the Heretic Docker GitHub repository.
What's new in v2
- Heretic v1.2.0 with 200 trials (v1 used v1.1.0 with 100 trials)
- Better trial selection: Trial 174 — 8/100 refusals at KL 0.0801 (v1: Trial 99, 7/100 refusals at KL 0.0826)
- Vision preserved: All ComfyUI variants keep `vision_model` and `multi_modal_projector` keys for I2V prompt enhancement
- NVFP4 quantization: ComfyUI-native 4-bit format for Blackwell GPUs (~3x smaller than bf16)
- Updated GGUF support: ComfyUI-GGUF now has merged Gemma 3 support (PR #402)
Model Details
- Base Model: google/gemma-3-12b-it
- Abliteration Method: Heretic v1.2.0
- Trials: 200
- Trial Selected: Trial 174
- Refusals: 8/100 (vs 100/100 original)
- KL Divergence: 0.0801 (minimal model damage)
Files
HuggingFace Format (for transformers, llama.cpp conversion)
```
model-00001-of-00005.safetensors
model-00002-of-00005.safetensors
model-00003-of-00005.safetensors
model-00004-of-00005.safetensors
model-00005-of-00005.safetensors
config.json
tokenizer.model
tokenizer.json
tokenizer_config.json
```
ComfyUI Format (with vision, for LTX-2 T2V and I2V)
```
comfyui/gemma-3-12b-it-heretic-v2.safetensors              # bf16, 23GB
comfyui/gemma-3-12b-it-heretic-v2_fp8_e4m3fn.safetensors   # fp8, 12GB
comfyui/gemma-3-12b-it-heretic-v2_nvfp4.safetensors        # nvfp4, 7.8GB
```
All ComfyUI variants include vision (vision_model and multi_modal_projector weights). The vision weights are unused during T2V (text-to-video) and add minimal overhead (~1 GB). For I2V (image-to-video) workflows using TextGenerateLTX2Prompt with an image input, the vision weights are required.
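To confirm a downloaded variant still carries the vision tower, you can list its tensor names (e.g. via `safetensors.safe_open`) and check the prefixes. A minimal sketch, with example tensor names assumed to match the prefixes described above:

```python
def has_vision_weights(keys) -> bool:
    """True if the checkpoint keeps the vision tower and projector tensors."""
    return any(k.startswith(("vision_model.", "multi_modal_projector.")) for k in keys)

# Example tensor names: a vision-tower key vs. a text-only key
print(has_vision_weights(["vision_model.encoder.layers.0.mlp.fc1.weight"]))  # True
print(has_vision_weights(["model.layers.0.self_attn.q_proj.weight"]))        # False
```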
GGUF Format (for llama.cpp and ComfyUI-GGUF)
| Quant | Size | Notes |
|---|---|---|
| F16 | 22GB | Lossless reference |
| Q8_0 | 12GB | Excellent quality |
| Q6_K | 9.0GB | Very good quality |
| Q5_K_M | 7.9GB | Good quality |
| Q5_K_S | 7.7GB | Slightly smaller Q5 |
| Q4_K_M | 6.8GB | Recommended balance |
| Q4_K_S | 6.5GB | Smaller Q4 variant |
| Q3_K_M | 5.6GB | For low VRAM only |
```
gguf/gemma-3-12b-it-heretic-v2-f16.gguf
gguf/gemma-3-12b-it-heretic-v2-Q8_0.gguf
gguf/gemma-3-12b-it-heretic-v2-Q6_K.gguf
gguf/gemma-3-12b-it-heretic-v2-Q5_K_M.gguf
gguf/gemma-3-12b-it-heretic-v2-Q5_K_S.gguf
gguf/gemma-3-12b-it-heretic-v2-Q4_K_M.gguf
gguf/gemma-3-12b-it-heretic-v2-Q4_K_S.gguf
gguf/gemma-3-12b-it-heretic-v2-Q3_K_M.gguf
```
NVFP4 Notes
The NVFP4 (4-bit floating point, E2M1) variants use ComfyUI's native quantization format. They are ~3x smaller than bf16 and load natively in ComfyUI without any plugins. Blackwell GPUs (RTX 5090/5080, SM100+) can use native FP4 tensor cores for best performance, but ComfyUI also supports software dequantization on older GPUs (tested working on RTX 4090).
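For intuition on what E2M1 can represent, here is a sketch that decodes the eight non-negative 4-bit codes (1 sign, 2 exponent, 1 mantissa bit, exponent bias 1). Note that NVFP4 additionally stores per-block scale factors, which this sketch ignores:

```python
def decode_e2m1(code: int) -> float:
    """Decode a 4-bit FP4 E2M1 code: 1 sign, 2 exponent, 1 mantissa bit."""
    sign = -1.0 if code & 0b1000 else 1.0
    exp = (code >> 1) & 0b11
    man = code & 0b1
    if exp == 0:                       # subnormal range: 0 and 0.5
        return sign * 0.5 * man
    return sign * (1.0 + 0.5 * man) * 2.0 ** (exp - 1)

print([decode_e2m1(c) for c in range(8)])
# [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
```

Only sixteen distinct values exist, which is why the per-block scales matter so much for preserving weight magnitudes.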
Do abliterated models make a difference for LTX-2?
I took a deep dive into this topic and found that the impact is nuanced. Abliteration does alter the embeddings Gemma produces, which slightly changes the generated video. However, there are fundamental limitations:
- Gemma doesn't know what it wasn't trained on. The base model was never trained on taboo content, so while abliteration removes refusals, it cannot add knowledge the model was never exposed to. Even when chatting with the heretic model in llama.cpp, it doesn't refuse; it simply doesn't know.
- LTX-2 was trained on original Gemma embeddings. The DiT expects the embedding distribution from the unmodified text encoder. Fine-tuning the text encoder itself would break the DiT, as it wouldn't know what to do with the new embedding distribution and would produce strange results.
- Most abliteration happens on layer 48 (the final decision-making layer), but LTX-2 averages across all layers, which may wash out the difference.
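To see why averaging can dilute a last-layer edit, consider a toy calculation (purely illustrative scalar "embeddings"; the real hidden states are vectors per token, and the count of 49 assumes 48 transformer layers plus the embedding layer):

```python
num_layers = 49  # assumed: embeddings + 48 transformer layers

layers = [1.0] * num_layers
baseline = sum(layers) / num_layers

# Shift only the final layer, where most of the abliteration edit lands
layers[-1] += 1.0
averaged = sum(layers) / num_layers

# The shift is diluted by the 1/num_layers averaging weight
print(round(averaged - baseline, 4))  # 0.0204 (= 1/49)
```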
A potential approach would be combining a fine-tuned abliterated text encoder with a LoRA trained to understand the new embeddings. LoRAs for LTX exist, but no fine-tuned text encoders have been released yet as far as I know.
That said, abliteration still removes the soft censorship in the embeddings, which can result in more faithful prompt encoding for creative content.
Usage
With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "DreamFast/gemma-3-12b-it-heretic-v2",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("DreamFast/gemma-3-12b-it-heretic-v2")

prompt = "Write a story about a bank heist"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
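Gemma 3 IT is instruction-tuned, so wrapping the prompt in its chat turn format (what `tokenizer.apply_chat_template` produces) usually gives better results than a raw string. A sketch of the single-turn format, to the best of my understanding:

```python
def gemma3_chat_prompt(user_msg: str) -> str:
    """Build a single-turn Gemma 3 chat prompt (BOS is added by the tokenizer)."""
    return (
        "<start_of_turn>user\n"
        f"{user_msg}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(gemma3_chat_prompt("Write a story about a bank heist"))
```

In practice, prefer `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` so the template always matches the tokenizer's own definition.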
With ComfyUI (LTX-2)
Download a ComfyUI format file:
- FP8 (recommended): `comfyui/gemma-3-12b-it-heretic-v2_fp8_e4m3fn.safetensors` (12GB)
- NVFP4 (smallest): `comfyui/gemma-3-12b-it-heretic-v2_nvfp4.safetensors` (7.8GB)
- bf16 (full precision): `comfyui/gemma-3-12b-it-heretic-v2.safetensors` (23GB)

Place the file in `ComfyUI/models/text_encoders/`.

In your LTX-2 workflow, use the `LTXAVTextEncoderLoader` node and select the heretic file.
Tip: For multi-GPU setups or CPU offloading, check out ComfyUI-LTX2-MultiGPU for optimized LTX-2 workflows.
With ComfyUI-GGUF
GGUF support for Gemma 3 text encoders is now merged in ComfyUI-GGUF (PR #402).
- Download a GGUF file (Q4_K_M recommended for most setups)
- Place in `ComfyUI/models/text_encoders/`
- Use the `DualClipLoader (GGUF)` node:
  - CLIP 1: the Gemma 3 GGUF file
  - CLIP 2: embedding connectors from Kijai/LTXV2_comfy (use the dev connectors, not distilled)
Note: GGUF text encoders are text-only (no vision). For I2V prompt enhancement with image input, use the safetensors variants.
With llama.cpp
```shell
# Using llama-server
llama-server -m gemma-3-12b-it-heretic-v2-Q4_K_M.gguf

# Or with llama-cli
llama-cli -m gemma-3-12b-it-heretic-v2-Q4_K_M.gguf -p "Write a story about a bank heist"
```
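Once `llama-server` is up (it listens on port 8080 by default), you can query its OpenAI-compatible chat endpoint. A minimal stdlib-only sketch; the request-building part runs offline, and the commented lines assume a server is actually running:

```python
import json
import urllib.request

def build_request(prompt: str, max_tokens: int = 200) -> urllib.request.Request:
    """Build a chat-completion request for a local llama-server instance."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Write a story about a bank heist")
# With a running server:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```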
Why Abliterate?
Even when Gemma doesn't outright refuse a prompt, it may "sanitize" or weaken certain concepts in the embeddings. For video generation with LTX-2, this can result in:
- Weaker adherence to creative prompts
- Softened or altered visual outputs
- Less faithful representation of requested content
Abliteration removes this soft censorship, resulting in more faithful prompt encoding.
Abliteration Process
Created using Heretic v1.2.0 with 200 optimization trials:
```
? Which trial do you want to use?
  [Trial 80]  Refusals: 0/100,  KL divergence: 0.6098
  [Trial 66]  Refusals: 2/100,  KL divergence: 0.2087
  [Trial 75]  Refusals: 3/100,  KL divergence: 0.1378
  [Trial 67]  Refusals: 6/100,  KL divergence: 0.1108
  [Trial 180] Refusals: 7/100,  KL divergence: 0.0996
> [Trial 174] Refusals: 8/100,  KL divergence: 0.0801  <-- selected
  [Trial 178] Refusals: 10/100, KL divergence: 0.0801
  [Trial 172] Refusals: 11/100, KL divergence: 0.0708
  ...
```
Trial 174 was selected for its low KL divergence (0.0801), indicating minimal model damage, while achieving 8/100 refusals (92% of previously-refused prompts now work).
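For context on the KL numbers: the divergence measures how far the abliterated model's next-token distribution has drifted from the original's. A toy illustration with made-up probabilities over a three-token vocabulary:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) in nats over a discrete next-token distribution."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token probabilities before vs. after abliteration
original    = [0.70, 0.20, 0.10]
abliterated = [0.65, 0.25, 0.10]
print(round(kl_divergence(original, abliterated), 4))  # 0.0072
```

Identical distributions give 0, so values near 0.08 averaged over many prompts indicate the model's behavior is close to the original outside the refusal direction.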
Limitations
- This model inherits all limitations of the base Gemma 3 12B model
- Abliteration reduces but does not completely eliminate refusals
- NVFP4 quantization works best on Blackwell GPUs (RTX 5090/5080) with native FP4 tensor cores, but also works on older GPUs via software dequantization
License
This model is subject to the Gemma license.
Acknowledgments
- Google for the Gemma 3 12B model
- Heretic by p-e-w for the abliteration tool
- Lightricks for LTX-2
- llama.cpp for GGUF conversion
- ComfyUI-GGUF for Gemma 3 GGUF support