How to use with llama.cpp
Install via Homebrew (macOS/Linux)
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf WeReCooking/flux2-klein-4B-uncensored-text-encoder:Q4_0
# Run inference directly in the terminal:
llama-cli -hf WeReCooking/flux2-klein-4B-uncensored-text-encoder:Q4_0
Install via WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf WeReCooking/flux2-klein-4B-uncensored-text-encoder:Q4_0
# Run inference directly in the terminal:
llama-cli -hf WeReCooking/flux2-klein-4B-uncensored-text-encoder:Q4_0
Use a pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf WeReCooking/flux2-klein-4B-uncensored-text-encoder:Q4_0
# Run inference directly in the terminal:
./llama-cli -hf WeReCooking/flux2-klein-4B-uncensored-text-encoder:Q4_0
Build from source
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf WeReCooking/flux2-klein-4B-uncensored-text-encoder:Q4_0
# Run inference directly in the terminal:
./build/bin/llama-cli -hf WeReCooking/flux2-klein-4B-uncensored-text-encoder:Q4_0
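Once llama-server is running (via any of the install methods above), you can also query its OpenAI-compatible API over HTTP. A minimal sketch, assuming the server's default port 8080 and the standard `/v1/chat/completions` route:

```shell
# Query the local llama-server chat endpoint (default: http://localhost:8080).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Hello"}
        ]
      }'
```

The server returns an OpenAI-style JSON response, so existing OpenAI client libraries can be pointed at the local base URL instead.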
Use Docker
docker model run hf.co/WeReCooking/flux2-klein-4B-uncensored-text-encoder:Q4_0
This is a duplicate of Cordux/flux2-klein-4B-uncensored-text-encoder, hosted for the Flux-Klein-4B-CPU Space.


Qwen3-4B Ablated (Uncensored) Text Encoder - GGUF Q4_0

Uncensored/ablated version of Qwen3-4B text encoder in GGUF Q4_0 format for Flux2 Klein 4B models.

Compatible Models

  • Flux2 Klein 4B (Distilled & Base)

What This Does

The Qwen3-4B text encoder that ships with Flux2 Klein includes safety filtering that prevents certain prompts from being processed properly.
This is an ablated version of that encoder, with the safety filtering removed, allowing Flux2 Klein models to generate NSFW content without prompt censorship.

Installation

  1. Download qwen3-4b-abl-q4_0.gguf
  2. Place in ComfyUI/models/text_encoders/ or ComfyUI/models/unet/ (for GGUF loaders)
  3. In your workflow, use a GGUF-compatible text encoder loader node
  4. Point it to this file instead of the default Qwen3-4B
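Step 1 above can also be done from the command line. A sketch using `huggingface-cli` (the filename `qwen3-4b-abl-q4_0.gguf` is from step 1; adjust the ComfyUI path to match your install):

```shell
# Download the GGUF text encoder straight into ComfyUI's text_encoders folder.
huggingface-cli download WeReCooking/flux2-klein-4B-uncensored-text-encoder \
  qwen3-4b-abl-q4_0.gguf \
  --local-dir ComfyUI/models/text_encoders/
```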

Prompting Tips

  • Use "wearing nothing" instead of "naked/nude" for best nude results
  • The model looks for clothing descriptors - even "nothing" counts as one
  • Clinical terms like "vagina" don't work better than colloquial terms
  • For explicit content beyond nudity, you'll need an NSFW LoRA

Language-Style Mapping Research

I discovered Flux.2 Klein associates languages with specific styles:
Japanese → anime portraits, German → illustrated art, etc.
Full study here

Limitations

This removes prompt filtering but doesn't add visual knowledge. The base Flux2 Klein models have limited training on explicit content, so:

  • ✅ Nudity works well
  • ✅ Suggestive poses work
  • ❌ Explicit anatomy requires a LoRA
  • ❌ Sexual acts require a LoRA

Credits

GGUF details

  • Model size: 4B params
  • Architecture: qwen3
  • Quantization: 4-bit (Q4_0)


Model tree for WeReCooking/flux2-klein-4B-uncensored-text-encoder

  • Base model: Qwen/Qwen3-4B (finetuned, then quantized to this model)