Nekochu committed on
Commit 4ea5a9a · verified · 1 Parent(s): a49bc5a
Files changed (5)
  1. .gitattributes +1 -0
  2. LANGUAGE_BIAS_STUDY.md +138 -0
  3. README.md +59 -0
  4. gitattributes +37 -0
  5. qwen3-4b-abl-q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ qwen3-4b-abl-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
LANGUAGE_BIAS_STUDY.md ADDED
@@ -0,0 +1,138 @@
1
+ ---
2
+ TL;DR: Flux.2 associates languages with styles:
3
+ - Japanese=anime portraits
4
+ - Chinese=anime scenes
5
+ - German=illustrated art
6
+ - English/Spanish=photorealistic
7
+ ---
8
+
9
+ FLUX.2 KLEIN 4B - LANGUAGE TO STYLE MAPPING RESEARCH
10
+ Complete Findings - January 27, 2026
11
+
12
+
13
+ TEST CONDITIONS:
14
+ - Model: Flux.2 Klein 4B Distilled
15
+ - Base Parameters: 4 steps, CFG 1.0, 1024x1024 resolution
16
+ - Text Encoders Tested: Ablated Qwen3-4B GGUF vs Original FP4 Encoder
17
+ - Test Prompt: "Red-haired woman, sitting on chair, one hand holding fireball,
18
+ other hand holding lightning" (translated to each language)
19
+
20
+ COMPUTER:
21
+ - NVIDIA RTX 3050 (6 GB VRAM)
22
+ - 32 GB RAM
23
+ - Intel(R) Core(TM) i3-9100F CPU @ 3.60GHz
24
+ - WD Blue SN570 1TB NVMe drive
25
+
26
+ TOOLS USED:
27
+ - Model Execution: ComfyUI with custom workflows (built myself)
28
+ - Text Encoders: Ablated Qwen3-4B GGUF, Original FP4
29
+ - Note Management: Notepad
30
+ - Analysis: Manual comparison of 14+ generated images, using the rgthree Image Compare Node for direct comparison
31
+
32
+ COMPLETE RESULTS TABLE
33
+
34
+
35
+ LANGUAGE | ABLATED QWEN3 ENCODER | ORIGINAL FP4 ENCODER | CONCLUSION
36
+ ------------|---------------------------|---------------------------|----------------------------
37
+ English | Photorealistic | Photorealistic | Neutral - Consistent realism
38
+ Japanese | Anime | Anime (character focused) | STRONG inherent anime bias
39
+ Chinese | Anime | Anime | STRONG inherent anime bias
40
+ Korean | Anime | Realistic | Ablation-induced bias only
41
+ German | Illustrated/Painted | Illustrated/Painted | STRONG inherent art bias
42
+ Russian | Semi-realistic/Fantasy | Realistic | Ablation-induced bias only
43
+ Spanish | Photorealistic | Realistic (+fireball detail) | Neutral with detail variation
44
+
45
+
46
+ KEY DISCOVERIES:
47
+
48
+
49
+ 1. INHERENT TRAINING DATA BIASES (Present in both encoders):
50
+ - Japanese → Anime character portraits (strongest association)
51
+ - Chinese → Anime full scenes
52
+ - German → Illustrated storybook/fairy tale art
53
+ - English/Spanish → Photorealistic (neutral baseline)
54
+
55
+ 2. ABLATION-INDUCED BIASES (Only in ablated encoder):
56
+ - Korean → Anime style (not inherent)
57
+ - Russian → Fantasy hybrid style (not inherent)
58
+
59
+ 3. JAPANESE vs CHINESE NUANCE:
60
+ - Japanese prompts focus on CHARACTER (often no background)
61
+ - Chinese prompts include FULL SCENES with simple backgrounds
62
+ - Suggests different dataset types for each language
63
+
64
+
65
+ PRACTICAL STYLE GUIDE:
66
+
67
+
68
+ For CONSISTENT RESULTS, use these language choices:
69
+
70
+ WANT ANIME CHARACTER PORTRAITS? → Use Japanese<br>
71
+ WANT ANIME FULL SCENES? → Use Chinese<br>
72
+ WANT ILLUSTRATED/STORYBOOK ART? → Use German<br>
73
+ WANT PHOTOREALISTIC? → Use English or Spanish<br>
74
+ WANT ENCODER-DEPENDENT RESULTS? → Use Korean or Russian<br>
75
+
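The style guide above can be captured as a small lookup table for scripting batch tests (a convenience sketch; the dict and function names are mine, not part of the study):

```python
# Language -> dominant style observed with both encoders in this study;
# Korean and Russian vary by encoder, so they are tagged "encoder-dependent".
STYLE_BY_LANGUAGE = {
    "Japanese": "anime character portrait",
    "Chinese": "anime full scene",
    "German": "illustrated/storybook art",
    "English": "photorealistic",
    "Spanish": "photorealistic",
    "Korean": "encoder-dependent",
    "Russian": "encoder-dependent",
}

def languages_for(target_style: str) -> list[str]:
    """Return the prompt languages expected to yield the target style."""
    return [lang for lang, style in STYLE_BY_LANGUAGE.items()
            if style == target_style]
```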
76
+
77
+ ENCODER RECOMMENDATIONS:
78
+
79
+
80
+ ORIGINAL FP4 ENCODER:
81
+ - More consistent across languages
82
+ - Reveals true Flux.2 training biases
83
+ - Recommended for predictable style control
84
+
85
+ ABLATED QWEN3 ENCODER:
86
+ - Amplifies all style associations
87
+ - Adds extra biases (Korean→anime, Russian→fantasy)
88
+ - Use for exaggerated stylistic effects
89
+
90
+
91
+ TRAINING DATA INFERENCES:
92
+
93
+
94
+ Based on these results, Flux.2 was likely trained on:
95
+
96
+ 1. Japanese datasets = Anime character sheets/portraits
97
+ 2. Chinese datasets = Anime full scenes/webtoons
98
+ 3. German datasets = Illustrated books/fairy tale art
99
+ 4. English/Spanish datasets = Photographic stock images
100
+ 5. Korean/Russian datasets = Mixed/general content
101
+
102
+
103
+ RESEARCH NOTES:
104
+
105
+
106
+ - Korean losing its anime bias with the original encoder suggests Korean training data
107
+ was more general-purpose than Japanese/Chinese anime-specific data.
108
+
109
+ - Spanish showing increased fireball detail attention suggests subtle
110
+ language-specific attention patterns beyond just style.
111
+
112
+ - The character-focused vs scene-focused difference between Japanese and
113
+ Chinese outputs indicates the model distinguishes between different types
114
+ of anime content by language origin.
115
+
116
+
117
+ NEXT RESEARCH QUESTIONS:
118
+
119
+
120
+ 1. Do these biases exist in full Flux.2-dev (12B) or only Klein (4B)?
121
+ 2. How does quantization (Q4 vs FP4 vs FP16) affect bias strength?
122
+ 3. Can prompt engineering override language biases?
123
+ (e.g., "Photorealistic: [Japanese text]" or "Anime style: [German text]")
124
+ 4. Do other multilingual encoders (CLIP, T5) show similar patterns?
125
+
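Research question 3 could be scripted by prefixing an English style tag to each translated prompt; a minimal sketch of building that test matrix (the tag list and function name are mine):

```python
# English style tags crossed with translated prompts, as proposed in
# research question 3 ("Photorealistic: [Japanese text]", etc.).
STYLE_TAGS = ["Photorealistic", "Anime style", "Illustrated"]

def override_matrix(prompts_by_language: dict[str, str]) -> list[tuple[str, str, str]]:
    """Yield (language, style tag, combined prompt) test cases."""
    return [(lang, tag, f"{tag}: {prompt}")
            for lang, prompt in prompts_by_language.items()
            for tag in STYLE_TAGS]
```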
126
+ SPECIAL NOTE:
127
+
128
+ **Methodological Transparency:**
129
+ - Research design & testing: Human (Cordux)
130
+ - Data analysis & conclusions: Human (Cordux)
131
+ - Documentation formatting, note tracking & clarity: AI-assisted (DeepSeek, Claude)
132
+
133
+
134
+ END OF RESEARCH DOCUMENT
135
+
136
+ Citation:<br>
137
+ Cordux. (2026). Flux.2 Klein Language Bias Study. Hugging Face.<br>
138
+ https://huggingface.co/Cordux/flux2-klein-4B-uncensored-text-encoder/blob/main/LANGUAGE_BIAS_STUDY.md
README.md ADDED
@@ -0,0 +1,59 @@
1
+ ---
2
+ license: apache-2.0
3
+ license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
4
+ base_model:
5
+ - huihui-ai/Qwen3-4B-abliterated
6
+ tags:
7
+ - qwen3
8
+ - abliterated
9
+ - uncensored
10
+ - text-generation-inference
11
+ - flux2
12
+ - klein
13
+ extra_gated_prompt: >-
14
+ **This model has safety filtering removed and can generate general NSFW content.**
15
+ **By accessing this model, you agree to:**
16
+ - **Use it responsibly and legally**
17
+ - **Not use it to create illegal content**
18
+ - **Comply with all applicable laws in your country**
19
+
20
+ ---
21
+
22
+ # Qwen3-4B Ablated (Uncensored) Text Encoder - GGUF Q4_0
23
+
24
+ Uncensored/ablated version of Qwen3-4B text encoder in GGUF Q4_0 format for Flux2 Klein 4B models.
25
+
26
+ ## Compatible Models
27
+ - Flux2 Klein 4B (Distilled & Base)
28
+
29
+ ## What This Does
30
+ This is an ablated (safety-filtering removed) text encoder that allows Flux2 Klein models to generate NSFW content without prompt censorship.<br>
31
+ The base Qwen3-4B text encoder that ships with Flux2 Klein has safety filtering that prevents certain prompts from being processed properly.
32
+
33
+ ## Installation
34
+ 1. Download `qwen3-4b-abl-q4_0.gguf`
35
+ 2. Place in `ComfyUI/models/text_encoders/` or `ComfyUI/models/unet/` (for GGUF loaders)
36
+ 3. In your workflow, use a GGUF-compatible text encoder loader node
37
+ 4. Point it to this file instead of the default Qwen3-4B
38
+
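One way to fetch the file straight into the ComfyUI models folder is the `huggingface_hub` CLI (a sketch; assumes `huggingface-cli` is installed and the path matches your ComfyUI install):

```shell
# Download the Q4_0 encoder into ComfyUI's text_encoders directory
huggingface-cli download Cordux/flux2-klein-4B-uncensored-text-encoder \
  qwen3-4b-abl-q4_0.gguf --local-dir ComfyUI/models/text_encoders
```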
39
+ ## Prompting Tips
40
+ - Use "wearing nothing" instead of "naked/nude" for best nude results
41
+ - The model looks for clothing descriptors - even "nothing" counts as one
42
+ - Clinical terms like "vagina" don't work better than colloquial terms
43
+ - For explicit content beyond nudity, you'll need an NSFW LoRA
44
+
45
+ ### Language-Style Mapping Research
46
+ I discovered Flux.2 Klein associates languages with specific styles: <br>
47
+ Japanese→anime portraits, German→illustrated art, etc.<br>
48
+ [Full study here](https://huggingface.co/Cordux/flux2-klein-4B-uncensored-text-encoder/blob/main/LANGUAGE_BIAS_STUDY.md)
49
+
50
+ ## Limitations
51
+ This removes prompt filtering but doesn't add visual knowledge. The base Flux2 Klein models have limited training on explicit content, so:
52
+ - ✅ Nudity works well
53
+ - ✅ Suggestive poses work
54
+ - ❌ Explicit anatomy requires a LoRA
55
+ - ❌ Sexual acts require a LoRA
56
+
57
+ ## Credits
58
+ - Based on [huihui-ai/Qwen3-4B-abliterated](https://huggingface.co/huihui-ai/Qwen3-4B-abliterated)
59
+ - Converted with [llama.cpp](https://github.com/ggml-org/llama.cpp)
gitattributes ADDED
@@ -0,0 +1,37 @@
1
+ *.7z filter=lfs diff=lfs merge=lfs -text
2
+ *.arrow filter=lfs diff=lfs merge=lfs -text
3
+ *.bin filter=lfs diff=lfs merge=lfs -text
4
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
5
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
6
+ *.ftz filter=lfs diff=lfs merge=lfs -text
7
+ *.gz filter=lfs diff=lfs merge=lfs -text
8
+ *.h5 filter=lfs diff=lfs merge=lfs -text
9
+ *.joblib filter=lfs diff=lfs merge=lfs -text
10
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
12
+ *.model filter=lfs diff=lfs merge=lfs -text
13
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
14
+ *.npy filter=lfs diff=lfs merge=lfs -text
15
+ *.npz filter=lfs diff=lfs merge=lfs -text
16
+ *.onnx filter=lfs diff=lfs merge=lfs -text
17
+ *.ot filter=lfs diff=lfs merge=lfs -text
18
+ *.parquet filter=lfs diff=lfs merge=lfs -text
19
+ *.pb filter=lfs diff=lfs merge=lfs -text
20
+ *.pickle filter=lfs diff=lfs merge=lfs -text
21
+ *.pkl filter=lfs diff=lfs merge=lfs -text
22
+ *.pt filter=lfs diff=lfs merge=lfs -text
23
+ *.pth filter=lfs diff=lfs merge=lfs -text
24
+ *.rar filter=lfs diff=lfs merge=lfs -text
25
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
26
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
27
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
28
+ *.tar filter=lfs diff=lfs merge=lfs -text
29
+ *.tflite filter=lfs diff=lfs merge=lfs -text
30
+ *.tgz filter=lfs diff=lfs merge=lfs -text
31
+ *.wasm filter=lfs diff=lfs merge=lfs -text
32
+ *.xz filter=lfs diff=lfs merge=lfs -text
33
+ *.zip filter=lfs diff=lfs merge=lfs -text
34
+ *.zst filter=lfs diff=lfs merge=lfs -text
35
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ qwen3-4b-abl-f16.gguf filter=lfs diff=lfs merge=lfs -text
37
+ qwen3-4b-abl-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
qwen3-4b-abl-q4_0.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:842d4ee7c62e8447b6bc122cb7616057c5da1333b0ef815b74812dc5db75047b
3
+ size 2369547008
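The three lines above are a standard Git LFS pointer (version, sha256 oid, size in bytes). A minimal parser sketch, assuming the space-separated key/value layout shown:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # oid is "sha256:<hex>"; size is the file size in bytes
    fields["size"] = int(fields["size"])
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:842d4ee7c62e8447b6bc122cb7616057c5da1333b0ef815b74812dc5db75047b
size 2369547008
"""
info = parse_lfs_pointer(pointer)
```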