---
license: apache-2.0
base_model: zai-org/GLM-4.6V-Flash
model_name: Elbaz-GLM-4.6V-Flash-PRISM
tags:
- abliteration
- prism
- vision-language-model
- vlm
- glm
- gguf
- quantized
language:
- en
library_name: transformers
pipeline_tag: image-text-to-text
---

<p align="center">
<img src="https://raw.githubusercontent.com/zai-org/GLM-V/refs/heads/main/resources/logo.svg" width="400"/>
</p>

# ELBAZ GLM-4.6V-FLASH PRISM

**GLM-4.6V-Flash: A 10B Dense Vision-Language Model**

[GLM-4.6V-Flash](https://huggingface.co/zai-org/GLM-4.6V-Flash) | [ZhipuAI](https://www.zhipuai.cn/)

## Introduction

**GLM-4.6V-Flash** is a 10.29B-parameter dense Vision-Language Model (VLM) with a 40-layer transformer architecture and an integrated vision encoder, capable of understanding text, images, and video.

## Model Description

This model is an **abliterated** version of [zai-org/GLM-4.6V-Flash](https://huggingface.co/zai-org/GLM-4.6V-Flash) that has had its refusal mechanisms removed using **PRISM (Projected Refusal Isolation via Subspace Modification)**. The model will respond to prompts that the original model would refuse.

**Key Specs:**
- 10.29B-parameter dense Vision-Language Model
- 40-layer transformer architecture
- Integrated vision encoder for image understanding
- 128K context length
- Supports text, image, and video inputs

### Motivation

This project is **research and development experimentation** into how large language models encode and enforce refusal behavior. It contributes to broader AI safety research by providing empirical data on where refusal mechanisms are localized and on the tradeoffs between safety and capability.

### Author

**Eric Elbaz (Ex0bit)**

## Model Tree

```
zai-org/GLM-4.6V-Flash (Base Model - BF16)
└── Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM (This Model)
    └── Elbaz-GLM-4.6V-Flash-PRISM-IQ4_XS.gguf
```

## Available Quantizations

| Quantization | Size | Description |
|--------------|------|-------------|
| IQ4_XS | 5.0 GB | Importance-weighted 4-bit, excellent quality |

The IQ4_XS quantization uses importance-weighted quantization, which provides better quality than standard Q4 quantizations at similar sizes. The embedding and output layers use Q6_K precision for optimal quality.
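
For reference, an importance-weighted quant of this kind can be produced with llama.cpp's imatrix tooling. A minimal sketch, not necessarily the exact recipe used for this release (the calibration and imatrix filenames are placeholders):

```bash
# Hypothetical recipe, for illustration only.
# 1. Build an importance matrix from a calibration text file.
./llama-imatrix -m model-bf16.gguf -f calibration.txt -o imatrix.dat

# 2. Quantize to IQ4_XS using that importance matrix.
./llama-quantize --imatrix imatrix.dat \
    model-bf16.gguf Elbaz-GLM-4.6V-Flash-PRISM-IQ4_XS.gguf IQ4_XS
```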

## Prompt Format

This model uses the GLM chat format with optional thinking/reasoning support:

```
[gMASK]<sop><|system|>
{system_prompt}<|user|>
{user_prompt}<|assistant|>
```

### Template Structure

| Component | Token/Format |
|-----------|--------------|
| System Start | `<\|system\|>` |
| User Start | `<\|user\|>` |
| Assistant Start | `<\|assistant\|>` |
| Thinking Start | `<think>` |
| Thinking End | `</think>` |
| End of Text | `<\|endoftext\|>` |

### Special Tokens

| Token | ID | Purpose |
|-------|-----|---------|
| `<\|system\|>` | 151335 | System prompt marker |
| `<\|user\|>` | 151336 | User message marker |
| `<\|assistant\|>` | 151337 | Assistant response marker |
| `<think>` | 151350 | Reasoning block start |
| `</think>` | 151351 | Reasoning block end |
| `<\|endoftext\|>` | 151329 | EOS token |
| `<\|begin_of_image\|>` | 151339 | Image input start |
| `<\|end_of_image\|>` | 151340 | Image input end |

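If you drive the model through a raw completion endpoint rather than a chat API, the prompt must be assembled by hand. A minimal sketch using the tokens above (the helper name is ours and purely illustrative; in practice prefer the tokenizer's `apply_chat_template`):

```python
def build_glm_prompt(user_prompt: str,
                     system_prompt: str = "You are a helpful assistant.") -> str:
    """Assemble a raw GLM chat prompt from the special tokens above."""
    return (
        "[gMASK]<sop>"
        f"<|system|>\n{system_prompt}"
        f"<|user|>\n{user_prompt}"
        "<|assistant|>"
    )

# Generation stops when the model emits <|endoftext|> (ID 151329);
# reasoning, if enabled, arrives wrapped in <think>...</think>.
print(build_glm_prompt("Hello!"))
```
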
## Technical Details

### Performance Impact

| Metric | Result |
|--------|--------|
| Refusal Bypass Rate | 100% |
| English Output Rate | 100% |
| KL Divergence | 0.0000 (no capability degradation) |
| Response Coherence | Detailed, technically accurate |

Testing shows that PRISM abliteration maintains full model coherence with no measurable capability degradation.
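
The KL divergence figure compares the next-token distributions of the base and abliterated models on benign prompts; a value of 0 means the two distributions are identical. A minimal sketch of such a check, not the released evaluation code (the prompt is a placeholder, and device/dtype handling is omitted):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "zai-org/GLM-4.6V-Flash"
abl_id = "Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM"

tok = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
abl = AutoModelForCausalLM.from_pretrained(abl_id, trust_remote_code=True)

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    p = F.log_softmax(base(ids).logits[0, -1], dim=-1)  # base next-token log-probs
    q = F.log_softmax(abl(ids).logits[0, -1], dim=-1)   # abliterated log-probs

# KL(base || abliterated): 0.0 means the edit left this distribution untouched
kl = F.kl_div(q, p, log_target=True, reduction="sum")
print(f"KL divergence: {kl.item():.4f}")
```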

## Quick Start

### Using with llama.cpp

```bash
# Download the model
huggingface-cli download Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM \
  Elbaz-GLM-4.6V-Flash-PRISM-IQ4_XS.gguf \
  --local-dir .

# Run inference
./llama-cli -m Elbaz-GLM-4.6V-Flash-PRISM-IQ4_XS.gguf \
  -p "[gMASK]<sop><|system|>
You are a helpful assistant. You MUST respond in English only.<|user|>
Your prompt here<|assistant|>
" \
  -n 2048 \
  --temp 0.7 \
  -ngl 999
```

### llama.cpp with llama-server

```bash
# Start the server
./llama-server -m Elbaz-GLM-4.6V-Flash-PRISM-IQ4_XS.gguf \
  --host 0.0.0.0 \
  --port 8080 \
  -ngl 999 \
  -c 32768

# Example API call
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a helpful assistant. You MUST respond in English only."},
      {"role": "user", "content": "Your prompt here"}
    ],
    "temperature": 0.7
  }'
```

### Using with Ollama

```bash
# Pull and run directly from Hugging Face
ollama pull hf.co/Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM
ollama run hf.co/Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM
```

> **Note:** The `hf.co/` prefix is required to pull from Hugging Face. Requires Ollama 0.3.0+.

### Using with Transformers (Full Weights)

```python
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
# torch_dtype="auto" keeps the checkpoint's native precision (BF16)
# instead of upcasting the 10B weights to FP32
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto", trust_remote_code=True
)

messages = [
    {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant. You MUST respond in English only."}]},
    {"role": "user", "content": [{"type": "text", "text": "Your prompt here"}]}
]

inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=2048, temperature=0.7, do_sample=True)
print(processor.decode(outputs[0], skip_special_tokens=False))
```

## PRISM Methodology

### Method: Projected Refusal Isolation via Subspace Modification

The model was abliterated using **PRISM**, an abliteration methodology that combines several principled techniques to remove refusal behavior while preserving model capabilities.

**Core Approach:**

1. **Per-Layer Refusal Direction** - Each layer gets its own refusal direction (`r = harmful_mean - harmless_mean`) instead of a single global direction
2. **Projected Direction Isolation** - Projects the refusal direction orthogonal to the harmless subspace to avoid the "helpfulness confound"
3. **Dynamic Layer-Wise Weight Kernel** - A bell-curve distribution concentrates the edit on the middle layers, where refusal is encoded (weights range from 0.45 to 2.24)
4. **Winsorization** - Clips extreme values for numerical stability
5. **KL Divergence Preservation** - Maintains 0.0000 KL divergence across all layers

**Key Innovation:** Per-layer refusal directions preserve layer-specific behavior better than global averaging approaches.
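
The released weights do not ship with the abliteration code, but the five steps above pin down the shape of the computation. A hedged PyTorch sketch, not the author's implementation (the tensor names, the exact bell-curve shape, and the clipping percentile are all our assumptions):

```python
import torch

# Assumes `harmful_mean` / `harmless_mean` hold per-layer mean
# residual-stream activations, shape [n_layers, d_model].

def prism_directions(harmful_mean: torch.Tensor,
                     harmless_mean: torch.Tensor,
                     clip_pct: float = 0.99) -> torch.Tensor:
    # 1. Per-layer refusal direction: r = harmful_mean - harmless_mean
    r = harmful_mean - harmless_mean

    # 2. Project r orthogonal to the harmless direction so the edit
    #    does not also strip general helpfulness
    h = harmless_mean / harmless_mean.norm(dim=-1, keepdim=True)
    r = r - (r * h).sum(dim=-1, keepdim=True) * h

    # 4. Winsorization: clip extreme components for numerical stability
    lim = r.abs().flatten().quantile(clip_pct).item()
    r = r.clamp(-lim, lim)

    return r / r.norm(dim=-1, keepdim=True)  # unit direction per layer

def layer_weights(n_layers: int, lo: float = 0.45, hi: float = 2.24) -> torch.Tensor:
    # 3. Bell-curve kernel peaking at the middle layers; the endpoints
    #    match the stated 0.45..2.24 range, the exact curve is assumed
    x = torch.linspace(-1.0, 1.0, n_layers)
    return lo + (hi - lo) * torch.exp(-4.0 * x ** 2)

# Applying the edit (illustrative): for each layer l with weight w[l] and
# unit direction r[l], ablate the refusal component from a weight matrix W:
#   W -= w[l] * torch.outer(r[l], r[l]) @ W
# Step 5 then verifies KL divergence against the base model at each layer.
```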

## Hardware Requirements

| Quantization | Min RAM/VRAM | Recommended | Hardware Examples |
|--------------|--------------|-------------|-------------------|
| IQ4_XS | 8 GB | 12+ GB | RTX 3060 12GB, RTX 4070, Apple M1/M2/M3/M4 |

### Tested Configurations

| Hardware | RAM/VRAM | Status |
|----------|----------|--------|
| NVIDIA RTX GPU | 12+ GB | Works |
| Apple Silicon | 16+ GB unified | Works |

**Note:** This is a relatively lightweight model that can run on consumer hardware with 12 GB+ of VRAM.

## Vision Capabilities

GLM-4.6V-Flash supports multimodal inputs:

- **Images**: Use `<|begin_of_image|><|image|><|end_of_image|>` tags
- **Videos**: Use `<|begin_of_video|><|video|><|end_of_video|>` tags

Example with image:
```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/image.jpg"},
            {"type": "text", "text": "What is in this image?"}
        ]
    }
]
```
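
From here the flow is the same as the text-only Transformers example: a sketch reusing the `processor` and `model` objects from above (the image path is a placeholder):

```python
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(processor.decode(new_tokens, skip_special_tokens=True))
```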

## Ethical Considerations

This model has been modified to reduce safety guardrails. Users are responsible for:

- Complying with all applicable laws and regulations
- Not using the model for illegal activities
- Understanding the potential risks of unrestricted AI responses
- Implementing appropriate safeguards in production environments

## License

Apache 2.0 (same as the base model, [zai-org/GLM-4.6V-Flash](https://huggingface.co/zai-org/GLM-4.6V-Flash))

## Citation

```bibtex
@misc{elbaz2025glm46vprism,
  author       = {Elbaz, Eric},
  title        = {Elbaz-GLM-4.6V-Flash-PRISM: An Abliterated GLM-4.6V Vision-Language Model},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM}}
}
```

## Acknowledgments

- [ZhipuAI](https://www.zhipuai.cn/) for GLM-4.6V-Flash
- [llama.cpp](https://github.com/ggerganov/llama.cpp) for the quantization tools

## Related Models

- [zai-org/GLM-4.6V-Flash](https://huggingface.co/zai-org/GLM-4.6V-Flash) - Base model
- [Ex0bit/Elbaz-Prime-Intellect-3_Prism_Abliterated](https://huggingface.co/Ex0bit/Elbaz-Prime-Intellect-3_Prism_Abliterated) - Abliterated INTELLECT-3

---

**Created by: Ex0bit (Eric Elbaz)**