Ex0bit committed
Commit 25d50c6 · verified · 1 Parent(s): ee93040

Update README.md

Files changed (1): README.md (+11 -9)
README.md CHANGED
@@ -65,14 +65,16 @@ extra_gated_button_content: Agree and Request Access

  ## Model Description

- This is **Ex0bit/GLM-4.7-Flash-PRISM**
- - PRISM (Projected Refusal Isolation via Subspace Modification) — State-of-the-art over-refusal/propaganda removal from LLMs
-
- **GLM-4.7-PRISM:** Unrestricted GLM-4.7 Model Access Access GLM-4.7-PRISM, an abliterated version of ZAI's flagship 358B parameter model with over-refusal and propoganda mechanisms removed.
-
- **What You Get:**
- - 358B Parameter MoE Architecture — Full-scale best-in-class agentic foundation model with ~32B active parameters.
- - PRISM (Projected Refusal Isolation via Subspace Modification) — State-of-the-art abliteration technique that removes over-refusal and propaganda behaviors from the model while preserving/enhancing capabilities
- - 131K Context Window — Extended context for complex tasks
- - Preserved Thinking — Multi-turn reasoning that persists across conversations
- - Strongest in-class Benchmarks — 73.8% SWE-bench, 87.4% τ²-Bench, 95.7% AIME 2025
+ This is **Ex0bit/GLM-4.7-Flash-PRISM**
+
+ **GLM-4.7-Flash-PRISM:** Unrestricted GLM-4.7-Flash Model Access
+
+ Access GLM-4.7-Flash-PRISM, an abliterated version of ZAI's efficient 30B-A3B MoE model with over-refusal mechanisms removed.
+
+ **What You Get:**
+
+ - **30B-A3B MoE Architecture** — Lightweight yet powerful Mixture-of-Experts model with 30 billion total parameters and ~3 billion active per token for fast, efficient inference
+ - **PRISM (Projected Refusal Isolation via Subspace Modification)** — State-of-the-art abliteration technique that removes over-refusal behaviors while preserving capabilities
+ - **128K Context Window** — Extended context for complex tasks and large codebases
+ - **Interleaved & Preserved Thinking** — Multi-turn reasoning that persists across conversations with per-turn thinking control
+ - **Strong In-Class Benchmarks** — 91.6% AIME 2025, 79.5% τ²-Bench, 59.2% SWE-bench Verified, 75.2% GPQA
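
A note on the "30B-A3B" claim in the updated card: in a Mixture-of-Experts layer, a router sends each token to only k of E experts, so per-token compute scales with the ~3B active parameters rather than the full 30B. Below is a minimal sketch of top-k routing; the expert count, k, and layer sizes are illustrative assumptions, not GLM-4.7-Flash's actual configuration.

```python
# Minimal top-k MoE routing sketch. Illustrates why a 30B-A3B model is cheap
# per token: each token runs through only k of E experts, so roughly (k/E) of
# the expert parameters are "active" per token. E=64 and k=8 are assumptions.
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=128, n_experts=64, k=8):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):                       # x: (tokens, d_model)
        scores = self.router(x)                 # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = weights.softmax(dim=-1)       # normalize over the chosen k
        out = torch.zeros_like(x)
        for t in range(x.shape[0]):             # naive per-token dispatch
            for w, e in zip(weights[t], idx[t]):
                out[t] += w * self.experts[int(e)](x[t])
        return out

moe = TinyMoE()
y = moe(torch.randn(5, 64))                     # only 8 of 64 experts run per token
```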
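
The card names PRISM as a subspace-modification abliteration technique, but this commit doesn't specify the procedure. As a rough sketch of the general idea behind that family of methods, the snippet below projects an estimated "refusal direction" out of a weight matrix so the edited layer can no longer write along it. The direction estimation, the choice of which matrices to edit, and the function name are all hypothetical, not the actual PRISM method.

```python
# Illustrative sketch of directional ablation (the family PRISM belongs to).
# NOT the actual PRISM procedure: the refusal direction r is assumed to have
# been estimated elsewhere (e.g., as the difference of mean activations on
# refused vs. complied prompts), and the layer selection is a design choice.
import torch

def ablate_direction(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Remove the component of W's output along unit direction r.

    Afterward, W' @ x has zero projection onto r for every input x,
    so this layer can no longer write along the refusal direction.
    """
    r = r / r.norm()  # unit refusal direction, shape (d_out,)
    # Projector onto the orthogonal complement of r:
    P = torch.eye(W.shape[0], dtype=W.dtype) - torch.outer(r, r)
    return P @ W

# Hypothetical usage on one layer's output projection:
d_out, d_in = 8, 4
W = torch.randn(d_out, d_in)
r = torch.randn(d_out)
W_edited = ablate_direction(W, r)
assert torch.allclose((r / r.norm()) @ W_edited, torch.zeros(d_in), atol=1e-5)
```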
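
Finally, for "Unrestricted GLM-4.7-Flash Model Access": assuming the repository ships standard safetensors weights and a chat template, loading should follow the usual transformers flow, as in the sketch below. How the per-turn thinking control mentioned in the card is exposed (e.g., a chat-template kwarg) isn't stated in this diff, so it's omitted here.

```python
# Minimal sketch: load and query GLM-4.7-Flash-PRISM via transformers.
# Assumes a standard HF repo layout with a chat template; depending on the
# transformers version, trust_remote_code=True may also be needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Ex0bit/GLM-4.7-Flash-PRISM"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Summarize the PRISM technique in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```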