---
license: other
license_name: prism-research
license_link: LICENSE.md
language:
- en
- zh
tags:
- glm4
- prism
pipeline_tag: text-generation
library_name: transformers
gated: auto
extra_gated_heading: Request Access to Ex0bit/GLM-4.7-Flash-PRISM
extra_gated_description: >
  **IMPORTANT:**

  **Step 1:** Submit the access request with your information below.
  **Step 2:** Complete the support donation at https://ko-fi.com/s/86882e8991

  Access to this limited edition model will be granted automatically after
  completion of **BOTH** steps above.

  Please provide your information below.
extra_gated_prompt: |
  By requesting access, you agree to:
  - Use this model for research or educational purposes only
  - Not redistribute the model weights without explicit permission
  - Cite this work appropriately in any publications
  - Report any issues or safety concerns to the author
extra_gated_fields:
  Full Name: text
  Organization/Affiliation: text
  Country: country
  Intended Use:
    type: select
    options:
      - Research
      - Education
      - Personal
      - label: Commercial (requires separate license)
        value: commercial
      - label: Other
        value: other
  Brief description of your intended use case: text
  I agree to the terms of use: checkbox
extra_gated_button_content: Agree and Request Access
---
# Ex0bit/GLM-4.7-Flash-PRISM
## Model Description
GLM-4.7-Flash-PRISM: Unrestricted (Zero Over-Refusals, Zero Propaganda) GLM-4.7-Flash Model Access

GLM-4.7-Flash-PRISM is an abliterated version of ZAI's efficient 30B-A3B MoE model, with its over-refusal mechanisms removed.
**What You Get:**
- 30B-A3B MoE Architecture — Lightweight yet powerful Mixture-of-Experts model with 30 billion total parameters and ~3 billion active per token for fast, efficient inference
- PRISM (Projected Refusal Isolation via Subspace Modification) — State-of-the-art abliteration technique that removes over-refusal behaviors while preserving capabilities (see the projection sketch after this list)
- 128K Context Window — Extended context for complex tasks and large codebases
- Interleaved & Preserved Thinking — Multi-turn reasoning that persists across conversations with per-turn thinking control
- Strong In-Class Benchmarks — 91.6% AIME 2025, 79.5% τ²-Bench, 59.2% SWE-bench Verified, 75.2% GPQA
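The exact PRISM recipe is not documented in this card, but abliteration techniques in this family typically estimate a "refusal direction" in the model's residual stream (for example, from the difference of mean activations on refused versus complied prompts) and project that direction out of the weight matrices that write into the residual stream. The sketch below illustrates only that rank-1 projection step; the function name, shapes, and single-direction assumption are illustrative rather than the exact procedure used for this release.

```python
import torch

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Project a (hypothetical) refusal direction out of a weight matrix.

    weight:    (d_model, d_in) matrix whose output is written into the
               residual stream (e.g. an attention or MLP output projection).
    direction: (d_model,) vector estimated as the refusal direction.
    """
    r = direction / direction.norm()  # normalize to a unit vector
    projector = torch.outer(r, r)     # rank-1 projector r r^T
    # (I - r r^T) W: the layer can no longer write any component
    # along the refusal direction into the residual stream.
    return weight - projector @ weight
```

Because the projection is baked into the weights rather than applied at inference time, the edit costs nothing at runtime: the modified checkpoint behaves like the original everywhere except along the ablated subspace.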
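Once your access request is approved, the model loads through the standard transformers API declared in the metadata above (`library_name: transformers`). A minimal sketch, assuming you are authenticated to the gated repository (e.g. via `huggingface-cli login`); the prompt and generation settings are illustrative, and `trust_remote_code=True` may be needed if the repository ships custom modeling code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ex0bit/GLM-4.7-Flash-PRISM"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick bf16/fp16 automatically where supported
    device_map="auto",    # shard the MoE weights across available devices
)

messages = [
    {"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```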