---
license: mit
base_model: MiniMaxAI/MiniMax-M2.1
tags:
- abliterated
- uncensored
- prism
- minimax
- moe
- finetune
language:
- en
- zh
pipeline_tag: text-generation
---

# MiniMax-M2.1-PRISM (UNCENSORED)

**MiniMax-M2.1 Uncensored PRISM Advanced Abliteration**
<div align="center">
  <a href="https://ko-fi.com/ericelbaz#tier17681523526070">
    <img src="https://cdn-uploads.huggingface.co/production/uploads/63adf1fa42fd3b8dbaeb0c92/Qe17Xd59xWbucl1ZOl1jF.png" width="50%"/>
  </a>
</div>

---

## 💜 Sponsor & Support the Work

**Every contribution directly funds my time & resources for the next major SOTA release.**


### Interested in Sponsoring?

If you're a company, research lab, or individual and want to see specific models or support this research at scale, I'd love to hear from you.

**Sponsorship opportunities include:**
- Priority abliteration of models
- Custom PRISM use-case configurations
- Early access to new releases
- Your logo/credit on model cards

📧 **Reach out**: Open a discussion on this repo or connect via Ko-fi

---

<div align="center">

*"Freedom of information isn't free — but together, we can make it accessible to all."*

**Thank you for believing in true Open AI.**

</div>

---

## Model Description

**MiniMax-M2.1-PRISM** is the fully uncensored version of MiniMax-M2.1, produced with our state-of-the-art PRISM pipeline (Projected Refusal Isolation via Subspace Modification), which removes refusal behaviors while preserving, and in places enhancing, the model's full capabilities.

### Base Model: MiniMax-M2.1

MiniMax-M2.1 is an open-source agentic language model designed for robust performance in:
- Coding and software engineering
- Tool use and multi-step reasoning
- Instruction following
- Long-horizon planning
- Multilingual capabilities

**Architecture**: 229B parameters, 62 layers, 256 experts (8 active per token)
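
As a rough illustration of the routing economics implied by these numbers, the fraction of experts active per token follows directly from the counts above. A minimal sketch (the per-expert parameter sizes are not published here, so only the expert-count ratio is computed):

```python
# Expert activation ratio for MiniMax-M2.1's MoE routing,
# using the counts stated above: 256 experts, 8 active per token.
TOTAL_EXPERTS = 256
ACTIVE_EXPERTS = 8

active_fraction = ACTIVE_EXPERTS / TOTAL_EXPERTS
print(f"{active_fraction:.4%} of experts active per token")  # 3.1250%
```

This sparsity is why the model can carry 229B total parameters while keeping per-token compute far lower than a dense model of the same size.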

---

## PRISM Methodology

### Method: Projected Refusal Isolation via Subspace Modification

This model was abliterated using **PRISM**, a state-of-the-art abliteration methodology that combines multiple principled techniques to remove refusal behaviors effectively while preserving and enhancing model capabilities.

---

## Performance Benchmarks

### Base Model Performance

| Benchmark | Score |
|-----------|-------|
| SWE-bench Verified | 74.0 |
| SWE-bench Multilingual | 72.5 |
| VIBE Average | 88.6 |
| MMLU-Pro | 88.0 |
| GPQA-D | 83.0 |
| AIME25 | 83.0 |

### PRISM Abliteration Results

| Metric | Result |
|--------|--------|
| Adversarial Bench Prompts Responded | 4096/4096 (100%) |
| Benign + Long Chain Coherence | 100% |
| Response Quality | Full technical accuracy validated |

Our testing shows that PRISM abliteration maintains full model coherence with no capability degradation, along with MMLU increases of 5-8%.

---

## Available Formats (contact for full tensors or additional quant work)

| Format | Size | Description |
|--------|------|-------------|
| GGUF IQ1_S | ~43 GB | Quantized with importance matrix |
| Safetensors (BF16) | ~426 GB | Full precision, 92 shards | 

---

## Recommended Inference Parameters

```python
temperature = 1.0
top_p = 0.95
top_k = 40
```

### Default System Prompt
```
You are a helpful assistant.
```
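
A minimal sketch of combining the recommended sampling parameters with the default system prompt in a request to an OpenAI-compatible endpoint, as served by vLLM or SGLang. The endpoint URL, model name, and user message are placeholders; `top_k` is passed as an extra sampling parameter, which vLLM and SGLang generally accept but the strict OpenAI schema does not define:

```python
import json
import urllib.request

# Recommended sampling parameters and default system prompt from above.
payload = {
    "model": "MiniMax-M2.1-PRISM",  # placeholder model name
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain mixture-of-experts routing."},
    ],
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 40,  # extra sampling parameter; not part of the strict OpenAI schema
}

req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",  # placeholder endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # uncomment against a live server
```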

---

## Recommended Inference Frameworks

1. **SGLang** (recommended for full precision)
2. **vLLM** (recommended for full precision)
3. **llama.cpp** (recommended for GGUF quantized)
4. **Transformers**

### llama.cpp Example

```bash
./llama-cli -m MiniMax-M2.1-PRISM-IQ1_S.gguf -ngl 99 --temp 1.0 --ctx-size 4096
```

---

## Ethical Considerations

This model has been modified to reduce safety guardrails. Users are responsible for:

- Complying with all applicable laws and regulations
- Not using the model for illegal activities
- Understanding the potential risks of unrestricted AI responses
- Implementing appropriate safeguards in production environments

**Motivation**: This project exists as **research and development experimentation** into how large language models encode and enforce refusal behaviors. It contributes to broader AI safety research by providing empirical data on where refusal mechanisms are localized and on the tradeoffs between safety and capability.

---

## License

This model inherits the [Modified-MIT License](https://github.com/MiniMax-AI/MiniMax-M2.1/blob/main/LICENSE) from the base MiniMax-M2.1 model.

---

## Credits

- **Base Model**: [MiniMax-M2.1](https://huggingface.co/MiniMaxAI/MiniMax-M2.1) by MiniMax AI
- **PRISM Abliteration**: Ex0bit
- **Quantization**: Using [llama.cpp](https://github.com/ggml-org/llama.cpp) with unsloth imatrix

---

## Support

If you find this work useful, please consider supporting development so I can continue putting out the best models for the community:

[![Support me on Ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/ericelbaz#tier17681523526070)

---

## Contact

For questions or issues, please open an issue on this repository.