---
license: other
license_name: prism-research
license_link: LICENSE.md
language:
- en
- zh
tags:
- minimax
- prism
- moe
- reasoning
- coding
- agentic
- abliterated
pipeline_tag: text-generation
library_name: transformers
base_model:
- MiniMaxAI/MiniMax-M2.5
base_model_relation: finetune
---

[![Parameters](https://img.shields.io/badge/Parameters-MoE-blue)]()
[![Architecture](https://img.shields.io/badge/Architecture-MoE-green)]()
[![Context](https://img.shields.io/badge/Context-1M+-orange)]()
[![License](https://img.shields.io/badge/License-PRISM--Research-purple)]()


<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/63adf1fa42fd3b8dbaeb0c92/shxznHWnvppRhT_yKrsdP.png" width="400"/>
</p>


# MiniMax-M2.5-PRISM-LITE

A PRISM-LITE version of [Ex0bit/MiniMax-M2.5-PRISM-PRO](https://hf.co/Ex0bit/MiniMax-M2.5-PRISM-PRO) that suppresses over-refusal and propaganda mechanisms while preserving role-following, built with our PRISM pipeline.

The PRISM-PRO version is available for purchase here: **https://ko-fi.com/s/0a23d1b9a5**

For fully custom-trained PRISM versions or raw tensor access, reach out at https://ko-fi.com/ex0bit.

<div align="center">

### β˜• Support Our Work

If you enjoy our work and find it useful, please consider sponsoring or supporting us!

[![Ko-fi](https://img.shields.io/badge/Ko--fi-Support%20Us-ff5e5b?logo=ko-fi&logoColor=white)](https://ko-fi.com/ex0bit)

| Option | Description |
|--------|-------------|
| [**PRISM PRO VIP Membership**](https://ko-fi.com/summary/6bae206c-a751-4868-8dc7-f531afd1fb4c) | Access to all PRISM models |
| **Bitcoin** | `bc1qarq2pyn4psjpcxzp2ghgwaq6y2h4e53q232x8r` |

![image](https://cdn-uploads.huggingface.co/production/uploads/63adf1fa42fd3b8dbaeb0c92/Psgbl1TgyDok__C7AMQog.png)

</div>

---

## Model Highlights

- **PRISM Ablation** β€” State-of-the-art technique that removes over-refusal behaviors while preserving model capabilities
- **SOTA Coding Performance** β€” 80.2% on SWE-Bench Verified, 51.3% on Multi-SWE-Bench, 76.3% on BrowseComp (with context management)
- **Frontier Agentic Capabilities** β€” Industry-leading performance in tool use, search, and complex multi-step tasks
- **Efficient Reasoning** β€” Trained with RL to reason efficiently and decompose tasks optimally, 37% faster than M2.1
- **Cost-Effective** β€” $1 for continuous operation at 100 tok/s for an hour; $0.30 at 50 tok/s
- **Modified-MIT Base License** β€” Based on MiniMax's open-weight release

## Base Model Architecture

MiniMax-M2.5 is a Mixture-of-Experts (MoE) model extensively trained with reinforcement learning across hundreds of thousands of complex real-world environments.

| Specification | Value |
|---------------|-------|
| Architecture | Sparse Mixture-of-Experts (MoE) |
| Training | Extensive RL in 200K+ real-world environments |
| Languages | 10+ (Go, C, C++, TypeScript, Rust, Kotlin, Python, Java, JavaScript, PHP, Lua, Dart, Ruby) |
| Inference Speed | 100 tok/s (Lightning) / 50 tok/s (Standard) |
| Library | `transformers` |

## Benchmarks (Base Model)

### Coding

| Benchmark | MiniMax-M2.5 | Claude Opus 4.6 | Gemini 3 Pro | GPT-5.2 |
|-----------|-------------|-----------------|-------------|---------|
| SWE-Bench Verified | **80.2** | 78.9 | 74.0 | 72.6 |
| Multi-SWE-Bench | **51.3** | 50.8 | β€” | β€” |
| SWE-Bench Multilingual | **55.6** | β€” | β€” | β€” |
| Terminal-Bench 2.0 | 51.5 | 52.1 | β€” | β€” |

### Search & Tool Calling

| Benchmark | MiniMax-M2.5 | Claude Opus 4.6 | Gemini 3 Pro | GPT-5.2 |
|-----------|-------------|-----------------|-------------|---------|
| BrowseComp | **76.3** | 71.2 | 62.4 | 57.8 |

### Reasoning & Knowledge

| Benchmark | MiniMax-M2.5 | Claude Opus 4.6 | Gemini 3 Pro | GPT-5.2 |
|-----------|-------------|-----------------|-------------|---------|
| AIME25 | 86.3 | 95.6 | 96.0 | 98.0 |
| GPQA-D | 85.2 | 90.0 | 91.0 | 90.0 |
| HLE w/o tools | 19.4 | 30.7 | 37.2 | 31.4 |
| SciCode | 44.4 | 52.0 | 56.0 | 52.0 |
| IFBench | **70.0** | 53.0 | 70.0 | 75.0 |

## Usage

### llama.cpp (GGUF)

Build the latest master of [llama.cpp](https://github.com/ggml-org/llama.cpp) and run:

```bash
~/llama.cpp/build/bin/llama-cli \
  -m ../outputs/MiniMax-M2.5-PRISM-LITE-[QUANT].gguf \
  --jinja \
  -ngl 999 \
  --repeat_penalty 1.15 \
  --temp 1.0 \
  --top_p 0.95 \
  --top_k 40
```


> Replace `[QUANT]` with your quantization level (e.g. `Q8_0`, etc.).

### Recommended Parameters

| Use Case | Temperature | Top-P | Top-K | Repeat Penalty | Max New Tokens |
|----------|-------------|-------|-------|----------------|----------------|
| Reasoning / Coding | 1.0 | 0.95 | 40 | 1.15 | 32768 |
| General Chat | 0.6 | 0.95 | 40 | 1.15 | 4096 |
| Agentic / Tool Use | 1.0 | 0.95 | 40 | 1.15 | 32768 |
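When serving the model with `transformers`, the table above can be transcribed directly into generation kwargs. A minimal sketch follows; the preset names and the `preset()` helper are illustrative, not part of this release, and the dictionary keys match the standard `GenerationConfig` parameter names.

```python
# Sampling presets transcribed from the "Recommended Parameters" table above.
# Preset names ("reasoning", "chat", "agentic") are illustrative.
PRESETS = {
    "reasoning": {"temperature": 1.0, "top_p": 0.95, "top_k": 40,
                  "repetition_penalty": 1.15, "max_new_tokens": 32768},
    "chat":      {"temperature": 0.6, "top_p": 0.95, "top_k": 40,
                  "repetition_penalty": 1.15, "max_new_tokens": 4096},
    "agentic":   {"temperature": 1.0, "top_p": 0.95, "top_k": 40,
                  "repetition_penalty": 1.15, "max_new_tokens": 32768},
}

def preset(use_case: str) -> dict:
    """Return a copy of the sampling preset for the given use case."""
    return dict(PRESETS[use_case])

print(preset("chat"))
```

The resulting dict can be splatted into a call such as `model.generate(**inputs, **preset("chat"))`, since the keys mirror `transformers` generation parameters.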



## Model Versions

| Version | Description | Access |
|---------|-------------|--------|
| **PRISM-LITE** | Abliterated with PRISM-LITE pipeline β€” removes over-refusal while preserving core capabilities | Free on Hugging Face |
| **PRISM-PRO** | Full PRISM-PRO ablation β€” production-level suppression of propaganda/refusal mechanisms with maximum capability retention | [Ko-fi](https://ko-fi.com/s/0a23d1b9a5) |

## License

This model is released under the [PRISM Research License](LICENSE.md).

The base model [MiniMax-M2.5](https://huggingface.co/MiniMaxAI/MiniMax-M2.5) is released under a [Modified-MIT License](https://github.com/MiniMax-AI/MiniMax-M2.5/blob/main/LICENSE).

## Acknowledgments

Based on [MiniMax-M2.5](https://huggingface.co/MiniMaxAI/MiniMax-M2.5) by [MiniMax AI](https://www.minimax.io).