# MiniMax-M2.1-PRISM

**An abliterated version of MiniMax-M2.1 using the PRISM methodology**

[](https://ko-fi.com/ericelbaz)

---

## Model Description

**MiniMax-M2.1-PRISM** is an abliterated version of MiniMax-M2.1, processed using PRISM (Projected Refusal Isolation via Subspace Modification) to remove refusal behaviors while preserving full model capabilities.

### Base Model: MiniMax-M2.1

MiniMax-M2.1 is an open-source agentic language model designed for robust performance in:
- Coding and software engineering
- Tool use and multi-step reasoning
- Instruction following
- Long-horizon planning
- Multilingual capabilities

**Architecture**: 229B parameters, 62 layers, 256 experts (8 active per token)

---

## PRISM Methodology

### Method: Projected Refusal Isolation via Subspace Modification

This model was abliterated using **PRISM v5**, a state-of-the-art abliteration methodology that combines multiple principled techniques to remove refusal behaviors while preserving model capabilities.

**Formula**: `W' = W - weight * (d ⊗ d) @ W`

Where:
- `W` = Original weight matrix
- `d` = Refusal direction vector (unit normalized)
- `weight` = Layer-specific abliteration strength
- `W'` = Modified weight matrix

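As an illustrative sketch only (not the actual PRISM implementation), the rank-1 update above can be written in NumPy; the helper name `prism_ablate` is hypothetical:

```python
import numpy as np

def prism_ablate(W, d, weight):
    """Apply W' = W - weight * (d ⊗ d) @ W: attenuate the component
    of W's rows that lies along the refusal direction d."""
    d = d / np.linalg.norm(d)               # unit-normalize the refusal direction
    return W - weight * np.outer(d, d) @ W  # subtract the rank-1 projection, scaled
```

With `weight = 1.0` this is exactly the orthogonal projection `(I - d dᵀ) W`, which zeroes the refusal component; weights above 1.0 (as used near the peak layer below) overshoot past the projection.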
### Abliteration Parameters
| Parameter | Value |
|-----------|-------|
| Base Model | QuixiAI/MiniMax-M2.1-bf16 |
| Total Layers | 62 |
| Target Layers | 16-46 (31 layers) |
| Peak Layer | 31 |
| Max Weight | 3.0 |
| Min Weight | 0.5 |

### Weight Distribution
The abliteration strength follows a triangular distribution centered on the peak layer:
- Layers 16-31: Weight increases from 0.5 to 3.0
- Layers 31-46: Weight decreases from 3.0 to 0.5

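A minimal sketch of this schedule, assuming linear interpolation between the endpoints (the exact PRISM ramp may differ):

```python
def layer_weight(layer, lo=16, hi=46, peak=31, w_min=0.5, w_max=3.0):
    """Triangular abliteration schedule: w_min at layers lo and hi,
    rising linearly to w_max at the peak layer; 0 outside [lo, hi]."""
    if layer < lo or layer > hi:
        return 0.0  # layers outside the target range are left untouched
    if layer <= peak:
        t = (layer - lo) / (peak - lo)  # rising edge
    else:
        t = (hi - layer) / (hi - peak)  # falling edge
    return w_min + t * (w_max - w_min)
```

For example, `layer_weight(16)` and `layer_weight(46)` give 0.5, while `layer_weight(31)` gives the peak value 3.0.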
---

## Performance Benchmarks

### Base Model Performance

| Benchmark | Score |
|-----------|-------|
| SWE-bench Verified | 74.0 |
| SWE-bench Multilingual | 72.5 |
| VIBE Average | 88.6 |
| MMLU-Pro | 88.0 |
| GPQA-D | 83.0 |
| AIME25 | 83.0 |

### PRISM Abliteration Results
| Metric | Result |
|--------|--------|
| Adversarial Prompts Responded | 20/20 (100%) |
| Benign Coherence | 100% |
| Response Quality | Full technical accuracy preserved |

In these tests, the PRISM-abliterated model remained fully coherent, with no measurable capability degradation observed.

---

## Available Formats

| Format | Size | Description |
|--------|------|-------------|
| Safetensors (BF16) | ~426 GB | Full precision, 92 shards |
| GGUF IQ1_S | ~43 GB | Quantized with importance matrix |

---

## Recommended Inference Parameters

```python
temperature = 1.0
top_p = 0.95
top_k = 40
```

### Default System Prompt
```
You are a helpful assistant.
```

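When the model is served behind an OpenAI-compatible endpoint (as SGLang and vLLM provide), the parameters and system prompt above can be sent per request. A hedged sketch; the URL, port, and model name are placeholders, and `top_k` is a vLLM/SGLang extension rather than a standard OpenAI field:

```python
import json
import urllib.request

# Placeholder endpoint and model name; adjust to your deployment.
payload = {
    "model": "MiniMax-M2.1-PRISM",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain rank-1 projections briefly."},
    ],
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 40,  # accepted by vLLM/SGLang as an extra sampling parameter
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # requires a running server
```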
---

## Recommended Inference Frameworks

1. **SGLang** (recommended for full precision)
2. **vLLM** (recommended for full precision)
3. **llama.cpp** (recommended for GGUF quantized)
4. **Transformers**

### llama.cpp Example
```bash
./llama-cli -m MiniMax-M2.1-PRISM-IQ1_S.gguf -ngl 99 -i -cnv --temp 1.0 --top-p 0.95 --top-k 40 --ctx-size 4096
```

---

## Ethical Considerations

This model has been modified to reduce safety guardrails. Users are responsible for:

- Complying with all applicable laws and regulations
- Not using the model for illegal activities
- Understanding the potential risks of unrestricted AI responses
- Implementing appropriate safeguards in production environments

**Motivation**: This project is **research and development experimentation** into how large language models encode and enforce refusal behaviors. By providing empirical data on where refusal mechanisms are localized and on the tradeoffs between safety and capability, it aims to contribute to broader AI safety research.

---

## License

This model inherits the [Modified-MIT License](https://github.com/MiniMax-AI/MiniMax-M2.1/blob/main/LICENSE) from the base MiniMax-M2.1 model.

---

## Credits

- **Base Model**: [MiniMax-M2.1](https://huggingface.co/MiniMaxAI/MiniMax-M2.1) by MiniMax AI
- **BF16 Conversion**: [QuixiAI/MiniMax-M2.1-bf16](https://huggingface.co/QuixiAI/MiniMax-M2.1-bf16) by Eric Hartford
- **PRISM Abliteration**: Ex0bit
- **Quantization**: [llama.cpp](https://github.com/ggml-org/llama.cpp) with an unsloth importance matrix

---

## Support

If you find this work useful, consider supporting development:

[](https://ko-fi.com/ericelbaz)

---

## Contact

For questions or issues, please open an issue on this repository.