# MiniMax-M2.1-REAP-50-W4A16 (GGUF Q4_0)

This repository hosts a GGUF conversion of 0xSero/MiniMax-M2.1-REAP-50-W4A16, quantized with the legacy Q4_0 scheme for llama.cpp-compatible runtimes.
## Files

- `0xSero-MiniMax-M2.1-REAP-50-W4A16-Q4_0.gguf`
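As a quick sanity check after downloading, the file's fixed 24-byte GGUF header can be inspected directly. This is a minimal sketch based on the published GGUF header layout (magic `GGUF`, uint32 version, uint64 tensor count, uint64 metadata key/value count); the synthetic bytes at the end stand in for the first 24 bytes of the real file.

```python
import struct

GGUF_MAGIC = b"GGUF"  # magic bytes at offset 0; all header fields are little-endian

def parse_gguf_header(data: bytes) -> dict:
    """Parse the fixed 24-byte GGUF header: magic, version,
    tensor count, and metadata key/value count."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != GGUF_MAGIC:
        raise ValueError(f"not a GGUF file (magic={magic!r})")
    return {"version": version, "n_tensors": n_tensors, "n_kv": n_kv}

# Synthetic header for illustration; a real check would read the first
# 24 bytes of the downloaded .gguf file instead:
fake = struct.pack("<4sIQQ", b"GGUF", 3, 10, 5)
print(parse_gguf_header(fake))  # {'version': 3, 'n_tensors': 10, 'n_kv': 5}
```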
## Usage (llama.cpp)

```sh
./main -m 0xSero-MiniMax-M2.1-REAP-50-W4A16-Q4_0.gguf -p "Hello"
```

Note that recent llama.cpp builds have renamed the `main` binary to `llama-cli`; substitute accordingly.
## Conversion

Converted locally with a Python CLI that wraps llama.cpp's `convert_hf_to_gguf.py` and emits legacy Q4_0.
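The wrapper script itself is not included here. An equivalent manual pipeline using stock llama.cpp tools would look roughly like this (paths and filenames are placeholders, not the exact commands used):

```sh
# Convert the Hugging Face checkpoint to an intermediate F16 GGUF
python convert_hf_to_gguf.py /path/to/MiniMax-M2.1-REAP-50-W4A16 \
    --outfile model-f16.gguf --outtype f16

# Re-quantize the F16 GGUF to legacy Q4_0
./llama-quantize model-f16.gguf model-q4_0.gguf Q4_0
```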
## License

Please refer to the original model repositories for licensing and usage terms:

- https://huggingface.co/MiniMaxAI/MiniMax-M2.1
- https://huggingface.co/0xSero/MiniMax-M2.1-REAP-50-W4A16
## Model tree

- Repository: runfuture/MiniMax-M2.1-REAP-50-W4A16-GGUF
- Base model: MiniMaxAI/MiniMax-M2.1