---
base_model:
- MiniMaxAI/MiniMax-M2.1
language:
- en
library_name: transformers
license: other
license_name: modified-mit
license_link: https://github.com/MiniMax-AI/MiniMax-M2.1/blob/main/LICENSE
---

# Model Overview

- **Model Architecture:** MiniMaxM2ForCausalLM
- **Input:** Text
- **Output:** Text
- **Supported Hardware Microarchitecture:** AMD MI300, MI350/MI355
- **ROCm:** 7.0
- **PyTorch:** 2.8.0
- **Transformers:** 4.57.1
- **Operating System(s):** Linux
- **Inference Engine:** [SGLang](https://docs.sglang.ai/) / [vLLM](https://docs.vllm.ai/en/latest/)
- **Model Optimizer:** [AMD-Quark](https://quark.docs.amd.com/latest/index.html) (v0.11)
- **Weight quantization:** OCP MXFP4, Static
- **Activation quantization:** OCP MXFP4, Dynamic

# Model Quantization

The model was quantized from [QuixiAI/MiniMax-M2.1-bf16](https://huggingface.co/QuixiAI/MiniMax-M2.1-bf16) using [AMD-Quark](https://quark.docs.amd.com/latest/index.html). Weights are quantized statically and activations dynamically, both to OCP MXFP4.

**Quantization script:**

```
cd Quark/examples/torch/language_modeling/llm_ptq/
export exclude_layers="lm_head *block_sparse_moe.gate* *self_attn*"
python3 quantize_quark.py --model_dir $MODEL_DIR \
                          --quant_scheme mxfp4 \
                          --num_calib_data 128 \
                          --exclude_layers $exclude_layers \
                          --skip_evaluation \
                          --multi_gpu \
                          --trust_remote_code \
                          --model_export hf_format \
                          --output_dir $output_dir
```

For further details or issues, please refer to the [AMD-Quark documentation](https://quark.docs.amd.com/latest/index.html) or contact the respective developers.
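As an optional sanity check on the exported checkpoint, you can inspect the `quantization_config` that the `hf_format` export records in `config.json`. A minimal sketch, assuming a standard Hugging Face checkpoint layout (exact field names vary by Quark version, and `ckpt_dir` is a hypothetical local path):

```python
import json
from pathlib import Path

# Point this at the --output_dir used above (hypothetical local path)
ckpt_dir = Path("MiniMax-M2.1-MXFP4")

# The hf_format export is expected to write a standard config.json
# alongside the weights.
config = json.loads((ckpt_dir / "config.json").read_text())

# Print the recorded quantization settings (scheme, excluded layers, ...)
print(json.dumps(config.get("quantization_config", {}), indent=2))
```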
# Evaluation

The model was evaluated on the GSM8K benchmark using the [vLLM](https://github.com/vllm-project/vllm/tree/v0.13.0) framework.

### Accuracy

| Benchmark | QuixiAI/MiniMax-M2.1-bf16 | amd/MiniMax-M2.1-MXFP4 (this model) | Recovery |
|---|---|---|---|
| gsm8k (flexible-extract) | 0.9356 | 0.9348 | 99.91% |
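Recovery appears to be the quantized score expressed as a fraction of the BF16 baseline; a quick check of the arithmetic:

```python
bf16_baseline = 0.9356  # QuixiAI/MiniMax-M2.1-bf16
mxfp4_score = 0.9348    # amd/MiniMax-M2.1-MXFP4 (this model)

# Recovery = quantized accuracy / baseline accuracy
print(f"{mxfp4_score / bf16_baseline:.2%}")  # 99.91%
```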
### Reproduction

The GSM8K results were obtained with the vLLM framework, starting from the Docker image `rocm/vllm:rocm7.0.0_vllm_0.11.2_20251210` and reinstalling vLLM from source inside the container.

#### Preparation in container

```
# Reinstall vLLM
pip uninstall vllm -y
git clone https://github.com/vllm-project/vllm.git
cd vllm
git checkout v0.13.0
pip install -r requirements/rocm.txt
python setup.py develop
cd ..
```

#### Launching the server

```
VLLM_ROCM_USE_AITER=1 \
VLLM_DISABLE_COMPILE_CACHE=1 \
vllm serve "$MODEL" \
    --tensor-parallel-size 4 \
    --trust-remote-code \
    --max-model-len 32768 \
    --port 8899
```

#### Evaluating the model in a new terminal

```
python vllm/tests/evals/gsm8k/gsm8k_eval.py --host http://127.0.0.1 --port 8899 --num-questions 1000 --save-results logs
```

# License

Modifications Copyright (c) 2026 Advanced Micro Devices, Inc. All rights reserved.