Update README.md
README.md CHANGED
@@ -8,12 +8,12 @@ tags:
 - vLLM
 - AWQ
 base_model:
--
+- MiniMaxAI/MiniMax-M2.5
 base_model_relation: quantized
 
 ---
 # MiniMax-M2.5-AWQ
-Base model: [
+Base model: [MiniMaxAI/MiniMax-M2.5](https://huggingface.co/MiniMaxAI/MiniMax-M2.5)
 
 This repo quantizes the model using data-free quantization (no calibration dataset required).
 

@@ -44,7 +44,7 @@ export VLLM_USE_FLASHINFER_SAMPLER=0
 export OMP_NUM_THREADS=4
 
 vllm serve \
-    __YOUR_PATH__/
+    __YOUR_PATH__/QuantTrio/MiniMax-M2.5-AWQ \
     --served-model-name MY_MODEL \
     --swap-space 16 \
     --max-num-seqs 32 \

@@ -73,8 +73,8 @@ vllm serve \
 
 ### 【Model Download】
 ```python
-from
-snapshot_download('
+from huggingface_hub import snapshot_download
+snapshot_download('QuantTrio/MiniMax-M2.5-AWQ', cache_dir="your_local_path")
 ```
 
 ### 【Overview】
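The README's central claim in this diff is that the weights were quantized without any calibration dataset. As a hedged illustration of what weight-only, data-free quantization means (scales derived from the weights themselves, with no activation statistics), here is a minimal round-to-nearest INT4 group-quantization sketch in PyTorch. It is a simplified assumption-based sketch, not the actual pipeline used to produce QuantTrio/MiniMax-M2.5-AWQ.

```python
# Minimal sketch of weight-only, group-wise round-to-nearest INT4 quantization.
# Illustrates the "data-free" idea only: scales come from the weights alone,
# so no calibration dataset is needed. NOT the exact method used for this repo.
import torch

def quantize_dequantize_int4(weight: torch.Tensor, group_size: int = 128) -> torch.Tensor:
    """Symmetric 4-bit RTN quantization over groups of `group_size` input channels."""
    out_features, in_features = weight.shape
    assert in_features % group_size == 0
    w = weight.reshape(out_features, in_features // group_size, group_size)
    # Per-group scale from the max absolute weight; symmetric int4 range is [-8, 7].
    scale = w.abs().amax(dim=-1, keepdim=True) / 7.0
    q = torch.clamp(torch.round(w / scale), -8, 7)
    return (q * scale).reshape(out_features, in_features)

# Example: mean absolute quantization error on a random weight matrix.
w = torch.randn(4096, 4096)
print((w - quantize_dequantize_int4(w)).abs().mean())
```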
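Once the `vllm serve` command from the second hunk is running, the model can be queried through vLLM's OpenAI-compatible API. A minimal sketch, assuming the server's default address of http://localhost:8000/v1 and the `--served-model-name MY_MODEL` shown above:

```python
# Query the vLLM server started above via its OpenAI-compatible endpoint.
# Assumes the default host/port; adjust base_url if you passed --host/--port.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="MY_MODEL",  # must match --served-model-name
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```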