Update README.md
README.md CHANGED
@@ -48,7 +48,7 @@ Weight quantization also reduces disk size requirements by approximately 50%.
 This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
 
 ```bash
-python quantize.py --
+python quantize.py --model_path mistralai/Devstral-Small-2507 --calib_size 512 --dampening_frac 0.05
 ```
 
 ```python