Only non-shared experts within transformer blocks are compressed. Weights are quantized to 4 bits with group size 128.

Model checkpoint is saved in [compressed_tensors](https://github.com/neuralmagic/compressed-tensors) format.

| Model | Experts quantized | Attention blocks quantized | Size (GB) |
| ------ | --------- | --------- | --------- |
| [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) | ❌ | ❌ | 671 |
| [ISTA-DASLab/DeepSeek-R1-GPTQ-4b-128g-experts](https://huggingface.co/ISTA-DASLab/DeepSeek-R1-GPTQ-4b-128g-experts) | ✅ | ❌ | 346 |
| [cognitivecomputations/DeepSeek-R1-AWQ](https://huggingface.co/cognitivecomputations/DeepSeek-R1-AWQ) | ✅ | ✅ | 340 |
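As a quick sanity check on the checkpoint sizes (trivial arithmetic, not a figure from the authors): quantizing only the expert weights takes the checkpoint from 671 GB down to 346 GB, roughly a 1.94× reduction.

```bash
# Rough arithmetic on the published checkpoint sizes:
# 671 GB (unquantized) vs 346 GB (4-bit experts only).
awk 'BEGIN { printf "%.2fx\n", 671 / 346 }'
```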
### Evaluation

This model was evaluated on the OpenLLM v1 benchmarks and reasoning tasks (AIME-24, GPQA-Diamond, MATH-500).
For reasoning tasks we estimate pass@1 based on 10 runs with different seeds.

`OpenLLM v1`
| Model | ARC-c | GSM8k | HellaSwag | MMLU | TruthfulQA | Winogrande | Average | Recovery |
|-------------------------------|---------------|-------|-----------|------|------------|------------|---------------|----------|
| deepseek-ai/DeepSeek-R1 | 72.53 | 95.91 | 89.83 | 87.22 | 59.28 | 82.00 | 81.04 | 100.00 |
| cognitivecomputations/DeepSeek-R1-AWQ | 73.12 | 95.15 | 89.07 | 86.86 | 60.09 | 82.32 | 81.10 | 100.07 |
| ISTA-DASLab/DeepSeek-R1-GPTQ-4b-128g-act_order-mse_scale-experts (this) | 72.53 | 95.68 | 89.36 | 86.99 | 59.77 | 83.35 | 81.28 | 100.30 |

`Reasoning tasks`
| Model | AIME-2024 pass@1 | MATH-500 pass@1 | GPQA-Diamond pass@1 | Average | Recovery |
|-----------------------------------------|------------------|-----------------|---------------------|---------|----------|
| deepseek-ai/DeepSeek-R1 | 78.34 | 97.24 | 73.38 | 82.99 | 100.00 |
| cognitivecomputations/DeepSeek-R1-AWQ | 70.67 | 93.64 | 70.46 | 78.25 | 94.29 |
| ISTA-DASLab/DeepSeek-R1-GPTQ-4b-128g-act_order-mse_scale-experts (this) | 77.00 | 97.08 | 71.92 | 82.00 | 98.81 |
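The reasoning-task pass@1 figures are estimated as the mean accuracy over 10 runs with different seeds. A minimal sketch of that averaging (the per-run scores below are made-up illustrative values, not the actual measurements):

```bash
# Illustrative only: pass@1 estimated as the mean over 10 seeded runs.
# These per-run accuracies are invented for the example.
RUNS="76.7 80.0 73.3 76.7 80.0 76.7 73.3 80.0 76.7 76.7"
echo "$RUNS" | tr ' ' '\n' | awk '{ sum += $1; n++ } END { printf "%.2f\n", sum / n }'
```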
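The Recovery column appears to be the compressed model's average score relative to the uncompressed baseline, in percent; assuming that definition, it can be reproduced from the table values (e.g. the AWQ model's reasoning average of 78.25 against the baseline's 82.99):

```bash
# Recovery = compressed average / baseline average * 100 (assumed definition).
# Values from the reasoning-tasks table: AWQ 78.25, baseline 82.99.
awk 'BEGIN { printf "%.2f\n", 78.25 / 82.99 * 100 }'
```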
## Reproduction

The results were obtained using the following commands:

`OpenLLM v1`
```bash
MODEL=ISTA-DASLab/DeepSeek-R1-GPTQ-4b-128g-act_order-mse_scale-experts
MODEL_ARGS="pretrained=$MODEL,dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=8,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True"

# standard lm-eval + vLLM invocation for the OpenLLM v1 task group
lm_eval \
  --model vllm \
  --model_args $MODEL_ARGS \
  --tasks openllm \
  --batch_size auto
```

For reasoning evals we adopted the protocol from the [open-r1 repository](https://github.com/huggingface/open-r1).

`Reasoning tasks`
```bash
MODEL=ISTA-DASLab/DeepSeek-R1-GPTQ-4b-128g-act_order-mse_scale-experts
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=38768,gpu_memory_utilization=0.8,tensor_parallel_size=8,add_special_tokens=false,generation_parameters={\"max_new_tokens\":32768,\"temperature\":0.6,\"top_p\":0.95,\"seed\":7686}"

export VLLM_WORKER_MULTIPROC_METHOD=spawn
```