effi 7b GPTQ is a quantized version of effi 7b, which is a 7 billion parameter model.
### Quantization Configuration

- **bits:** 4
- **damp_percent:** 0.1
- **dataset:** "wikitext2"
- **desc_act:** false
- **group_size:** 128
- **modules_in_block_to_quantize:** null
- **quant_method:** "gptq"
- **sym:** true
- **true_sequential:** true
### Example of usage
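The card's original example code is collapsed in this diff; what follows is a minimal usage sketch, assuming the model is published on the Hugging Face Hub (the repo id below is a hypothetical placeholder, and running it requires a GPU plus the pinned `auto-gptq`/`optimum` packages):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<org>/effi-7b-GPTQ"  # hypothetical placeholder, not the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers detects the GPTQ quantization config in the checkpoint and
# loads the 4-bit weights via auto-gptq; device_map="auto" places them on
# the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "What is GPTQ quantization?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(f"{tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)}")
```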
### Framework versions

- **Transformers:** 4.37.2
- **optimum:** 1.16.2
- **auto-gptq:** 0.6.0
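These pins can be reproduced with pip; a sketch, assuming a CUDA-capable environment (auto-gptq builds a CUDA extension):

```shell
pip install transformers==4.37.2 optimum==1.16.2 auto-gptq==0.6.0
```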

### Citation