---
license: apache-2.0
---

# The Quantized Command R Plus Model

Original Base Model: `CohereForAI/c4ai-command-r-plus`.<br>
Link: [https://huggingface.co/CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus)

## Special Notice
We use `group_size=1024` to produce a smaller quantized model.
A version quantized with the default `group_size=128` is also available: [https://huggingface.co/shuyuej/Command-R-Plus-GPTQ](https://huggingface.co/shuyuej/Command-R-Plus-GPTQ).

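As a rough illustration of why a larger group size shrinks the checkpoint: each quantization group stores its own scale and zero point, so fewer groups means less metadata. The sketch below assumes an fp16 scale (16 bits) and a 4-bit zero point per group, a common GPTQ storage layout rather than a measurement of this checkpoint:

```python
# Sketch: effective storage cost per weight in group-wise 4-bit quantization.
# Assumption (not from this model card): each group carries one fp16 scale
# (16 bits) and one 4-bit zero point on top of the 4-bit weights.

def bits_per_weight(group_size: int, weight_bits: int = 4,
                    scale_bits: int = 16, zero_bits: int = 4) -> float:
    """Average bits stored per weight, including per-group metadata."""
    return weight_bits + (scale_bits + zero_bits) / group_size

for g in (128, 1024):
    print(f"group_size={g}: {bits_per_weight(g):.3f} bits per weight")
# prints:
# group_size=128: 4.156 bits per weight
# group_size=1024: 4.020 bits per weight
```

With `group_size=1024` the per-group metadata is amortized over eight times as many weights as with `group_size=128`, which is why this repository's files are smaller, at the cost of coarser quantization granularity.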
## Quantization Configurations

```text
"quantization_config": {
    "batch_size": 1,
    "bits": 4,
    "block_name_to_quantize": null,
    "cache_block_outputs": true,
    "damp_percent": 0.1,
    "dataset": null,
    "desc_act": false,
    "exllama_config": {
        "version": 1
    },
    "group_size": 1024,
    "max_input_length": null,
    "model_seqlen": null,
    "module_name_preceding_first_block": null,
    "modules_in_block_to_quantize": null,
    "pad_token_id": null,
    "quant_method": "gptq",
    "sym": true,
    "tokenizer": null,
    "true_sequential": true,
    "use_cuda_fp16": false,
    "use_exllama": true
},
```

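The block above is a fragment of the checkpoint's `config.json` (hence the trailing comma). As a minimal sketch, the key settings can be read from a full config with the standard library; the abbreviated config below is hypothetical and carries only a subset of the fields:

```python
import json

# Abbreviated, hypothetical config.json; a real checkpoint's config
# contains many more fields than the subset shown here.
config_json = """
{
  "model_type": "cohere",
  "quantization_config": {
    "bits": 4,
    "group_size": 1024,
    "damp_percent": 0.1,
    "desc_act": false,
    "sym": true,
    "true_sequential": true,
    "quant_method": "gptq"
  }
}
"""

quant = json.loads(config_json)["quantization_config"]
print(f'{quant["quant_method"]}, {quant["bits"]}-bit, group_size={quant["group_size"]}')
# prints: gptq, 4-bit, group_size=1024
```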
## Source Code
The quantization source code is available at [https://github.com/vkola-lab/medpodgpt/tree/main/quantization](https://github.com/vkola-lab/medpodgpt/tree/main/quantization).