---
license: cc-by-nc-2.0
language:
- en
---
## Exllama v2 Quantizations of laserxtral

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.

# The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)

Join our Discord! https://discord.gg/vT3sktQ3zb

Each branch contains the model quantized at a different bits per weight, while the main branch contains only the measurement.json for further conversions.
Conversion was done using the default calibration dataset.

Default arguments were used.
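As a sketch, a conversion like the one described above can be reproduced with ExLlamaV2's `convert.py`. The paths below are placeholders, and the flag names should be checked against `python convert.py -h` for your version:

```shell
# Hypothetical paths; assumes the exllamav2 repo (v0.0.11) is checked out.
# -m reuses the measurement.json from the main branch to skip the measurement pass,
# -b sets the target bits per weight (e.g. 6.5).
python convert.py \
    -i /path/to/laserxtral \
    -o /tmp/exl2-work \
    -cf /path/to/laserxtral-exl2-6.5bpw \
    -m measurement.json \
    -b 6.5
```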
Original model: https://huggingface.co/cognitivecomputations/laserxtral

<a href="https://huggingface.co/cognitivecomputations/laserxtral-exl2/tree/6_5">6.5 bits per weight</a>

<a href="https://huggingface.co/cognitivecomputations/laserxtral-exl2/tree/4">4 bits per weight</a>

<a href="https://huggingface.co/cognitivecomputations/laserxtral-exl2/tree/3">3 bits per weight</a>

<a href="https://huggingface.co/cognitivecomputations/laserxtral-exl2/tree/2">2 bits per weight</a>
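For example, a single branch from the list above can be fetched with `huggingface-cli` (the local directory name is an arbitrary placeholder):

```shell
# Download only the 6.5 bpw branch into a local directory.
huggingface-cli download cognitivecomputations/laserxtral-exl2 \
    --revision 6_5 \
    --local-dir laserxtral-exl2-6.5bpw
```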
Credit to Bartowski for help and model card formatting.

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/655dc641accde1bbc8b41aec/iToMZFTp1DuXnpw9oJ61y.jpeg)