These quants were made with exllamav2 version 0.0.18. Quants made on this versio
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
_TODO:_ Coming soon
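
For context, perplexity is the exponential of the mean per-token negative log-likelihood, so a model that is more confident about the correct next token scores lower. A toy sketch of that relationship using made-up NLL values (not scores from these models):

```shell
# Toy illustration: perplexity = exp(mean negative log-likelihood).
# The per-token NLL values below are hypothetical, for demonstration only.
ppl=$(awk 'BEGIN {
  split("1.2 0.8 1.0", nll, " ")        # hypothetical per-token NLLs
  sum = 0
  for (i = 1; i <= 3; i++) sum += nll[i]
  printf "%.4f", exp(sum / 3)           # exp of the mean NLL
}')
echo "$ppl"   # mean NLL is 1.0, so perplexity is e^1 ≈ 2.7183
```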
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash

source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2

# Set the model name and the bit precisions to test
MODEL_NAME="CodeQwen1.5-7B-Chat"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)

# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"

for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
    MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
    if [ -d "$MODEL_DIR" ]; then
        output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
        score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
        echo "| $BIT_PRECISION | $score |"
    fi
done
```
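
The `grep -oP` call in the script relies on the Perl-compatible `\K` operator, which discards everything matched before it and keeps only the number. A quick check of that pattern against a sample line (the sample string is an assumption about the `test_inference.py` log format, inferred from the pattern itself):

```shell
# Hypothetical log line shaped like the output the script expects
sample="Evaluation perplexity: 5.1234"
# \K drops the literal prefix; [\d.]+ captures just the score
score=$(echo "$sample" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "$score"   # prints 5.1234
```

Note that `\K` requires GNU grep built with PCRE support (`-P`), which the script already assumes.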
## Quant Details