---
license: llama2
---
# Edit
PS: https://github.com/Cornell-RelaxML/quip-sharp/issues/13
As mentioned in the issue thread above, two changes should improve results:
- For more accurate Hessian generation, use a larger devset (e.g., 4096 samples) and consider accumulating Hessians in fp32 if GPUs with fast fp64 are not available (consumer GPUs generally lack it).
- Change the Hessian calibration dataset from a natural-language dataset to a mathematical one, since this is a math model.
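The fp32-accumulation point can be sketched as follows. This is an illustrative stand-in, not the actual QuIP code: the proxy Hessian is the running mean of activation outer products, and accumulating it in full precision (plain Python floats here, fp32 on GPU) avoids the overflow and rounding drift you can hit with fp16 sums. The function name and the batch layout are assumptions for the example.

```python
def accumulate_hessian(batches, d):
    """Accumulate a proxy Hessian H = E[x x^T] over calibration
    activation vectors, keeping the running sum in full precision.
    `batches` yields lists of length-d activation vectors; this shape
    is illustrative, not QuIP's actual data layout."""
    H = [[0.0] * d for _ in range(d)]  # full-precision accumulator
    n = 0
    for batch in batches:
        for x in batch:
            # rank-1 update: H += x x^T
            for i in range(d):
                for j in range(d):
                    H[i][j] += x[i] * x[j]
            n += 1
    # return the mean outer product
    return [[h / n for h in row] for row in H]
```

A larger devset simply means more rank-1 updates feeding this average, so the estimate of E[x x^T] has lower variance.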
---
Experimental QUIP 2-bit E8P12 version that works in textgen-webui with the QuIP model loader.
Generated using scripts from https://gitee.com/yhyu13/llama_-tools
Original weight : https://huggingface.co/Xwin-LM/Xwin-Math-7B-V1.0
GPTQ 4bit : https://huggingface.co/Yhyu13/Xwin-Math-7B-V1.0-GPTQ-4bit
---
This repo used `hessian_offline_llama.py` from the QuIP repo to generate Hessians specifically for the original model before applying QuIP quantization.
Generating Hessians for all 31 layers took quite a long time, about 6 hours for a 7B model on a single RTX 3090. I am not sure if I made any error.
QUIP byproducts are also uploaded.
Perplexity was calculated using `eval_ppl.py` from the QuIP repo.
QUIP PPL:
- wikitext2 perplexity: 11.247852325439453
- c4 perplexity: 16.275997161865234
Original model PPL:
- wikitext2 perplexity: 6.042122840881348
- c4 perplexity: 8.430611610412598
It looks like something is wrong; the quantized model performs disastrously.
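For context, perplexity is the exponential of the average per-token negative log-likelihood, so the jump from roughly 6 to 11 on wikitext2 reflects a large increase in log-loss. A minimal sketch of the metric (the exact sliding-window evaluation in `eval_ppl.py` is not reproduced here):

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean negative log-likelihood per token).
    `token_nlls` is a list of per-token NLL values in nats; this is
    an illustrative helper, not the QuIP eval script's API."""
    return math.exp(sum(token_nlls) / len(token_nlls))
```

For example, if the model assigns every target token probability 1/10, each NLL is log(10) and the perplexity is exactly 10.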
---
Here is some testing done in textgen-webui, using Q&A pairs from this dataset: https://huggingface.co/datasets/TIGER-Lab/MathInstruct
The 2-bit model could hardly answer any question correctly, compared to the GPTQ 4-bit version. However, the https://huggingface.co/relaxml/Llama-2-13b-E8P-2Bit model made by the QuIP authors seems to work fine, about as well as GPTQ.
In conclusion, this is a very experimental model that I made just to test QuIP, and I may have made some errors. But I think it is a good start.
QUIP 2-bit version:

GPTQ 4-bit version: