---
license: cc-by-nc-4.0
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of Seraph-7B

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.10">turboderp's ExLlamaV2 v0.0.10</a> for quantization.

Each branch contains a quantization at a different bits per weight, with the `main` branch containing only the measurement.json needed for further conversions.

Conversion was done using VMWareOpenInstruct.parquet as the calibration dataset.

Default arguments were used, except when the bits per weight is above 6.0; in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6 (see the sketch below).
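
For illustration, a conversion along these lines would use the exllamav2 convert.py script; the flag values and paths below are assumptions for the sake of the example, not a record of the exact invocation used for these quants:

```shell
# Hypothetical example: quantize to 6.5 bpw with the lm_head layer at 8 bits.
# -i: source model dir, -o: working/output dir, -c: calibration dataset,
# -b: target bits per weight, -hb: bits for the lm_head layer.
python convert.py \
  -i ./Seraph-7B \
  -o ./Seraph-7B-exl2-6_5 \
  -c ./VMWareOpenInstruct.parquet \
  -b 6.5 \
  -hb 8
```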

Original model: https://huggingface.co/Weyaxi/Seraph-7B

<a href="https://huggingface.co/bartowski/Seraph-7B-exl2/tree/4_0">4.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/Seraph-7B-exl2/tree/6_0">6.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/Seraph-7B-exl2/tree/8_0">8.0 bits per weight</a>

## Download instructions

With git:

```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/Seraph-7B-exl2
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `Seraph-7B-exl2`:

```shell
mkdir Seraph-7B-exl2
huggingface-cli download bartowski/Seraph-7B-exl2 --local-dir Seraph-7B-exl2 --local-dir-use-symlinks False
```
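
The same download can also be scripted from Python with huggingface_hub's snapshot_download; a minimal sketch, where the folder name and branch are just examples:

```python
# Minimal sketch using huggingface_hub's snapshot_download.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/Seraph-7B-exl2",
    revision="main",               # branch to download, e.g. "4_0"
    local_dir="Seraph-7B-exl2",    # target folder
    local_dir_use_symlinks=False,
)
```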

To download from a different branch, add the `--revision` parameter:

```shell
mkdir Seraph-7B-exl2
huggingface-cli download bartowski/Seraph-7B-exl2 --revision 4_0 --local-dir Seraph-7B-exl2 --local-dir-use-symlinks False
```
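
Once a quant is downloaded, it can be loaded with the exllamav2 Python API. This is an untested sketch based on the v0.0.10-era examples; the model directory, prompt, and sampling settings are placeholders:

```python
# Minimal sketch: load a downloaded exl2 quant and generate text.
# Assumes exllamav2 is installed and a CUDA-capable GPU is available.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Seraph-7B-exl2"   # folder from the download step above
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("Once upon a time,", settings, 128))
```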