---
license: apache-2.0
inference: false
quantized_by: bartowski
---

## Exllama v2 Quantizations of MistralLite

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.7">turboderp's ExLlamaV2 v0.0.7</a> for quantization.

Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.

Conversion was done using wikitext-103-raw-v1-test.parquet as the calibration dataset.
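
For reference, a rough sketch of how an EXL2 conversion like this is typically run with ExLlamaV2's `convert.py`, assuming its standard `-i`/`-o`/`-c`/`-b` options (paths and the output directory are illustrative, and exact flags may vary between versions):

```shell
# -i: directory containing the original fp16 model
# -o: working/output directory for the quantized weights
# -c: calibration dataset (parquet)
# -b: target bits per weight
python convert.py \
  -i /models/MistralLite \
  -o /models/MistralLite-exl2-4.0 \
  -c wikitext-103-raw-v1-test.parquet \
  -b 4.0
```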
Original model: https://huggingface.co/amazon/MistralLite

<a href="https://huggingface.co/bartowski/MistralLite-exl2/tree/4.0">4.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/MistralLite-exl2/tree/6.0">6.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/MistralLite-exl2/tree/8.0">8.0 bits per weight</a>
## Download instructions

With git:

```shell
git clone --single-branch --branch 4.0 https://huggingface.co/bartowski/MistralLite-exl2
```
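The command above fetches the 4.0 bpw branch; to grab a different quantization, change the `--branch` value (and optionally the target folder), for example:

```shell
git clone --single-branch --branch 6.0 https://huggingface.co/bartowski/MistralLite-exl2 MistralLite-6.0bpw-exl2
```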
With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `MistralLite-exl2`:

```shell
mkdir MistralLite-exl2
huggingface-cli download bartowski/MistralLite-exl2 --local-dir MistralLite-exl2 --local-dir-use-symlinks False
```
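The downloaded measurement.json can then be fed back into `convert.py` to produce a new bits-per-weight conversion without redoing the measurement pass; a sketch assuming the `-m`/`--measurement` option (paths and the 5.0 bpw target are purely illustrative):

```shell
# Reuse the published measurement to quantize to a different bpw (illustrative)
python convert.py \
  -i /models/MistralLite \
  -o /models/MistralLite-exl2-5.0 \
  -c wikitext-103-raw-v1-test.parquet \
  -m MistralLite-exl2/measurement.json \
  -b 5.0
```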
To download from a different branch, add the `--revision` parameter:

```shell
mkdir MistralLite-exl2
huggingface-cli download bartowski/MistralLite-exl2 --revision 4.0 --local-dir MistralLite-exl2 --local-dir-use-symlinks False
```
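Once downloaded, a quick way to sanity-check the quantized model is exllamav2's bundled test script; a minimal sketch, run from a checkout of the exllamav2 repo (the prompt is arbitrary):

```shell
# -m: directory containing the downloaded quantized model, -p: test prompt
python test_inference.py -m MistralLite-exl2 -p "What are the main challenges to support a long context for LLM?"
```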