---
base_model: google/gemma-2-9b-it
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- princeton-nlp/gemma2-ultrafeedback-armorm
model-index:
- name: princeton-nlp/gemma-2-9b-it-SimPO
  results: []
license: mit
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of gemma-2-9b-it-SimPO

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.1.8">turboderp's ExLlamaV2 v0.1.8</a> for quantization.

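If you want to reproduce or extend these quants yourself, you can install the matching release. A minimal sketch, assuming v0.1.8 is also published to PyPI (prebuilt wheels are attached to the GitHub release as well):

```shell
# Install the same ExLlamaV2 version these quants were made with.
pip3 install exllamav2==0.1.8
```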

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>

Each branch contains a quantization at a different bits per weight, while the main branch contains only the measurement.json needed for further conversions.

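If a size you want isn't listed, that measurement.json can be reused with ExLlamaV2's convert.py to skip the measurement pass. A minimal sketch with placeholder paths and an illustrative 5.5 bpw target (flag names as I understand them for this version; check `python convert.py -h`):

```shell
# Reuse the published measurement.json so the measurement/calibration
# pass doesn't have to be repeated, then quantize to a custom bpw.
python convert.py \
    -i ./gemma-2-9b-it-SimPO \
    -o ./work \
    -cf ./gemma-2-9b-it-SimPO-5.5bpw \
    -m ./measurement.json \
    -b 5.5
```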

Original model: https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO

## Prompt format

```
<bos><start_of_turn>user
{system_prompt}

{prompt}<end_of_turn>
<start_of_turn>model
<end_of_turn>
<start_of_turn>model

```

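As a concrete illustration, this is how the raw template might be sent to an OpenAI-compatible completions endpoint serving the exl2 weights (for example via tabbyAPI; the host, port, prompt text, and auth setup below are assumptions, not part of this repo):

```shell
# Placeholder host/port; add an auth header if your server requires one.
# The prompt string follows the template shown above.
curl http://localhost:5000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "<bos><start_of_turn>user\nYou are a helpful assistant.\n\nWhat does SimPO change about preference optimization?<end_of_turn>\n<start_of_turn>model\n",
    "max_tokens": 256,
    "stop": ["<end_of_turn>"]
  }'
```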

## Available sizes

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ------ | ---- | ------------ | --------- | ---------- | ---------- | ----------- |
| [8_0](https://huggingface.co/bartowski/gemma-2-9b-it-SimPO-exl2/tree/8_0) | 8.0 | 8.0 | 11.9 GB | 15.9 GB | 21.3 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/gemma-2-9b-it-SimPO-exl2/tree/6_5) | 6.5 | 8.0 | 10.4 GB | 14.4 GB | 19.8 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/gemma-2-9b-it-SimPO-exl2/tree/5_0) | 5.0 | 6.0 | 8.6 GB | 12.6 GB | 18.0 GB | Slightly lower quality vs 6.5, but usable on 8 GB cards. |
| [4_25](https://huggingface.co/bartowski/gemma-2-9b-it-SimPO-exl2/tree/4_25) | 4.25 | 6.0 | 7.9 GB | 11.9 GB | 17.3 GB | GPTQ-equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/gemma-2-9b-it-SimPO-exl2/tree/3_5) | 3.5 | 6.0 | 7.1 GB | 11.1 GB | 16.9 GB | Lower quality, only use if you have to. |

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/gemma-2-9b-it-SimPO-exl2 gemma-2-9b-it-SimPO-exl2-6_5
```
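
Note that Hugging Face repos store the weight files with Git LFS, so it needs to be installed and initialized once before cloning:

```shell
# One-time setup: without Git LFS the clone only fetches pointer files.
git lfs install
```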

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```
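
Optionally, downloads can be sped up on fast connections with the `hf_transfer` backend (an extra of `huggingface_hub`, toggled by an environment variable):

```shell
# Optional: higher-throughput downloads via the hf_transfer backend.
pip3 install 'huggingface_hub[hf_transfer]'
export HF_HUB_ENABLE_HF_TRANSFER=1
```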

To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:

Linux:

```shell
huggingface-cli download bartowski/gemma-2-9b-it-SimPO-exl2 --revision 6_5 --local-dir gemma-2-9b-it-SimPO-exl2-6_5
```

Windows (which apparently doesn't like _ in folders sometimes?):

```shell
huggingface-cli download bartowski/gemma-2-9b-it-SimPO-exl2 --revision 6_5 --local-dir gemma-2-9b-it-SimPO-exl2-6.5
```

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski