---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- fblgit/UNA-TheBeagle-7b-v1
- argilla/distilabeled-Marcoro14-7B-slerp
- dpo
- rlhf
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of NeuralBeagle14-7B

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.

# The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)

Each branch contains a quantization at an individual bits-per-weight value, with the main branch containing only the measurement.json used for further conversions.

Original model: https://huggingface.co/mlabonne/NeuralBeagle14-7B

Model size: 7B

| Branch | Bits | lm_head bits | Dataset | VRAM (16k context) | Description |
| ------ | ---- | ------------ | ------- | ------------------ | ----------- |
| [8_0](https://huggingface.co/Bartowski/NeuralBeagle14-7B-exl2/tree/8_0) | 8.0 | 8.0 | Default | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/NeuralBeagle14-7B-exl2/tree/6_5) | 6.5 | 8.0 | Default | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/NeuralBeagle14-7B-exl2/tree/5_0) | 5.0 | 6.0 | Default | 7.4 GB | Slightly higher perplexity than 6.5. |
| [4_0](https://huggingface.co/Bartowski/NeuralBeagle14-7B-exl2/tree/4_0) | 4.0 | 6.0 | Default | 6.5 GB | Just under GPTQ-equivalent bits per weight. |
| [3_5](https://huggingface.co/Bartowski/NeuralBeagle14-7B-exl2/tree/3_5) | 3.5 | 6.0 | Default | 6.1 GB | Lower quality; only use if you have to. |

All VRAM requirements are estimated at 16k context. For 32k context, add ~2 GB.

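The ~2 GB-per-16k figure matches a back-of-envelope KV-cache estimate. The geometry below is an assumption on my part (standard Mistral-7B layout: 32 layers, GQA KV width of 1024, FP16 cache), not something stated in this card:

```shell
# K and V each store 32 layers x 1024 dims in FP16 (2 bytes) per token:
# 2 * 32 * 1024 * 2 = 128 KiB per token; over 16384 tokens that is 2 GiB.
python -c "print(2 * 32 * 1024 * 2 * 16384 / 2**30, 'GiB per 16k tokens')"
```

The rest of each VRAM figure is essentially the quantized weights (roughly 7B parameters × bits per weight / 8) plus runtime overhead.
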
## Download instructions

With git:

```shell
# Clones only the 4_0 branch; substitute any branch name from the table above
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/NeuralBeagle14-7B-exl2
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `NeuralBeagle14-7B-exl2`:

```shell
mkdir NeuralBeagle14-7B-exl2
huggingface-cli download bartowski/NeuralBeagle14-7B-exl2 --local-dir NeuralBeagle14-7B-exl2 --local-dir-use-symlinks False
```
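
If the download is slow, huggingface-hub's optional `hf_transfer` backend can usually saturate the connection. This is a general huggingface-hub feature rather than anything specific to this repo:

```shell
# Optional: faster downloads via the hf_transfer backend
pip3 install hf_transfer
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download bartowski/NeuralBeagle14-7B-exl2 --local-dir NeuralBeagle14-7B-exl2 --local-dir-use-symlinks False
```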

To download from a different branch, add the `--revision` parameter:

```shell
mkdir NeuralBeagle14-7B-exl2-6_5
huggingface-cli download bartowski/NeuralBeagle14-7B-exl2 --revision 6_5 --local-dir NeuralBeagle14-7B-exl2-6_5 --local-dir-use-symlinks False
```
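
Once a branch is downloaded, the folder can be loaded by any ExLlamaV2-compatible frontend (for example text-generation-webui) or tried directly with the scripts bundled in the exllamav2 repo. A minimal sketch, assuming an exllamav2 checkout and its v0.0.11-era `test_inference.py` flags; check `python test_inference.py -h` for your version:

```shell
# Generate from a prompt with the downloaded 6_5 quant
python test_inference.py -m ./NeuralBeagle14-7B-exl2-6_5 -p "Once upon a time,"
```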