QuantFactory/SauerkrautLM-UNA-SOLAR-Instruct-GGUF
This is a quantized version of Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct, created with llama.cpp.
Original Model Card
SauerkrautLM-UNA-SOLAR-Instruct
This is the model card for SauerkrautLM-UNA-SOLAR-Instruct, created by merging existing models with mergekit.
🥳 As of December 24, 2023, this model holds first place on the Open LLM Leaderboard.
Prompt Template(s)
### User:
{user}
### Assistant:
{assistant}
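In code, the template above can be filled in like this (a minimal Python sketch; `build_prompt` is a hypothetical helper for illustration, not part of the model or its tooling):

```python
# Hypothetical helper that fills the card's "### User: / ### Assistant:"
# template; the model is expected to continue after "### Assistant:".
def build_prompt(user_message: str) -> str:
    return f"### User:\n{user_message}\n\n### Assistant:\n"

prompt = build_prompt("What is SLERP merging?")
print(prompt)
```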
YAML config to reproduce

slices:
  - sources:
      - model: VAGOsolutions/SauerkrautLM-SOLAR-Instruct
        layer_range: [0, 48]
      - model: fblgit/UNA-SOLAR-10.7B-Instruct-v1.0
        layer_range: [0, 48]
merge_method: slerp
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: bfloat16
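For intuition, `merge_method: slerp` interpolates each pair of tensors along the sphere rather than along a straight line, with the `t` values above controlling the blend per layer and filter. A minimal NumPy sketch of the idea (not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    # Spherical linear interpolation between two flattened weight tensors:
    # interpolate the angle between them instead of the raw values.
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel tensors: fall back to linear interpolation.
        return (1 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b

# t=0 returns the first model's tensor, t=1 the second's; t=0.5 lands midway
# on the arc between them.
w0 = np.array([1.0, 0.0])
w1 = np.array([0.0, 1.0])
mid = slerp(0.5, w0, w1)
```

In the config, `t` is not a single scalar but a schedule across layer depth, with separate schedules for attention and MLP tensors and a 0.5 fallback for everything else.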
Quantized versions
Quantized versions of this model are available thanks to TheBloke:
GPTQ
GGUF
AWQ
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 74.26 |
| AI2 Reasoning Challenge (25-Shot) | 70.90 |
| HellaSwag (10-Shot) | 88.30 |
| MMLU (5-Shot) | 66.15 |
| TruthfulQA (0-shot) | 71.80 |
| Winogrande (5-shot) | 83.74 |
| GSM8k (5-shot) | 64.67 |
If you would like to support me:
Available GGUF quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
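For intuition about what these bit widths mean, here is a toy sketch of symmetric per-tensor quantization (the real llama.cpp "k-quant" formats are block-wise and considerably more sophisticated):

```python
import numpy as np

def quantize_symmetric(w, bits=4):
    # Map float weights to signed integers in [-(2^(bits-1)-1), 2^(bits-1)-1]
    # using a single scale for the whole tensor (a toy simplification).
    qmax = 2 ** (bits - 1) - 1          # e.g. 7 for 4-bit
    scale = np.abs(w).max() / qmax
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Reconstruct approximate float weights from the integers and the scale.
    return q.astype(np.float32) * scale

w = np.array([0.7, -0.35, 0.1, -0.7], dtype=np.float32)
q, s = quantize_symmetric(w, bits=4)
w_hat = dequantize(q, s)
err = np.abs(w - w_hat).max()
```

Fewer bits mean smaller files and faster memory-bound inference, at the cost of larger reconstruction error like `err` above.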
Evaluation results (Open LLM Leaderboard)
- AI2 Reasoning Challenge (25-shot, test set): 70.90 (normalized accuracy)
- HellaSwag (10-shot, validation set): 88.30 (normalized accuracy)
- MMLU (5-shot, test set): 66.15 (accuracy)
- TruthfulQA (0-shot, validation set): 71.80 (mc2)
- Winogrande (5-shot, validation set): 83.74 (accuracy)
- GSM8k (5-shot, test set): 64.67 (accuracy)
