---
tags:
- merge
- mergekit
- lazymergekit
- cstr/llama3-8b-spaetzle-v13
- nbeerbower/llama-3-wissenschaft-8B-v2
base_model:
- cstr/llama3-8b-spaetzle-v13
- nbeerbower/llama-3-wissenschaft-8B-v2
license: llama3
language:
- en
- de
---

# llama3-8b-spaetzle-v20

llama3-8b-spaetzle-v20 is a DARE-TIES merge, created with [mergekit](https://github.com/arcee-ai/mergekit), of the following models:
* [cstr/llama3-8b-spaetzle-v13](https://huggingface.co/cstr/llama3-8b-spaetzle-v13)
* [nbeerbower/llama-3-wissenschaft-8B-v2](https://huggingface.co/nbeerbower/llama-3-wissenschaft-8B-v2)
## Benchmarks

On EQ-Bench v2_de (the German version), the model achieves a score of 65.7, with 171/171 responses parseable. Results from the Open LLM Leaderboard ([details](https://huggingface.co/datasets/open-llm-leaderboard/details_cstr__llama3-8b-spaetzle-v20/blob/main/results_2024-05-25T12-52-23.640126.json)):

| Model                       | Average | ARC   | HellaSwag | MMLU  | TruthfulQA | Winogrande | GSM8K |
|-----------------------------|---------|-------|-----------|-------|------------|------------|-------|
| cstr/llama3-8b-spaetzle-v20 | 71.83   | 70.39 | 85.69     | 68.52 | 60.98      | 78.37      | 67.02 |

## 🧩 Configuration

```yaml
models:
  - model: cstr/llama3-8b-spaetzle-v13
    # no parameters necessary for base model
  - model: nbeerbower/llama-3-wissenschaft-8B-v2
    parameters:
      density: 0.65
      weight: 0.4
merge_method: dare_ties
base_model: cstr/llama3-8b-spaetzle-v13
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
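
To reproduce the merge, the configuration above can be passed to mergekit. Below is a minimal sketch using mergekit's Python API (`MergeConfiguration`, `run_merge`, `MergeOptions`); the `config.yaml` path and output directory are illustrative placeholders:

```python
# Sketch: run the merge configuration above with mergekit.
# Assumes `pip install mergekit` and that config.yaml contains the YAML shown above.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML merge configuration
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./llama3-8b-spaetzle-v20",  # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if available
        copy_tokenizer=True,             # write a tokenizer into the output directory
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```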

## 💻 Usage

```python
# Requires: pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "cstr/llama3-8b-spaetzle-v20"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt using the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
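
Llama 3 models mark the end of an assistant turn with the `<|eot_id|>` token. If generations run long or bleed into a new turn, passing it as an additional stop token usually helps; a small follow-up sketch, assuming the merged tokenizer retains Llama 3's special tokens (consistent with `tokenizer_source: base` above):

```python
# Optional: also stop generation at Llama 3's end-of-turn token <|eot_id|>.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    eos_token_id=terminators,
)
print(outputs[0]["generated_text"][len(prompt):])  # drop the echoed prompt
```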