| --- |
| library_name: transformers |
| tags: |
| - mergekit |
| - merge |
| base_model: |
| - lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B |
| - zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25 |
| model-index: |
| - name: Gemma-2-Ataraxy-v4-Advanced-9B |
| results: |
| - task: |
| type: text-generation |
| name: Text Generation |
| dataset: |
| name: IFEval (0-Shot) |
| type: HuggingFaceH4/ifeval |
| args: |
| num_few_shot: 0 |
| metrics: |
| - type: inst_level_strict_acc and prompt_level_strict_acc |
| value: 70.15 |
| name: strict accuracy |
| source: |
| url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B |
| name: Open LLM Leaderboard |
| - task: |
| type: text-generation |
| name: Text Generation |
| dataset: |
| name: BBH (3-Shot) |
| type: BBH |
| args: |
| num_few_shot: 3 |
| metrics: |
| - type: acc_norm |
| value: 43.18 |
| name: normalized accuracy |
| source: |
| url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B |
| name: Open LLM Leaderboard |
| - task: |
| type: text-generation |
| name: Text Generation |
| dataset: |
| name: MATH Lvl 5 (4-Shot) |
| type: hendrycks/competition_math |
| args: |
| num_few_shot: 4 |
| metrics: |
| - type: exact_match |
| value: 6.12 |
| name: exact match |
| source: |
| url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B |
| name: Open LLM Leaderboard |
| - task: |
| type: text-generation |
| name: Text Generation |
| dataset: |
| name: GPQA (0-shot) |
| type: Idavidrein/gpqa |
| args: |
| num_few_shot: 0 |
| metrics: |
| - type: acc_norm |
| value: 11.86 |
| name: acc_norm |
| source: |
| url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B |
| name: Open LLM Leaderboard |
| - task: |
| type: text-generation |
| name: Text Generation |
| dataset: |
| name: MuSR (0-shot) |
| type: TAUR-Lab/MuSR |
| args: |
| num_few_shot: 0 |
| metrics: |
| - type: acc_norm |
| value: 16.29 |
| name: acc_norm |
| source: |
| url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B |
| name: Open LLM Leaderboard |
| - task: |
| type: text-generation |
| name: Text Generation |
| dataset: |
| name: MMLU-PRO (5-shot) |
| type: TIGER-Lab/MMLU-Pro |
| config: main |
| split: test |
| args: |
| num_few_shot: 5 |
| metrics: |
| - type: acc |
| value: 37.41 |
| name: accuracy |
| source: |
| url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B |
| name: Open LLM Leaderboard |
| --- |
| # Gemma-2-Ataraxy-v4-Advanced-9B |
|
|
| This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). |
|
|
| ## Merge Details |
| ### Merge Method |
|
|
| This model was merged using the SLERP merge method, with [lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B) as the base model. |
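| SLERP (spherical linear interpolation) blends two weight tensors along the arc between their directions rather than along a straight line, which preserves the scale of the interpolated weights better than plain averaging. A minimal, illustrative sketch in plain Python (not mergekit's actual implementation; the `slerp` helper below is hypothetical): |

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t=0 returns v0, t=1 returns v1; intermediate t follows the
    arc between the two directions.
    """
    # Cosine of the angle between the two vectors.
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    cos_theta = max(-1.0, min(1.0, dot / (norm0 * norm1 + eps)))

    # Nearly parallel vectors: fall back to plain linear interpolation.
    if abs(cos_theta) > 1.0 - eps:
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]

    theta = math.acos(cos_theta)
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

| mergekit applies an interpolation like this tensor-by-tensor, with `t` controlling how far each tensor moves from the base model toward the other model. |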
|
|
| ### Models Merged |
|
|
| The following models were included in the merge: |
| * [lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B) |
| * [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25) |
|
|
| ### Configuration |
|
|
| The following YAML configuration was used to produce this model: |
|
|
| ```yaml |
| base_model: lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B |
| dtype: bfloat16 |
| merge_method: slerp |
| parameters: |
| t: |
| - filter: self_attn |
| value: [0.0, 0.5, 0.3, 0.7, 1.0] |
| - filter: mlp |
| value: [1.0, 0.5, 0.7, 0.3, 0.0] |
| - value: 0.5 |
| slices: |
| - sources: |
| - layer_range: [0, 42] |
| model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25 |
| - layer_range: [0, 42] |
| model: lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B |
| ``` |
|
|
| # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) |
| Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lemon07r__Gemma-2-Ataraxy-v4-Advanced-9B). |
|
|
| | Metric |Value| |
| |-------------------|----:| |
| |Avg. |30.83| |
| |IFEval (0-Shot) |70.15| |
| |BBH (3-Shot) |43.18| |
| |MATH Lvl 5 (4-Shot)| 6.12| |
| |GPQA (0-shot) |11.86| |
| |MuSR (0-shot) |16.29| |
| |MMLU-PRO (5-shot) |37.41| |
|