---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- Triangle104/Minerva-14b
- ToastyPigeon/qwen-story-test-qlora
model-index:
- name: Minerva-14b-V0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 8.61
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Minerva-14b-V0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 43.62
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Minerva-14b-V0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 30.36
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Minerva-14b-V0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 15.44
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Minerva-14b-V0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 18.32
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Minerva-14b-V0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 45.76
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Minerva-14b-V0.1
      name: Open LLM Leaderboard
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
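
A minimal inference sketch with transformers. Assumptions: the merged weights are published under the `Triangle104/Minerva-14b-V0.1` repo id used in the model-index above, and the tokenizer ships a chat template (the components are Qwen-based):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Triangle104/Minerva-14b-V0.1"  # assumption: repo id from the model-index above

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16 (see config below)
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a two-sentence story about a lighthouse."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```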
|
## Merge Details

Use this model instead (it works better): https://huggingface.co/Triangle104/Minerva-14b
|
### Merge Method

This model was merged using the passthrough merge method, with [Triangle104/Minerva-14b](https://huggingface.co/Triangle104/Minerva-14b) + [ToastyPigeon/qwen-story-test-qlora](https://huggingface.co/ToastyPigeon/qwen-story-test-qlora) (the LoRA adapter applied on top of the base model) as the base.
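
Because a passthrough merge of a `base+lora` spec effectively bakes the adapter into the base weights, a roughly equivalent checkpoint can be produced with peft alone. A sketch, not the exact mergekit code path (assumes the adapter targets this base checkpoint):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model the QLoRA adapter was trained against.
base = AutoModelForCausalLM.from_pretrained(
    "Triangle104/Minerva-14b", torch_dtype=torch.bfloat16
)

# Attach the LoRA adapter, then fold its deltas into the base weights.
model = PeftModel.from_pretrained(base, "ToastyPigeon/qwen-story-test-qlora")
merged = model.merge_and_unload()  # returns a plain transformers model

# Save the fused checkpoint next to the base tokenizer.
merged.save_pretrained("./Minerva-14b-V0.1")
AutoTokenizer.from_pretrained("Triangle104/Minerva-14b").save_pretrained("./Minerva-14b-V0.1")
```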
|
### Models Merged

No additional component models were merged; the merge is a passthrough of the base model with its LoRA adapter applied:

* [Triangle104/Minerva-14b](https://huggingface.co/Triangle104/Minerva-14b) + [ToastyPigeon/qwen-story-test-qlora](https://huggingface.co/ToastyPigeon/qwen-story-test-qlora)
|
### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: Triangle104/Minerva-14b+ToastyPigeon/qwen-story-test-qlora
dtype: bfloat16
merge_method: passthrough
models:
  - model: Triangle104/Minerva-14b+ToastyPigeon/qwen-story-test-qlora
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Triangle104__Minerva-14b-V0.1-details).
|
| Metric              | Value |
|---------------------|------:|
| Avg.                | 27.02 |
| IFEval (0-Shot)     |  8.61 |
| BBH (3-Shot)        | 43.62 |
| MATH Lvl 5 (4-Shot) | 30.36 |
| GPQA (0-shot)       | 15.44 |
| MuSR (0-shot)       | 18.32 |
| MMLU-PRO (5-shot)   | 45.76 |
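
The Avg. row is the arithmetic mean of the six benchmark scores, which a quick check confirms:

```python
scores = {
    "IFEval (0-Shot)": 8.61,
    "BBH (3-Shot)": 43.62,
    "MATH Lvl 5 (4-Shot)": 30.36,
    "GPQA (0-shot)": 15.44,
    "MuSR (0-shot)": 18.32,
    "MMLU-PRO (5-shot)": 45.76,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 27.02
```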