---
license: ms-pl
tags:
  - merge
model-index:
  - name: Orca2-13B-selfmerge-26B
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 60.84
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vmajor/Orca2-13B-selfmerge-26B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 79.84
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vmajor/Orca2-13B-selfmerge-26B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 60.32
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vmajor/Orca2-13B-selfmerge-26B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 56.38
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vmajor/Orca2-13B-selfmerge-26B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 76.87
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vmajor/Orca2-13B-selfmerge-26B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 39.2
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vmajor/Orca2-13B-selfmerge-26B
          name: Open LLM Leaderboard
---

This model is the result of merging Orca2-13B with itself using `mergekit-legacy`. The merge parameters were `--weight 0.5 --density 0.5`.
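The card does not include a usage snippet; below is a minimal generation sketch with `transformers`, assuming the merged model loads as a standard causal LM (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vmajor/Orca2-13B-selfmerge-26B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Explain the difference between a merge and a fine-tune in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```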

This merged model showed a marginal improvement in perplexity:

| Model                          | Perplexity        |
|--------------------------------|-------------------|
| microsoft/Orca-2-13b           | 7.595028877258301 |
| vmajor/Orca2-13B-selfmerge-26B | 7.550178050994873 |
| vmajor/Orca2-13B-selfmerge-39B | NC                |
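The evaluation setup behind these perplexity numbers is not specified on the card; the sketch below shows one common way to compute perplexity for a causal LM with `transformers` (single sequence, no sliding window). It is an illustration, not necessarily the procedure used here:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model_id: str, text: str) -> float:
    """Perplexity = exp(mean cross-entropy); the model shifts labels internally."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
    enc = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

# Example call (hypothetical input text):
# print(perplexity("vmajor/Orca2-13B-selfmerge-26B", "The quick brown fox jumps over the lazy dog."))
```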

## Benchmark Results

The following table summarizes the model performance across a range of benchmarks:

| Model                          | Average | ARC   | HellaSwag | MMLU  | TruthfulQA | Winogrande | GSM8K |
|--------------------------------|---------|-------|-----------|-------|------------|------------|-------|
| microsoft/Orca-2-13b           | 58.64   | 60.67 | 79.81     | 60.37 | 56.41      | 76.64      | 17.97 |
| vmajor/Orca2-13B-selfmerge-26B | 62.24   | 60.84 | 79.84     | 60.32 | 56.38      | 76.87      | 39.2  |
| vmajor/Orca2-13B-selfmerge-39B | 62.24   | 60.84 | 79.84     | 60.32 | 56.38      | 76.87      | 39.2  |

Interestingly, GSM8K performance more than doubled with the first self-merge. The second self-merge, which produced the 39B model, did not yield any further gains.


## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vmajor/Orca2-13B-selfmerge-26B).

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 62.24 |
| AI2 Reasoning Challenge (25-Shot) | 60.84 |
| HellaSwag (10-Shot)               | 79.84 |
| MMLU (5-Shot)                     | 60.32 |
| TruthfulQA (0-shot)               | 56.38 |
| Winogrande (5-shot)               | 76.87 |
| GSM8k (5-shot)                    | 39.20 |