---
language:
  - en
license: apache-2.0
tags:
  - merge
base_model:
  - teknium/OpenHermes-2.5-Mistral-7B
  - Intel/neural-chat-7b-v3-3
model-index:
  - name: Mistral-7B-Merge-02-v0
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 67.49
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=EmbeddedLLM/Mistral-7B-Merge-02-v0
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 85.78
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=EmbeddedLLM/Mistral-7B-Merge-02-v0
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 64.1
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=EmbeddedLLM/Mistral-7B-Merge-02-v0
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 60.52
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=EmbeddedLLM/Mistral-7B-Merge-02-v0
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 79.01
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=EmbeddedLLM/Mistral-7B-Merge-02-v0
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 67.25
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=EmbeddedLLM/Mistral-7B-Merge-02-v0
          name: Open LLM Leaderboard
---

# Model Description

This is an experiment to compare merging two models using DARE TIES versus SLERP 🦙

We are mainly interested in comparing against Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp.

The two models involved in the merge are as follows:

  1. teknium/OpenHermes-2.5-Mistral-7B
  2. Intel/neural-chat-7b-v3-3
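
For intuition about what DARE contributes: each fine-tuned model's task vector (its weights minus the base model's) has a random fraction of entries dropped, and the survivors are rescaled before the TIES-style sign election and sum. Below is a toy sketch of that drop-and-rescale step; it is an illustration only, not mergekit's implementation, with `weight` and `density` mirroring the config that follows.

```python
import torch

def dare_drop_and_rescale(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Keep each entry of the task vector with probability `density`,
    then rescale survivors by 1/density so the expected sum is unchanged."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

# Toy task vector for one tensor: fine-tuned weights minus base weights.
base = torch.zeros(6)
finetuned = torch.tensor([0.2, -0.4, 0.1, 0.3, -0.2, 0.5])
delta = finetuned - base

sparse_delta = dare_drop_and_rescale(delta, density=0.5)
merged = base + 0.5 * sparse_delta  # weight: 0.5, as in the config below
print(merged)
```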

The YAML config file for the merge is:

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: teknium/OpenHermes-2.5-Mistral-7B
    parameters:
      weight: 0.5
      density: 0.5
  - model: Intel/neural-chat-7b-v3-3
    parameters:
      weight: 0.5
      density: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
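
To reproduce the merge, here is a minimal sketch using mergekit's Python entry point, assuming the config above is saved as `config.yml` and that your installed mergekit version exposes `run_merge` and `MergeOptions` as in its README (check the docs for the version you have):

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML config shown above into mergekit's config object.
with open("config.yml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the DARE TIES merge and write the merged model to disk.
run_merge(
    config,
    out_path="./merged-model",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```

The `mergekit-yaml config.yml ./merged-model --cuda` CLI is an equivalent one-liner.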

# Open LLM Leaderboard

Note that DARE TIES might achieve better results with more tuning.

|            | DARE TIES | SLERP |
| ---------- | --------- | ----- |
| Average    | 70.69     | 71.38 |
| ARC        | 67.49     | 68.09 |
| HellaSwag  | 85.78     | 86.2  |
| MMLU       | 64.1      | 64.26 |
| TruthfulQA | 60.52     | 62.78 |
| Winogrande | 79.01     | 79.16 |
| GSM8K      | 67.25     | 67.78 |
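
The scores above come from the leaderboard's evaluation harness. To spot-check a single column locally, here is a sketch using EleutherAI's lm-evaluation-harness, assuming a recent version that exposes `simple_evaluate` (the leaderboard pins its own harness revision, so small deviations are expected):

```python
import lm_eval

# 25-shot ARC-Challenge, matching the leaderboard's ARC setting.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EmbeddedLLM/Mistral-7B-Merge-02-v0,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```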

# Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
| --------------------------------- | ----- |
| Avg.                              | 70.69 |
| AI2 Reasoning Challenge (25-Shot) | 67.49 |
| HellaSwag (10-Shot)               | 85.78 |
| MMLU (5-Shot)                     | 64.10 |
| TruthfulQA (0-shot)               | 60.52 |
| Winogrande (5-shot)               | 79.01 |
| GSM8k (5-shot)                    | 67.25 |
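
For completeness, a minimal sketch of loading and prompting the merged checkpoint with transformers (the repo id is taken from the leaderboard query URL in the metadata; the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EmbeddedLLM/Mistral-7B-Merge-02-v0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```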