---
license: cc-by-4.0
tags:
- mistral
- merge
pipeline_tag: text-generation
model-index:
- name: Maylin-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 66.81
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/Maylin-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.4
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/Maylin-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.73
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/Maylin-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 60.24
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/Maylin-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 79.64
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/Maylin-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.76
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/Maylin-7b
      name: Open LLM Leaderboard
---
# Model Card for Maylin-7b
A DARE-TIES merge intended to make the Argetsu model more coherent and less horny.
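For intuition, DARE merging operates on task vectors (each fine-tune's weights minus the base model's weights): entries are randomly dropped with probability 1 − density, the survivors are rescaled by 1/density, and the weighted deltas are then combined onto the base with TIES-style sign election. Below is a minimal NumPy sketch of that idea, using this card's weights and densities on toy arrays; it is illustrative only and is not mergekit's actual implementation.

```python
import numpy as np

def dare_ties_merge(base, finetuned, weights, densities, seed=0):
    """Toy DARE-TIES merge over flat weight vectors (illustrative, not mergekit's code)."""
    rng = np.random.default_rng(seed)
    deltas = []
    for ft, w, d in zip(finetuned, weights, densities):
        delta = ft - base                       # task vector vs. the base model
        keep = rng.random(delta.shape) < d      # DARE: drop entries with prob 1 - density
        delta = np.where(keep, delta, 0.0) / d  # rescale survivors by 1 / density
        deltas.append(w * delta)                # apply the per-model merge weight
    stacked = np.stack(deltas)
    # TIES-style sign election: zero out entries that disagree with the majority sign
    elected = np.sign(stacked.sum(axis=0))
    agreeing = np.where(np.sign(stacked) == elected, stacked, 0.0)
    return base + agreeing.sum(axis=0)

# Toy example: three fine-tunes of an 8-parameter "model", with this card's weights/densities.
base = np.zeros(8)
finetuned = list(base + np.random.default_rng(1).normal(size=(3, 8)))
merged = dare_ties_merge(base, finetuned,
                         weights=[0.45, 0.39, 0.22],
                         densities=[0.75, 0.70, 0.52])
print(merged)
```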
## .yaml file for mergekit

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: SanjiWatsuki/Sonya-7B # 200
    parameters:
      weight: 0.45
      density: 0.75
  - model: Azazelle/Argetsu # 175
    parameters:
      weight: 0.39
      density: 0.70
  - model: Azazelle/Tippy-Toppy-7b # 100
    parameters:
      weight: 0.22
      density: 0.52
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
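With [mergekit](https://github.com/arcee-ai/mergekit) installed, a config like this is run via its `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yaml ./output`). The merged model loads like any Mistral-based checkpoint; a minimal sketch using the standard transformers API (the prompt and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Azazelle/Maylin-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",
)

prompt = "Tell me a short story about a lighthouse keeper."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```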
## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/Maylin-7b).
| Metric | Value |
|---|---|
| Avg. | 70.26 |
| AI2 Reasoning Challenge (25-Shot) | 66.81 |
| HellaSwag (10-Shot) | 86.40 |
| MMLU (5-Shot) | 64.73 |
| TruthfulQA (0-shot) | 60.24 |
| Winogrande (5-shot) | 79.64 |
| GSM8k (5-shot) | 63.76 |
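The leaderboard average is the unweighted mean of the six benchmark scores; a quick sanity check:

```python
scores = [66.81, 86.40, 64.73, 60.24, 79.64, 63.76]
print(round(sum(scores) / len(scores), 2))  # 70.26
```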