
Quantization made by Richard Erkhov.

Github | Discord | Request more models

MT4-gemma-2-9B - GGUF

Original model description:

library_name: transformers
tags:
- mergekit
- merge
base_model:
- zelk12/MT4-BMM-gemma-2-9B
- zelk12/MT4-IMUGMA-gemma-2-9B
model-index:
- name: MT4-gemma-2-9B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 77.62
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT4-gemma-2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 43.55
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT4-gemma-2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 15.63
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT4-gemma-2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 11.74
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT4-gemma-2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 13.0
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT4-gemma-2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 37.4
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT4-gemma-2-9B
      name: Open LLM Leaderboard

merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the SLERP merge method.
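SLERP (spherical linear interpolation) blends two models by interpolating each pair of corresponding weight tensors along the arc between them rather than along a straight line, which preserves the overall scale of the weights better than plain averaging. The sketch below is illustrative only (plain PyTorch, not mergekit's actual implementation) and uses the same interpolation factor t = 0.5 that appears in the configuration further down.

import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Spherical linear interpolation between two weight tensors.
    # Illustrative sketch only; mergekit handles edge cases more carefully.
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors.
    omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return ((1.0 - t) * a_flat + t * b_flat).reshape(a.shape)
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)

# Stand-ins for corresponding tensors from the two source models.
w_bmm = torch.randn(4, 4)      # zelk12/MT4-BMM-gemma-2-9B
w_imugma = torch.randn(4, 4)   # zelk12/MT4-IMUGMA-gemma-2-9B
merged = slerp(0.5, w_bmm, w_imugma)   # t = 0.5: halfway along the arc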

Models Merged

The following models were included in the merge:

zelk12/MT4-BMM-gemma-2-9B (used as the base)
zelk12/MT4-IMUGMA-gemma-2-9B

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: zelk12/MT4-BMM-gemma-2-9B
  - model: zelk12/MT4-IMUGMA-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT4-BMM-gemma-2-9B
dtype: bfloat16
parameters:
  t: 0.5
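
A configuration like this is normally applied with mergekit itself. Below is a minimal sketch using mergekit's Python API; it assumes the YAML above has been saved as config.yml, the output directory name is a placeholder, and the exact options may differ between mergekit versions, so check the mergekit documentation.

import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Assumes the configuration shown above was saved to config.yml.
with open("config.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Writes the merged model to ./MT4-gemma-2-9B (output path is a placeholder).
run_merge(
    merge_config,
    "./MT4-gemma-2-9B",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)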

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric                Value
Avg.                  33.16
IFEval (0-Shot)       77.62
BBH (3-Shot)          43.55
MATH Lvl 5 (4-Shot)   15.63
GPQA (0-shot)         11.74
MuSR (0-shot)         13.00
MMLU-PRO (5-shot)     37.40
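
For reference, the Avg. row is the arithmetic mean of the six benchmark scores above: (77.62 + 43.55 + 15.63 + 11.74 + 13.00 + 37.40) / 6 ≈ 33.16.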
Format: GGUF
Model size: 9B params
Architecture: gemma2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit.
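
To run one of these quantized files locally, one common option (not part of this card) is llama-cpp-python. A minimal sketch follows, assuming a downloaded 4-bit file; the path and filename are placeholders and depend on which quant you pick.

from llama_cpp import Llama

llm = Llama(
    model_path="./MT4-gemma-2-9B.Q4_K_M.gguf",  # hypothetical 4-bit filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers if llama.cpp was built with GPU support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a SLERP model merge is."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])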
