WeirdDolphinPersonalityMechanism-Mistral-24B

This is a merge of pre-trained language models created using mergekit.

Merge Details

Models Merged

Merged using the Task Arithmetic merge method, in two stages:

  • Dans-personalityEngine 0.55 + Dolphin-Mistral-Venice-Edition 0.45 --> Personality55-Dolphin45-Mistral-24B
  • WeirdCompound 0.50 + Mechanism 0.50 --> Weird50-Mechanism50-24B
  • Personality55-Dolphin45-Mistral-24B 0.80 + Weird50-Mechanism50-24B 0.20 --> WeirdDolphinPersonalityMechanism-Mistral-24B
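Task arithmetic merges models by adding weighted "task vectors" (each fine-tuned model's weights minus the base model's) back onto the base. A minimal sketch of the arithmetic, using plain Python dicts to stand in for state dicts (all names here are illustrative, not mergekit's API):

```python
def task_arithmetic_merge(base, models_with_weights):
    """Merge fine-tuned models into `base` via task arithmetic.

    base: dict mapping parameter name -> value (stands in for a state dict)
    models_with_weights: list of (state_dict, weight) pairs
    Result per parameter: base + sum(weight * (model - base)).
    """
    merged = {}
    for name, base_val in base.items():
        delta = sum(w * (m[name] - base_val) for m, w in models_with_weights)
        merged[name] = base_val + delta
    return merged

# Toy example mirroring the 0.80 / 0.20 final-stage weights above:
base = {"w": 1.0}
model_a = {"w": 2.0}   # task vector: +1.0
model_b = {"w": 0.0}   # task vector: -1.0
merged = task_arithmetic_merge(base, [(model_a, 0.8), (model_b, 0.2)])
# merged["w"] ≈ 1.6  (1.0 + 0.8·(+1.0) + 0.2·(−1.0))
```

Note that unlike plain linear interpolation, the base model's weights are the anchor: a model given weight 0 contributes nothing, and the weights need not sum to 1.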

Chat template

Mistral's default chat template.
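Mistral's instruct format wraps each user turn in `[INST] ... [/INST]` markers. A simplified sketch of how such a prompt is assembled (this is an illustration only; in practice, use the tokenizer's `apply_chat_template`, which handles special tokens and whitespace exactly):

```python
def build_mistral_prompt(messages, bos="<s>", eos="</s>"):
    """Naively render chat messages in a Mistral-style [INST] format.

    Simplified: no system-prompt handling or whitespace edge cases;
    use tokenizer.apply_chat_template for the exact template.
    """
    parts = [bos]
    for msg in messages:
        if msg["role"] == "user":
            parts.append(f"[INST] {msg['content']} [/INST]")
        elif msg["role"] == "assistant":
            parts.append(f"{msg['content']}{eos}")
    return "".join(parts)

prompt = build_mistral_prompt([
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there."},
    {"role": "user", "content": "How are you?"},
])
```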

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: \Weird50-Mechanism50-24B
    parameters:
      weight: 0.20
  - model: \Personality55-Dolphin45-Mistral-24B
    parameters:
      weight: 0.80
base_model: \Personality55-Dolphin45-Mistral-24B
merge_method: task_arithmetic
dtype: bfloat16
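To reproduce the merge locally, the configuration above can be saved to a file and passed to mergekit's CLI (the config filename and output directory here are illustrative; the model paths in the YAML must point at local copies of the source models):

```shell
# Install mergekit, then run the merge defined in the YAML config.
pip install mergekit
mergekit-yaml merge-config.yml ./WeirdDolphinPersonalityMechanism-Mistral-24B
```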
Safetensors · 24B params · BF16
