merged

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the Model Stock merge method, with models/dolphin-2.9.3-mistral-nemo-12b as the base.
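
Model Stock (Jang et al., 2024) averages the fine-tuned checkpoints and then interpolates back toward the base model, choosing the interpolation ratio per layer from the geometry of the fine-tuned weight deltas. As a rough illustration only (not mergekit's actual implementation), a minimal per-layer sketch of that rule might look like this:

import torch

def model_stock_layer(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Merge one layer's weights with the Model Stock rule (sketch).

    Task vectors are the fine-tuned deltas from the base; the interpolation
    ratio t comes from the average pairwise cosine similarity between them.
    Assumes at least two fine-tuned checkpoints.
    """
    k = len(finetuned)
    deltas = [w - base for w in finetuned]
    # Average pairwise cosine similarity between task vectors.
    cos_sum, pairs = 0.0, 0
    for i in range(k):
        for j in range(i + 1, k):
            cos_sum += torch.nn.functional.cosine_similarity(
                deltas[i].flatten(), deltas[j].flatten(), dim=0
            ).item()
            pairs += 1
    cos_theta = cos_sum / pairs
    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    # cos = 1 (aligned deltas) gives the plain average; cos = 0 keeps the base.
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base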

Models Merged

The following models were included in the merge:

  • models/Eleusis-12B
  • models/Muse-12B
  • models/Astra-v1-12B
  • models/Pygmalion-3-12B
  • models/Zinakha-12B
  • models/Chronos-Gold-12B-1.0
  • models/Dans-PersonalityEngine-V1.1.0-12b
  • models/Pantheon-RP-1.6-12b-Nemo
  • models/writing-roleplay-20k-context-nemo-12b-v1.0
  • models/Wayfarer-2-12B

Configuration

The following YAML configuration was used to produce this model:

base_model: models/dolphin-2.9.3-mistral-nemo-12b
dtype: bfloat16
merge_method: model_stock
modules:
  default:
    slices:
    - sources:
      - layer_range: [0, 40]
        model: models/Astra-v1-12B
      - layer_range: [0, 40]
        model: models/Chronos-Gold-12B-1.0
      - layer_range: [0, 40]
        model: models/dolphin-2.9.3-mistral-nemo-12b
      - layer_range: [0, 40]
        model: models/Dans-PersonalityEngine-V1.1.0-12b
      - layer_range: [0, 40]
        model: models/Eleusis-12B
      - layer_range: [0, 40]
        model: models/Muse-12B
      - layer_range: [0, 40]
        model: models/Pantheon-RP-1.6-12b-Nemo
      - layer_range: [0, 40]
        model: models/Pygmalion-3-12B
      - layer_range: [0, 40]
        model: models/Wayfarer-2-12B
      - layer_range: [0, 40]
        model: models/writing-roleplay-20k-context-nemo-12b-v1.0
      - layer_range: [0, 40]
        model: models/Zinakha-12B
parameters:
  normalize: 1.0
tokenizer_source: models/Chronos-Gold-12B-1.0
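
The merge can be reproduced by saving the configuration above as config.yaml and running mergekit's CLI, e.g. mergekit-yaml config.yaml ./output-model-directory. Once merged, the model loads like any other Mistral-Nemo-based checkpoint; below is a minimal, untested usage sketch with transformers (the repo id is taken from this model card's page, and the prompt is illustrative):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "emogie3D/Mistral-Nemo-Base-2407-RP-Merge-V2"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# bfloat16 matches the dtype used for the merge; device_map="auto" requires accelerate.
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write the opening line of a fantasy story."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))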