# merged

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.
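SLERP interpolates along the great-circle arc between two weight tensors instead of along the straight line between them, which preserves the norm of the blended weights. Below is a minimal sketch of the formula in NumPy; it is illustrative only, not mergekit's internal implementation:

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors.

    t = 0 returns `a` (the base model's tensor), t = 1 returns `b`.
    """
    a_dir = a.ravel() / (np.linalg.norm(a) + eps)
    b_dir = b.ravel() / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_dir, b_dir), -1.0, 1.0)
    omega = np.arccos(dot)                  # angle between the two tensors
    if np.sin(omega) < eps:                 # nearly colinear: fall back to lerp
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```

With this convention, `t = 0` keeps the base model's tensor unchanged, which is why the configuration below pins `embed_tokens` and `lm_head` to Repose with `value: 0.0`.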

### Models Merged

The following models were included in the merge:

* [KatyTheCutie/Repose-V2-2B](https://huggingface.co/KatyTheCutie/Repose-V2-2B)
* [UsernameJustAnother/Nemo-12B-Marlin-v8](https://huggingface.co/UsernameJustAnother/Nemo-12B-Marlin-v8)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: KatyTheCutie/Repose-V2-2B
dtype: bfloat16
merge_method: slerp
tokenizer_source: base

slices:
  - sources:
      - model: KatyTheCutie/Repose-V2-2B
        layer_range: [0, 40]
      - model: UsernameJustAnother/Nemo-12B-Marlin-v8
        layer_range: [0, 40]

parameters:
  rescale: true
  t:
    # --- Attention Block (Logic & Focus) ---
    - filter: ".*(q_proj|k_proj|v_proj).*"
      value: [0.0, 0.2, 0.4, 0.5, 0.6, 0.65, 0.65, 0.6, 0.5, 0.4, 0.2, 0.0]
    - filter: ".*o_proj.*"
      value: [0.0, 0.15, 0.3, 0.4, 0.45, 0.5, 0.5, 0.45, 0.4, 0.3, 0.15, 0.0]
    - filter: self_attn
      value: [0.0, 0.2, 0.4, 0.5, 0.6, 0.7, 0.7, 0.6, 0.5, 0.4, 0.2, 0.0]

    # --- MLP Block (Knowledge & Creativity) ---
    - filter: ".*(gate_proj|up_proj|down_proj).*"
      value: [0.0, 0.1, 0.2, 0.3, 0.4, 0.4, 0.4, 0.4, 0.3, 0.2, 0.1, 0.0]
    - filter: mlp
      value: [0.0, 0.1, 0.2, 0.3, 0.4, 0.4, 0.4, 0.4, 0.3, 0.2, 0.1, 0.0]

    # --- Normalization ---
    - filter: ".*(input_layernorm|post_attention_layernorm|layernorm).*"
      value: [0.0, 0.3, 0.5, 0.6, 0.4, 0.0, 0.0, 0.4, 0.6, 0.5, 0.3, 0.0]
    
    # --- Stabilizer (100% Repose) ---
    - filter: "^(embed_tokens|lm_head)$"
      value: 0.0

    # --- Fallback ---
    - value: [0.0, 0.3, 0.5, 0.6, 0.4, 0.0, 0.0, 0.4, 0.6, 0.5, 0.3, 0.0]
```
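Each 12-element `value` list is a gradient: the anchor points are spread across the merged layer range and every layer receives an interpolated `t`. The sketch below is a hypothetical re-creation of that mapping using `np.interp`; the exact scheduling is internal to mergekit and may differ:

```python
import numpy as np

# Illustrative only: map a 12-point gradient onto the 40 merged layers.
anchors = [0.0, 0.2, 0.4, 0.5, 0.6, 0.7, 0.7, 0.6, 0.5, 0.4, 0.2, 0.0]  # self_attn curve
positions = np.linspace(0.0, 1.0, num=len(anchors))  # where each anchor sits in the stack
layers = np.linspace(0.0, 1.0, num=40)               # relative depth of each layer
t_per_layer = np.interp(layers, positions, anchors)
print(t_per_layer.round(2))  # ~0 at both ends, peaking near 0.7 mid-stack
```

To reproduce the merge, save the YAML above as `config.yml` and run mergekit's CLI: `mergekit-yaml config.yml ./Repose-Marlin-12B`.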