# bragi4

Better slop handling at the expense of less-standard output formatting.

Look to the Loki v2 repo for prompts and samplers; they transfer well.

## Merge Details

### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method.
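SLERP blends two weight tensors along the arc between them on the hypersphere rather than along a straight line, which preserves the geometry of the weights better than plain linear averaging. A minimal sketch of the operation on flattened weight vectors (an illustration, not mergekit's actual implementation):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t=0 returns v0, t=1 returns v1; in between, the result follows
    the arc spanned by the two vectors.
    """
    # Angle between the vectors, from their normalized dot product.
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    cos_omega = max(-1.0, min(1.0, dot / (n0 * n1 + eps)))

    # Nearly parallel vectors: fall back to linear interpolation.
    if abs(cos_omega) > 1.0 - 1e-7:
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]

    omega = math.acos(cos_omega)
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

Here `t` is the interpolation weight toward the second model; mergekit applies a separate `t` per tensor according to the `filter` rules in the configuration below.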

### Models Merged

The following models were included in the merge:

- CrucibleLab/L3.3-70B-Loki-V2.0
- schonsense/Tropoplectic

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: CrucibleLab/L3.3-70B-Loki-V2.0
  - model: schonsense/Tropoplectic
merge_method: slerp
base_model: schonsense/Tropoplectic
parameters:
  t:
    - filter: q_proj
      value: [0.3, 0.30, 0.30, 0.30, 0.8]
    - filter: k_proj
      value: [0.2, 0.20, 0.20, 0.20, 0.7]
    - filter: v_proj
      value: [0.4, 0.40, 0.40, 0.40, 0.8]
    - filter: o_proj
      value: [0.5, 0.75, 0.75, 0.75, 0.9]
    - filter: gate_proj
      value: [0.20, 0.20, 0.20, 0.20, 0.8]
    - filter: up_proj
      value: [0.30, 0.30, 0.30, 0.30, 0.8]
    - filter: down_proj
      value: [0.50, 0.75, 0.75, 0.75, 0.9]
    - filter: lm_head
      value: 0.5
    - value: 0

  normalize: false
  int8_mask: false
  rescale: false

dtype: float32
out_dtype: bfloat16

chat_template: llama3
tokenizer:
  source: union
  pad_to_multiple_of: 8
```
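The five-element `value` lists are gradients: mergekit interpolates them across the model's layer depth, so for example `o_proj` starts near t=0.5 in the earliest layers and ramps toward 0.9 in the last. A sketch of that expansion (the even anchor spacing here is an assumption about the exact interpolation, not mergekit's code):

```python
def expand_gradient(anchors, num_layers):
    """Linearly interpolate a short list of anchor values across layers.

    Anchors are spaced evenly over relative depth [0, 1], and each layer
    index is mapped to a fractional position on that scale.
    """
    expanded = []
    for layer in range(num_layers):
        # Relative depth of this layer in [0, 1].
        pos = layer / (num_layers - 1) if num_layers > 1 else 0.0
        # Locate the surrounding pair of anchors and blend them.
        scaled = pos * (len(anchors) - 1)
        lo = int(scaled)
        hi = min(lo + 1, len(anchors) - 1)
        frac = scaled - lo
        expanded.append(anchors[lo] * (1 - frac) + anchors[hi] * frac)
    return expanded
```

Each `filter` entry applies its gradient only to tensors whose names match; the bare `- value: 0` entry is the default `t` for everything else, keeping those tensors at the base model's weights. The merge itself is typically run with `mergekit-yaml`, e.g. `mergekit-yaml config.yml ./merged`.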

