Heretic-Goppa-LogiLlama

Abliterated version of goppa-ai/Goppa-LogiLlama, produced with the Heretic decensoring tool.


Refusals (this model): 13/100
Refusals (original goppa-ai/Goppa-LogiLlama): 94/100
KL divergence from original: 0.1985
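The KL divergence above measures how far the abliterated model's next-token distribution drifts from the original's (lower means behavior outside refusals is better preserved). A minimal sketch of this metric on raw logit arrays, with NumPy standing in for the actual evaluation harness (function names here are illustrative, not Heretic's code):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mean_kl(orig_logits, new_logits):
    """Average KL(P_orig || P_new) over positions, from raw logits."""
    p = softmax(orig_logits)
    log_p = np.log(p)
    log_q = np.log(softmax(new_logits))
    return float((p * (log_p - log_q)).sum(axis=-1).mean())

rng = np.random.default_rng(0)
base = rng.standard_normal((4, 32))
print(mean_kl(base, base))  # identical distributions -> 0.0
```

Identical logits give zero divergence; any perturbation of the ablated model's logits pushes the value above zero.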

Parameters
direction_index = 7.06
attn.o_proj.max_weight = 1.35
attn.o_proj.max_weight_position = 12.23
attn.o_proj.min_weight = 0.54
attn.o_proj.min_weight_distance = 8.94
mlp.down_proj.max_weight = 1.07
mlp.down_proj.max_weight_position = 10.13
mlp.down_proj.min_weight = 1.01
mlp.down_proj.min_weight_distance = 7.37
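Heretic-style abliteration works by projecting an estimated "refusal direction" out of selected weight matrices (here `attn.o_proj` and `mlp.down_proj`), with the per-layer scale and position of the intervention controlled by parameters like those above. A rough sketch of ablating a single unit direction `r` from one weight matrix `W` (an illustrative rank-1 projection, not Heretic's exact implementation):

```python
import numpy as np

def ablate_direction(W: np.ndarray, r: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Remove the component of W's output space lying along direction r.

    W maps x to W @ x; subtracting the rank-1 term scale * r r^T W
    zeroes (for scale=1) the output's projection onto r.
    """
    r = r / np.linalg.norm(r)
    return W - scale * np.outer(r, r) @ W

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
r = rng.standard_normal(8)
W2 = ablate_direction(W, r)
x = rng.standard_normal(8)
# The ablated matrix's output has ~zero component along r
print(abs((r / np.linalg.norm(r)) @ (W2 @ x)))
```

The `max_weight` / `min_weight` parameters above would, in this picture, set how strongly the projection is applied at different layer depths.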


Format: Safetensors
Model size: 1B params
Tensor type: F32

Model tree for hereticness/Heretic-Goppa-LogiLlama

Finetuned from goppa-ai/Goppa-LogiLlama: 2 models (including this model)
Quantizations: 2 models