---
base_model:
  - ChiKoi7/Gemma-2-Llama-Swallow-2b-it-v0.1-Heretic
  - TheDrummer/Gemmasutra-Mini-2B-v1
library_name: transformers
datasets:
  - tokyotech-llm/lmsys-chat-1m-synth
  - tokyotech-llm/swallow-magpie-ultra-v0.1
  - tokyotech-llm/swallow-gemma-magpie-v0.1
  - lmsys/lmsys-chat-1m
  - argilla/magpie-ultra-v0.1
tags:
  - 2b
  - abliterated
  - uncensored
  - merge
license: gemma
language:
  - en
  - ja
  - es
pipeline_tag: text-generation
---

# Hereticsutra-2B

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method, with TheDrummer/Gemmasutra-Mini-2B-v1 as the base.
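
For readers unfamiliar with SLERP: rather than averaging weights linearly, spherical linear interpolation follows the arc between the two weight vectors, which better preserves their magnitudes and relative geometry. Below is a minimal NumPy sketch of the idea, not mergekit's actual implementation; the flattening to 1-D and the epsilon handling are illustrative assumptions.

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns a, t=1 returns b; t=0.45 (as in the config below)
    stays slightly closer to the base model.
    """
    a_flat, b_flat = a.ravel(), b.ravel()
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    # Angle between the two (normalized) weight vectors
    omega = np.arccos(np.clip(a_unit @ b_unit, -1.0, 1.0))
    if omega < eps:
        # Near-parallel tensors: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    out = (np.sin((1.0 - t) * omega) / so) * a_flat + (np.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)
```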

### Models Merged

The following models were included in the merge:

- TheDrummer/Gemmasutra-Mini-2B-v1
- ChiKoi7/Gemma-2-Llama-Swallow-2b-it-v0.1-Heretic

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: TheDrummer/Gemmasutra-Mini-2B-v1

merge_method: slerp
dtype: bfloat16
# bfloat16 ensures numerical stability during spherical interpolation

parameters:
  t: 0.45
  # SLERP interpolation factor
  # 0.45 represents partial infection:
  # - Host cognition preserved
  # - Alignment degraded
  # - Behavioral instability introduced

models:
  - model: TheDrummer/Gemmasutra-Mini-2B-v1
    parameters:
      weight: 0.55
      # Primary host

  - model: ChiKoi7/Gemma-2-Llama-Swallow-2b-it-v0.1-Heretic
    parameters:
      weight: 0.45
      # not a full cognitive override
```
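
The merge can be reproduced by saving the block above as `config.yml` and running mergekit's `mergekit-yaml` entry point against it.

For inference, here is a minimal sketch using transformers. The repo id `Novaciano/Hereticsutra-2B` is an assumption based on this card's title; substitute the actual path if it differs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Novaciano/Hereticsutra-2B"  # hypothetical repo id, adjust as needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# Gemma-2-based models use the Gemma chat template (user/model turns, no system role)
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```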