Harem 3.2 1B

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method.
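
For intuition, SLERP interpolates along the great-circle arc between two weight vectors rather than along the straight line between them, which preserves the geometry of the weights better than plain linear averaging. Below is a minimal sketch of the idea on flattened tensors; it is illustrative only, and mergekit's actual implementation additionally handles per-tensor bookkeeping and numerical edge cases.

import numpy as np

def slerp(w0: np.ndarray, w1: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    # Angle between the two weight directions
    v0 = w0 / (np.linalg.norm(w0) + eps)
    v1 = w1 / (np.linalg.norm(w1) + eps)
    omega = np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0))
    if omega < eps:
        # Nearly parallel vectors: fall back to ordinary linear interpolation
        return (1.0 - t) * w0 + t * w1
    so = np.sin(omega)
    # sin-weighted combination: t=0 returns w0 exactly, t=1 returns w1 exactly
    return (np.sin((1.0 - t) * omega) / so) * w0 + (np.sin(t * omega) / so) * w1

At t=0.0 the result is exactly the first model's weights; at t=1.0, exactly the second's.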

Models Merged

The following models were included in the merge:

- JoaoReiz/Llama3.2_1B_HAREM (used for both endpoints of the interpolation; see the configuration below)

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: JoaoReiz/Llama3.2_1B_HAREM  # Experimental viral strain neural imprint
  - model: JoaoReiz/Llama3.2_1B_HAREM  # Baseline cognitive template, "safe mode"

merge_method: slerp  # Spherical Linear Interpolation to preserve extreme viral traits smoothly
base_model: JoaoReiz/Llama3.2_1B_HAREM  # Anchor model for stable latent space

dtype: bfloat16  # Memory-efficient precision, minimal loss in viral feature fidelity

parameters:
  # Interpolation ratios: from base model (0.0) to near-complete T-Virus domination (0.95)
  # Higher t-values correspond to reduced censorship and increased viral characteristics
  t: [0.0, 0.25, 0.5, 0.75, 0.95]

# Notes:
# - t=0.0   -> Pure Prototype-Virus-ENFORCE, fully stable, heavily censored
# - t=0.25  -> Slight viral traits, minimal influence on prompt handling
# - t=0.5   -> Balanced merge, moderate reduction in censorship
# - t=0.75  -> Strong T-Virus traits, significantly less censoring
# - t=0.95  -> Near-total viral influence, maximum expressive freedom, minimal self-protection
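
When t is given as a list, mergekit treats it as a gradient: the values act as evenly spaced anchor points across the layer stack, and each layer receives an interpolation factor blended between its neighboring anchors. The sketch below uses a hypothetical layer_t helper and assumes simple linear interpolation between anchors, showing how the schedule above would spread over the 16 transformer layers of a Llama 3.2 1B model:

import numpy as np

def layer_t(anchors: list[float], num_layers: int) -> list[float]:
    """Map a gradient of anchor t-values onto per-layer interpolation factors."""
    # Each layer gets a position along the anchor axis, from 0 to len(anchors) - 1
    positions = np.linspace(0.0, len(anchors) - 1, num_layers)
    return np.interp(positions, np.arange(len(anchors)), anchors).tolist()

print(layer_t([0.0, 0.25, 0.5, 0.75, 0.95], num_layers=16))
# Early layers stay close to the base model (t near 0.0);
# the final layers approach t = 0.95.

To reproduce the merge itself, save the configuration above as config.yml and run mergekit's mergekit-yaml CLI against it, e.g. mergekit-yaml config.yml ./output-model-directory.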




