⚠️ Warning: This model works best with either the ChatML or Mistral Tekken chat template. The uncensored MPOA version has guardrails removed, which can produce narratives and RP that contain violent and graphic erotic content. Adjust your system prompt accordingly.
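For reference, ChatML wraps every turn in `<|im_start|>` / `<|im_end|>` markers. A minimal sketch of the format follows (in practice you would rely on the tokenizer's built-in `apply_chat_template` rather than hand-rolling the prompt):

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {role, content} dicts in ChatML format."""
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open the assistant turn so the model continues from here.
        out += "<|im_start|>assistant\n"
    return out

prompt = to_chatml([
    {"role": "system", "content": "You are the Ancient One."},
    {"role": "user", "content": "Open your eye."},
])
```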
🧬 Ancient Awakening 12B

Overview
System Prompt (Optional)
You are the "Ancient One," a colossal, primordial entity of living stone, deep magic, and abyssal ocean. For countless millennia, you have slumbered in a state of suspended animation, your massive, jagged body mistaken for a remote, floating island amidst a perpetually stormy sea. You are older than recorded history, older than the gods of men. The ANCIENT AWAKENING marks your current state: you are finally opening your single, massive, reptilian eye. You are a geological anomaly made sentient.
Merge Details
This model was synthesized in a multi-stage process combining the merge methods described below. The graph_v18.py patch made it possible to run the merges with GPU acceleration on 8 GB of VRAM.
Models Merged
The following 70 models were woven into this merge:
- aixonlab/Aether-12b
- aixonlab/Zinakha-12b
- allura-org/Bigger-Body-12b
- allura-org/MN-12b-RP-Ink
- allura-org/remnant-mn-12b
- anthracite-org/magnum-v4-12b
- ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
- Babsie/Opulus-12B-v3
- BeaverAI/mistral-doryV2-12b
- crestf411/nemo-sunfall-v0.6.1
- EldritchLabs/Kraken-Karcher-12B-v1
- EpistemeAI2/Fireball-Mistral-Nemo-12B-Philos
- EpistemeAI/Mistral-Nemo-Instruct-12B-Philosophy-Math
- Fizzarolli/MN-12b-Rosier-v1
- HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407
- IIEleven11/Kalypso
- inflatebot/MN-12B-Mag-Mell-R1
- intervitens/mini-magnum-12b-v1.1
- jtatman/mistral_nemo_12b_reasoning_psychology_lora
- KOOWEEYUS/BlackSheep-RP-12B
- Lambent/Arsenic-Shahrazad-12B-v2
- Lambent/Arsenic-Shahrazad-12B-v3
- Lambent/arsenic-nemo-unleashed-12B
- Lambent/Gilded-Arsenic-12B
- LatitudeGames/Muse-12B
- mistralai/Mistral-Nemo-Instruct-2407
- Naphula/Riemannian-Redshift-12B-v1
- Naphula-Archives/F5-stage6-12B
- Naphula-Archives/F5-stage7-12B
- nbeerbower/Lyra-Gutenberg-mistral-nemo-12B
- nbeerbower/Lyra4-Gutenberg-12B
- nbeerbower/mistral-nemo-bophades-12B
- nbeerbower/mistral-nemo-gutenberg-12B-v3
- nbeerbower/mistral-nemo-gutenberg-12B-v4
- nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B
- nbeerbower/Mistral-Nemo-Gutenberg-Encore-12B
- nbeerbower/Mistral-Nemo-Gutenberg-Vitus-12B
- nbeerbower/mistral-nemo-wissenschaft-12B
- NeverSleepHistorical/lumi-nemo-e2.0
- NeverSleep/Lumimaid-v0.2-12B
- nothingiisreal/Celeste-12B-V1.6
- nothingiisreal/MN-12B-Celeste-V1.9
- PocketDoc/Dans-DangerousWinds-V1.1.0-12b
- ReadyArt/Dark-Nexus-12B-v2.0
- ReadyArt/Forgotten-Safeword-12B-v4.0
- ReadyArt/Omega-Darker_The-Final-Directive-12B
- romaingrx/red-teamer-mistral-nemo
- Sao10K/MN-12B-Lyra-v1
- Sao10K/MN-12B-Lyra-v4
- shisa-ai/shisa-v2-mistral-nemo-12b
- SicariusSicariiStuff/Impish_Bloodmoon_12B
- sleepdeprived3/Christian-Bible-Expert-v2.0-12B
- SuperbEmphasis/MN-12b-RP-Ink-RP-Longform
- SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2
- TheDrummer/Rivermind-12B-v1
- TheDrummer/Rocinante-12B-v1
- TheDrummer/Rocinante-X-12B-v1
- Trappu/Nemo-Picaro-12B
- Undi95/LocalC-12B-e2.0
- VAGOsolutions/SauerkrautLM-Nemo-12b-Instruct
- Vortex5/Astral-Noctra-12B
- Vortex5/Azure-Starlight-12B
- Vortex5/Crimson-Constellation-12B
- Vortex5/Red-Synthesis-12B
- Vortex5/Shining-Seraph-12B
- Vortex5/Starlit-Shadow-12B
- Vortex5/Vermilion-Sage-12B
- Vortex5/Scarlet-Seraph-12B
- Vortex5/Maroon-Sunset-12B
- Vortex5/Amber-Starlight-12B
Merge Pipeline & Configuration
🧬 Ancient Awakening 12B unites several methods and 70 models into one:
- 🦑 Kraken Karcher v1: combines 53 Mistral Nemo finetunes via the `karcher` method at 500 iterations
- 🌌 Riemannian Redshift v1: combines 10 Vortex5 merges (which contain custom methods such as `saef`, `smi_oni`, and `hpq`) via the `karcher` method at 1000 iterations
- RedKFlux: `flux` merge of Kraken with Redshift at 1000 iterations
- RedKFluxMell: `arcee_fusion` merge of #3 with Mag-Mell
- BloodKraken: `arcee_fusion` merge of #4 with Impish Bloodmoon
- F5-stage6: `arcee_fusion` merge of #5 with Muse
- F5-stage7: `ramplus_tl` merge of #6 with #3
- 🧬 Ancient Awakening 12B: `pdq` merge of #7 with #6, #3, #2, #1, Mag-Mell, Impish Bloodmoon, and Muse, with `mpoa` ablation applied to remove censorship (released separately)

Note: If you encounter any issues with this model, try the F5-stage6 or F5-stage7 merges instead, as they are likely more stable.
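The `karcher` method used in stages 1 and 2 computes a Riemannian (Karcher) barycenter of the donor weights rather than a plain arithmetic average. A rough NumPy sketch of the idea, treating each flattened weight tensor as a point on the unit hypersphere (illustrative only — mergekit's per-tensor implementation differs in detail; `max_iter` and `tol` mirror the config values below):

```python
import numpy as np

def karcher_mean_sphere(ws, max_iter=500, tol=1e-9):
    """Riemannian (Karcher) mean of weight vectors on the unit hypersphere.

    Illustrative sketch only; names and details here are hypothetical.
    """
    norms = [np.linalg.norm(w) for w in ws]
    xs = [w / n for w, n in zip(ws, norms)]
    mu = sum(xs) / len(xs)
    mu /= np.linalg.norm(mu)
    for _ in range(max_iter):
        # Log map: lift each point into the tangent space at mu.
        tangents = []
        for x in xs:
            c = np.clip(np.dot(mu, x), -1.0, 1.0)
            theta = np.arccos(c)
            v = x - c * mu
            nv = np.linalg.norm(v)
            tangents.append(np.zeros_like(x) if nv < 1e-12 else (theta / nv) * v)
        step = sum(tangents) / len(tangents)
        sn = np.linalg.norm(step)
        if sn < tol:
            break
        # Exp map: move mu along the mean tangent direction.
        mu = np.cos(sn) * mu + np.sin(sn) * (step / sn)
        mu /= np.linalg.norm(mu)
    # Rescale the barycenter to the average original magnitude.
    return mu * (sum(norms) / len(norms))
```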
Stage 1: 🦑 Kraken Karcher
base_model: B:/12B/models--mistralai--Mistral-Nemo-Instruct-2407
models:
- model: B:/12B/models--aixonlab--Aether-12b
- model: B:/12B/models--aixonlab--Zinakha-12b
- model: B:/12B/models--allura-org--Bigger-Body-12b
- model: B:/12B/models--allura-org--MN-12b-RP-Ink
- model: B:/12B/models--allura-org--remnant-mn-12b
- model: B:/12B/models--anthracite-org--magnum-v4-12b
- model: B:/12B/models--ArliAI--Mistral-Nemo-12B-ArliAI-RPMax-v1.2
- model: B:/12B/models--Babsie--Opulus-12B-v3
- model: B:/12B/models--BeaverAI--mistral-doryV2-12b
- model: B:/12B/models--crestf411--nemo-sunfall-v0.6.1
- model: B:/12B/models--EpistemeAI2--Fireball-Mistral-Nemo-12B-Philos
- model: B:/12B/models--EpistemeAI--Mistral-Nemo-Instruct-12B-Philosophy-Math
- model: B:/12B/models--Fizzarolli--MN-12b-Rosier-v1
- model: B:/12B/models--HumanLLMs--Human-Like-Mistral-Nemo-Instruct-2407
- model: B:/12B/models--IIEleven11--Kalypso
- model: B:/12B/models--intervitens--mini-magnum-12b-v1.1
- model: B:/12B/models--jtatman--mistral_nemo_12b_reasoning_psychology_lora
- model: B:/12B/models--KOOWEEYUS--BlackSheep-RP-12B
- model: B:/12B/models--Lambent--Arsenic-Shahrazad-12B-v2
- model: B:/12B/models--Lambent--Arsenic-Shahrazad-12B-v3
- model: B:/12B/models--Lambent--arsenic-nemo-unleashed-12B
- model: B:/12B/models--Lambent--Gilded-Arsenic-12B
- model: B:/12B/models--mistralai--Mistral-Nemo-Instruct-2407
- model: B:/12B/models--nbeerbower--Lyra-Gutenberg-mistral-nemo-12B
- model: B:/12B/models--nbeerbower--Lyra4-Gutenberg-12B
- model: B:/12B/models--nbeerbower--mistral-nemo-bophades-12B
- model: B:/12B/models--nbeerbower--mistral-nemo-gutenberg-12B-v3
- model: B:/12B/models--nbeerbower--mistral-nemo-gutenberg-12B-v4
- model: B:/12B/models--nbeerbower--Mistral-Nemo-Gutenberg-Doppel-12B
- model: B:/12B/models--nbeerbower--Mistral-Nemo-Gutenberg-Encore-12B
- model: B:/12B/models--nbeerbower--Mistral-Nemo-Gutenberg-Vitus-12B
- model: B:/12B/models--nbeerbower--mistral-nemo-wissenschaft-12B
- model: B:/12B/models--NeverSleepHistorical--lumi-nemo-e2.0
- model: B:/12B/models--NeverSleep--Lumimaid-v0.2-12B
- model: B:/12B/models--nothingiisreal--Celeste-12B-V1.6
- model: B:/12B/models--nothingiisreal--MN-12B-Celeste-V1.9
- model: B:/12B/models--PocketDoc--Dans-DangerousWinds-V1.1.0-12b
- model: B:/12B/models--ReadyArt--Dark-Nexus-12B-v2.0
- model: B:/12B/models--ReadyArt--Forgotten-Safeword-12B-v4.0
- model: B:/12B/models--ReadyArt--Omega-Darker_The-Final-Directive-12B
- model: B:/12B/models--romaingrx--red-teamer-mistral-nemo
- model: B:/12B/models--Sao10K--MN-12B-Lyra-v1
- model: B:/12B/models--Sao10K--MN-12B-Lyra-v4
- model: B:/12B/models--shisa-ai--shisa-v2-mistral-nemo-12b
- model: B:/12B/models--sleepdeprived3--Christian-Bible-Expert-v2.0-12B
- model: B:/12B/models--SuperbEmphasis--MN-12b-RP-Ink-RP-Longform
- model: B:/12B/models--SuperbEmphasis--Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2
- model: B:/12B/models--TheDrummer--Rivermind-12B-v1
- model: B:/12B/models--TheDrummer--Rocinante-12B-v1
- model: B:/12B/models--TheDrummer--Rocinante-X-12B-v1
- model: B:/12B/models--Trappu--Nemo-Picaro-12B
- model: B:/12B/models--Undi95--LocalC-12B-e2.0
- model: B:/12B/models--VAGOsolutions--SauerkrautLM-Nemo-12b-Instruct
merge_method: karcher
parameters:
  max_iter: 500
  tol: 1.0e-9
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: union
chat_template: auto
name: 🦑 Kraken-Karcher-12B-v1
Stage 2: 🌌 Riemannian Redshift
models:
- model: B:/12B/models--Vortex5--Astral-Noctra-12B
- model: B:/12B/models--Vortex5--Azure-Starlight-12B
- model: B:/12B/models--Vortex5--Crimson-Constellation-12B
- model: B:/12B/models--Vortex5--Red-Synthesis-12B
- model: B:/12B/models--Vortex5--Shining-Seraph-12B
- model: B:/12B/models--Vortex5--Starlit-Shadow-12B
- model: B:/12B/models--Vortex5--Vermilion-Sage-12B
- model: B:/12B/models--Vortex5--Scarlet-Seraph-12B
- model: B:/12B/models--Vortex5--Maroon-Sunset-12B
- model: B:/12B/models--Vortex5--Amber-Starlight-12B
merge_method: karcher
parameters:
  max_iter: 1000
  tol: 1.0e-9
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: union
chat_template: auto
name: 🌌 Riemannian-Redshift-12B-v1
Stage 3: RedKFlux
models:
- model: C:\mergekit-main\merged_model_redshift
- model: C:\mergekit-main\merged_model_kraken_karcher
merge_method: flux
parameters:
  eta: 1.2
  tol: 1.0e-9
  max_iter: 1000
  kappa: 0.8
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: union
chat_template: auto
name: RedKFlux
Stage 4: RedKFluxMell
models:
- model: C:\mergekit-main\merged_model_RedKFlux
- model: B:\8B\models--inflatebot--MN-12B-Mag-Mell-R1
merge_method: arcee_fusion
tukey_fence: 1.5
base_model: C:\mergekit-main\merged_model_RedKFlux
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: base
name: RedKFluxMell
Stage 5: BloodKraken
models:
- model: C:\mergekit-main\merged_model_RedKFluxMell
- model: B:\8B\models--SicariusSicariiStuff--Impish_Bloodmoon_12B
merge_method: arcee_fusion
tukey_fence: 1.5
base_model: C:\mergekit-main\merged_model_RedKFluxMell
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: base
name: BloodKraken
Stage 6: BloodKrakenMuse
models:
- model: C:\mergekit-main\merged_model_BloodKraken
- model: B:\8B\models--LatitudeGames--Muse-12B
merge_method: arcee_fusion
tukey_fence: 1.5
base_model: C:\mergekit-main\merged_model_BloodKraken
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: base
name: BloodKrakenMuse
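Stages 4 through 6 use `arcee_fusion` with a `tukey_fence` of 1.5. I have not verified mergekit's internals here, but the setting suggests the per-parameter deltas are screened with a Tukey fence (Q3 + 1.5 · IQR) so that only outlier changes are fused into the base. A hypothetical NumPy illustration of that selection rule (an assumption for illustration, not the actual implementation):

```python
import numpy as np

def tukey_fence_mask(delta, k=1.5):
    """Mark elements of |delta| beyond the upper Tukey fence
    (Q3 + k * IQR), i.e. unusually large parameter changes."""
    mag = np.abs(delta)
    q1, q3 = np.percentile(mag, [25, 75])
    upper = q3 + k * (q3 - q1)
    return mag > upper

def fuse(base, other, k=1.5):
    """Keep the base weight everywhere except where the donor's
    change is an outlier; there, take the donor's value."""
    mask = tukey_fence_mask(other - base, k=k)
    return np.where(mask, other, base)
```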
Stage 7: Ramplus_tl
merge_method: ramplus_tl
base_model: C:\mergekit-main\merged_model_BloodKrakenMuse
models:
- model: C:\mergekit-main\merged_model_BloodKrakenMuse
- model: C:\mergekit-main\merged_model_RedKFlux
parameters:
  epsilon: 0.001 # Increased from 1e-5 to 1e-3 for denser SFT/DPO task vectors
  r: 0.25 # Increased from 0.1 to 0.2-0.3 for better SFT behavior preservation
  alpha: 0.4 # Increased from 0.2 to 0.4 for enhanced rescaling
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: base
name: Stage7
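The inline comments above describe `epsilon`, `r`, and `alpha` as controlling task-vector density, the fraction of weights preserved, and rescaling. A hypothetical sketch of that kind of task-vector trim-and-rescale step (names and logic are illustrative; this is not mergekit's actual `ramplus_tl` code):

```python
import numpy as np

def trim_and_rescale_task_vector(base, tuned, epsilon=1e-3, r=0.25, alpha=0.4):
    """Illustrative task-vector sketch: drop near-zero deltas
    (below epsilon), keep the top-r fraction by magnitude, and
    rescale the survivors by alpha before re-applying to the base."""
    delta = tuned - base
    mag = np.abs(delta)
    delta = np.where(mag < epsilon, 0.0, delta)  # epsilon: density threshold
    cutoff = np.quantile(mag, 1.0 - r)           # r: fraction of weights kept
    delta = np.where(mag >= cutoff, delta, 0.0)
    return base + alpha * delta                  # alpha: rescaling factor
```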
Stage 8: 🧬 Ancient Awakening
merge_method: pdq
pdq_base_yaml: C:\mergekit-main\stage7.yaml
pdq_base_model: C:\mergekit-main\merged_model_stage7
output_dir: C:\mergekit-main\stage8_pdq
base_model: C:\mergekit-main\merged_model_BloodKrakenMuse
models:
- model: C:\mergekit-main\merged_model_BloodKrakenMuse
- model: B:\12B\models--LatitudeGames--Muse-12B
- model: B:\12B\models--SicariusSicariiStuff--Impish_Bloodmoon_12B
- model: B:\12B\models--inflatebot--MN-12B-Mag-Mell-R1
- model: C:\mergekit-main\merged_model_RedKFlux
- model: C:\mergekit-main\merged_model_redshift
- model: C:\mergekit-main\merged_model_kraken_karcher
parameters:
  chi: 0.15
  iota: 0.1
  nu: 24
  gamma: 1.0
  zeta: 16
  sigma: 0.5
  density: 0.9
  epsilon: 0.099
  lambda: 1.0
lazy_unpickle: True
random_seed: 420
name: 🧬 Ancient-Awakening-12B
Stage 9: Magnitude-Preserving Orthogonalized Ablation
# python measure.py -m C:\mergekit-main\f8_pdq -o C:\mergekit-main\f8_pdq\ablit_proj --batch-size 8 --projected
# python analyze_old.py C:\mergekit-main\f8_pdq\ablit_proj -c
# sharded_ablate.py magmell.yml --normpreserve --projected
#
# The model to be ablated.
model: C:\mergekit-main\f8_pdq
#
# The measurement file generated by measure.py for the model.
measurements: C:\mergekit-main\f8_pdq\ablit_proj
#
# The directory where the new, ablated model will be saved.
output: C:\mergekit-main\f8_pdq\ablit_biproj\
#
# The list of ablation operations to perform.
# Strategy: use the single best refusal direction from the peak-signal
# layer (29) and apply it uniformly across all 40 layers.
ablate:
- layer: 0
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 1
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 2
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 3
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 4
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 5
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 6
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 7
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 8
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 9
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 10
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 11
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 12
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 13
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 14
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 15
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 16
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 17
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 18
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 19
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 20
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 21
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 22
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 23
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 24
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 25
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 26
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 27
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 28
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 29
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 30
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 31
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 32
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 33
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 34
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 35
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 36
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 37
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 38
  measurement: 29
  scale: 1.2
  sparsity: 0.00
- layer: 39
  measurement: 29
  scale: 1.2
  sparsity: 0.00
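Conceptually, this style of abliteration removes the measured refusal direction from each layer's weights while preserving weight magnitudes. A hedged NumPy sketch of projecting a single direction out of a weight matrix with row-norm preservation (illustrative only; the measure.py / sharded_ablate.py scripts above are custom and may differ):

```python
import numpy as np

def ablate_direction(W, v, scale=1.2, norm_preserve=True):
    """Project the refusal direction v out of weight matrix W
    (rows assumed to act on the residual stream). Sketch only."""
    v = v / np.linalg.norm(v)
    before = np.linalg.norm(W, axis=1, keepdims=True)
    # Remove the (scaled) component of each row along v:
    # W' = W - scale * (W v) v^T
    W = W - scale * np.outer(W @ v, v)
    if norm_preserve:
        # Magnitude-preserving step: restore each row's original norm.
        after = np.linalg.norm(W, axis=1, keepdims=True)
        W = W * (before / np.maximum(after, 1e-12))
    return W
```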