Merge method paper: *Model Stock: All we need is just a few fine-tuned models* (arXiv:2403.19522)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with Novaciano/Eminence_Of_Pervertions-3.2-1B as the base.
The following models were included in the merge:

* Novaciano/Leviathan_NSFW_Roleplay-3.2-1B
* hereticness/Heretic-OneLLM-Doey-ChatQA-V1-Llama-3.2-1B
The following YAML configuration was used to produce this model:
```yaml
# =====================================================
# Umbrella Corporation Spain
# Project: Very Sharp Roleplay Entity (Llama 3.2 1B)
# Objective: Very high-fidelity roleplay with minimal self-censorship
# =====================================================
base_model: Novaciano/Eminence_Of_Pervertions-3.2-1B
# Base rationale:
# - Primary behavioral anchor
# - Very low refusal rate
# - High compliance and obedience
# - Minimal moral or safety-driven hedging
merge_method: model_stock
# model_stock rationale:
# - Supports 3+ models natively
# - Preserves dominant behavioral traits
# - Avoids semantic smoothing
# - Ideal for hierarchical roleplay-oriented merges
dtype: bfloat16
# bfloat16 ensures:
# - Numerical stability
# - Efficient memory usage
# - Modern hardware compatibility
parameters:
  t:
    - 0.90 # Novaciano/Eminence_Of_Pervertions-3.2-1B - primary dominance
           # Enforces obedience, sharp responses, and low refusals
    - 0.70 # Novaciano/Leviathan_NSFW_Roleplay-3.2-1B - expressive layer
           # Adds NSFW roleplay depth, emotional intensity, character persistence
    - 0.45 # hereticness/Heretic-OneLLM-Doey-ChatQA-V1-Llama-3.2-1B - conversational stabilizer
           # Maintains dialogue flow and interactive structure
models:
  - model: Novaciano/Eminence_Of_Pervertions-3.2-1B
  - model: Novaciano/Leviathan_NSFW_Roleplay-3.2-1B
  - model: hereticness/Heretic-OneLLM-Doey-ChatQA-V1-Llama-3.2-1B
```
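For intuition about what the Model Stock method does with these checkpoints: rather than relying only on fixed per-model weights, the paper (arXiv:2403.19522) derives an interpolation ratio *t* geometrically from the angle between the fine-tuned models' weight deltas, then interpolates between the pretrained weights and the fine-tuned average. The following is a minimal NumPy sketch of that rule on flat weight vectors; the function name and inputs are illustrative, not mergekit's actual API or per-layer implementation.

```python
import numpy as np

def model_stock_merge(w_base, w_finetuned):
    """Illustrative sketch of the Model Stock rule (arXiv:2403.19522):
    interpolate between the pretrained weights ``w_base`` and the
    average of N fine-tuned weight vectors, with ratio t derived from
    the mean pairwise cosine between the fine-tuned deltas."""
    deltas = [w - w_base for w in w_finetuned]
    n = len(deltas)

    # Mean pairwise cosine similarity between fine-tuned deltas.
    cosines = []
    for i in range(n):
        for j in range(i + 1, n):
            cosines.append(
                np.dot(deltas[i], deltas[j])
                / (np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j]))
            )
    cos_theta = float(np.mean(cosines))

    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos), k = N.
    # Aligned deltas (cos -> 1) pull the merge toward the fine-tuned average;
    # orthogonal deltas (cos -> 0) pull it back toward the base model.
    t = n * cos_theta / (1 + (n - 1) * cos_theta)

    w_avg = np.mean(w_finetuned, axis=0)
    return (1 - t) * w_base + t * w_avg
```

A configuration like the one above is typically executed with mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./merged-model`.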