DARK LUST ROLEPLAY 3.2 1B

DARK LUST ROLEPLAY 3.2 1B is a high-performance, compact language model specifically engineered for advanced narrative roleplay and complex creative writing.

Based on the Llama-3.2 1B architecture, this model has been subjected to a specialized merging protocol designed to prioritize high-fidelity instruction compliance and a sophisticated, clinical conversational tone.

🧬 Tactical Design

The architecture of this merge focuses on "Precision over Constraint." By combining donor models with the DARE-TIES method (randomized delta sparsification with sign-elected merging), the merge aims to enhance descriptive vocabulary and maintain consistency in long-form narratives.

Component Breakdown:

  • Primary Logic: Optimized for direct obedience and instruction following.
  • Narrative Texture: Infused with specialized tuning for vivid, high-stakes storytelling.
  • Lexical Expansion: Augmented with an expansive vocabulary to avoid repetitive or standard automated phrasing.

⚠️ Operational Disclosure

This model is designed for open-ended creative exploration. It has been modified to prioritize prompt adherence and character consistency.

  • The model's persona is typically cold, authoritative, and direct.
  • Users are responsible for the content generated during sessions.

💡 Operational Recommendations

For optimal performance in roleplay scenarios, use a direct and assertive system prompt. The model responds best to high-quality, descriptive input and, thanks to its 1B-parameter size, runs on hardware with limited VRAM (under 2 GB with a quantized build).
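The VRAM claim above can be sanity-checked with simple arithmetic: weight memory is roughly parameter count times bytes per parameter (ignoring KV cache and activations). A quick sketch — the 1.24B figure is the commonly cited Llama-3.2-1B parameter count and is an assumption here, as is the 0.5 bytes/param approximation for a 4-bit quantization:

```python
def footprint_gb(n_params: float, bytes_per_param: float) -> float:
    """Rough weight-memory estimate in GiB: parameters x bytes each.

    Ignores KV cache, activations, and runtime overhead, so treat the
    result as a lower bound on real VRAM usage.
    """
    return n_params * bytes_per_param / 1024**3

# Assumed ~1.24e9 params for a Llama-3.2-1B derivative
print(round(footprint_gb(1.24e9, 2), 2))    # bf16  -> 2.31
print(round(footprint_gb(1.24e9, 0.5), 2))  # 4-bit -> 0.58
```

By this estimate the bf16 weights alone exceed 2 GiB, so the "under 2 GB" figure effectively assumes a quantized build.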


The evolution of narrative intelligence continues. Proceed with clinical precision.

Merge Method

This model was merged with the DARE-TIES merge method, using D1rtyB1rd/Dirty-Alice-RP-NSFW-llama-3.2-1B as the base model.
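As a rough illustration of what DARE-TIES does, here is a toy NumPy sketch operating on flat parameter vectors. This is not the mergekit implementation — function and variable names are illustrative — but it follows the same two steps: DARE randomly drops a fraction of each donor's task vector (delta from the base) and rescales the survivors by 1/density, then TIES elects a per-parameter sign and discards contributions that disagree with it:

```python
import numpy as np

def dare_ties(base, donors, weights, densities, seed=0):
    """Toy DARE-TIES merge over flat parameter vectors.

    base:      np.ndarray of base-model weights
    donors:    list of np.ndarray, one per fine-tuned donor model
    weights:   per-donor mixing weight (cf. `weight` in the YAML below)
    densities: per-donor keep fraction (cf. `density` in the YAML below)
    """
    rng = np.random.default_rng(seed)
    scaled = []
    for ft, w, d in zip(donors, weights, densities):
        delta = ft - base                       # task vector
        keep = rng.random(delta.shape) < d      # DARE: keep ~density fraction
        # rescale survivors by 1/density so the expected delta is preserved
        scaled.append(w * np.where(keep, delta / d, 0.0))
    stacked = np.stack(scaled)
    # TIES: elect a sign per parameter, keep only agreeing contributions
    elected = np.sign(stacked.sum(axis=0))
    agree = np.sign(stacked) == elected
    return base + np.where(agree, stacked, 0.0).sum(axis=0)
```

With density 1.0 the dropout step is a no-op and the result is a pure sign-elected (TIES-style) combination; lower densities sparsify each donor's contribution before the vote.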

Models Merged

The following models were included in the merge:

  • jtatman/merged-llama32-1b-uncensored-dolphin-agent
  • N-Bot-Int/MaidEllaA-1B
  • jtatman/llama-3.2-1b-lewd-mental-occult
  • jtatman/merged-llama32-1b-inappropriate-triceratops

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: jtatman/merged-llama32-1b-uncensored-dolphin-agent
    parameters:
      weight: 0.4
      density: 0.6
  - model: N-Bot-Int/MaidEllaA-1B
    parameters:
      weight: 0.3
      density: 0.6
  - model: jtatman/llama-3.2-1b-lewd-mental-occult
    parameters:
      weight: 0.2
      density: 0.5
  - model: jtatman/merged-llama32-1b-inappropriate-triceratops
    parameters:
      weight: 0.1
      density: 0.5
merge_method: dare_ties
base_model: D1rtyB1rd/Dirty-Alice-RP-NSFW-llama-3.2-1B
parameters:
  int8_mask: true
dtype: bfloat16
tokenizer:
  source: union
Model Stats

  • Downloads last month: 259
  • Format: Safetensors
  • Model size: 1B params
  • Tensor type: BF16