Neural

“They are against the Machine?” “They would be against mathematics or against the art of writing if they had lived at the appropriate time.”
Isaac Asimov, I, Robot (1950)

🌑 About Neural

Neural is a Mistral Small (24B) merge for narrative generation and roleplay. This is not a general-purpose assistant; it is specialized to favor atmospheric, world-weary, and analytical prose. It is inspired by two models, OddTheGreat/Rotor_24B_V.1 and FlareRebellion/WeirdCompound-v1.7-24b.

This merge preserves knowledge while minimizing "slop". It serves as the base for an upcoming project and is released here so the community can build their own stylistic fine-tunes on top of it.

🛠 Technical Specifications

  • Base Architecture: Mistral Small 3.2 (24B)
  • Chat Template: Mistral Tekken (see the loading sketch after this list)
  • Merge Strategy: Multi-Merge (Model Stock, DARE TIES, nuSLERP & SLERP)
  • Primary Focus: Narrative weight, structural integrity, and stylistic "grit."
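
A minimal loading sketch with 🤗 Transformers, for reference. The repo id is the one on this page; the system prompt and sampling settings are illustrative placeholders, not tuned presets.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "NeuR0mancR/Neural-v1-24B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a bleak, atmospheric narrator."},
    {"role": "user", "content": "Describe the derelict station as the lights fail."},
]
# apply_chat_template formats the conversation with the tokenizer's bundled
# (Mistral Tekken) template, so no manual prompt formatting is needed.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=300, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```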

⚠️ Content Warning

Neural is an unaligned research artifact. It has not undergone safety alignment and functions strictly as a next-token predictor. Because of its neutral-to-negative disposition and "visceral grit" heritage, it lacks standard safety guardrails. It will not shy away from, censor, or deflect themes of violence, horror, sexuality, or dark psychological tension, and it prioritizes narrative realism over helpfulness or safety.

⚖️ Usage

I'm releasing these weights for research and creative purposes. Since this is a raw base model, anyone using it for public-facing tools or apps should implement their own moderation or fine-tuning first; it won't police itself. This model is provided "as-is," without any warranties regarding the safety or appropriateness of its generations.
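
As a sketch of the kind of gate a deployer might put in front of it: everything below is a hypothetical placeholder, with `violates_policy` standing in for a real moderation classifier or API and `generate_fn` for any text-generation callable.

```python
def violates_policy(text: str) -> bool:
    # Placeholder check; substitute a real moderation model or service here.
    blocked_terms = {"example_blocked_term"}
    return any(term in text.lower() for term in blocked_terms)

def moderated_generate(generate_fn, prompt: str, refusal: str = "[content withheld]") -> str:
    # Generate first, then gate the raw output behind the policy check.
    draft = generate_fn(prompt)
    return refusal if violates_policy(draft) else draft
```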

🧩 Merge Configuration

Neural is a multi-step merge designed to align all component models to the same solid base, chaining the Model Stock, DARE TIES, nuSLERP, and SLERP methods.
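
For intuition on the final step, here is an illustrative SLERP over flattened weight tensors. This is a sketch of the idea only, not mergekit's implementation; the shapes and the parallel-fallback threshold are arbitrary.

```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical interpolation between two weight tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two parameter directions.
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    if omega.abs() < eps:
        # Near-parallel tensors: plain linear interpolation is numerically safer.
        return (1.0 - t) * a + t * b
    so = torch.sin(omega)
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

# t=0.35 keeps the blend weighted toward the first model, mirroring the final stage below.
merged = slerp(torch.randn(8, 8), torch.randn(8, 8), t=0.35)
print(merged.shape)
```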

Configuration

The following YAML configuration was used to produce this model:

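# Step 1: model_stock builds a stable core from four RP fine-tunes over the Cydonia base.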
name: stock
base_model: TheDrummer/Cydonia-24B-v4.3
merge_method: model_stock
dtype: bfloat16
models:
  - model: TheDrummer/Cydonia-24B-v4.3
  - model: zerofata/MS3.2-PaintedFantasy-v2-24B
  - model: aixonlab/Eurydice-24b-v3.5
  - model: PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
---
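# Step 2: nuslerp re-aligns the stock merge with the clean Mistral Small 3.2
# instruct base, folding in a lighter share of PersonalityEngine.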
name: aligned_stock
merge_method: nuslerp
base_model: anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
dtype: bfloat16
models:
  - model: stock
    parameters: 
      weight: 0.45
  - model: PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
    parameters:
      weight: 0.18
---
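# Step 3: dare_ties merges four prose-focused fine-tunes onto the same instruct base;
# density sets the fraction of each task vector kept after DARE's random pruning.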
name: aligned_dare
merge_method: dare_ties
base_model: anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
dtype: bfloat16
parameters:
  normalize: false
  int8_mask: false
tokenizer_source: ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.0
models:
  - model: ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.0
    parameters:
      weight: 0.5
      density: 0.5
  - model: CrucibleLab/M3.2-24B-Loki-V1.3
    parameters:
      weight: 0.5  
      density: 0.5
  - model: Delta-Vector/MS3.2-Austral-Winton
    parameters:
      weight: 0.25 
      density: 0.25
  - model: zerofata/MS3.2-PaintedFantasy-v2-24B
    parameters:
      weight: 0.25
      density: 0.25
---
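# Step 4 (final, unnamed): SLERP blends the two intermediate merges across all
# 40 layers; t: 0.35 keeps the result weighted toward aligned_stock.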
merge_method: slerp
base_model: aligned_stock
slices:
  - sources:
      - model: aligned_stock
        layer_range: [0, 40]
      - model: aligned_dare
        layer_range: [0, 40]
parameters:
  t: 0.35
dtype: bfloat16
embed_slerp: true
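
Note that the four YAML documents above form a single multi-stage mergekit configuration: later stages reference the earlier, named merges (stock, aligned_stock, aligned_dare), and the final, unnamed document produces the released model. Recent mergekit releases include a runner for multi-document configs like this one; failing that, each stage can be merged individually and referenced by its local output path.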