Source model

Geodesic-Phantom-12B by OrobasVault


Provided quantized models

ExLlamaV3: v0.0.29

Requirements: a Python installation with the huggingface-hub module, to use the CLI.
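As an illustration, the quantized files can be fetched with the huggingface-cli tool that ships with the huggingface-hub package. The repository id below is the one this card is published under; the local directory name is a placeholder, and if the quant you want lives on a separate branch, add --revision with that branch name.

```shell
# Install the huggingface-hub package, which provides the huggingface-cli tool
pip install -U huggingface_hub

# Download the quantized model into a local directory
huggingface-cli download DeathGodlike/OrobasVault_Geodesic-Phantom-12B_EXL3 \
  --local-dir Geodesic-Phantom-12B-EXL3
```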

Licensing

License detected: unknown

The license for the provided quantized models is inherited from the source model (which in turn incorporates the license of its original base model). For definitive licensing information, refer first to the pages of the source and base models. File and page backups of the source model are provided below.


Backups

Date: 04.05.2026

Source files

Source page

πŸ‘» Geodesic Phantom 12B

[image: geodesic-phantom]

This is a merge of pre-trained language models created using mergekit.

The merge ran for 7 hours on a RunPod A40, using an adaptive VRAM chunking script (based on measure.py by GrimJim).

WARNING:mergekit.graph:OOM at chunk 65536, reducing to 32768 (attempt 1, progress: 0/131075)
WARNING:mergekit.graph:OOM at chunk 32768, reducing to 16384 (attempt 2, progress: 0/131075)
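The warnings above come from the adaptive chunking: when a chunk runs out of VRAM, the script halves the chunk size and retries from the same position. A minimal sketch of that retry loop, in pure Python; `process_chunk` and MemoryError are stand-ins for the real per-chunk torch workload and torch.cuda.OutOfMemoryError, and the function name is hypothetical:

```python
def merge_with_adaptive_chunks(total, chunk, process_chunk, min_chunk=1024):
    """Process `total` rows in chunks, halving the chunk size on OOM.

    `process_chunk(start, size)` stands in for the real per-chunk merge
    step; it should raise MemoryError when `size` does not fit in VRAM.
    """
    progress, attempt = 0, 0
    while progress < total:
        size = min(chunk, total - progress)
        try:
            process_chunk(progress, size)
            progress += size
        except MemoryError:
            if chunk <= min_chunk:
                raise  # cannot shrink further; give up
            attempt += 1
            print(f"OOM at chunk {chunk}, reducing to {chunk // 2} "
                  f"(attempt {attempt}, progress: {progress}/{total})")
            chunk //= 2
    return chunk
```

With a fake workload that only fits at chunk sizes of 16384 or below, this loop emits the same two reductions shown in the log (65536 → 32768 → 16384) before completing all 131075 rows.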

[Karcher_Stock Audit] Layer: lm_head.weight
Stats: Cos(ΞΈ): 0.564 | t-factor: 0.8843 | Karcher Iters: 2960
  (Base)  mistralai--Mistral-Nemo-Instruct-2407               : β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ                                              ( 11.57%)
  (Donor) Vortex5--Prototype-X-12b                            : β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ                                            ( 14.74%)
  (Donor) Vortex5--Stellar-Witch-12B                          : β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ                                            ( 14.74%)
  (Donor) Vortex5--Celestial-Queen-12B                        : β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ                                            ( 14.74%)
  (Donor) Vortex5--Moonlit-Mirage-12B                         : β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ                                            ( 14.74%)
  (Donor) Vortex5--Crimson-Constellation-12B                  : β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ                                            ( 14.74%)
  (Donor) Vortex5--Wicked-Nebula-12B                          : β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ                                            ( 14.74%)
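The percentages in the audit are consistent with a Model Stock style interpolation: assuming the merged weight is w = (1 - t) * base + t * mean(donors), the base receives a share of 1 - t and each of the N donors receives t / N. For the lm_head.weight row above (t = 0.8843, six donors) this reproduces the bars:

```python
t = 0.8843          # t-factor reported in the audit
n_donors = 6

base_share = 1.0 - t           # weight retained from the base model
donor_share = t / n_donors     # weight given to each donor

print(f"base:  {base_share:.2%}")   # 11.57%
print(f"donor: {donor_share:.2%}")  # 14.74%
```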

The following patch was also required for this merge:

karcher_stock Adaptive Tanh Soft-Clamp v11

# ── 11. Model Stock t factor with Adaptive Soft-Clamp ─────────────
N = len(ws_2d)
ct = cos_theta.unsqueeze(-1) if cos_theta.dim() > 0 else cos_theta

# Raw Model Stock formula
denom = 1.0 + (N - 1) * ct
# Add a tiny epsilon to prevent literal division by zero
t_raw = (N * ct) / denom.clamp(min=1e-6)

# --- BULLETPROOF TANH CLAMP ---
# 1. Prevent negative infinity spikes (fallback to base model)
t_clamped_bottom = torch.clamp(t_raw, min=0.0)

# 2. Smoothly asymptote positive spikes to L (maximum allowed t-factor)
L = 1.5
excess = torch.clamp(t_clamped_bottom - 1.0, min=0.0)
t_soft_top = 1.0 + (L - 1.0) * torch.tanh(excess / (L - 1.0))

# 3. Apply: if t <= 1.0, use exact math. If t > 1.0, use soft curve.
t = torch.where(t_clamped_bottom <= 1.0, t_clamped_bottom, t_soft_top)
# ------------------------------

Example of the clamp preventing merge corruption

[figure: tanh_clamp]

Merge Details

Merge Method

This model was merged with the karcher_stock merge method, using /workspace/models/mistralai--Mistral-Nemo-Instruct-2407 as the base.

Models Merged

The following models were included in the merge:

  • /workspace/models/Vortex5--Wicked-Nebula-12B
  • /workspace/models/Vortex5--Celestial-Queen-12B
  • /workspace/models/Vortex5--Moonlit-Mirage-12B
  • /workspace/models/Vortex5--Stellar-Witch-12B
  • /workspace/models/Vortex5--Prototype-X-12b
  • /workspace/models/Vortex5--Crimson-Constellation-12B

Configuration

The following YAML configuration was used to produce this model:

architecture: MistralForCausalLM
base_model: /workspace/models/mistralai--Mistral-Nemo-Instruct-2407
models:
  - model: /workspace/models/Vortex5--Prototype-X-12b
  - model: /workspace/models/Vortex5--Celestial-Queen-12B
  - model: /workspace/models/Vortex5--Wicked-Nebula-12B
  - model: /workspace/models/Vortex5--Stellar-Witch-12B
  - model: /workspace/models/Vortex5--Moonlit-Mirage-12B
  - model: /workspace/models/Vortex5--Crimson-Constellation-12B
merge_method: karcher_stock # v8
parameters:  
  filter_wise: true
  max_iter: 10000
  min_iter: 1000
  tol: 1.0e-11
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: union
chat_template: auto
name: πŸ‘» Geodesic Phantom 12B
Model repository: DeathGodlike/OrobasVault_Geodesic-Phantom-12B_EXL3