SymbioGPT-Gemma-Fused-v2

Cross-species LoRA projection from Gemma-270M into SymbioGPT-10M with blend alpha = 0.5 (stronger injection). This is v2 of SymbioGPT-Gemma-Fused, which used alpha = 0.3.

What Changed vs v1

Property              v1            v2
Blend alpha           0.3           0.5
Delta/weight ratio    1.4% - 4.0%   2.3% - 6.6%
PCA calibration       Same          Same (99.0% avg variance)

Everything else is identical: same source LoRA (rank 44, alpha 88, philosophy domain), same PCA projection matrices, same GQA-to-MHA head mapping.
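
For intuition, the blend alpha is a scalar applied to the projected LoRA delta before it is added into the target weight, which is why the delta/weight ratios grow by roughly 0.5 / 0.3 ≈ 1.67x from v1 to v2. A minimal sketch, assuming the fusion is a simple additive injection (the function name and arguments are illustrative, not the actual fusion code):

import torch

def inject_projected_delta(target_weight: torch.Tensor,
                           projected_delta: torch.Tensor,
                           blend_alpha: float = 0.5) -> torch.Tensor:
    # v1 used blend_alpha = 0.3; v2 uses 0.5, so the injected delta is larger
    # relative to the original SymbioGPT weight.
    return target_weight + blend_alpha * projected_delta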

Architecture

See v1 model card for full details on:

  • Architecture mapping (Gemma GQA → SymbioGPT MHA)
  • PCA projection method (640 → 320 dims, 99.0% variance)
  • Layer mapping (18 → 8 proportional grouping)
  • Head selection (top-5 Q-heads by delta L2 norm; see the sketch after this list)
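
For the PCA projection and head-selection bullets above, here is a minimal sketch. It assumes the LoRA deltas are materialized as dense matrices and that calibration activations from the Gemma hidden space are used to fit the PCA; fit_pca_projection and select_top_q_heads are hypothetical helper names, not the actual fusion code.

import torch

def fit_pca_projection(calib_acts: torch.Tensor, out_dim: int = 320) -> torch.Tensor:
    # calib_acts: (num_samples, 640) calibration activations in the Gemma hidden space.
    centered = calib_acts - calib_acts.mean(dim=0)
    _, s, vh = torch.linalg.svd(centered, full_matrices=False)
    # Explained variance of the kept components is (s[:out_dim]**2).sum() / (s**2).sum(),
    # which the v1 card reports as ~99.0% on average.
    return vh[:out_dim].T   # (640, 320) projection matrix

def select_top_q_heads(delta_q: torch.Tensor, num_heads: int, k: int = 5) -> torch.Tensor:
    # delta_q: (num_heads * head_dim, hidden) LoRA delta for one Q projection.
    head_dim = delta_q.shape[0] // num_heads
    per_head = delta_q.view(num_heads, head_dim, -1)
    norms = per_head.flatten(1).norm(dim=1)   # L2 norm of each head's delta
    return norms.topk(k).indices              # indices of the top-5 Q-heads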

Usage

import torch
checkpoint = torch.load("symbio_gemma_fused.pt", map_location="cpu")

Load into a SymbioGPT model instance from symbiogenesis.
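
A minimal loading sketch, assuming symbiogenesis exposes a SymbioGPT class whose default configuration matches the 10M checkpoint; the import path, constructor, and checkpoint keys below are assumptions, not confirmed API:

import torch
from symbiogenesis import SymbioGPT   # assumed import path

checkpoint = torch.load("symbio_gemma_fused.pt", map_location="cpu")
state_dict = checkpoint.get("model", checkpoint)   # assumed key; the file may be a bare state_dict
model = SymbioGPT()                                # assumed: default config matches the fused checkpoint
model.load_state_dict(state_dict)
model.eval()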
