# SymbioGPT-Gemma-Fused-v2
Cross-species LoRA projection from Gemma-270M into SymbioGPT-10M with blend alpha = 0.5 (a stronger injection). This is v2 of SymbioGPT-Gemma-Fused, which used alpha = 0.3.
## What Changed vs v1
| Property | v1 | v2 |
|---|---|---|
| Blend alpha | 0.3 | 0.5 |
| Delta/weight ratio | 1.4%–4.0% | 2.3%–6.6% |
| PCA calibration | Same | Same (99.0% avg variance) |
Everything else is identical: same source LoRA (rank 44, alpha 88, philosophy domain), same PCA projection matrices, same GQA-to-MHA head mapping.
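The projection-and-blend step can be sketched as below. The function name, tensor shapes, and the assumption that the PCA matrix `P` is applied on both sides of the delta are illustrative guesses, not the actual symbiogenesis implementation; only the dimensions (640 → 320) and the alpha values come from this card.

```python
import torch

def blend_projected_delta(w_target, delta_source, P, alpha=0.5):
    """Project a source-model LoRA delta into the target weight space,
    then blend it into the target weight (sketch, not the real API).

    w_target:     target weight matrix, e.g. (320, 320)
    delta_source: LoRA delta from the source model, e.g. (640, 640)
    P:            PCA projection matrix (640, 320) from calibration
    alpha:        blend strength (0.3 in v1, 0.5 in v2)
    """
    # Project the source delta down to target dimensions: P^T @ delta @ P
    delta_proj = P.T @ delta_source @ P  # (320, 320)
    return w_target + alpha * delta_proj

# Example with random tensors (shapes follow the 640 -> 320 PCA mapping)
w = torch.randn(320, 320)
delta = torch.randn(640, 640)
P = torch.randn(640, 320)
fused = blend_projected_delta(w, delta, P, alpha=0.5)
```

Because the injection is linear in alpha, moving from 0.3 to 0.5 simply scales the applied delta by the same factor that the delta/weight ratios in the table above grew by.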
## Architecture
See the v1 model card for full details on:
- Architecture mapping (Gemma GQA → SymbioGPT MHA)
- PCA projection method (640 → 320 dims, 99.0% variance)
- Layer mapping (18 → 8 proportional grouping)
- Head selection (top-5 Q-heads by delta L2 norm)
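A proportional 18 → 8 layer grouping like the one listed above could look like the following. This is a minimal sketch; the exact grouping rule used by symbiogenesis may differ, and the function name is hypothetical.

```python
def map_layers(n_source=18, n_target=8):
    """Proportionally group source layers onto target layers (sketch).

    Returns a dict: target layer index -> list of source layer indices.
    With 18 -> 8, each target layer receives 2 or 3 source layers.
    """
    groups = {t: [] for t in range(n_target)}
    for s in range(n_source):
        # Scale the source index into the target range
        t = min(s * n_target // n_source, n_target - 1)
        groups[t].append(s)
    return groups

mapping = map_layers()
```

Every source layer lands in exactly one target group, so all 18 Gemma layers contribute to the 8 SymbioGPT layers.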
## Usage

```python
import torch

# Load the fused checkpoint (CPU tensors)
checkpoint = torch.load("symbio_gemma_fused.pt", map_location="cpu")
```

Load the checkpoint into a SymbioGPT model instance from symbiogenesis.
## Links
- v1 (alpha=0.3): LisaMegaWatts/SymbioGPT-Gemma-Fused
- Source LoRA: LisaMegaWatts/SymbioSLM-ouroboros-lora-20260301
- Base Gemma: LisaMegaWatts/Ouroboros-1MContext-Gemma-270m
- Framework: symbiogenesis
## Model tree for LisaMegaWatts/SymbioGPT-Gemma-Fused-v2

- Base model: google/gemma-3-270m
- Finetuned: google/gemma-3-270m-it