# HippoFormer-Gemma2B


HippoFormer is a biologically inspired memory architecture that brings hippocampal-style memory consolidation to large language models. This model integrates those hippocampal mechanisms directly into the Gemma-2B transformer.

## Model Description

HippoFormer adds three key components inspired by how the human hippocampus processes memories:

| Component | Inspiration | Function |
|---|---|---|
| Salience Gate | Sharp Wave Ripples (SPW-Rs) | Dual-pathway importance scoring |
| Memory Buffer | Sleep Replay | Priority-based consolidation |
| Drift Calibrator | Synaptic Homeostasis | Embedding stability |
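The table describes the Salience Gate only at a high level. A minimal sketch of what a dual-pathway importance scorer could look like in PyTorch (the module name, pathway design, and layer sizes are assumptions for illustration, not the released implementation):

```python
import torch
import torch.nn as nn

class SalienceGate(nn.Module):
    """Illustrative dual-pathway importance scorer (hypothetical)."""

    def __init__(self, hidden_dim: int = 2048):
        super().__init__()
        # "Fast" pathway: a cheap linear probe of each hidden state.
        self.fast = nn.Linear(hidden_dim, 1)
        # "Slow" pathway: a small MLP for a richer importance estimate.
        self.slow = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim // 4),
            nn.GELU(),
            nn.Linear(hidden_dim // 4, 1),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Sum both pathways, then squash to a per-token salience in [0, 1].
        score = self.fast(hidden_states) + self.slow(hidden_states)
        return torch.sigmoid(score).squeeze(-1)

gate = SalienceGate(hidden_dim=64)
h = torch.randn(2, 10, 64)   # (batch, seq, hidden)
salience = gate(h)
print(salience.shape)        # torch.Size([2, 10])
```

The per-token scores from such a gate are what the Memory Buffer below would consume as priorities.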

## Architecture

```
Input Tokens → Gemma-2B (frozen + LoRA) → Hidden States
    → Salience Gate (importance scoring)
    → Drift Calibrator (stability)
    → Memory Buffer (consolidation)
    → Output Fusion (cross-attention)
    → Output Logits
```
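One way to read the "Memory Buffer (consolidation)" stage is as a fixed-capacity store that keeps the highest-salience entries and evicts the lowest-priority ones first. A minimal sketch under that assumed semantics (not the released implementation):

```python
import heapq

class MemoryBuffer:
    """Illustrative priority-based consolidation buffer (hypothetical).

    Keeps the `capacity` highest-priority items; the lowest-priority
    entry is evicted when a more important one arrives.
    """

    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self._heap = []    # min-heap of (priority, insertion_order, item)
        self._count = 0

    def add(self, priority: float, item):
        entry = (priority, self._count, item)
        self._count += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        elif priority > self._heap[0][0]:
            # Evict the current lowest-priority entry in one step.
            heapq.heapreplace(self._heap, entry)

    def items(self):
        # Return stored items, highest priority first.
        return [item for _, _, item in sorted(self._heap, reverse=True)]

buf = MemoryBuffer(capacity=2)
for priority, token in [(0.9, "Paris"), (0.1, "the"), (0.7, "capital")]:
    buf.add(priority, token)
print(buf.items())  # ['Paris', 'capital']
```

Note how the low-salience function word "the" is evicted once a higher-priority entry arrives, matching the selective-retention behavior reported below.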

## Results

### Perplexity (WikiText-2)

| Model | Parameters | Perplexity |
|---|---|---|
| GPT-2 | 124M | 29.41 |
| Gemma-2B | 2B | ~18 |
| HippoFormer | 2B + 15M | 11.83 |
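For reference, perplexity is the exponential of the mean token-level cross-entropy loss (in nats), so the reported numbers can be converted back into loss values; the losses below are back-derived for illustration, not measured:

```python
import math

def perplexity(mean_nll: float) -> float:
    """Perplexity from a mean negative log-likelihood in nats."""
    return math.exp(mean_nll)

# A reported PPL of 11.83 corresponds to a mean token-level loss of
# ln(11.83) ≈ 2.47 nats; GPT-2's 29.41 corresponds to ≈ 3.38 nats.
loss = math.log(11.83)
print(round(perplexity(loss), 2))  # 11.83
```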

### Ablation Study

| Configuration | PPL | Impact |
|---|---|---|
| Full HippoFormer | 11.83 | baseline |
| No Salience Gate | 39.75 | +27.92 |
| No Memory Buffer | 89.84 | +78.01 |

### Brain-Like Behavior

| Metric | Value | Interpretation |
|---|---|---|
| Content/Function Ratio | 2.11x | Selective memory (content words tagged more) |
| Long-Range Benefit | +6.95 PPL | Better context retention |
| Buffer Priority | 4.9/5.0 | High-importance retention |
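The Content/Function Ratio is the mean salience assigned to content words divided by the mean salience assigned to function words. A toy computation of that metric with made-up salience values (the real ratio of 2.11x comes from the model's own scores):

```python
# Hypothetical per-word salience scores, for illustration only.
content_salience = {"capital": 0.82, "France": 0.91, "Paris": 0.88}
function_salience = {"the": 0.40, "of": 0.38, "is": 0.45}

def mean(values):
    values = list(values)
    return sum(values) / len(values)

ratio = mean(content_salience.values()) / mean(function_salience.values())
print(round(ratio, 2))  # 2.12
```

A ratio well above 1.0 indicates the gate tags content words more strongly than function words, i.e. selective rather than uniform memory.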

## Usage

```python
from hippoformer import HippoFormer, HippoFormerConfig
from huggingface_hub import hf_hub_download
import torch

# Download checkpoint
ckpt_path = hf_hub_download(
    repo_id="Gustav-Proxi/HippoFormer-Gemma2B",
    filename="pytorch_model.pt"
)

# Initialize model
config = HippoFormerConfig(
    base_model_name="google/gemma-2b",
    freeze_base=True,
    use_lora=True,
)
model = HippoFormer(config)

# Load weights
ckpt = torch.load(ckpt_path, map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"], strict=False)

# Generate
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```

## Training

  • Base Model: Gemma-2B (frozen with LoRA)
  • Dataset: WikiText-2
  • Hardware: NVIDIA RTX 4090 (24GB)
  • Training Time: ~24 hours
  • Best Checkpoint: step-110000
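A minimal sketch of the "frozen base + LoRA" setup using Hugging Face PEFT. The rank, alpha, and target modules below are assumptions for illustration; the card does not state the exact LoRA hyperparameters:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model and freeze all of its parameters.
base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
for p in base.parameters():
    p.requires_grad = False

# Attach LoRA adapters; r, alpha, and target_modules are assumed values.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the adapters remain trainable
```

This keeps the 2B base weights fixed while training only the small adapter matrices, consistent with the "2B + 15M" parameter count reported above.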

## Citation

```bibtex
@misc{hippoformer2025,
  title={HippoFormer: Hippocampal Memory Selection for Transformers},
  author={Vaishak Girish Kumar and Sanika},
  year={2025},
  howpublished={\url{https://github.com/Gustav-Proxi/HippoFormer}},
}
```

## License

Apache 2.0
