• Model description: TinyLlama 1.1B with K of its 32 softmax attention heads replaced by FAVOR+ linear attention, fine-tuned via knowledge distillation (an illustrative FAVOR+ sketch follows this list)
  • Intended use: Research — evaluating quality/speed/approximation trade-offs of linear attention substitution
  • How to load: code snippet showing how to reconstruct MixedPerformerAttention and load the checkpoint (see the loading sketch after this list)
  • Training details: WikiText-103, 20k samples, SEQ_LEN=256, distillation loss, 4-phase curriculum
  • Results table: same as the README (perplexity per phase)
  • Limitations: Phase 4 (32/32 heads) collapsed — not suitable for inference. Phase 2 is the recommended checkpoint.
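
For readers unfamiliar with FAVOR+, the sketch below shows the core idea behind the linearised heads: the softmax kernel is approximated with positive random features so attention costs grow linearly in sequence length. It is a simplified, non-causal illustration under assumed names and shapes (`favor_plus_features`, `linear_attention`, the number of random features), not the repo's actual MixedPerformerAttention implementation.

```python
# Illustrative FAVOR+ (Performer) attention in PyTorch. All names, shapes and
# the number of random features are assumptions for exposition only; the
# checkpoint's MixedPerformerAttention may differ (e.g. causal prefix sums,
# orthogonal random features, periodic feature redraws).
import math
import torch

def favor_plus_features(x: torch.Tensor, proj: torch.Tensor) -> torch.Tensor:
    # Positive random features: phi(x) = exp(w^T x - ||x||^2 / 2) / sqrt(m),
    # an unbiased estimator of the (scaled) softmax kernel.
    m = proj.shape[0]
    wx = x @ proj.T                                   # (..., seq, m)
    sq_norm = (x ** 2).sum(dim=-1, keepdim=True) / 2  # (..., seq, 1)
    return torch.exp(wx - sq_norm) / math.sqrt(m)

def linear_attention(q, k, v, proj):
    # Non-causal linear attention: O(seq * m * d) instead of O(seq^2 * d).
    # A causal LM like TinyLlama would use prefix sums over k'/v instead.
    q_prime = favor_plus_features(q, proj)            # (..., seq, m)
    k_prime = favor_plus_features(k, proj)            # (..., seq, m)
    kv = torch.einsum("...sm,...sd->...md", k_prime, v)
    normalizer = q_prime @ k_prime.sum(dim=-2, keepdim=True).transpose(-1, -2)
    return (q_prime @ kv) / (normalizer + 1e-6)       # (..., seq, d)

# One head, 256 tokens, head dim 64, 128 random features (all assumed values).
d, m = 64, 128
proj = torch.randn(m, d)
q = torch.randn(1, 256, d) / d ** 0.25                # d^(-1/4) scaling so the
k = torch.randn(1, 256, d) / d ** 0.25                # kernel matches softmax(QK^T / sqrt(d))
v = torch.randn(1, 256, d)
out = linear_attention(q, k, v, proj)                 # (1, 256, 64)
```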
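
A minimal loading sketch follows. It assumes the training code exposes MixedPerformerAttention from a local module (here called `performer`), that the base model is the standard TinyLlama 1.1B checkpoint, and that the recommended Phase 2 weights sit in a file such as `phase2.pt` with 16 of 32 heads linearised; the constructor arguments, file name and head count are guesses, so check the repo files and README for the actual values.

```python
# Hedged loading sketch: module name, constructor signature, base model id,
# checkpoint file name and head split are assumptions, not repo facts.
import torch
from transformers import AutoModelForCausalLM
from huggingface_hub import hf_hub_download

from performer import MixedPerformerAttention  # assumed local module from the training code

BASE = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"    # assumed base checkpoint
REPO = "antoinechss/performer-checkpoints"

model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)

# Swap the self-attention of every decoder layer for the mixed softmax/FAVOR+
# module (constructor arguments are hypothetical).
for layer in model.model.layers:
    layer.self_attn = MixedPerformerAttention(model.config, num_linear_heads=16)

# Load the distilled Phase 2 weights (file name assumed; list the repo files to confirm).
ckpt_path = hf_hub_download(REPO, filename="phase2.pt")
state_dict = torch.load(ckpt_path, map_location="cpu")
model.load_state_dict(state_dict, strict=False)
model.eval()
```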