---
license: mit
library_name: transformers
pipeline_tag: text-generation
---

# SHINE: A Scalable In-Context Hypernetwork for Mapping Context to LoRA in a Single Pass

SHINE (Scalable Hyper In-context NEtwork) is a scalable hypernetwork that maps diverse, meaningful contexts into high-quality LoRA adapters for large language models (LLMs).

By reusing the frozen LLM's own parameters in an in-context hypernetwork design, SHINE transforms in-context knowledge into in-parameter knowledge in a single forward pass. This allows the model to handle complex question-answering tasks related to a specific context without needing to process that context again during inference.

## Introduction

SHINE overcomes key limitations of prior hypernetworks by achieving strong expressive power with a relatively small number of parameters. It updates LLM parameters without any fine-tuning, significantly saving time, computation, and memory costs compared to standard supervised fine-tuning (SFT) adaptation.

## Usage

This is the hypernetwork checkpoint after pretraining and instruction fine-tuning on the mqa task.

For detailed instructions on environment setup, downloading model checkpoints, and running inference (including the inference.ipynb notebook), please refer to the official GitHub repository.