Add model card and metadata

#1 opened by nielsr
Files changed (1)
  1. README.md +46 -3
README.md CHANGED
@@ -1,3 +1,46 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ pipeline_tag: text-generation
+ ---
+
+ # SHINE: A Scalable In-Context Hypernetwork for Mapping Context to LoRA in a Single Pass
+
+ SHINE (Scalable Hyper In-context NEtwork) is a scalable hypernetwork that maps diverse, meaningful contexts into high-quality LoRA adapters for large language models (LLMs) in a single pass.
+
+ - **Paper:** [SHINE: A Scalable In-Context Hypernetwork for Mapping Context to LoRA in a Single Pass](https://huggingface.co/papers/2602.06358)
+ - **Repository:** [https://github.com/Yewei-Liu/SHINE](https://github.com/Yewei-Liu/SHINE)
+
+ ## Description
+
+ SHINE overcomes key limitations of prior hypernetworks by reusing the frozen LLM's own parameters in an in-context hypernetwork design, achieving strong expressive power with a relatively small number of trainable parameters. It converts in-context knowledge into in-parameter knowledge (a LoRA adapter) in a single forward pass, so complex question-answering tasks about a context can be handled without the context being present in the prompt at inference time. This significantly reduces time, computation, and memory costs compared to standard Supervised Fine-Tuning (SFT).
+
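As background, here is a toy NumPy caricature of the idea (my own illustrative sketch, not code from the SHINE repository): a LoRA adapter stores a weight update as a low-rank product `B @ A`, and a hypernetwork's job is to emit such factors from a context in one forward pass. The shapes and the random linear map standing in for the hypernetwork are made up; SHINE's real design reuses the frozen LLM itself.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                              # hidden size and LoRA rank (toy values)
W = rng.normal(size=(d, d))              # frozen backbone weight

# Toy "hypernetwork": a single linear map from a context vector to the
# flattened LoRA factors. This is a stand-in, not the SHINE architecture.
H = 0.1 * rng.normal(size=(2 * d * r, d))

def context_to_lora(ctx):
    """One forward pass: context vector -> (B, A) LoRA factors."""
    flat = H @ ctx
    B = flat[: d * r].reshape(d, r)      # "up" factor, d x r
    A = flat[d * r:].reshape(r, d)       # "down" factor, r x d
    return B, A

ctx = rng.normal(size=d)                 # stand-in for an encoded context
B, A = context_to_lora(ctx)              # a single pass, no gradient steps

# The context's knowledge now lives in the parameters: inference uses the
# merged weight W + B @ A, and the context itself is no longer in the prompt.
x = rng.normal(size=d)
y_adapted = (W + B @ A) @ x
y_base = W @ x
```

The point of the sketch is the data flow: one forward pass through the hypernetwork replaces the many gradient steps SFT would need to bake the same context into the weights.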
+ ## Quick Start
+
+ ### Environment Setup
+
+ To set up the environment, follow these steps:
+
+ ```bash
+ conda create -n shine python==3.12 -y
+ conda activate shine
+ # Change the PyTorch version based on your device
+ pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
+ pip install huggingface==0.0.1 modelscope==1.31.0 transformers==4.57.1 datasets==4.4.1 scikit-learn==1.7.2 hydra-core==1.3.2 tensorboard==2.20.0 openai==2.6.1 rouge==1.0.1 seaborn==0.13.2 matplotlib==3.10.7 multiprocess==0.70.16
+ ```
+
+ ### Inference
+
+ After downloading the backbone LLM and the hypernetwork checkpoints, you can use the `inference.ipynb` notebook provided in the [official repository](https://github.com/Yewei-Liu/SHINE) for a quick demonstration of the method.
+
+ ## Citation
+
+ If you find this work useful, please cite the paper:
+
+ ```bibtex
+ @article{liu2025shine,
+   title={SHINE: A Scalable In-Context Hypernetwork for Mapping Context to LoRA in a Single Pass},
+   author={Yewei Liu and others},
+   journal={arXiv preprint arXiv:2602.06358},
+   year={2025}
+ }
+ ```