Improve model card: Add paper details, metadata, and usage for Switch Generation

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +65 -150
README.md CHANGED
@@ -1,199 +1,114 @@
  ---
  library_name: transformers
  tags: []
  ---

- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-

  ## Model Details

  ### Model Description

- <!-- Provide a longer summary of what this model is. -->

- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

- ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
  ### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]

  ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

  ## How to Get Started with the Model

- Use the code below to get started with the model.

- [More Information Needed]

- ## Training Details

- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->

- ### Testing Data, Factors & Metrics

- #### Testing Data

- <!-- This should link to a Dataset Card if possible. -->

- [More Information Needed]

- #### Factors

- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

- [More Information Needed]

- #### Metrics

- <!-- These are the evaluation metrics being used, ideally with a description of why. -->

- [More Information Needed]

- ### Results

- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

- [More Information Needed]

- ## More Information [optional]

- [More Information Needed]

- ## Model Card Authors [optional]

- [More Information Needed]

- ## Model Card Contact

- [More Information Needed]
  ---
  library_name: transformers
+ pipeline_tag: text-generation
+ license: apache-2.0
  tags: []
  ---

+ # Model Card for Switch Generation

+ This repository contains the `Switch Generation` framework and its associated switcher models, as presented in the paper [Don't Throw Away Your Pretrained Model](https://huggingface.co/papers/2510.09913).

  ## Model Details

  ### Model Description

+ Switch Generation is a model collaboration framework that leverages the strengths of both pretrained and aligned language models. It addresses the tradeoffs of alignment training, where aligned models excel in reasoning and instruction following but may lose creativity and calibration, by letting pretrained and aligned model versions dynamically take turns to "speak" in a response sequence.

+ Specifically, the framework trains a "switcher LM" on the outcomes of choosing different models to generate the next segment across diverse queries and contexts. At inference time, the switcher LM directs the different model checkpoints to generate the next segment wherever their strengths are most needed. Extensive experiments show that model collaboration consistently outperforms individual models, with Switch Generation improving performance further. The approach discovers compositional skills for solving complex problems, generalizes to unseen models and tasks, and effectively reuses and repurposes by-products of expensive model training.
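
+ As a rough illustration of what this segment-level switching loop looks like in code, here is a minimal sketch assuming generic `transformers` causal LMs. The model names, segment length, and switcher prompt below are hypothetical and simplified; see `main_generate.py` in the repository for the actual implementation.

+ ```python
+ # Minimal conceptual sketch (hypothetical names, not the official implementation):
+ # a switcher LM decides which candidate model writes the next response segment.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ candidates = {
+     "P": "meta-llama/Llama-3.1-8B",      # pretrained (assumed example)
+     "A": "allenai/Llama-3.1-Tulu-3-8B",  # aligned
+ }
+ switcher_name = "bunsenfeng/PFA_switcher_1"
+
+ toks = {k: AutoTokenizer.from_pretrained(v) for k, v in candidates.items()}
+ lms = {k: AutoModelForCausalLM.from_pretrained(v, device_map="auto") for k, v in candidates.items()}
+ sw_tok = AutoTokenizer.from_pretrained(switcher_name)
+ sw_lm = AutoModelForCausalLM.from_pretrained(switcher_name, device_map="auto")
+
+ def pick_model(query, partial):
+     # Ask the switcher which candidate should continue (prompt format is illustrative).
+     prompt = f"Query: {query}\nResponse so far: {partial}\nWhich model continues, P or A?\nAnswer:"
+     ids = sw_tok(prompt, return_tensors="pt").to(sw_lm.device)
+     out = sw_lm.generate(**ids, max_new_tokens=3, do_sample=False)
+     answer = sw_tok.decode(out[0, ids["input_ids"].shape[1]:], skip_special_tokens=True)
+     return "P" if "P" in answer else "A"
+
+ def switch_generate(query, total_max_length=256, segment_tokens=32):
+     # Alternate candidate models segment by segment, as guided by the switcher.
+     response, generated = "", 0
+     while generated < total_max_length:
+         key = pick_model(query, response)
+         tok, lm = toks[key], lms[key]
+         ids = tok(query + response, return_tensors="pt").to(lm.device)
+         out = lm.generate(**ids, max_new_tokens=segment_tokens, do_sample=True)
+         segment = tok.decode(out[0, ids["input_ids"].shape[1]:], skip_special_tokens=True)
+         if not segment.strip():
+             break
+         response += segment
+         generated += segment_tokens
+     return response
+ ```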

+ - **Developed by:** Yuhui Li, Fangyun Wei, Chao Zhang, Hongyang Zhang
+ - **Model type:** Causal Language Model (LoRA adapter) within a Mixture of Experts (MoE) text generation framework.
+ - **Language(s) (NLP):** English
+ - **License:** Apache 2.0
+ - **Finetuned from model:** `allenai/Llama-3.1-Tulu-3-8B`

+ ### Model Sources

+ - **Repository:** https://github.com/BunsenFeng/switch_generation
+ - **Paper:** https://huggingface.co/papers/2510.09913

  ## Uses

  ### Direct Use

+ Switch Generation is intended for enhancing text generation by combining the strengths of multiple language models. It can be used to generate responses that benefit from the instruction-following ability of aligned models and the creativity and calibration of unaligned base models, with the switcher LM orchestrating the dynamic collaboration.

  ### Out-of-Scope Use

+ This model is not intended for standalone use as a general-purpose text generation model; it is designed to operate within the full Switch Generation framework, which involves multiple candidate models. Using it outside that framework may lead to suboptimal performance. Users should also be aware of potential biases inherited from the underlying foundation models used in the collaboration.

  ## How to Get Started with the Model

+ To get started with the Switch Generation framework, follow the "Quick Start" instructions from the official GitHub repository.

+ #### Initialization

+ Create a conda environment for Switch Generation:
+ ```bash
+ conda env create -f switch.yml
+ conda activate switch_generation
+ ```

+ Log in to Hugging Face (for model access):
+ ```bash
+ huggingface-cli login
+ ```

+ #### Execute your first Switch Generation inference

+ ```bash
+ bash main.sh
+ ```

+ `main.sh` by default contains:

+ ```
+ python main_generate.py \
+ --input data/input_sample.jsonl \
+ --gpu_ids 0,1,2,3 \
+ --overide_selector_path bunsenfeng/PFA_switcher_1 \
+ --total_max_length 256
+ ```

+ `--input`: a JSONL file of inputs; see `data/input_sample.jsonl` for an example of how to prepare your custom inputs (a hypothetical preparation sketch follows these flag descriptions). The output is written to the same directory, as `data/input_sample_switch_generation.jsonl`.

+ `--gpu_ids`: a comma-separated string of GPU ids; 4 GPUs are needed (one each for the P, F, and A candidate models and the switcher).

+ `--overide_selector_path`: path to the switcher LM on the Hugging Face Hub. We provide `bunsenfeng/PFA_switcher_1` and `bunsenfeng/PFA_switcher_2`, which differ in task and training exposure; you can also try the aligned model itself (`allenai/Llama-3.1-Tulu-3-8B`) or any model that can follow instructions.

+ `--total_max_length`: essentially `max_new_tokens`.
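
+ For reference, a hypothetical way to prepare a custom input file is sketched below. The field name `"input"` and the output path are assumptions for illustration only; check `data/input_sample.jsonl` for the actual schema expected by `main_generate.py`.

+ ```python
+ # Hypothetical input-preparation sketch; verify the field names against
+ # data/input_sample.jsonl before running main_generate.py.
+ import json
+
+ queries = [
+     "Write a short poem about autumn.",
+     "Explain the difference between TCP and UDP.",
+ ]
+
+ with open("data/my_inputs.jsonl", "w") as f:
+     for q in queries:
+         f.write(json.dumps({"input": q}) + "\n")  # assumed: one JSON object per line
+ ```

+ Then point `--input` at `data/my_inputs.jsonl` when invoking `main_generate.py`.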

+ #### Other Settings

+ Your own data: format it like `data/input_sample.jsonl`.

+ Your own candidate models: change lines 46-48 in `main_generate.py` (a hypothetical illustration follows below). Make sure `--gpu_ids` provides n+1 GPU ids, where n is the number of candidate models; you are not limited to 3 models. Another recommended set is `["Qwen/Qwen2.5-7B", "bunsenfeng/yuru_qw_oasst1", "Qwen/Qwen2.5-7B-Instruct"]`, where the middle model is an SFT model we trained in [this work](https://arxiv.org/abs/2506.04721).
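
+ A rough illustration of the kind of edit meant above; the variable name `candidate_model_paths` is hypothetical, and the actual contents of lines 46-48 in `main_generate.py` may differ:

+ ```python
+ # Hypothetical: swap in your own candidate pool (pretrained, SFT, aligned).
+ candidate_model_paths = [
+     "Qwen/Qwen2.5-7B",            # pretrained base
+     "bunsenfeng/yuru_qw_oasst1",  # SFT checkpoint
+     "Qwen/Qwen2.5-7B-Instruct",   # aligned / instruction-tuned
+ ]
+ # --gpu_ids must then supply len(candidate_model_paths) + 1 GPU ids
+ # (the extra one is for the switcher LM).
+ ```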

+ What's pending: code for switcher training, code for the evaluations in the paper, compatibility improvements such as running with fewer than n+1 GPUs, etc.

+ ## Training Details

+ ### Training Procedure

+ The Switch Generation framework trains a "switcher LM" on the outcomes of choosing different models to generate subsequent segments of text across a diverse range of queries and contexts. This allows the switcher to dynamically identify and leverage the strengths of the candidate models at inference time.
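
+ The switcher training code is not released yet (see "What's pending" above), but the supervision signal can be sketched roughly as follows. Everything here (the rollout, scoring function, and data format) is an assumption for illustration, not the released procedure.

+ ```python
+ # Hypothetical sketch of outcome-based switcher supervision: for a given context,
+ # try each candidate model for the next segment, roll out to a full response,
+ # score the outcome, and label the context with the best-scoring candidate.
+ def build_switcher_example(query, partial_response, candidates, rollout, score):
+     # candidates: dict name -> fn(query, partial) -> next segment
+     # rollout:    fn(query, partial) -> completed response
+     # score:      fn(query, response) -> float task outcome
+     outcomes = {}
+     for name, gen_segment in candidates.items():
+         segment = gen_segment(query, partial_response)
+         outcomes[name] = score(query, rollout(query, partial_response + segment))
+     best = max(outcomes, key=outcomes.get)
+     return {"query": query, "context": partial_response, "label": best}
+ ```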

+ ## Evaluation

+ Extensive experiments were conducted with 8 model collaboration baselines on 18 datasets. The key findings are:
+ 1. Model collaboration consistently outperforms individual models on 16 out of 18 tasks.
+ 2. Switch Generation further outperforms baselines by 12.9% on average.

+ Further analysis reveals that Switch Generation discovers compositional skills to solve problems where individual models struggle, and generalizes to unseen models and tasks, reusing and repurposing by-products of expensive model training pipelines that would otherwise be discarded.

+ ## Citation

+ If Switch Generation is helpful to you, please consider citing the paper:

+ ```bibtex
+ @article{li2025dont,
+   title={{Don't Throw Away Your Pretrained Model}},
+   author={Li, Yuhui and Wei, Fangyun and Zhang, Chao and Zhang, Hongyang},
+   journal={arXiv preprint arXiv:2510.09913},
+   year={2025}
+ }
+ ```