nielsr (HF Staff) committed
Commit 2f5dbdb · verified · 1 Parent(s): 29d88e2

Improve model card for Switch Generation model with paper, GitHub links, usage, and metadata


This PR significantly improves the model card for the "Switch Generation" model.

Key updates include:
* **Comprehensive Description**: The boilerplate text has been replaced with a detailed summary derived from the paper's abstract, explaining the novel "Switch Generation" concept.
* **Metadata Enrichment**:
  * The `pipeline_tag: text-generation` has been added for better discoverability on the Hugging Face Hub.
  * Relevant tags such as `llama`, `model-collaboration`, and `instruction-following` have been included.
  * The `base_model` has been explicitly listed (`allenai/Llama-3.1-Tulu-3-8B`).
  * The license is set to `other`, as no explicit license was found in the source materials.
* **Linked Resources**: Direct links to the academic paper ([Don't Throw Away Your Pretrained Model](https://huggingface.co/papers/2510.09913)) and the associated GitHub repository (`https://github.com/BunsenFeng/switch_generation`) have been added.
* **Getting Started Guide**: A "How to Get Started" section, including code snippets for environment setup and inference, has been extracted directly from the GitHub README.

These changes make the model card much more informative and user-friendly for researchers and practitioners.

Files changed (1)
  README.md (+64 −69)
README.md CHANGED
@@ -1,127 +1,130 @@
  ---
  library_name: transformers
- tags: []
  ---

- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-

  ## Model Details

  ### Model Description

- <!-- Provide a longer summary of what this model is. -->

- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

- ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

- ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

- ### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

- ### Downstream Use [optional]

- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

- [More Information Needed]

- ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

- [More Information Needed]

- ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->

- [More Information Needed]

- ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

- ## How to Get Started with the Model

- Use the code below to get started with the model.

- [More Information Needed]

  ## Training Details

  ### Training Data

- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
  [More Information Needed]

  ### Training Procedure

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

  #### Preprocessing [optional]

  [More Information Needed]

-
  #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

  #### Speeds, Sizes, Times [optional]

- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
  [More Information Needed]

  ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->
-
  ### Testing Data, Factors & Metrics

  #### Testing Data

- <!-- This should link to a Dataset Card if possible. -->
-
  [More Information Needed]

  #### Factors

- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
  [More Information Needed]

  #### Metrics

- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
  [More Information Needed]

  ### Results
@@ -130,25 +133,19 @@ Use the code below to get started with the model.

  #### Summary

-
-
  ## Model Examination [optional]

- <!-- Relevant interpretability work for the model goes here -->
-
  [More Information Needed]

  ## Environmental Impact

- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
  Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]

  ## Technical Specifications [optional]

@@ -168,9 +165,9 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

  [More Information Needed]

- ## Citation [optional]

- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

  **BibTeX:**

@@ -182,8 +179,6 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

  ## Glossary [optional]

- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
  [More Information Needed]

  ## More Information [optional]
 
  ---
  library_name: transformers
+ pipeline_tag: text-generation
+ license: other
+ tags:
+ - llama
+ - model-collaboration
+ - instruction-following
+ base_model: allenai/Llama-3.1-Tulu-3-8B
  ---

+ # Model Card for Switch Generation

+ This model implements **Switch Generation**, a novel approach presented in the paper [Don't Throw Away Your Pretrained Model](https://huggingface.co/papers/2510.09913). Switch Generation gets the best of both worlds from pretrained and aligned model versions by letting them "speak" in turns within a single response sequence. The method addresses the tradeoffs of alignment training through model collaboration: a "switcher LM" dynamically directs different model checkpoints to generate the segments where their strengths are most needed. Extensive experiments show that Switch Generation consistently outperforms individual models and baselines, discovering compositional skills and reusing by-products of expensive training pipelines.

  ## Model Details

  ### Model Description

+ Switch Generation is a model collaboration framework designed to overcome the limitations of alignment training, which can erode skills such as creativity and calibration in which unaligned base models often excel. The core idea is to train a "switcher LM" that learns to choose among different models (e.g., a pretrained base model and an aligned version) to generate the next segment of text. This dynamic switching lets the system harness the unique strengths of each participating model, improving performance on tasks that demand diverse skills such as reasoning, instruction following, creativity, and calibration. The approach generalizes to unseen models and tasks by repurposing existing model assets.
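
+ To make the collaboration concrete, below is a minimal Python sketch of a segment-level switching loop. It is an illustrative assumption about the general structure rather than the repository's actual implementation (see `main_generate.py` for that); the `Candidate` type, `choose` callback, and segment length are hypothetical placeholders.

+ ```
+ # Minimal sketch of segment-level switch generation (hypothetical structure;
+ # the real implementation lives in main_generate.py of the GitHub repo).
+ from dataclasses import dataclass
+ from typing import Callable, Dict
+
+ @dataclass
+ class Candidate:
+     name: str                            # e.g. "P" (pretrained), "F" (SFT), "A" (aligned)
+     generate: Callable[[str, int], str]  # (context, max_new_tokens) -> next text segment
+
+ def switch_generate(
+     candidates: Dict[str, Candidate],
+     choose: Callable[[str], str],        # switcher LM: current context -> candidate name
+     prompt: str,
+     segment_tokens: int = 32,
+     max_segments: int = 8,
+ ) -> str:
+     """Candidate models 'speak' in turns: the switcher picks who writes each segment."""
+     context = prompt
+     for _ in range(max_segments):
+         name = choose(context)           # ask the switcher LM who should continue
+         segment = candidates[name].generate(context, segment_tokens)
+         if not segment:                  # the chosen model finished the response
+             break
+         context += segment
+     return context[len(prompt):]
+ ```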

+ - **Developed by:** [More Information Needed]
+ - **Model type:** Switcher Language Model (LoRA adapter for Causal LM orchestration)
+ - **Language(s) (NLP):** English
+ - **License:** other
+ - **Finetuned from model:** [allenai/Llama-3.1-Tulu-3-8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B)

+ ### Model Sources

+ - **Repository:** https://github.com/BunsenFeng/switch_generation
+ - **Paper:** [Don't Throw Away Your Pretrained Model](https://huggingface.co/papers/2510.09913)

+ ## Uses

+ ### Direct Use

+ The Switch Generation framework is intended for text generation tasks where combining the strengths of different language models (e.g., an aligned model for instruction following and an unaligned one for creativity) can produce superior, more balanced responses. It orchestrates the generation process by dynamically selecting the most suitable underlying model for each segment.

+ ### Out-of-Scope Use

+ This model is not intended for standalone text generation without the orchestrated collaboration of multiple underlying language models; it functions as a "switcher" or controller within a larger generation system. As with any language model, users should be aware of potential biases and limitations in generated content.

+ ## How to Get Started with the Model

+ Use the code below to get started with the model.

+ ### Quick Start

+ #### Initialization

+ Create a conda environment for Switch Generation:
+ ```
+ conda env create -f switch.yml
+ conda activate switch_generation
+ ```

+ Log into Hugging Face (for model access):
+ ```
+ huggingface-cli login
+ ```

+ #### Execute your first Switch Generation inference

+ ```
+ bash main.sh
+ ```

+ `main.sh` by default contains:

+ ```
+ python main_generate.py \
+     --input data/input_sample.jsonl \
+     --gpu_ids 0,1,2,3 \
+     --overide_selector_path bunsenfeng/PFA_switcher_1 \
+     --total_max_length 256
+ ```

+ `--input`: a JSONL file of inputs; see `data/input_sample.jsonl` for an example of how to prepare custom inputs. Output is written alongside the input, e.g. `data/input_sample_switch_generation.jsonl`.

+ `--gpu_ids`: a comma-separated string of GPU ids; 4 GPUs are needed (one each for P, F, A, and the switcher).

+ `--overide_selector_path`: path to the switcher LM on the Hugging Face Hub. We provide `bunsenfeng/PFA_switcher_1` and `bunsenfeng/PFA_switcher_2`, which differ in task and training exposure; you can also try the aligned model itself (`allenai/Llama-3.1-Tulu-3-8B`) or any model that can follow instructions.

+ `--total_max_length`: essentially `max_new_tokens`.

+ #### Other Settings

+ Your own data: format it like `data/input_sample.jsonl` (see the sketch below).
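
+ As a hedged illustration, a custom input file could be prepared like this; the field name `"input"` is an assumption, so mirror whatever schema `data/input_sample.jsonl` actually uses.

+ ```
+ # Hypothetical sketch of preparing a custom input file; check
+ # data/input_sample.jsonl for the real field names before relying on this.
+ import json
+
+ examples = [
+     {"input": "Explain switch generation in one paragraph."},
+     {"input": "Write a short poem about model collaboration."},
+ ]
+ with open("data/my_inputs.jsonl", "w") as f:
+     for ex in examples:
+         f.write(json.dumps(ex) + "\n")
+ # Then run: python main_generate.py --input data/my_inputs.jsonl ...
+ ```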

+ Your own candidate models: edit lines 46-48 of `main_generate.py`, as sketched after this paragraph. Make sure `--gpu_ids` provides n+1 GPU ids, where n is the number of candidate models (it need not be 3). Another recommended set is `["Qwen/Qwen2.5-7B", "bunsenfeng/yuru_qw_oasst1", "Qwen/Qwen2.5-7B-Instruct"]`, where the middle entry is an SFT model we trained in [this work](https://arxiv.org/abs/2506.04721).
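
+ For illustration, that edit might look like the sketch below; the variable name `candidate_models` is a guess, so match whatever identifier lines 46-48 actually use.

+ ```
+ # Hypothetical edit around lines 46-48 of main_generate.py (variable name assumed):
+ candidate_models = [
+     "Qwen/Qwen2.5-7B",            # P: pretrained base
+     "bunsenfeng/yuru_qw_oasst1",  # F: SFT checkpoint (arxiv.org/abs/2506.04721)
+     "Qwen/Qwen2.5-7B-Instruct",  # A: aligned/instruct version
+ ]
+ # With 3 candidate models, pass 3+1 = 4 GPU ids, e.g. --gpu_ids 0,1,2,3
+ ```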

  ## Training Details

  ### Training Data

  [More Information Needed]

  ### Training Procedure

+ The switcher LM is trained by learning from the outcomes of choosing different models to generate the next segment across diverse queries and contexts. At inference time, it guides the different model checkpoints to dynamically generate the next segment where their strengths are most needed.
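
+ A heavily hedged sketch of what "learning from outcomes" could look like as training-data construction follows; the scoring function and segment length are placeholders, and the actual procedure is described in the paper.

+ ```
+ # Hypothetical sketch: roll out each candidate's next segment, score the
+ # resulting outcome, and supervise the switcher with the best choice.
+ # candidates: dict mapping model name -> generate(context, max_new_tokens) callable
+ def build_switcher_example(context, candidates, continue_and_score):
+     outcomes = {
+         name: continue_and_score(context, generate(context, 32))
+         for name, generate in candidates.items()
+     }
+     best = max(outcomes, key=outcomes.get)
+     return {"context": context, "label": best}  # one supervision example
+ ```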

  #### Preprocessing [optional]

  [More Information Needed]

  #### Training Hyperparameters

+ - **Training regime:** [More Information Needed]

  #### Speeds, Sizes, Times [optional]

  [More Information Needed]

  ## Evaluation

  ### Testing Data, Factors & Metrics

  #### Testing Data

  [More Information Needed]

  #### Factors

  [More Information Needed]

  #### Metrics

  [More Information Needed]

  ### Results

  #### Summary

  ## Model Examination [optional]

  [More Information Needed]

  ## Environmental Impact

  Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]

  ## Technical Specifications [optional]

  [More Information Needed]

+ ## Citation

+ If Switch Generation is helpful to you:

  **BibTeX:**

  ## Glossary [optional]

  [More Information Needed]

  ## More Information [optional]