andstor committed (verified)

Commit 484e23e · Parent(s): c667325

Update README.md

Files changed (1): README.md (+177 -0)
README.md CHANGED
@@ -28,4 +28,181 @@ configs:
  data_files:
  - split: train
    path: data/train-*
pretty_name: PEFT Unit Test Generation Experiments
size_categories:
- n<1K
---

# Dataset Card for PEFT Unit Test Generation Experiments

<!-- Provide a quick summary of the dataset. -->

This dataset card documents the PEFT unit test generation experiments, including the training hyperparameters used for each fine-tuning method. It was generated from [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

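A minimal sketch of loading the train split with the Hugging Face `datasets` library is shown below. The repository id is an assumption inferred from the committer name and the `pretty_name`; substitute the actual dataset repository if it differs.

```python
from datasets import load_dataset

# Hypothetical repository id (not stated in this card); replace with the actual repo.
dataset = load_dataset("andstor/peft-unit-test-generation-experiments", split="train")
print(dataset)
```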

### Training Hyperparameters

#### Model-agnostic Hyperparameters
<table>
  <thead>
    <tr>
      <th>Hyperparameter</th>
      <th>Method</th>
      <th>Value</th>
    </tr>
  </thead>
  <tbody>
    <tr style="font-weight: bold;">
      <td colspan="3">Common</td>
    </tr>
    <tr>
      <td>Optimizer</td>
      <td>-</td>
      <td>AdamW</td>
    </tr>
    <tr>
      <td>LR schedule</td>
      <td>-</td>
      <td>Linear</td>
    </tr>
    <tr>
      <td>LR warmup ratio</td>
      <td>-</td>
      <td>0.1</td>
    </tr>
    <tr>
      <td>Batch size</td>
      <td>-</td>
      <td>1</td>
    </tr>
    <tr>
      <td>Gradient accumulation steps</td>
      <td>-</td>
      <td>8</td>
    </tr>
    <tr>
      <td># Epochs</td>
      <td>-</td>
      <td>3</td>
    </tr>
    <tr>
      <td>Precision</td>
      <td>-</td>
      <td>Mixed</td>
    </tr>
    <tr>
      <td style="vertical-align: middle;" rowspan="4">Learning rate</td>
      <td>Full fine-tuning</td>
      <td>5E-5</td>
    </tr>
    <tr>
      <td>LoRA</td>
      <td>3E-4</td>
    </tr>
    <tr>
      <td>(IA)<sup>3</sup></td>
      <td>3E-4</td>
    </tr>
    <tr>
      <td>Prompt tuning</td>
      <td>3E-3</td>
    </tr>
    <tr style="font-weight: bold;">
      <td colspan="3">Method specific</td>
    </tr>
    <tr>
      <td>Alpha</td>
      <td>LoRA</td>
      <td>32</td>
    </tr>
    <tr>
      <td>Dropout</td>
      <td>LoRA</td>
      <td>0.1</td>
    </tr>
    <tr>
      <td>Rank</td>
      <td>LoRA</td>
      <td>16</td>
    </tr>
    <tr>
      <td>Virtual tokens</td>
      <td>Prompt tuning</td>
      <td>20</td>
    </tr>
  </tbody>
</table>
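
For illustration, the sketch below shows how these model-agnostic values could be expressed with the Hugging Face `peft` and `transformers` libraries. It is a sketch rather than the exact training script used in the experiments; `output_dir` and the choice of fp16 for mixed precision are assumptions.

```python
from peft import LoraConfig, PromptTuningConfig, TaskType
from transformers import TrainingArguments

# Method-specific values from the table above.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,              # Rank
    lora_alpha=32,     # Alpha
    lora_dropout=0.1,  # Dropout
    # target_modules are model-specific; see the next table.
)

prompt_tuning_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,  # Virtual tokens
)

# Common values from the table above. The learning rate shown is the
# LoRA / (IA)^3 value; use 5e-5 for full fine-tuning and 3e-3 for prompt tuning.
training_args = TrainingArguments(
    output_dir="out",               # assumption: not specified in the card
    optim="adamw_torch",            # Optimizer: AdamW
    learning_rate=3e-4,
    lr_scheduler_type="linear",     # LR schedule: Linear
    warmup_ratio=0.1,               # LR warmup ratio
    per_device_train_batch_size=1,  # Batch size
    gradient_accumulation_steps=8,  # Gradient accumulation steps
    num_train_epochs=3,             # Epochs
    fp16=True,                      # Precision: Mixed (assumption: fp16 rather than bf16)
)
```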

#### Model-specific Hyperparameters
<table>
  <thead>
    <tr>
      <th>Hyperparameter</th>
      <th>Method</th>
      <th>Model</th>
      <th>Value</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="vertical-align: middle;">Targeted attention modules</td>
      <td style="vertical-align: middle;">LoRA, (IA)<sup>3</sup></td>
      <td>
        codegen-350M-multi<br>
        Salesforce/codegen2-1B_P<br>
        Salesforce/codegen2-3_7B_P<br>
        Salesforce/codegen2-7B_P<br>
        Salesforce/codegen2-16B_P<br>
        meta-llama/CodeLlama-7b-hf<br>
        bigcode/starcoderbase<br>
        bigcode/starcoder2-3b<br>
        bigcode/starcoder2-7b<br>
        bigcode/starcoder2-15b
      </td>
      <td>
        qkv_proj<br>
        qkv_proj<br>
        qkv_proj<br>
        qkv_proj<br>
        qkv_proj<br>
        q_proj, v_proj<br>
        c_attn<br>
        q_proj, v_proj<br>
        q_proj, v_proj<br>
        q_proj, v_proj
      </td>
    </tr>
    <tr>
      <td style="vertical-align: middle;">Targeted feedforward modules</td>
      <td style="vertical-align: middle;">(IA)<sup>3</sup></td>
      <td>
        codegen-350M-multi<br>
        Salesforce/codegen2-1B_P<br>
        Salesforce/codegen2-3_7B_P<br>
        Salesforce/codegen2-7B_P<br>
        Salesforce/codegen2-16B_P<br>
        meta-llama/CodeLlama-7b-hf<br>
        bigcode/starcoderbase<br>
        bigcode/starcoder2-3b<br>
        bigcode/starcoder2-7b<br>
        bigcode/starcoder2-15b
      </td>
      <td>
        fc_out<br>
        fc_out<br>
        fc_out<br>
        fc_out<br>
        fc_out<br>
        down_proj<br>
        mlp.c_proj<br>
        q_proj, c_proj<br>
        q_proj, c_proj<br>
        q_proj, c_proj
      </td>
    </tr>
  </tbody>
</table>
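
To make the per-model values concrete, the dictionaries below restate the table as `peft` config arguments. The model ids are copied verbatim from the table, and combining the attention and feedforward modules for (IA)<sup>3</sup> follows the standard `peft` requirement that `feedforward_modules` be a subset of `target_modules`; this is an illustrative sketch, not the authors' published configuration code.

```python
from peft import IA3Config, LoraConfig, TaskType

# Attention modules targeted by LoRA and (IA)^3, per model (from the table above).
ATTENTION_MODULES = {
    "codegen-350M-multi": ["qkv_proj"],  # listed without an org prefix in the card
    "Salesforce/codegen2-1B_P": ["qkv_proj"],
    "Salesforce/codegen2-3_7B_P": ["qkv_proj"],
    "Salesforce/codegen2-7B_P": ["qkv_proj"],
    "Salesforce/codegen2-16B_P": ["qkv_proj"],
    "meta-llama/CodeLlama-7b-hf": ["q_proj", "v_proj"],
    "bigcode/starcoderbase": ["c_attn"],
    "bigcode/starcoder2-3b": ["q_proj", "v_proj"],
    "bigcode/starcoder2-7b": ["q_proj", "v_proj"],
    "bigcode/starcoder2-15b": ["q_proj", "v_proj"],
}

# Feedforward modules additionally targeted by (IA)^3 (from the table above).
FEEDFORWARD_MODULES = {
    "codegen-350M-multi": ["fc_out"],
    "Salesforce/codegen2-1B_P": ["fc_out"],
    "Salesforce/codegen2-3_7B_P": ["fc_out"],
    "Salesforce/codegen2-7B_P": ["fc_out"],
    "Salesforce/codegen2-16B_P": ["fc_out"],
    "meta-llama/CodeLlama-7b-hf": ["down_proj"],
    "bigcode/starcoderbase": ["mlp.c_proj"],
    "bigcode/starcoder2-3b": ["q_proj", "c_proj"],
    "bigcode/starcoder2-7b": ["q_proj", "c_proj"],
    "bigcode/starcoder2-15b": ["q_proj", "c_proj"],
}


def make_lora_config(model_id: str) -> LoraConfig:
    """LoRA config combining the model-agnostic values with per-model target modules."""
    return LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=16,
        lora_alpha=32,
        lora_dropout=0.1,
        target_modules=ATTENTION_MODULES[model_id],
    )


def make_ia3_config(model_id: str) -> IA3Config:
    """(IA)^3 config targeting the attention modules plus the feedforward modules."""
    modules = sorted(set(ATTENTION_MODULES[model_id]) | set(FEEDFORWARD_MODULES[model_id]))
    return IA3Config(
        task_type=TaskType.CAUSAL_LM,
        target_modules=modules,
        feedforward_modules=FEEDFORWARD_MODULES[model_id],
    )
```

A config produced this way would then be applied to a loaded base model with `peft.get_peft_model(model, config)` before training.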