Locutusque committed on
Commit cc6a8fe · 1 Parent(s): 69124d3

Update README.md

Files changed (1):
  1. README.md +1 -127

README.md CHANGED
@@ -3,51 +3,9 @@ library_name: peft
  base_model: Locutusque/TinyMistral-248M-Instruct
  ---
 
- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
  ## Uses
 
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
 
  ### Out-of-Scope Use
 
@@ -67,12 +25,6 @@ base_model: Locutusque/TinyMistral-248M-Instruct
 
  Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
 
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
  ## Training Details
 
  ### Training Data
@@ -197,84 +149,6 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
  ## Model Card Contact
 
  [More Information Needed]
-
-
- ## Training procedure
-
-
- The following `bitsandbytes` quantization config was used during training:
- - quant_method: QuantizationMethod.BITS_AND_BYTES
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: fp4
- - bnb_4bit_use_double_quant: False
- - bnb_4bit_compute_dtype: float32
-
- ### Framework versions
-
-
- - PEFT 0.6.2
- ## Training procedure
-
-
- The following `bitsandbytes` quantization config was used during training:
- - quant_method: QuantizationMethod.BITS_AND_BYTES
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: fp4
- - bnb_4bit_use_double_quant: False
- - bnb_4bit_compute_dtype: float16
-
- ### Framework versions
-
-
- - PEFT 0.6.2
- ## Training procedure
-
-
- The following `bitsandbytes` quantization config was used during training:
- - quant_method: QuantizationMethod.BITS_AND_BYTES
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: fp4
- - bnb_4bit_use_double_quant: False
- - bnb_4bit_compute_dtype: float16
-
- ### Framework versions
-
-
- - PEFT 0.6.2
- ## Training procedure
-
-
- The following `bitsandbytes` quantization config was used during training:
- - quant_method: QuantizationMethod.BITS_AND_BYTES
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: fp4
- - bnb_4bit_use_double_quant: False
- - bnb_4bit_compute_dtype: float16
-
- ### Framework versions
-
-
- - PEFT 0.6.2
  ## Training procedure
 
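The deleted blocks above all describe the same `bitsandbytes` 4-bit setup. As a sketch, the equivalent `transformers` config object would look roughly like the following (assuming a recent `transformers` with `bitsandbytes` support; the duplicated blocks disagree on compute dtype, listing `float32` once and `float16` three times, so `float16` is used here):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit FP4 quantization config matching the values listed in the diff above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,  # one deleted block says float32 instead
    llm_int8_threshold=6.0,
)
```

This config would be passed as `quantization_config=bnb_config` when loading the base model for training.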
 
  base_model: Locutusque/TinyMistral-248M-Instruct
  ---
 
  ## Uses
+ This model is intended to be used to create instruction-following datasets by predicting a question from a given answer.
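The reversed-instruction workflow above can be sketched as a prompt builder that wraps each answer before generation. The instruction wording and layout below are assumptions for illustration, not the model's documented template (check the tokenizer's chat template on the Hub):

```python
def build_question_prompt(answer: str) -> str:
    """Build a prompt asking the model to invent the question for `answer`.

    The instruction text and layout here are assumed, not the model's
    documented prompt format.
    """
    return (
        "Write the question that the following answer responds to.\n\n"
        f"Answer: {answer}\n"
        "Question:"
    )

prompt = build_question_prompt("The Eiffel Tower is in Paris, France.")
```

Each generated question can then be paired with its source answer to form one instruction-following training example.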
 
  ### Out-of-Scope Use
 
  Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
 
  ## Training Details
 
  ### Training Data
 
  ## Model Card Contact
 
  [More Information Needed]
 
  ## Training procedure
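For scale, the 4-bit training setup stores roughly half a byte per weight, so the 248M-parameter base model's weights drop from about 1 GB in fp32 to about 124 MB (LoRA adapter weights, activations, and dequantization buffers not included). A back-of-envelope check:

```python
params = 248_000_000             # TinyMistral-248M parameter count

fp32_mb = params * 4 / 1e6       # 4 bytes per weight in fp32
fp4_mb = params * 0.5 / 1e6      # ~0.5 bytes per weight in 4-bit FP4

print(f"fp32: {fp32_mb:.0f} MB, 4-bit: {fp4_mb:.0f} MB")  # fp32: 992 MB, 4-bit: 124 MB
```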