dklpp committed · Commit ae80043 · verified · 1 Parent(s): f97f596

Training in progress, epoch 1

Files changed (3)
  1. README.md +181 -50
  2. adapter_model.safetensors +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -1,75 +1,206 @@
  ---
- library_name: peft
- license: llama3.1
  base_model: meta-llama/Llama-3.1-8B-Instruct
  tags:
  - base_model:adapter:meta-llama/Llama-3.1-8B-Instruct
  - lora
  - transformers
- metrics:
- - accuracy
- - precision
- - recall
- - f1
- model-index:
- - name: llama3_ft_section_classifier
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # llama3_ft_section_classifier

- This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 2.0599
- - Accuracy: 0.3030
- - Precision: 0.3258
- - Recall: 0.3030
- - F1: 0.3007
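Recall matching accuracy exactly (0.3030) is characteristic of weighted-average metrics on a multi-class task. A sketch of a `compute_metrics` function that would produce this set of numbers, assuming weighted averaging with scikit-learn; the actual evaluation code is not part of this commit:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair passed by transformers.Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Weighted averaging makes recall coincide with accuracy, which matches
    # the removed card's numbers (0.3030 for both).
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```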

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 0.0002
- - train_batch_size: 4
- - eval_batch_size: 4
- - seed: 42
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 32
- - optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 3

- ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
- |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
- | No log        | 1.0   | 10   | 2.9533          | 0.1515   | 0.2159    | 0.1515 | 0.1376 |
- | No log        | 2.0   | 20   | 2.0881          | 0.3030   | 0.3242    | 0.3030 | 0.2986 |
- | No log        | 3.0   | 30   | 2.0599          | 0.3030   | 0.3258    | 0.3030 | 0.3007 |

  ### Framework versions

- - PEFT 0.17.1
- - Transformers 4.57.1
- - Pytorch 2.8.0+cu126
- - Datasets 4.0.0
- - Tokenizers 0.22.1
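The removed hyperparameters map directly onto `transformers.TrainingArguments`. A minimal sketch of an equivalent configuration, not the author's actual training script; `output_dir` and the per-epoch evaluation strategy are assumptions:

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the removed card's hyperparameters.
# The effective batch size is 4 (per device) x 8 (accumulation) = 32.
training_args = TrainingArguments(
    output_dir="llama3_ft_section_classifier",  # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    eval_strategy="epoch",  # assumed; the results table logs eval once per epoch
)
```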
 
  ---
  base_model: meta-llama/Llama-3.1-8B-Instruct
+ library_name: peft
  tags:
  - base_model:adapter:meta-llama/Llama-3.1-8B-Instruct
  - lora
  - transformers
  ---

+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+
  ### Framework versions

+ - PEFT 0.17.1
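The new card's "How to Get Started with the Model" section is still a placeholder. A minimal sketch of how this LoRA adapter could be loaded with PEFT, assuming a hypothetical adapter repo id of `dklpp/llama3_ft_section_classifier` (the name in the removed card) and the base model declared in the front matter; the removed card describes a classifier, so a sequence-classification head may be the right choice instead of the causal-LM head shown here:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"       # from the card's front matter
adapter_id = "dklpp/llama3_ft_section_classifier"  # hypothetical repo id

# Load the frozen base model, then attach the LoRA adapter weights on top.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```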
 
 
 
 
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9c7975278ddc911bef065b3945b27876fc0e451cc2ccafda8b2a6d4169b807f9
+ oid sha256:300317753e5fde7efbaceb22bfcce06ace3453ddcb7b8a93c6f15965f1817d89
  size 27370368
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:212fde2095ba115212a418f49ad6300afb20cbc59a7f558b0b3f0acfe2f73f0b
+ oid sha256:5c8cec5ac8da9af74abdc324f07c0ed7fe02d692487f11029b736289683bb534
  size 5905
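Both binary files are Git LFS pointers: the repository stores only this small text stub (spec version, sha256 oid, byte size) while the actual blob lives in LFS storage. A short sketch of verifying a downloaded blob against its pointer, using the oid and size from the updated training_args.bin pointer above:

```python
import hashlib
from pathlib import Path

def verify_lfs_pointer(path: str, expected_oid: str, expected_size: int) -> bool:
    """Check a downloaded blob against the oid/size fields of its LFS pointer."""
    data = Path(path).read_bytes()
    return (
        len(data) == expected_size
        and hashlib.sha256(data).hexdigest() == expected_oid
    )

# Values from the updated training_args.bin pointer in this commit:
ok = verify_lfs_pointer(
    "training_args.bin",
    expected_oid="5c8cec5ac8da9af74abdc324f07c0ed7fe02d692487f11029b736289683bb534",
    expected_size=5905,
)
print("pointer matches blob:", ok)
```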