qualis2006 committed
Commit a1b3ffb · 1 Parent(s): bf4d84e

qualis2006/distilbert-base-uncased-lora-text-classification

Files changed (2):
  1. README.md +16 -18
  2. training_args.bin +2 -2
README.md CHANGED
@@ -8,7 +8,6 @@ metrics:
 model-index:
 - name: distilbert-base-uncased-lora-text-classification
   results: []
-library_name: peft
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -18,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.9675
-- Accuracy: {'accuracy': 0.884}
+- Loss: 1.0087
+- Accuracy: {'accuracy': 0.886}
 
 ## Model description
 
@@ -50,22 +49,21 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:-------------------:|
-| No log        | 1.0   | 250  | 0.4169          | {'accuracy': 0.872} |
-| 0.4168        | 2.0   | 500  | 0.4374          | {'accuracy': 0.877} |
-| 0.4168        | 3.0   | 750  | 0.5339          | {'accuracy': 0.885} |
-| 0.1878        | 4.0   | 1000 | 0.7017          | {'accuracy': 0.871} |
-| 0.1878        | 5.0   | 1250 | 0.7186          | {'accuracy': 0.882} |
-| 0.0609        | 6.0   | 1500 | 0.8245          | {'accuracy': 0.878} |
-| 0.0609        | 7.0   | 1750 | 0.8748          | {'accuracy': 0.88}  |
-| 0.0323        | 8.0   | 2000 | 0.9075          | {'accuracy': 0.889} |
-| 0.0323        | 9.0   | 2250 | 0.9559          | {'accuracy': 0.883} |
-| 0.0075        | 10.0  | 2500 | 0.9675          | {'accuracy': 0.884} |
+| No log        | 1.0   | 250  | 0.3263          | {'accuracy': 0.882} |
+| 0.4298        | 2.0   | 500  | 0.4513          | {'accuracy': 0.871} |
+| 0.4298        | 3.0   | 750  | 0.6971          | {'accuracy': 0.864} |
+| 0.2176        | 4.0   | 1000 | 0.6914          | {'accuracy': 0.877} |
+| 0.2176        | 5.0   | 1250 | 0.7609          | {'accuracy': 0.889} |
+| 0.095         | 6.0   | 1500 | 0.8447          | {'accuracy': 0.894} |
+| 0.095         | 7.0   | 1750 | 0.9361          | {'accuracy': 0.888} |
+| 0.024         | 8.0   | 2000 | 0.9976          | {'accuracy': 0.893} |
+| 0.024         | 9.0   | 2250 | 1.0071          | {'accuracy': 0.885} |
+| 0.0097        | 10.0  | 2500 | 1.0087          | {'accuracy': 0.886} |
 
 
 ### Framework versions
 
-- PEFT 0.5.0
-- Transformers 4.35.2
-- Pytorch 2.0.1+cu117
-- Datasets 2.15.0
-- Tokenizers 0.15.0
+- Transformers 4.34.1
+- Pytorch 1.13.0+cu117
+- Datasets 2.14.6
+- Tokenizers 0.14.1
 
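The card above describes a LoRA adapter for distilbert-base-uncased trained with peft. As a hedged aside, the low-rank idea behind such an adapter can be sketched in a few lines of NumPy; the hidden size, rank, and alpha below are illustrative placeholders, not this adapter's actual config:

```python
import numpy as np

# LoRA keeps the pretrained weight W frozen and learns a low-rank update
# B @ A, scaled by alpha / r. Only A and B are trained.
rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 16                   # hidden size, LoRA rank, scaling (illustrative)
W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-initialized

W_eff = W + (alpha / r) * (B @ A)        # effective weight at inference
```

Because B is zero-initialized, `W_eff` equals `W` before any training step, so the adapter starts out as a no-op on the base model.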
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:df564a1db111c4ac2acc5e007a930ad3d0120fc392ba74b0d23637916d34d43a
-size 4155
+oid sha256:d909ac26098e3399c9ec4e8dbc84cd2e67f890ad772b23ebb53cb690a60c73e6
+size 4091
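The training_args.bin change only swaps a Git LFS pointer: the pointer file stores the blob's SHA-256 and byte size, not the blob itself. As an illustration of that format (the payload bytes below are placeholders, so the resulting oid will not match the one in the diff):

```python
import hashlib

# Placeholder payload sized like the new training_args.bin; the real file's
# contents are not reproduced here.
blob = b"\x00" * 4091
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    f"oid sha256:{hashlib.sha256(blob).hexdigest()}\n"
    f"size {len(blob)}\n"
)
```

Any change to the tracked file therefore shows up in the diff as a two-line change: a new `oid` and a new `size`.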