kadabengaran committed
Commit 5c65b27 · 1 Parent(s): 050c80c

kadabengaran/distilbert-base-uncased-text-classification

README.md ADDED
@@ -0,0 +1,74 @@
+ ---
+ license: apache-2.0
+ base_model: distilbert-base-uncased
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: distilbert-base-uncased-lora-text-classification
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # distilbert-base-uncased-lora-text-classification
+
+ This model is a LoRA fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased); the training dataset is not specified.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2489
+ - Accuracy: 0.9447
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 15
+
+ ### Training results
+
+ | Training Loss | Epoch | Step  | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:-----:|:---------------:|:--------:|
+ | 0.3734        | 1.0   | 1050  | 0.3066          | 0.8947   |
+ | 0.2802        | 2.0   | 2100  | 0.2840          | 0.9144   |
+ | 0.2562        | 3.0   | 3150  | 0.2413          | 0.9278   |
+ | 0.2291        | 4.0   | 4200  | 0.2327          | 0.9292   |
+ | 0.2150        | 5.0   | 5250  | 0.2459          | 0.9328   |
+ | 0.1882        | 6.0   | 6300  | 0.2387          | 0.9347   |
+ | 0.1672        | 7.0   | 7350  | 0.2305          | 0.9378   |
+ | 0.1725        | 8.0   | 8400  | 0.2348          | 0.9361   |
+ | 0.1630        | 9.0   | 9450  | 0.2419          | 0.9389   |
+ | 0.1553        | 10.0  | 10500 | 0.2430          | 0.9406   |
+ | 0.1560        | 11.0  | 11550 | 0.2410          | 0.9431   |
+ | 0.1299        | 12.0  | 12600 | 0.2382          | 0.9433   |
+ | 0.1333        | 13.0  | 13650 | 0.2403          | 0.9444   |
+ | 0.1532        | 14.0  | 14700 | 0.2492          | 0.9442   |
+ | 0.1507        | 15.0  | 15750 | 0.2489          | 0.9447   |
+
+
+ ### Framework versions
+
+ - Transformers 4.35.2
+ - Pytorch 2.1.0+cu118
+ - Datasets 2.15.0
+ - Tokenizers 0.15.0
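
The hyperparameters above map directly onto a `TrainingArguments` object from the Transformers 4.35 API listed under framework versions. A minimal sketch, assuming per-epoch evaluation (consistent with the results table) and an illustrative `output_dir`; the training data itself was not published with this commit:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-uncased-lora-text-classification",  # assumption
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=15,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # per-epoch eval matches the results table
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the Trainer's default
# optimizer configuration, so no explicit optimizer setup is needed.
```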
adapter_config.json ADDED
@@ -0,0 +1,22 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "distilbert-base-uncased",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "lora_alpha": 32,
+   "lora_dropout": 0.01,
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 4,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "q_lin"
+   ],
+   "task_type": "SEQ_CLS"
+ }
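
The adapter settings above correspond one-to-one to a `LoraConfig` from the peft library. A hedged reconstruction, assuming a peft version contemporary with the Transformers 4.35.2 build and an illustrative `num_labels=2` (the commit does not state the label set):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

# Every value below mirrors adapter_config.json; library defaults cover the rest.
config = LoraConfig(
    task_type="SEQ_CLS",
    r=4,
    lora_alpha=32,
    lora_dropout=0.01,
    target_modules=["q_lin"],  # only the attention query projection is adapted
    bias="none",
    fan_in_fan_out=False,
    init_lora_weights=True,
)

# num_labels=2 is an assumption; adjust to the actual label set.
base = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the rank-4 matrices (plus the head) train
```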
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9364dcf32d41746586c01fd53b7868bd8bb16f5487a5301e2eceaacb41151987
+ size 2524256
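
This is a Git LFS pointer, so the ~2.5 MB weight file must be fetched (for example with `git lfs pull`) before it can be read. Once downloaded, a quick sanity check of its contents, assuming the `safetensors` package is installed:

```python
from safetensors.torch import load_file

# Lists each saved tensor; expect rank-4 lora_A/lora_B pairs for every q_lin
# layer and, given the ~2.5 MB size, most likely the classification head too.
state = load_file("adapter_model.safetensors")
for name, tensor in state.items():
    print(name, tuple(tensor.shape), tensor.dtype)
```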
runs/Dec03_21-54-55_0e01b6bac262/events.out.tfevents.1701640496.0e01b6bac262.1017.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67ceb1852ab48afa75bc9f8f2ed3dd6e2691e9440374ef50b4b6adb91d23708e
+ size 13724
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
+ {
+   "add_prefix_space": true,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "DistilBertTokenizer",
+   "unk_token": "[UNK]"
+ }
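
Taken together, the tokenizer files and the adapter in this commit support a straightforward inference path. A hedged end-to-end sketch, assuming the repo id matches this commit and leaving the label meanings unspecified (they are not published here):

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForSequenceClassification

repo = "kadabengaran/distilbert-base-uncased-text-classification"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoPeftModelForSequenceClassification.from_pretrained(repo)
model.eval()

# model_max_length is 512 per tokenizer_config.json, so truncate to fit.
inputs = tokenizer(
    "example text to classify", truncation=True, max_length=512, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities; label names are unknown
```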
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e04556cd7cc7dcf369aea70fdc9cde22b118a4c392a912178dd51b021f5b8a60
+ size 4664
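
`training_args.bin` is the pickled `TrainingArguments` object the Trainer saves alongside checkpoints, so the hyperparameters in the model card can be cross-checked against it directly. Unpickling executes arbitrary code, so only load files you trust, and a matching transformers install is assumed:

```python
import torch

# torch.load unpickles the TrainingArguments object saved by the Trainer.
args = torch.load("training_args.bin")
print(args.learning_rate, args.num_train_epochs, args.per_device_train_batch_size)
```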
vocab.txt ADDED
The diff for this file is too large to render. See raw diff