anzeo committed
Commit 6adb436 · verified · 1 Parent(s): 19222e8

lora_fine_tuned_copa_croslo

README.md CHANGED
@@ -19,9 +19,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6931
-- Accuracy: 0.46
-- F1: 0.46
+- Loss: 0.6871
+- Accuracy: 0.52
+- F1: 0.5212
 
 ## Model description
 
@@ -40,7 +40,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.003
+- learning_rate: 2e-05
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
@@ -52,14 +52,14 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
-| 0.7145 | 1.0 | 50 | 0.6931 | 0.54 | 0.5411 |
-| 0.7231 | 2.0 | 100 | 0.6931 | 0.42 | 0.4214 |
-| 0.717 | 3.0 | 150 | 0.6931 | 0.48 | 0.4813 |
-| 0.7176 | 4.0 | 200 | 0.6931 | 0.49 | 0.4913 |
-| 0.7024 | 5.0 | 250 | 0.6931 | 0.46 | 0.4600 |
-| 0.6977 | 6.0 | 300 | 0.6931 | 0.52 | 0.52 |
-| 0.7335 | 7.0 | 350 | 0.6931 | 0.48 | 0.48 |
-| 0.708 | 8.0 | 400 | 0.6931 | 0.46 | 0.46 |
+| 0.6985 | 1.0 | 50 | 0.6889 | 0.52 | 0.5212 |
+| 0.7039 | 2.0 | 100 | 0.6885 | 0.52 | 0.5212 |
+| 0.7019 | 3.0 | 150 | 0.6879 | 0.52 | 0.5212 |
+| 0.6908 | 4.0 | 200 | 0.6875 | 0.52 | 0.5212 |
+| 0.6922 | 5.0 | 250 | 0.6872 | 0.52 | 0.5212 |
+| 0.6978 | 6.0 | 300 | 0.6871 | 0.52 | 0.5212 |
+| 0.6884 | 7.0 | 350 | 0.6871 | 0.52 | 0.5212 |
+| 0.6972 | 8.0 | 400 | 0.6871 | 0.52 | 0.5212 |
 
 
 ### Framework versions
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1c45feba9d8c2b2e2edc5e4157dbf1aea3e98e4fd36bdd6741453b197899ff9d
+oid sha256:435e4f235bfab1d89e6e32fb387cf48f68444258616e4eef174a6d5a581dd250
 size 2369340
runs/May21_18-05-35_DESKTOP-22QTFDR/events.out.tfevents.1716307536.DESKTOP-22QTFDR.9252.2 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:447e17e31dea691b1e9547e0b12a42176ae1e818e5b930c4164c8fb7126450c8
+size 9649
runs/May21_18-05-35_DESKTOP-22QTFDR/events.out.tfevents.1716307557.DESKTOP-22QTFDR.9252.3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a464c7a618a5c964088ab28e8d3e2090abf658b21ab7937ddd152c3657d5881
+size 457
special_tokens_map.json CHANGED
@@ -1,7 +1,37 @@
 {
-  "cls_token": "[CLS]",
-  "mask_token": "[MASK]",
-  "pad_token": "[PAD]",
-  "sep_token": "[SEP]",
-  "unk_token": "[UNK]"
+  "cls_token": {
+    "content": "[CLS]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "mask_token": {
+    "content": "[MASK]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "pad_token": {
+    "content": "[PAD]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "sep_token": {
+    "content": "[SEP]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "unk_token": {
+    "content": "[UNK]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  }
 }
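This change replaces the plain-string form of `special_tokens_map.json` with the richer dict form that also records `lstrip`/`rstrip`/`normalized` behavior per token. Code that reads the file directly (rather than through `transformers`) may want to handle both formats; a small sketch of assumed consumer code, not from the repo:

```python
import json

def token_content(entry):
    """Return the token string from either the old (plain string)
    or the new (dict) special_tokens_map.json entry format."""
    return entry if isinstance(entry, str) else entry["content"]

old_format = json.loads('{"cls_token": "[CLS]"}')
new_format = json.loads(
    '{"cls_token": {"content": "[CLS]", "lstrip": false, '
    '"normalized": false, "rstrip": false, "single_word": false}}'
)

# Both formats resolve to the same token string.
print(token_content(old_format["cls_token"]))  # [CLS]
print(token_content(new_format["cls_token"]))  # [CLS]
```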
tokenizer.json CHANGED
@@ -1,7 +1,14 @@
 {
   "version": "1.0",
   "truncation": null,
-  "padding": null,
+  "padding": {
+    "strategy": "BatchLongest",
+    "direction": "Right",
+    "pad_to_multiple_of": null,
+    "pad_id": 0,
+    "pad_type_id": 0,
+    "pad_token": "[PAD]"
+  },
   "added_tokens": [
     {
       "id": 0,
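The new `padding` block enables dynamic padding: each batch is padded on the right with `pad_id` 0 (`[PAD]`) up to the length of the longest sequence in that batch (`BatchLongest`). The effect can be sketched like this (an illustrative re-implementation, not the actual `tokenizers` code):

```python
def pad_batch_longest(batch, pad_id=0, direction="Right"):
    """Pad each sequence in `batch` with pad_id up to the length
    of the longest sequence, mimicking the BatchLongest strategy."""
    max_len = max(len(seq) for seq in batch)
    padded = []
    for seq in batch:
        padding = [pad_id] * (max_len - len(seq))
        padded.append(seq + padding if direction == "Right" else padding + seq)
    return padded

print(pad_batch_longest([[101, 7592, 102], [101, 102]]))
# [[101, 7592, 102], [101, 102, 0]]
```

Padding per batch rather than to a fixed maximum length keeps short batches small, which matters for the batch size of 8 used in training.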
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d8d9e31d40c216411b97856cd4e87753c01948f58c45abddc927f0c09dd0c3a0
+oid sha256:1da379bfa13a30c9275f0c7c0e1b0df14e27514900cd84556398c85733651bbd
 size 5048