MoonstoneF committed on
Commit 434c03a · verified · 1 Parent(s): 841abb6

Training in progress, step 400

README.md CHANGED
@@ -15,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [microsoft/kosmos-2-patch14-224](https://huggingface.co/microsoft/kosmos-2-patch14-224) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0989
+- Loss: 0.0873
 
 ## Model description
 
@@ -38,8 +38,8 @@ The following hyperparameters were used during training:
 - train_batch_size: 2
 - eval_batch_size: 2
 - seed: 42
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 8
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 4
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
@@ -49,10 +49,16 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch  | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 1.0097        | 0.0698 | 250  | 0.1479          |
-| 0.1376        | 0.1396 | 500  | 0.1150          |
-| 0.1178        | 0.2094 | 750  | 0.1041          |
-| 0.1103        | 0.2792 | 1000 | 0.0989          |
+| 0.1715        | 0.0160 | 100  | 0.1546          |
+| 0.1624        | 0.0319 | 200  | 0.1357          |
+| 0.1415        | 0.0479 | 300  | 0.1256          |
+| 0.1295        | 0.0638 | 400  | 0.1157          |
+| 0.1226        | 0.0798 | 500  | 0.1068          |
+| 0.1119        | 0.0957 | 600  | 0.1007          |
+| 0.1067        | 0.1117 | 700  | 0.0980          |
+| 0.1032        | 0.1276 | 800  | 0.0914          |
+| 0.099         | 0.1436 | 900  | 0.0882          |
+| 0.0965        | 0.1596 | 1000 | 0.0873          |
 
 
 ### Framework versions
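A sketch of how the changed hyperparameters above fit together, assuming single-device training (the device count is not stated in the diff): `total_train_batch_size` is the per-device batch size times the gradient accumulation steps, which is why halving the accumulation steps from 4 to 2 halves the total batch size from 8 to 4. The linear-warmup schedule shape implied by `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1` is also sketched; the exact Trainer internals may differ.

```python
def effective_batch_size(train_batch_size: int,
                         gradient_accumulation_steps: int,
                         num_devices: int = 1) -> int:
    """total_train_batch_size as reported in the README hyperparameters."""
    return train_batch_size * gradient_accumulation_steps * num_devices


def linear_warmup_lr(step: int, total_steps: int, base_lr: float,
                     warmup_ratio: float = 0.1) -> float:
    """Linear warmup to base_lr over warmup_ratio of training, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))


print(effective_batch_size(2, 2))  # new config: 2 * 2 = 4
print(effective_batch_size(2, 4))  # old config: 2 * 4 = 8
```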
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9bad7ffbe89a015773eb3eb9e211b9cd9a910f11d1b9445fcb4962d84898b7da
+oid sha256:4932181b0cce09fc255d8412690b1771822e380f2167945124aaa57312ff3070
 size 4999738624
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e3c0360f5e02f6b266aa44f7fe888aad867f5d019dbf281c1c7da78634540520
+oid sha256:20e10a9551a9a505a2df622decc92c2f6c23328adda1c64ad6c5cd92fd033f6c
 size 1658313704
tokenizer.json CHANGED
@@ -1,14 +1,7 @@
 {
   "version": "1.0",
   "truncation": null,
-  "padding": {
-    "strategy": "BatchLongest",
-    "direction": "Right",
-    "pad_to_multiple_of": null,
-    "pad_id": 1,
-    "pad_type_id": 0,
-    "pad_token": "<pad>"
-  },
+  "padding": null,
   "added_tokens": [
     {
       "id": 0,
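A hedged sketch of what the removed `"padding"` block did: the old `"BatchLongest"`/`"Right"` config right-padded every sequence in a batch to the longest one with `pad_id` 1. With `"padding": null` the tokenizer returns unpadded sequences, and callers who need batched tensors must request padding at call time (e.g. `padding=True` when calling a `transformers` tokenizer). The pure-Python replica below is illustrative, not the tokenizers library's actual code.

```python
def pad_batch_longest(batch, pad_id=1):
    """Right-pad each id sequence to the longest length in the batch
    (replicating the removed BatchLongest/Right strategy, pad_id 1)."""
    longest = max(len(seq) for seq in batch)
    return [seq + [pad_id] * (longest - len(seq)) for seq in batch]


print(pad_batch_longest([[5, 6, 7], [8]]))  # [[5, 6, 7], [8, 1, 1]]
```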
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e118d9bafa5462c2437edc3643142de6d1320bfc8483308deb2694a74584740b
+oid sha256:273a8c03d1876c3b861acea156fc58b877cc5626fecf3db02cf49322655e6ac4
 size 5048
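The safetensors and training_args.bin diffs above change only Git LFS pointer files, not the binaries themselves: each pointer is three lines (spec version, sha256 oid, byte size), so a retrained checkpoint of the same shape changes the oid while the size can stay identical. A minimal sketch of building such a pointer from file contents:

```python
import hashlib


def lfs_pointer(data: bytes) -> str:
    """Build a Git LFS v1 pointer for the given file contents."""
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )


print(lfs_pointer(b"hello"))
```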