ninagroot committed
Commit 1735bfb · verified · 1 Parent(s): 948ffb4

ninagroot/Llama-360Mtest
README.md CHANGED
@@ -13,7 +13,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 4.3046
+- Loss: 4.1886
 
 ## Model description
 
@@ -33,33 +33,27 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0003
-- train_batch_size: 2
+- train_batch_size: 1
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 2
-- total_train_batch_size: 4
+- total_train_batch_size: 2
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 100
-- num_epochs: 12
+- num_epochs: 6
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 6.0662        | 1.0   | 69   | 5.8781          |
-| 5.1419        | 2.0   | 138  | 4.9025          |
-| 4.1271        | 3.0   | 207  | 4.4935          |
-| 3.8908        | 4.0   | 276  | 4.3523          |
-| 3.5293        | 5.0   | 345  | 4.2722          |
-| 3.322         | 6.0   | 414  | 4.2443          |
-| 2.8975        | 7.0   | 483  | 4.2451          |
-| 2.6264        | 8.0   | 552  | 4.2609          |
-| 2.346         | 9.0   | 621  | 4.2915          |
-| 1.9401        | 10.0  | 690  | 4.2793          |
-| 1.7366        | 11.0  | 759  | 4.3004          |
-| 1.676         | 12.0  | 828  | 4.3046          |
+| 5.9345        | 1.0   | 138  | 5.6878          |
+| 4.7674        | 2.0   | 276  | 4.7003          |
+| 3.6914        | 3.0   | 414  | 4.3374          |
+| 3.6076        | 4.0   | 552  | 4.2433          |
+| 3.3436        | 5.0   | 690  | 4.1851          |
+| 2.939         | 6.0   | 828  | 4.1886          |
 
 
 ### Framework versions
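The updated hyperparameters in the diff above are internally consistent, which can be checked with a little arithmetic: the effective batch size is the per-device batch size times the gradient accumulation steps, and the step counts in the results table follow from it. A minimal sketch, assuming the values listed in the model card (the examples-per-epoch figure is derived here, not stated in the card):

```python
# Sanity-check the updated hyperparameters against the training results table.
# All named values come from the model card diff above.

train_batch_size = 1            # per-device batch size (new run)
gradient_accumulation_steps = 2
num_epochs = 6

# The card's total_train_batch_size is the effective batch per optimizer step.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)   # 2, as listed in the card

# The results table shows 138 optimizer steps per epoch and 828 total steps.
steps_per_epoch = 138
total_steps = steps_per_epoch * num_epochs
print(total_steps)              # 828, matching the final table row

# Derived: 138 steps at effective batch 2 is ~276 training examples per epoch,
# which also matches the old run (69 steps at effective batch 4).
examples_per_epoch = steps_per_epoch * total_train_batch_size
print(examples_per_epoch)       # 276
```

This also explains why halving `train_batch_size` (2 → 1) doubled the steps per epoch (69 → 138) while the total step count at the final epoch of each run stayed at 828.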
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4486339afec1cf614855ff7c7c981cd6604026e99f59f47a8c5557e39d874a23
+oid sha256:7b4bcbb5356dde8a15329ab11495572f2d66bb3dabc8f7a980ddf651d1e4cd90
 size 1344172280
runs/Mar25_10-22-22_gcn31.local.snellius.surf.nl/events.out.tfevents.1711358551.gcn31.local.snellius.surf.nl.1845946.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1d132b8483ea35d0f6081ace981d6bff4b3719e3ac003ae641a3396eb80bbce2
+size 12717
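The binary files in this commit are stored as Git LFS pointer files: three `key value` lines giving the spec version, the sha256 object ID, and the byte size, exactly as shown above. A minimal sketch of parsing one such pointer (`parse_lfs_pointer` is a hypothetical helper, not part of this repo):

```python
# Parse a Git LFS pointer file of the three-line "key value" form shown above.
# parse_lfs_pointer is a hypothetical illustration, not part of this repo.

def parse_lfs_pointer(text: str) -> dict:
    """Return the key/value fields of a Git LFS pointer file."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:1d132b8483ea35d0f6081ace981d6bff4b3719e3ac003ae641a3396eb80bbce2
size 12717"""

fields = parse_lfs_pointer(pointer)
print(fields["size"])                       # 12717
print(fields["oid"].startswith("sha256:"))  # True
```

Note that only the pointer changes in a commit like this one; the actual 1.3 GB safetensors payload lives in LFS storage, addressed by the `oid` hash.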
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:92d7d7ba48657e43574e3d1deedfdd10bdc5b34e7b372b3aec112b1d0ce75abd
+oid sha256:95beafee183dd8c0f8967da1e54d6cb38ab5fe23f886d744fcaba66038c60105
 size 4728