furmaniak committed
Commit ad262c7 · verified · 1 parent: 122a018

Model save

Files changed (1): README.md (+5, −11)
README.md CHANGED

@@ -4,7 +4,6 @@ license: apache-2.0
 base_model: Qwen/Qwen2.5-32B
 tags:
 - llama-factory
-- full
 - generated_from_trainer
 model-index:
 - name: pretrain
@@ -16,9 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # pretrain
 
-This model is a fine-tuned version of [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) on the openalex_small dataset.
-It achieves the following results on the evaluation set:
-- Loss: 1.1904
+This model is a fine-tuned version of [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) on an unknown dataset.
 
 ## Model description
 
@@ -39,13 +36,13 @@ More information needed
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
 - train_batch_size: 1
-- eval_batch_size: 1
+- eval_batch_size: 8
 - seed: 42
 - distributed_type: multi-GPU
 - num_devices: 8
-- gradient_accumulation_steps: 8
-- total_train_batch_size: 64
-- total_eval_batch_size: 8
+- gradient_accumulation_steps: 16
+- total_train_batch_size: 128
+- total_eval_batch_size: 64
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.1
@@ -53,9 +50,6 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:------:|:----:|:---------------:|
-| 1.168 | 0.7273 | 10 | 1.1995 |
 
 
 ### Framework versions
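
For readers checking the updated batch-size totals: they are consistent with the usual multi-GPU Trainer accounting, where the effective train batch is per-device batch × gradient accumulation steps × number of devices (1 × 16 × 8 = 128), and the eval total is per-device eval batch × devices (8 × 8 = 64). Below is a minimal sketch of equivalent 🤗 Transformers `TrainingArguments`, assuming standard Trainer semantics; `output_dir` and the variable names are illustrative and not taken from the commit:

```python
from transformers import TrainingArguments

# Hedged sketch (not from the commit): TrainingArguments mirroring the
# hyperparameters listed in the updated README.
args = TrainingArguments(
    output_dir="pretrain",           # placeholder name, not in the diff
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,    # raised from 1 in this commit
    gradient_accumulation_steps=16,  # raised from 8 in this commit
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)

# The README's totals follow from the per-device settings on 8 GPUs:
num_devices = 8
assert args.per_device_train_batch_size * args.gradient_accumulation_steps * num_devices == 128
assert args.per_device_eval_batch_size * num_devices == 64
```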