ajagadeesh committed
Commit e56a452 · verified · 1 Parent(s): a04f488

Model save

README.md CHANGED
@@ -5,14 +5,14 @@ base_model: meta-llama/Llama-3.2-1B-Instruct
  tags:
  - generated_from_trainer
  model-index:
- - name: Lumen-Seed-v20
+ - name: Lumen-Seed-v30
  results: []
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
 
- # Lumen-Seed-v20
+ # Lumen-Seed-v30
 
  This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on an unknown dataset.
 
@@ -33,14 +33,12 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 0.0003
- - train_batch_size: 16
+ - learning_rate: 0.0002
+ - train_batch_size: 8
  - eval_batch_size: 8
  - seed: 42
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 128
- - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: cosine
+ - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+ - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 100
  - num_epochs: 2
 
@@ -50,8 +48,8 @@ The following hyperparameters were used during training:
 
  ### Framework versions
 
- - PEFT 0.7.1
+ - PEFT 0.15.2
  - Transformers 4.52.1
- - Pytorch 2.5.1
- - Datasets 3.6.0
+ - Pytorch 2.2.0+cu121
+ - Datasets 3.3.1
  - Tokenizers 0.21.1
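The hyperparameter hunk above maps directly onto a `transformers` `TrainingArguments` object. Below is a minimal sketch of a setup that would record those values; the LoRA settings, output directory, and training data are assumptions, since the card only confirms PEFT 0.15.2 and an unknown dataset.

```python
# Minimal sketch reproducing the hyperparameters in the new model card.
# Anything marked "assumed" is illustrative only and not stated in the card.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, Trainer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Assumed LoRA settings: the card only confirms that PEFT 0.15.2 was used.
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)

args = TrainingArguments(
    output_dir="Lumen-Seed-v30",    # assumed; matches the model name in the card
    learning_rate=2e-4,             # learning_rate: 0.0002
    per_device_train_batch_size=8,  # train_batch_size: 8
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    seed=42,                        # seed: 42
    optim="adamw_torch",            # ADAMW_TORCH with default betas/epsilon
    lr_scheduler_type="linear",     # lr_scheduler_type: linear
    warmup_steps=100,               # lr_scheduler_warmup_steps: 100
    num_train_epochs=2,             # num_epochs: 2
)

# trainer = Trainer(model=model, args=args, train_dataset=...)  # dataset unknown
# trainer.train()
```

Note that this revision also drops `gradient_accumulation_steps: 8`, so the effective train batch size falls from 128 to 8.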
runs/May23_16-52-41_ip-10-78-145-115.ec2.internal/events.out.tfevents.1748019162.ip-10-78-145-115.ec2.internal.992208.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:52271d26ac880bac289c252e5c111751a1191db02e194ecc92593559650286a9
- size 343402
+ oid sha256:8d2a54438d557ed0672fd6b007b5ccf4051ebb2c7c84ba7159b6bd78daec02df
+ size 343613
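The tfevents file is tracked with Git LFS, so the diff above shows only the updated pointer (object hash and byte size), not the TensorBoard log itself. After fetching the real file (for example with `git lfs pull`), the log can be inspected; a minimal sketch follows, with the local log directory and the scalar tag name assumed.

```python
# Minimal sketch for reading the training curves back out of the tfevents log.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Assumed local checkout path: the directory that contains the tfevents file.
log_dir = "runs/May23_16-52-41_ip-10-78-145-115.ec2.internal"
ea = EventAccumulator(log_dir)
ea.Reload()  # parse the event file from disk

print(ea.Tags()["scalars"])             # list the scalar tags actually logged
for event in ea.Scalars("train/loss"):  # "train/loss" is an assumed tag name
    print(event.step, event.value)
```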