amphora committed on
Commit f82ebd4 · verified · 1 Parent(s): 15eaf90

End of training

Files changed (2)
  1. README.md +134 -0
  2. debug.log +0 -0
README.md ADDED
@@ -0,0 +1,134 @@
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
datasets:
- train_T2J.jsonl
model-index:
- name: FC-T2J-SFT-1_5B
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.15.0.dev0`
```yaml
base_model: Qwen/Qwen2.5-1.5B-Instruct

load_in_8bit: false
load_in_4bit: false

datasets:
  - path: train_T2J.jsonl
    type: chat_template

dataset_prepared_path: preprocess
val_set_size: 0.01
output_dir: ./outputs

adapter:
lora_model_dir:

sequence_len: 16384
sample_packing: false
eval_sample_packing: false
pad_to_sequence_len: false

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

wandb_project: FC-T2J
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
hub_model_id: amphora/FC-T2J-SFT-1_5B

gradient_accumulation_steps: 128
micro_batch_size: 2
num_epochs: 3
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 2e-5

bf16: auto
tf32: false

gradient_checkpointing:
resume_from_checkpoint:
logging_steps: 1
flash_attention: true

warmup_ratio: 0.05
weight_decay: 0.01
evals_per_epoch: 0
saves_per_epoch: 1

# fsdp_config:
#   fsdp_version: 2
#   fsdp_offload_params: false
#   fsdp_cpu_ram_efficient_loading: true
#   fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
#   fsdp_transformer_layer_cls_to_wrap: Qwen3DecoderLayer
#   fsdp_state_dict_type: FULL_STATE_DICT
#   fsdp_sharding_strategy: FULL_SHARD
#   fsdp_reshard_after_forward: true
#   fsdp_activation_checkpointing: true

```

</details><br>

# FC-T2J-SFT-1_5B

This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the train_T2J.jsonl dataset.
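
A minimal usage sketch with transformers; the prompt, generation length, and chat formatting below are illustrative assumptions, not settings taken from this repo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amphora/FC-T2J-SFT-1_5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# dtype="auto" keeps the checkpoint's dtype; older transformers releases
# call this argument torch_dtype.
model = AutoModelForCausalLM.from_pretrained(model_id, dtype="auto", device_map="auto")

# Hypothetical single-turn prompt; the model was trained on chat-formatted data.
messages = [{"role": "user", "content": "..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```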

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
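
The axolotl config above reads train_T2J.jsonl with the chat_template dataset type. As a sketch, one record could look like the following; the OpenAI-style messages schema is assumed from axolotl's defaults for this type, and the field contents are placeholders:

```python
import json

# Hypothetical record for train_T2J.jsonl: a "messages" list of
# role/content turns, the schema axolotl's chat_template type expects
# by default (an assumption; the actual dataset is not in this repo).
record = {
    "messages": [
        {"role": "user", "content": "..."},
        {"role": "assistant", "content": "..."},
    ]
}

# One JSON object per line, per the .jsonl convention.
with open("train_T2J.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```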

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a consistency check follows the list):
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 256
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 87
- training_steps: 1751
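
The reported values are consistent with the axolotl config above; a quick sanity check (the single-process world size is inferred from 2 × 128 = 256, and the rounding of warmup steps is assumed):

```python
# Effective batch size: micro batch size x gradient accumulation x processes.
micro_batch_size = 2
gradient_accumulation_steps = 128
num_processes = 1  # inferred: 2 * 128 * 1 matches the reported 256

assert micro_batch_size * gradient_accumulation_steps * num_processes == 256

# Warmup steps derived from warmup_ratio in the config (rounding mode assumed).
training_steps = 1751
warmup_ratio = 0.05
print(int(training_steps * warmup_ratio))  # 87, matching lr_scheduler_warmup_steps
```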

### Training results



### Framework versions

- Transformers 5.0.0
- PyTorch 2.9.1+cu128
- Datasets 4.5.0
- Tokenizers 0.22.2
debug.log CHANGED
The diff for this file is too large to render. See raw diff