05/09/2024 19:59:23 - INFO - transformers.tokenization_utils_base - loading file vocab.json from cache at /home/.cache/huggingface/hub/models--microsoft--phi-1_5/snapshots/675aa382d814580b22651a30acb1a585d7c25963/vocab.json
05/09/2024 19:59:23 - INFO - transformers.tokenization_utils_base - loading file merges.txt from cache at /home/.cache/huggingface/hub/models--microsoft--phi-1_5/snapshots/675aa382d814580b22651a30acb1a585d7c25963/merges.txt
05/09/2024 19:59:23 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json from cache at /home/.cache/huggingface/hub/models--microsoft--phi-1_5/snapshots/675aa382d814580b22651a30acb1a585d7c25963/tokenizer.json
05/09/2024 19:59:23 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json from cache at /home/.cache/huggingface/hub/models--microsoft--phi-1_5/snapshots/675aa382d814580b22651a30acb1a585d7c25963/added_tokens.json
05/09/2024 19:59:23 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json from cache at /home/.cache/huggingface/hub/models--microsoft--phi-1_5/snapshots/675aa382d814580b22651a30acb1a585d7c25963/special_tokens_map.json
05/09/2024 19:59:23 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json from cache at /home/.cache/huggingface/hub/models--microsoft--phi-1_5/snapshots/675aa382d814580b22651a30acb1a585d7c25963/tokenizer_config.json
05/09/2024 19:59:23 - INFO - llmtuner.data.template - Add pad token: <|endoftext|>
05/09/2024 19:59:23 - INFO - llmtuner.data.loader - Loading dataset alpaca_data_en_52k.json...
05/09/2024 19:59:24 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--microsoft--phi-1_5/snapshots/675aa382d814580b22651a30acb1a585d7c25963/config.json
05/09/2024 19:59:24 - INFO - transformers.configuration_utils - Model config PhiConfig {
  "_name_or_path": "microsoft/phi-1_5",
  "architectures": [
    "PhiForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": null,
  "embd_pdrop": 0.0,
  "eos_token_id": null,
  "hidden_act": "gelu_new",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 8192,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 2048,
  "model_type": "phi",
  "num_attention_heads": 32,
  "num_hidden_layers": 24,
  "num_key_value_heads": 32,
  "partial_rotary_factor": 0.5,
  "qk_layernorm": false,
  "resid_pdrop": 0.0,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.40.1",
  "use_cache": true,
  "vocab_size": 51200
}
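The config dump above fixes the model's shape, so its parameter count can be estimated by hand. A minimal sketch (biases and LayerNorm weights are omitted, which is why the estimate lands slightly under the 1,419,843,584 total reported further down in this log):

```python
# Shape values copied from the PhiConfig dump above.
vocab, hidden, inter, layers = 51200, 2048, 8192, 24

embed = vocab * hidden       # input embedding matrix
lm_head = vocab * hidden     # separate output head ("tie_word_embeddings": false)
attn = 4 * hidden * hidden   # q, k, v and output projections, per layer
mlp = 2 * hidden * inter     # up and down projections, per layer

total = embed + lm_head + layers * (attn + mlp)
print(total)  # 1417674752 -- within ~0.2% of the logged all-params count
```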
05/09/2024 20:08:34 - INFO - transformers.modeling_utils - loading weights file model.safetensors from cache at /home/.cache/huggingface/hub/models--microsoft--phi-1_5/snapshots/675aa382d814580b22651a30acb1a585d7c25963/model.safetensors
05/09/2024 20:08:34 - INFO - transformers.modeling_utils - Instantiating PhiForCausalLM model under default dtype torch.float16.
05/09/2024 20:08:34 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {}
05/09/2024 20:08:35 - INFO - transformers.modeling_utils - All model checkpoint weights were used when initializing PhiForCausalLM.
05/09/2024 20:08:35 - INFO - transformers.modeling_utils - All the weights of PhiForCausalLM were initialized from the model checkpoint at microsoft/phi-1_5.
If your task is similar to the task the model of the checkpoint was trained on, you can already use PhiForCausalLM for predictions without further training.
05/09/2024 20:08:36 - INFO - transformers.generation.configuration_utils - loading configuration file generation_config.json from cache at /home/.cache/huggingface/hub/models--microsoft--phi-1_5/snapshots/675aa382d814580b22651a30acb1a585d7c25963/generation_config.json
05/09/2024 20:08:36 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {}
05/09/2024 20:08:36 - INFO - llmtuner.model.utils.checkpointing - Gradient checkpointing enabled.
05/09/2024 20:08:36 - INFO - llmtuner.model.utils.attention - Using torch SDPA for faster training and inference.
05/09/2024 20:08:36 - INFO - llmtuner.model.adapter - Fine-tuning method: LoRA
05/09/2024 20:08:36 - INFO - llmtuner.model.loader - trainable params: 1572864 || all params: 1419843584 || trainable%: 0.1108
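The trainable-parameter count above is consistent with LoRA rank 8 applied to the q_proj and v_proj matrices of every layer (an assumption: the rank and target modules are not printed in this log, but these were common LLaMA-Factory defaults). A quick check of the arithmetic:

```python
hidden, layers = 2048, 24
rank = 8               # assumed lora_rank (not shown in the log)
targets_per_layer = 2  # assumed targets: q_proj and v_proj

# Each adapted matrix adds an A (hidden x rank) and a B (rank x hidden) factor.
trainable = layers * targets_per_layer * 2 * hidden * rank
all_params = 1_419_843_584  # from the log line above

print(trainable)                               # 1572864 -- matches the log
print(round(100 * trainable / all_params, 4))  # 0.1108  -- matches trainable%
```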
05/09/2024 20:08:36 - INFO - transformers.trainer - Using auto half precision backend
05/09/2024 20:08:36 - INFO - transformers.trainer - ***** Running training *****
05/09/2024 20:08:36 - INFO - transformers.trainer - Num examples = 500
05/09/2024 20:08:36 - INFO - transformers.trainer - Num Epochs = 3
05/09/2024 20:08:36 - INFO - transformers.trainer - Instantaneous batch size per device = 2
05/09/2024 20:08:36 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16
05/09/2024 20:08:36 - INFO - transformers.trainer - Gradient Accumulation steps = 8
05/09/2024 20:08:36 - INFO - transformers.trainer - Total optimization steps = 93
05/09/2024 20:08:36 - INFO - transformers.trainer - Number of trainable parameters = 1,572,864
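The step arithmetic in the training banner above can be reproduced. A sketch, assuming a single device (implied by total batch 16 = 2 per device × 8 accumulation steps):

```python
import math

per_device_bs, grad_accum, num_devices = 2, 8, 1  # num_devices is an assumption
num_examples, epochs = 500, 3

total_bs = per_device_bs * grad_accum * num_devices
batches_per_epoch = math.ceil(num_examples / (per_device_bs * num_devices))  # 250
steps_per_epoch = batches_per_epoch // grad_accum  # Trainer floors the partial step
total_steps = steps_per_epoch * epochs

print(total_bs, total_steps)  # 16 93 -- matches the banner
```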
05/09/2024 20:08:47 - INFO - llmtuner.extras.callbacks - {'loss': 2.1283, 'learning_rate': 1.9858e-04, 'epoch': 0.16}
05/09/2024 20:08:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.8000, 'learning_rate': 1.9435e-04, 'epoch': 0.32}
05/09/2024 20:09:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.5595, 'learning_rate': 1.8743e-04, 'epoch': 0.48}
05/09/2024 20:09:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.6097, 'learning_rate': 1.7803e-04, 'epoch': 0.64}
05/09/2024 20:09:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.2942, 'learning_rate': 1.6641e-04, 'epoch': 0.80}
05/09/2024 20:09:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.4620, 'learning_rate': 1.5290e-04, 'epoch': 0.96}
05/09/2024 20:09:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.4432, 'learning_rate': 1.3788e-04, 'epoch': 1.12}
05/09/2024 20:09:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.3561, 'learning_rate': 1.2178e-04, 'epoch': 1.28}
05/09/2024 20:10:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.3299, 'learning_rate': 1.0506e-04, 'epoch': 1.44}
05/09/2024 20:10:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.2208, 'learning_rate': 8.8204e-05, 'epoch': 1.60}
05/09/2024 20:10:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.4574, 'learning_rate': 7.1679e-05, 'epoch': 1.76}
05/09/2024 20:10:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1423, 'learning_rate': 5.5961e-05, 'epoch': 1.92}
05/09/2024 20:10:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.3134, 'learning_rate': 4.1495e-05, 'epoch': 2.08}
05/09/2024 20:11:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.3384, 'learning_rate': 2.8695e-05, 'epoch': 2.24}
05/09/2024 20:11:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.3873, 'learning_rate': 1.7924e-05, 'epoch': 2.40}
05/09/2024 20:11:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.2214, 'learning_rate': 9.4885e-06, 'epoch': 2.56}
05/09/2024 20:11:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.3198, 'learning_rate': 3.6294e-06, 'epoch': 2.72}
05/09/2024 20:11:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.3832, 'learning_rate': 5.1307e-07, 'epoch': 2.88}
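The learning rates logged above trace a warmup-free cosine decay from a 2e-4 peak over the 93 optimization steps, logged every 5 steps. The peak LR and schedule type are inferred from the logged values, not printed anywhere in this log; a sketch that reproduces them under that assumption:

```python
import math

max_lr, total_steps = 2e-4, 93  # assumed peak LR and schedule length

def cosine_lr(step):
    # Standard cosine annealing to zero, no warmup.
    return 0.5 * max_lr * (1 + math.cos(math.pi * step / total_steps))

print(f"{cosine_lr(5):.4e}")   # 1.9858e-04 -- first logged value
print(f"{cosine_lr(10):.4e}")  # 1.9435e-04 -- second logged value
```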
05/09/2024 20:11:49 - INFO - transformers.trainer -
Training completed. Do not forget to share your model on huggingface.co/models =)
05/09/2024 20:11:49 - INFO - transformers.trainer - Saving model checkpoint to saves/Phi-1.5-1.3B/lora/train_2024-05-09-19-57-19
05/09/2024 20:11:49 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--microsoft--phi-1_5/snapshots/675aa382d814580b22651a30acb1a585d7c25963/config.json
05/09/2024 20:11:49 - INFO - transformers.configuration_utils - Model config PhiConfig {
  "_name_or_path": "microsoft/phi-1_5",
  "architectures": [
    "PhiForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": null,
  "embd_pdrop": 0.0,
  "eos_token_id": null,
  "hidden_act": "gelu_new",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 8192,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 2048,
  "model_type": "phi",
  "num_attention_heads": 32,
  "num_hidden_layers": 24,
  "num_key_value_heads": 32,
  "partial_rotary_factor": 0.5,
  "qk_layernorm": false,
  "resid_pdrop": 0.0,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.40.1",
  "use_cache": true,
  "vocab_size": 51200
}
05/09/2024 20:11:49 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Phi-1.5-1.3B/lora/train_2024-05-09-19-57-19/tokenizer_config.json
05/09/2024 20:11:49 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Phi-1.5-1.3B/lora/train_2024-05-09-19-57-19/special_tokens_map.json
05/09/2024 20:11:49 - INFO - transformers.modelcard - Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}