[2025-03-23 16:49:40,395][__main__][INFO] - cache_dir: /media/data/tmp
dataset:
  name: kamel-usp/aes_enem_dataset
  split: JBCS2025
training_params:
  seed: 42
  num_train_epochs: 20
  logging_steps: 100
  metric_for_best_model: QWK
  bf16: true
post_training_results:
  model_path: /workspace/jbcs2025/outputs/2025-03-23/15-04-12
experiments:
  model:
    name: microsoft/phi-4
    type: phi4_classification_lora
    num_labels: 6
    output_dir: ./results/phi4-balanced/C2
    logging_dir: ./logs/phi4-balanced/C2
    best_model_dir: ./results/phi4-balanced/C2/best_model
    lora_r: 8
    lora_dropout: 0.05
    lora_alpha: 16
    lora_target_modules: all-linear
  dataset:
    grade_index: 1
  training_id: phi4-balanced-C2
  training_params:
    weight_decay: 0.01
    warmup_ratio: 0.1
    learning_rate: 5.0e-05
    train_batch_size: 1
    eval_batch_size: 16
    gradient_accumulation_steps: 16
    gradient_checkpointing: false
[2025-03-23 16:49:40,398][__main__][INFO] - Starting the Fine Tuning training process.
[2025-03-23 16:49:43,783][transformers.tokenization_utils_base][INFO] - loading file vocab.json from cache at /media/data/tmp/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/vocab.json
[2025-03-23 16:49:43,783][transformers.tokenization_utils_base][INFO] - loading file merges.txt from cache at /media/data/tmp/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/merges.txt
[2025-03-23 16:49:43,783][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at /media/data/tmp/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/tokenizer.json
[2025-03-23 16:49:43,783][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /media/data/tmp/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/added_tokens.json
[2025-03-23 16:49:43,783][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /media/data/tmp/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/special_tokens_map.json
[2025-03-23 16:49:43,784][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /media/data/tmp/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/tokenizer_config.json
[2025-03-23 16:49:43,784][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
[2025-03-23 16:49:44,000][__main__][INFO] - Tokenizer function parameters- Padding:longest; Truncation: False
[2025-03-23 16:49:45,344][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /media/data/tmp/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 16:49:45,345][transformers.configuration_utils][INFO] - Model config Phi3Config {
  "architectures": [
    "Phi3ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 100257,
  "embd_pdrop": 0.0,
  "eos_token_id": 100265,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "id2label": {
    "0": 0,
    "1": 40,
    "2": 80,
    "3": 120,
    "4": 160,
    "5": 200
  },
  "initializer_range": 0.02,
  "intermediate_size": 17920,
  "label2id": {
    "0": 0,
    "40": 1,
    "80": 2,
    "120": 3,
    "160": 4,
    "200": 5
  },
  "max_position_embeddings": 16384,
  "model_type": "phi3",
  "num_attention_heads": 40,
  "num_hidden_layers": 40,
  "num_key_value_heads": 10,
  "original_max_position_embeddings": 16384,
  "pad_token_id": 100349,
  "partial_rotary_factor": 1.0,
  "resid_pdrop": 0.0,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 250000,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.50.0",
  "use_cache": true,
  "vocab_size": 100352
}
[2025-03-23 16:49:45,370][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /media/data/tmp/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/model.safetensors.index.json
[2025-03-23 16:49:45,370][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.bfloat16 as defined in model's config object
[2025-03-23 16:49:45,370][transformers.modeling_utils][INFO] - Instantiating Phi3ForSequenceClassification model under default dtype torch.bfloat16.
[2025-03-23 16:50:07,592][transformers.modeling_utils][INFO] - Some weights of the model checkpoint at microsoft/phi-4 were not used when initializing Phi3ForSequenceClassification: ['lm_head.weight']
- This IS expected if you are initializing Phi3ForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing Phi3ForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[2025-03-23 16:50:07,592][transformers.modeling_utils][WARNING] - Some weights of Phi3ForSequenceClassification were not initialized from the model checkpoint at microsoft/phi-4 and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[2025-03-23 16:50:09,632][__main__][INFO] - None
[2025-03-23 16:50:09,634][transformers.training_args][INFO] - PyTorch: setting up devices
[2025-03-23 16:50:09,677][__main__][INFO] - Total steps: 620. Number of warmup steps: 62
[2025-03-23 16:50:09,684][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
[2025-03-23 16:50:09,707][transformers.trainer][INFO] - Using auto half precision backend
[2025-03-23 16:50:09,708][transformers.trainer][WARNING] - No label_names provided for model class `PeftModelForSequenceClassification`. Since `PeftModel` hides base models input arguments, if label_names is not given, label_names can't be set automatically within `Trainer`. Note that empty label_names list will be used instead.
[2025-03-23 16:50:09,736][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id. If id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 16:50:09,749][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-23 16:50:09,749][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 16:50:09,749][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 16:50:43,802][transformers][INFO] - {'accuracy': 0.22727272727272727, 'RMSE': 61.987290975039734, 'QWK': 0.0, 'HDIV': 0.19696969696969702, 'Macro_F1': 0.07407407407407407, 'Micro_F1': 0.22727272727272727, 'Weighted_F1': 0.08417508417508417, 'Macro_F1_(ignoring_nan)': np.float64(0.37037037037037035)}
[2025-03-23 16:50:43,805][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 16:50:44,041][transformers.trainer][INFO] - The following columns in the training set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id. If id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 16:50:44,073][transformers.trainer][INFO] - ***** Running training *****
[2025-03-23 16:50:44,073][transformers.trainer][INFO] - Num examples = 500
[2025-03-23 16:50:44,073][transformers.trainer][INFO] - Num Epochs = 20
[2025-03-23 16:50:44,073][transformers.trainer][INFO] - Instantaneous batch size per device = 1
[2025-03-23 16:50:44,073][transformers.trainer][INFO] - Total train batch size (w. parallel, distributed & accumulation) = 16
[2025-03-23 16:50:44,073][transformers.trainer][INFO] - Gradient Accumulation steps = 16
[2025-03-23 16:50:44,073][transformers.trainer][INFO] - Total optimization steps = 620
[2025-03-23 16:50:44,075][transformers.trainer][INFO] - Number of trainable parameters = 27,883,520
[2025-03-23 16:59:37,359][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id. If id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 16:59:37,364][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-23 16:59:37,364][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 16:59:37,364][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 17:00:11,302][transformers][INFO] - {'accuracy': 0.19696969696969696, 'RMSE': 78.39294959021854, 'QWK': 0.14859294692382052, 'HDIV': 0.15909090909090906, 'Macro_F1': 0.1369098191528098, 'Micro_F1': 0.19696969696969696, 'Weighted_F1': 0.1820793197173826, 'Macro_F1_(ignoring_nan)': np.float64(0.17113727394101225)}
[2025-03-23 17:00:11,304][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
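[Editor's note: the setup logged above — the `experiments.model` block, the newly initialized `score.weight`, and the 27,883,520 trainable parameters — is consistent with a standard transformers + peft sequence-classification setup. A minimal sketch under that assumption (variable names are illustrative; the actual training script is not part of this log):]

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

MODEL_NAME = "microsoft/phi-4"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# num_labels: 6 -> one class per ENEM competence score {0, 40, 80, 120, 160, 200}.
# This attaches a fresh `score` head, which explains the "newly initialized:
# ['score.weight']" warning logged above.
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=6,
    torch_dtype=torch.bfloat16,  # bf16: true in the config
)

# lora_r: 8, lora_alpha: 16, lora_dropout: 0.05, lora_target_modules: all-linear
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules="all-linear",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # the log reports 27,883,520 trainable parameters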
[2025-03-23 17:00:11,307][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-32
[2025-03-23 17:00:11,698][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 17:00:11,699][transformers.configuration_utils][INFO] - Model config Phi3Config {
  "architectures": [
    "Phi3ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 100257,
  "embd_pdrop": 0.0,
  "eos_token_id": 100265,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 17920,
  "max_position_embeddings": 16384,
  "model_type": "phi3",
  "num_attention_heads": 40,
  "num_hidden_layers": 40,
  "num_key_value_heads": 10,
  "original_max_position_embeddings": 16384,
  "pad_token_id": 100349,
  "partial_rotary_factor": 1.0,
  "resid_pdrop": 0.0,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 250000,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.50.0",
  "use_cache": true,
  "vocab_size": 100352
}
[2025-03-23 17:09:05,827][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id. If id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 17:09:05,829][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-23 17:09:05,830][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 17:09:05,830][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 17:09:39,710][transformers][INFO] - {'accuracy': 0.38636363636363635, 'RMSE': 54.9379815626841, 'QWK': 0.04807692307692302, 'HDIV': 0.11363636363636365, 'Macro_F1': 0.1793170731707317, 'Micro_F1': 0.38636363636363635, 'Weighted_F1': 0.32978566149297855, 'Macro_F1_(ignoring_nan)': np.float64(0.4482926829268293)}
[2025-03-23 17:09:39,710][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
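[Editor's note: the scheduler line earlier ("Total steps: 620. Number of warmup steps: 62") follows from the training banner (500 examples, per-device batch 1, accumulation 16, 20 epochs). A quick check, assuming steps per epoch are computed with floor division:]

num_examples = 500   # "Num examples = 500"
grad_accum = 16      # gradient_accumulation_steps (train_batch_size = 1)
epochs = 20          # num_train_epochs
warmup_ratio = 0.1   # from training_params

steps_per_epoch = num_examples // grad_accum   # 500 // 16 = 31
total_steps = steps_per_epoch * epochs         # 31 * 20 = 620, as logged
warmup_steps = int(total_steps * warmup_ratio) # 62, as logged

# The checkpoint names (32, 64, ..., 384) advance by 32 per epoch: the
# 500 % 16 = 4 leftover batches plausibly add one partial optimizer step at
# each epoch boundary, so the Trainer's global step runs slightly ahead of
# the 31-steps-per-epoch schedule computed above.
print(total_steps, warmup_steps)  # -> 620 62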
[2025-03-23 17:09:39,714][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-64
[2025-03-23 17:09:40,012][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 17:18:33,419][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id. If id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 17:18:33,422][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-23 17:18:33,422][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 17:18:33,422][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 17:19:07,193][transformers][INFO] - {'accuracy': 0.4621212121212121, 'RMSE': 53.37119867948301, 'QWK': 0.3829076151826213, 'HDIV': 0.030303030303030276, 'Macro_F1': 0.21136288998357963, 'Micro_F1': 0.4621212121212121, 'Weighted_F1': 0.3556202418271384, 'Macro_F1_(ignoring_nan)': np.float64(0.5284072249589491)}
[2025-03-23 17:19:07,193][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
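[Editor's note: the evaluation dict logged each epoch centers on QWK (quadratic weighted kappa), the configured metric_for_best_model. The metric code itself is not in this log; a sketch of plausible implementations for QWK and the "Macro_F1_(ignoring_nan)" variant, assuming both operate on the six class indices (zero_division=np.nan needs scikit-learn >= 1.3):]

import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score

def qwk(y_true, y_pred):
    # Quadratic weighted kappa over the six class indices 0..5.
    return cohen_kappa_score(y_true, y_pred, weights="quadratic")

def macro_f1_ignoring_nan(y_true, y_pred, num_labels=6):
    # Per-class F1; classes with no true or predicted samples yield nan
    # instead of 0, and nanmean averages only over the observed classes.
    # This is one plausible reading of "Macro_F1_(ignoring_nan)" and would
    # also explain the np.float64 repr in the logged dicts.
    per_class = f1_score(
        y_true, y_pred,
        labels=list(range(num_labels)),
        average=None,
        zero_division=np.nan,
    )
    return np.nanmean(per_class)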
[2025-03-23 17:19:07,197][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-96
[2025-03-23 17:19:07,534][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 17:19:08,261][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-32] due to args.save_total_limit
[2025-03-23 17:19:08,303][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-64] due to args.save_total_limit
[2025-03-23 17:28:00,964][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id. If id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 17:28:00,967][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-23 17:28:00,967][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 17:28:00,967][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 17:28:34,842][transformers][INFO] - {'accuracy': 0.42424242424242425, 'RMSE': 51.52228114656274, 'QWK': 0.29195650044087385, 'HDIV': 0.007575757575757569, 'Macro_F1': 0.20225108225108226, 'Micro_F1': 0.42424242424242425, 'Weighted_F1': 0.33055227600682147, 'Macro_F1_(ignoring_nan)': np.float64(0.3370851370851371)}
[2025-03-23 17:28:34,843][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
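[Editor's note: the recurring pattern — evaluate every epoch, save a checkpoint, then delete older ones "due to args.save_total_limit" — is consistent with per-epoch eval/save strategies, best-model tracking on QWK, and a small save_total_limit; training also stops well before the planned 620 steps, which suggests an early-stopping callback. A sketch of such a Trainer setup under those assumptions (the patience value is a guess; model, datasets, and compute_metrics are the hypothetical objects from the earlier sketches):]

from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="./results/phi4-balanced/C2",
    logging_dir="./logs/phi4-balanced/C2",
    num_train_epochs=20,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=5e-5,
    weight_decay=0.01,
    warmup_ratio=0.1,
    logging_steps=100,
    bf16=True,
    seed=42,
    eval_strategy="epoch",
    save_strategy="epoch",
    metric_for_best_model="QWK",
    greater_is_better=True,
    load_best_model_at_end=True,  # the log later reloads checkpoint-224
    save_total_limit=1,           # would trigger the deletions logged above
)

trainer = Trainer(
    model=model,                        # PEFT-wrapped classifier from the earlier sketch
    args=args,
    train_dataset=train_dataset,        # hypothetical variable names
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,    # returns accuracy, RMSE, QWK, HDIV, F1 variants
    callbacks=[EarlyStoppingCallback(early_stopping_patience=5)],  # guessed patience
)
trainer.train()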
[2025-03-23 17:28:34,846][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-128
[2025-03-23 17:28:35,158][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 17:37:28,963][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id. If id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 17:37:28,965][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-23 17:37:28,965][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 17:37:28,965][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 17:38:02,817][transformers][INFO] - {'accuracy': 0.4696969696969697, 'RMSE': 53.59782899266791, 'QWK': 0.37511984659635667, 'HDIV': 0.030303030303030276, 'Macro_F1': 0.26887134164010434, 'Micro_F1': 0.4696969696969697, 'Weighted_F1': 0.3967379898667931, 'Macro_F1_(ignoring_nan)': np.float64(0.3360891770501304)}
[2025-03-23 17:38:02,818][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 17:38:02,820][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-160
[2025-03-23 17:38:03,215][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 17:38:04,167][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-128] due to args.save_total_limit
[2025-03-23 17:46:56,854][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id. If id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 17:46:56,856][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-23 17:46:56,856][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 17:46:56,856][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 17:47:30,576][transformers][INFO] - {'accuracy': 0.4621212121212121, 'RMSE': 50.81159495448044, 'QWK': 0.29590303515977157, 'HDIV': 0.045454545454545414, 'Macro_F1': 0.27262584876988477, 'Micro_F1': 0.4621212121212121, 'Weighted_F1': 0.4151466188225378, 'Macro_F1_(ignoring_nan)': np.float64(0.4543764146164746)}
[2025-03-23 17:47:30,577][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 17:47:30,580][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-192
[2025-03-23 17:47:30,916][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 17:47:31,626][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-160] due to args.save_total_limit
[2025-03-23 17:56:24,123][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id. If id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 17:56:24,125][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-23 17:56:24,125][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 17:56:24,125][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 17:56:57,849][transformers][INFO] - {'accuracy': 0.4090909090909091, 'RMSE': 59.59458995173263, 'QWK': 0.40026051358392256, 'HDIV': 0.045454545454545414, 'Macro_F1': 0.2429620933666886, 'Micro_F1': 0.4090909090909091, 'Weighted_F1': 0.39500831974942074, 'Macro_F1_(ignoring_nan)': np.float64(0.36444314005003287)}
[2025-03-23 17:56:57,849][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 17:56:57,853][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-224
[2025-03-23 17:56:58,174][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 17:56:58,940][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-96] due to args.save_total_limit
[2025-03-23 17:56:58,982][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-192] due to args.save_total_limit
[2025-03-23 18:05:51,473][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id. If id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 18:05:51,476][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-23 18:05:51,476][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 18:05:51,476][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 18:06:25,206][transformers][INFO] - {'accuracy': 0.42424242424242425, 'RMSE': 53.25752187591567, 'QWK': 0.2858926342072411, 'HDIV': 0.022727272727272707, 'Macro_F1': 0.19468859921782666, 'Micro_F1': 0.42424242424242425, 'Weighted_F1': 0.3598213024932513, 'Macro_F1_(ignoring_nan)': np.float64(0.29203289882674)}
[2025-03-23 18:06:25,206][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 18:06:25,209][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-256
[2025-03-23 18:06:30,363][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 18:15:23,835][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id. If id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 18:15:23,837][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-23 18:15:23,837][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 18:15:23,837][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 18:15:57,602][transformers][INFO] - {'accuracy': 0.5303030303030303, 'RMSE': 48.98979485566356, 'QWK': 0.34545454545454535, 'HDIV': 0.030303030303030276, 'Macro_F1': 0.3304219312830178, 'Micro_F1': 0.5303030303030303, 'Weighted_F1': 0.49144563122828755, 'Macro_F1_(ignoring_nan)': np.float64(0.4130274141037722)}
[2025-03-23 18:15:57,602][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 18:15:57,605][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-288
[2025-03-23 18:15:57,932][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 18:15:58,841][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-256] due to args.save_total_limit
[2025-03-23 18:24:51,529][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id. If id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 18:24:51,531][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-23 18:24:51,531][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 18:24:51,531][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 18:25:25,235][transformers][INFO] - {'accuracy': 0.45454545454545453, 'RMSE': 51.75700801618925, 'QWK': 0.3186659192825111, 'HDIV': 0.05303030303030298, 'Macro_F1': 0.2310617893009612, 'Micro_F1': 0.45454545454545453, 'Weighted_F1': 0.4315089072701878, 'Macro_F1_(ignoring_nan)': np.float64(0.34659268395144177)}
[2025-03-23 18:25:25,236][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 18:25:25,240][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-320
[2025-03-23 18:25:26,405][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 18:25:27,125][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-288] due to args.save_total_limit
[2025-03-23 18:34:20,013][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id. If id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 18:34:20,015][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-23 18:34:20,015][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 18:34:20,015][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 18:34:53,725][transformers][INFO] - {'accuracy': 0.4318181818181818, 'RMSE': 59.59458995173263, 'QWK': 0.3200900077350397, 'HDIV': 0.06060606060606055, 'Macro_F1': 0.23509519464835335, 'Micro_F1': 0.4318181818181818, 'Weighted_F1': 0.42138168615966626, 'Macro_F1_(ignoring_nan)': np.float64(0.35264279197253)}
[2025-03-23 18:34:53,725][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 18:34:53,728][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-352
[2025-03-23 18:34:54,034][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 18:34:54,743][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-320] due to args.save_total_limit
[2025-03-23 18:43:47,551][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id. If id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 18:43:47,552][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-23 18:43:47,552][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 18:43:47,552][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 18:44:21,266][transformers][INFO] - {'accuracy': 0.5378787878787878, 'RMSE': 46.056618647183825, 'QWK': 0.355684480642642, 'HDIV': 0.045454545454545414, 'Macro_F1': 0.27301688194743967, 'Micro_F1': 0.5378787878787878, 'Weighted_F1': 0.4949473165745214, 'Macro_F1_(ignoring_nan)': np.float64(0.4095253229211595)}
[2025-03-23 18:44:21,267][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 18:44:21,269][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-384
[2025-03-23 18:44:23,361][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 18:44:24,564][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-352] due to args.save_total_limit
[2025-03-23 18:44:24,604][transformers.trainer][INFO] - Training completed. Do not forget to share your model on huggingface.co/models =)
[2025-03-23 18:44:24,604][transformers.trainer][INFO] - Loading best model from /workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-224 (score: 0.40026051358392256).
[2025-03-23 18:44:25,097][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/16-49-40/results/phi4-balanced/C2/checkpoint-384] due to args.save_total_limit
[2025-03-23 18:44:25,152][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id. If id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 18:44:25,154][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-23 18:44:25,155][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 18:44:25,155][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 18:44:59,146][transformers][INFO] - {'accuracy': 0.4090909090909091, 'RMSE': 59.59458995173263, 'QWK': 0.40026051358392256, 'HDIV': 0.045454545454545414, 'Macro_F1': 0.2429620933666886, 'Micro_F1': 0.4090909090909091, 'Weighted_F1': 0.39500831974942074, 'Macro_F1_(ignoring_nan)': np.float64(0.36444314005003287)}
[2025-03-23 18:44:59,148][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 18:44:59,149][__main__][INFO] - Training completed successfully.
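[Editor's note: the best checkpoint by QWK (step 224, QWK ≈ 0.4003) is reloaded here before the final evaluations, and the adapter is saved to the best_model directory just below. Reusing it later would look roughly like this; a sketch assuming that directory holds a standard PEFT adapter, with a hypothetical single-essay scoring pass (id2label comes from the first config dump):]

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/phi-4", num_labels=6, torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, "./results/phi4-balanced/C2/best_model")
model.eval()

id2label = {0: 0, 1: 40, 2: 80, 3: 120, 4: 160, 5: 200}
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")
inputs = tokenizer("<essay text here>", return_tensors="pt", padding="longest")
with torch.no_grad():
    logits = model(**inputs).logits
print(id2label[int(logits.argmax(dim=-1))])  # predicted competence score, 0-200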
[2025-03-23 18:44:59,149][__main__][INFO] - Running on Test
[2025-03-23 18:44:59,150][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id. If id_prompt, reference, prompt, grades, essay_year, essay_text, supporting_text, id are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 18:44:59,151][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-03-23 18:44:59,151][transformers.trainer][INFO] - Num examples = 138
[2025-03-23 18:44:59,151][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 18:45:35,347][transformers][INFO] - {'accuracy': 0.45652173913043476, 'RMSE': 60.91095901015048, 'QWK': 0.4118587182355762, 'HDIV': 0.07971014492753625, 'Macro_F1': 0.28200712527929656, 'Micro_F1': 0.45652173913043476, 'Weighted_F1': 0.43319356755070093, 'Macro_F1_(ignoring_nan)': np.float64(0.4230106879189448)}
[2025-03-23 18:45:35,347][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 18:45:35,349][transformers.trainer][INFO] - Saving model checkpoint to ./results/phi4-balanced/C2/best_model
[2025-03-23 18:45:35,634][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 18:45:36,969][__main__][INFO] - Fine Tuning Finished.
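[Editor's note: the scale of the logged RMSE values (≈46-78 during training, 60.9 on test) is consistent with RMSE computed on the 0-200 ENEM score scale after mapping class indices through id2label, rather than on the raw indices. A sketch under that assumption:]

import numpy as np

ID2LABEL = {0: 0, 1: 40, 2: 80, 3: 120, 4: 160, 5: 200}  # from the config dump

def grade_rmse(true_classes, pred_classes):
    # Map the six class indices to ENEM scores, then take the usual RMSE.
    t = np.array([ID2LABEL[c] for c in true_classes], dtype=float)
    p = np.array([ID2LABEL[c] for c in pred_classes], dtype=float)
    return float(np.sqrt(np.mean((t - p) ** 2)))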