2025-11-27 11:40:19,577 - INFO - === LoRA training for service S3 ===
2025-11-27 11:40:19,577 - INFO - Train dataset: /workspace/data/dataset_sft_S3_train.jsonl
2025-11-27 11:40:19,577 - INFO - Val dataset : /workspace/data/dataset_sft_S3_val.jsonl
2025-11-27 11:40:19,577 - INFO - Output dir : /workspace/out/starcoder2_7b_lora_s3
2025-11-27 11:40:19,577 - INFO - Loading tokenizer...
2025-11-27 11:40:19,835 - INFO - Loading datasets and applying formatting...
2025-11-27 11:40:21,421 - INFO - bitsandbytes NOT available. Loading model in bfloat16 without quantization...
2025-11-27 11:40:25,211 - INFO - Configuring LoRA...
2025-11-27 11:40:25,211 - INFO - Configuring SFT training...
2025-11-27 11:40:42,348 - INFO - Starting training...
2025-11-27 14:27:16,021 - INFO - Training finished.
2025-11-27 14:27:16,021 - INFO - Total duration (s): 9993.67
2025-11-27 14:27:16,021 - INFO - Epochs trained : 3.0
2025-11-27 14:27:16,021 - INFO - Global steps : 2025
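The duration and step counts above are internally consistent. A short sanity check, using only values taken from the log itself (the timestamps are the "Starting training" and "Training finished" lines; nothing here is part of the training script):

```python
from datetime import datetime

# Timestamps and totals as reported in the log above.
start = datetime(2025, 11, 27, 11, 40, 42)  # "Starting training..."
end = datetime(2025, 11, 27, 14, 27, 16)    # "Training finished."
reported_duration_s = 9993.67
global_steps = 2025
epochs = 3

# Wall-clock time between the two log lines, in seconds.
wall_clock_s = (end - start).total_seconds()

# Optimizer steps per epoch implied by the totals.
steps_per_epoch = global_steps // epochs

print(wall_clock_s)     # 9994.0, within a second of the reported 9993.67
print(steps_per_epoch)  # 675
```

The sub-second gap between 9994.0 and 9993.67 is expected: log timestamps carry millisecond precision that the second-resolution datetimes above discard.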
2025-11-27 14:27:16,021 - INFO - Evaluating on validation set...
2025-11-27 14:29:04,642 - INFO - Evaluation metrics: {'eval_loss': 0.07721275091171265, 'eval_runtime': 108.6171, 'eval_samples_per_second': 5.524, 'eval_steps_per_second': 0.69, 'eval_entropy': 0.0753403959174951, 'eval_num_tokens': 16398642.0, 'eval_mean_token_accuracy': 0.9798894079526266, 'epoch': 3.0}
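The throughput figures in that metrics dict also hang together. The check below derives the implied validation-set size and eval batch size; note that neither value is logged directly, so both are inferences from the reported rates, not facts from the script:

```python
# Subset of the evaluation metrics reported in the log above.
metrics = {
    "eval_runtime": 108.6171,            # seconds
    "eval_samples_per_second": 5.524,
    "eval_steps_per_second": 0.69,
}

# Implied validation-set size: runtime x samples per second.
n_samples = round(metrics["eval_runtime"] * metrics["eval_samples_per_second"])

# Implied number of eval steps: runtime x steps per second.
n_steps = round(metrics["eval_runtime"] * metrics["eval_steps_per_second"])

# Implied per-step eval batch size.
batch_size = round(n_samples / n_steps)

print(n_samples, n_steps, batch_size)  # 600 75 8
```

So the run appears to have evaluated roughly 600 validation examples in 75 steps of 8, consistent with a typical per-device eval batch size.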
2025-11-27 14:29:04,643 - INFO - Metrics saved to: /workspace/out/starcoder2_7b_lora_s3/training_summary_s3.json
2025-11-27 14:29:04,643 - INFO - Saving LoRA model and tokenizer...
2025-11-27 14:29:04,973 - INFO - Save complete.