Upload logs/training_log_step3000.log with huggingface_hub
logs/training_log_step3000.log  ADDED  (+149 -0)
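For reference, a commit like this one can be produced with the `huggingface_hub` `upload_file` API. The sketch below is illustrative only, not the uploader actually used for this commit; the helper name, the local path argument, and the lazy import are assumptions (only the repo id comes from the log).

```python
repo_id = "ranjan56cse/t5-base-xsum-lora"  # repository named in this commit

def upload_log(local_path: str, path_in_repo: str) -> str:
    """Push one log file to the Hub and return the repo URL.

    Hypothetical helper: requires `pip install huggingface_hub` and a saved
    token (e.g. from `huggingface-cli login`).
    """
    from huggingface_hub import HfApi  # imported lazily so the sketch parses without the package
    HfApi().upload_file(
        path_or_fileobj=local_path,          # local file to push
        path_in_repo=path_in_repo,           # destination path inside the repo
        repo_id=repo_id,
        commit_message=f"Upload {path_in_repo} with huggingface_hub",
    )
    return f"https://huggingface.co/{repo_id}"
```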
@@ -0,0 +1,149 @@
| 1 |
+
2025-11-09 19:05:41,604 - INFO -
|
| 2 |
+
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
|
| 3 |
+
β T5 TRAINING CONFIGURATION β
|
| 4 |
+
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
|
| 5 |
+
Mode: FULL
|
| 6 |
+
Platform: vast
|
| 7 |
+
Repository: ranjan56cse/t5-base-xsum-lora
|
| 8 |
+
Epochs: 3
|
| 9 |
+
Samples: ALL (204k)
|
| 10 |
+
Batch size: 16
|
| 11 |
+
Gradient accum: 2
|
| 12 |
+
Effective batch: 32
|
| 13 |
+
Save every: 1000 steps
|
| 14 |
+
Expected time: ~8-10 hours
|
| 15 |
+
|
| 16 |
+
2025-11-09 19:05:41,604 - INFO - Creating repository: ranjan56cse/t5-base-xsum-lora
|
| 17 |
+
2025-11-09 19:05:41,807 - INFO - β
Repo: https://huggingface.co/ranjan56cse/t5-base-xsum-lora
|
| 18 |
+
2025-11-09 19:05:41,807 - INFO - Loading google-t5/t5-base...
|
| 19 |
+
2025-11-09 19:05:52,938 - INFO - β
Gradient checkpointing enabled
|
| 20 |
+
2025-11-09 19:05:52,938 - INFO - Applying LoRA...
|
| 21 |
+
2025-11-09 19:05:52,976 - INFO - Loading XSum dataset...
|
| 22 |
+
2025-11-09 19:05:56,588 - INFO - β
Dataset: 204045 train, 11332 val
|
| 23 |
+
2025-11-09 19:05:56,588 - INFO - Tokenizing...
|
| 24 |
+
2025-11-09 19:08:02,802 - INFO - β
Tokenization complete
|
| 25 |
+
2025-11-09 19:08:03,857 - INFO - ============================================================
|
| 26 |
+
2025-11-09 19:08:03,858 - INFO - π STARTING TRAINING (~8-10 hours)
|
| 27 |
+
2025-11-09 19:08:03,859 - INFO - Effective batch size: 32
|
| 28 |
+
2025-11-09 19:08:03,859 - INFO - GPU: 0.84GB allocated, 0.92GB reserved
|
| 29 |
+
2025-11-09 19:08:03,859 - INFO - System: 4.1% used (17.4GB / 503.7GB)
|
| 30 |
+
2025-11-09 19:08:03,859 - INFO - ============================================================
|
| 31 |
+
2025-11-09 19:08:03,990 - INFO - ============================================================
|
| 32 |
+
2025-11-09 19:08:03,990 - INFO - π Training started
|
| 33 |
+
2025-11-09 19:08:03,990 - INFO - Total steps: 19128
|
| 34 |
+
2025-11-09 19:08:03,990 - INFO - GPU: NVIDIA GeForce RTX 3090
|
| 35 |
+
2025-11-09 19:08:03,990 - INFO - GPU Memory: 0.84GB allocated, 0.92GB reserved
|
| 36 |
+
2025-11-09 19:08:03,990 - INFO - System Memory: 4.1% used (17.4GB / 503.7GB)
|
| 37 |
+
2025-11-09 19:08:03,991 - INFO - ============================================================
|
| 38 |
+
2025-11-09 19:08:51,266 - INFO - Step 50/19128 | Loss: 12.5022 | LR: 2.88e-05 | GPU: 0.87GB
|
| 39 |
+
2025-11-09 19:09:38,077 - INFO - Step 100/19128 | Loss: 10.3469 | LR: 5.82e-05 | GPU: 0.87GB
|
| 40 |
+
2025-11-09 19:10:24,938 - INFO - Step 150/19128 | Loss: 4.0200 | LR: 8.82e-05 | GPU: 0.87GB
|
| 41 |
+
2025-11-09 19:11:11,674 - INFO - Step 200/19128 | Loss: 0.9201 | LR: 1.18e-04 | GPU: 0.87GB
|
| 42 |
+
2025-11-09 19:11:58,405 - INFO - Step 250/19128 | Loss: 0.7357 | LR: 1.48e-04 | GPU: 0.87GB
|
| 43 |
+
2025-11-09 19:12:45,152 - INFO - Step 300/19128 | Loss: 0.6602 | LR: 1.77e-04 | GPU: 0.87GB
|
| 44 |
+
2025-11-09 19:13:31,815 - INFO - Step 350/19128 | Loss: 0.6121 | LR: 2.07e-04 | GPU: 0.87GB
|
| 45 |
+
2025-11-09 19:14:18,499 - INFO - Step 400/19128 | Loss: 0.5817 | LR: 2.37e-04 | GPU: 0.87GB
|
| 46 |
+
2025-11-09 19:15:05,185 - INFO - Step 450/19128 | Loss: 0.5916 | LR: 2.67e-04 | GPU: 0.87GB
|
| 47 |
+
2025-11-09 19:15:51,879 - INFO - Step 500/19128 | Loss: 0.5675 | LR: 2.97e-04 | GPU: 0.87GB
|
| 48 |
+
2025-11-09 19:16:38,691 - INFO - Step 550/19128 | Loss: 0.5700 | LR: 2.99e-04 | GPU: 0.87GB
|
| 49 |
+
2025-11-09 19:17:25,546 - INFO - Step 600/19128 | Loss: 0.5610 | LR: 2.98e-04 | GPU: 0.87GB
|
| 50 |
+
2025-11-09 19:18:12,459 - INFO - Step 650/19128 | Loss: 0.5669 | LR: 2.98e-04 | GPU: 0.87GB
|
| 51 |
+
2025-11-09 19:18:59,163 - INFO - Step 700/19128 | Loss: 0.5659 | LR: 2.97e-04 | GPU: 0.87GB
|
| 52 |
+
2025-11-09 19:19:45,942 - INFO - Step 750/19128 | Loss: 0.5673 | LR: 2.96e-04 | GPU: 0.87GB
|
| 53 |
+
2025-11-09 19:20:32,786 - INFO - Step 800/19128 | Loss: 0.5619 | LR: 2.95e-04 | GPU: 0.87GB
|
| 54 |
+
2025-11-09 19:21:19,739 - INFO - Step 850/19128 | Loss: 0.5719 | LR: 2.94e-04 | GPU: 0.87GB
|
| 55 |
+
2025-11-09 19:22:06,708 - INFO - Step 900/19128 | Loss: 0.5576 | LR: 2.94e-04 | GPU: 0.87GB
|
| 56 |
+
2025-11-09 19:22:53,641 - INFO - Step 950/19128 | Loss: 0.5567 | LR: 2.93e-04 | GPU: 0.87GB
|
| 57 |
+
2025-11-09 19:23:40,445 - INFO - Step 1000/19128 | Loss: 0.5597 | LR: 2.92e-04 | GPU: 0.87GB
|
| 58 |
+
2025-11-09 19:25:15,772 - INFO - Step 1000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
|
| 59 |
+
2025-11-09 19:25:15,772 - INFO - ============================================================
|
| 60 |
+
2025-11-09 19:25:15,772 - INFO - π EVALUATION at step 1000
|
| 61 |
+
2025-11-09 19:25:15,772 - INFO - eval_loss: 0.5003
|
| 62 |
+
2025-11-09 19:25:15,772 - INFO - eval_runtime: 95.3235
|
| 63 |
+
2025-11-09 19:25:15,772 - INFO - eval_samples_per_second: 118.8790
|
| 64 |
+
2025-11-09 19:25:15,772 - INFO - eval_steps_per_second: 7.4380
|
| 65 |
+
2025-11-09 19:25:15,772 - INFO - epoch: 0.1600
|
| 66 |
+
2025-11-09 19:25:15,773 - INFO - gpu_memory_gb: 0.8662
|
| 67 |
+
2025-11-09 19:25:15,773 - INFO - system_memory_percent: 6.9000
|
| 68 |
+
2025-11-09 19:25:15,773 - INFO - ============================================================
|
| 69 |
+
2025-11-09 19:25:15,773 - INFO - π New best eval loss: 0.5003
|
| 70 |
+
2025-11-09 19:25:16,038 - INFO - ============================================================
|
| 71 |
+
2025-11-09 19:25:16,038 - INFO - πΎ Checkpoint 1: step 1000
|
| 72 |
+
2025-11-09 19:25:16,038 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
|
| 73 |
+
2025-11-09 19:25:16,038 - INFO - π€ Uploading checkpoint-1000 to Hub...
|
| 74 |
+
2025-11-09 19:25:20,110 - INFO - β
Checkpoint 1000 uploaded!
|
| 75 |
+
2025-11-09 19:25:20,110 - INFO - π https://huggingface.co/ranjan56cse/t5-base-xsum-lora
|
| 76 |
+
2025-11-09 19:25:20,110 - INFO - ============================================================
|
| 77 |
+
2025-11-09 19:26:07,015 - INFO - Step 1050/19128 | Loss: 0.5565 | LR: 2.91e-04 | GPU: 0.87GB
|
| 78 |
+
2025-11-09 19:26:53,807 - INFO - Step 1100/19128 | Loss: 0.5767 | LR: 2.91e-04 | GPU: 0.87GB
|
| 79 |
+
2025-11-09 19:27:40,531 - INFO - Step 1150/19128 | Loss: 0.5620 | LR: 2.90e-04 | GPU: 0.87GB
|
| 80 |
+
2025-11-09 19:28:27,359 - INFO - Step 1200/19128 | Loss: 0.5864 | LR: 2.89e-04 | GPU: 0.87GB
|
| 81 |
+
2025-11-09 19:29:14,182 - INFO - Step 1250/19128 | Loss: 0.6260 | LR: 2.88e-04 | GPU: 0.87GB
|
| 82 |
+
2025-11-09 19:30:01,074 - INFO - Step 1300/19128 | Loss: 0.7742 | LR: 2.87e-04 | GPU: 0.87GB
|
| 83 |
+
2025-11-09 19:30:48,073 - INFO - Step 1350/19128 | Loss: 1.1101 | LR: 2.87e-04 | GPU: 0.87GB
|
| 84 |
+
2025-11-09 19:31:34,986 - INFO - Step 1400/19128 | Loss: 1.3211 | LR: 2.86e-04 | GPU: 0.87GB
|
| 85 |
+
2025-11-09 19:32:21,930 - INFO - Step 1450/19128 | Loss: 1.4130 | LR: 2.85e-04 | GPU: 0.87GB
|
| 86 |
+
2025-11-09 19:33:08,830 - INFO - Step 1500/19128 | Loss: 1.4265 | LR: 2.84e-04 | GPU: 0.87GB
|
| 87 |
+
2025-11-09 19:33:55,803 - INFO - Step 1550/19128 | Loss: 1.4700 | LR: 2.83e-04 | GPU: 0.87GB
|
| 88 |
+
2025-11-09 19:34:42,910 - INFO - Step 1600/19128 | Loss: 1.4561 | LR: 2.83e-04 | GPU: 0.87GB
|
| 89 |
+
2025-11-09 19:35:29,939 - INFO - Step 1650/19128 | Loss: 1.4693 | LR: 2.82e-04 | GPU: 0.87GB
|
| 90 |
+
2025-11-09 19:36:16,685 - INFO - Step 1700/19128 | Loss: 1.4729 | LR: 2.81e-04 | GPU: 0.87GB
|
| 91 |
+
2025-11-09 19:37:03,396 - INFO - Step 1750/19128 | Loss: 1.4599 | LR: 2.80e-04 | GPU: 0.87GB
|
| 92 |
+
2025-11-09 19:37:50,039 - INFO - Step 1800/19128 | Loss: 1.4725 | LR: 2.79e-04 | GPU: 0.87GB
|
| 93 |
+
2025-11-09 19:38:36,721 - INFO - Step 1850/19128 | Loss: 1.4503 | LR: 2.79e-04 | GPU: 0.87GB
|
| 94 |
+
2025-11-09 19:39:23,367 - INFO - Step 1900/19128 | Loss: 1.4812 | LR: 2.78e-04 | GPU: 0.87GB
|
| 95 |
+
2025-11-09 19:40:10,030 - INFO - Step 1950/19128 | Loss: 1.4761 | LR: 2.77e-04 | GPU: 0.87GB
|
| 96 |
+
2025-11-09 19:40:56,713 - INFO - Step 2000/19128 | Loss: 1.4960 | LR: 2.76e-04 | GPU: 0.87GB
|
| 97 |
+
2025-11-09 19:42:31,551 - INFO - Step 2000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
|
| 98 |
+
2025-11-09 19:42:31,551 - INFO - ============================================================
|
| 99 |
+
2025-11-09 19:42:31,551 - INFO - π EVALUATION at step 2000
|
| 100 |
+
2025-11-09 19:42:31,551 - INFO - eval_loss: 1.2512
|
| 101 |
+
2025-11-09 19:42:31,551 - INFO - eval_runtime: 94.8348
|
| 102 |
+
2025-11-09 19:42:31,551 - INFO - eval_samples_per_second: 119.4920
|
| 103 |
+
2025-11-09 19:42:31,551 - INFO - eval_steps_per_second: 7.4760
|
| 104 |
+
2025-11-09 19:42:31,551 - INFO - epoch: 0.3100
|
| 105 |
+
2025-11-09 19:42:31,551 - INFO - gpu_memory_gb: 0.8662
|
| 106 |
+
2025-11-09 19:42:31,551 - INFO - system_memory_percent: 13.2000
|
| 107 |
+
2025-11-09 19:42:31,551 - INFO - ============================================================
|
| 108 |
+
2025-11-09 19:42:31,768 - INFO - ============================================================
|
| 109 |
+
2025-11-09 19:42:31,768 - INFO - πΎ Checkpoint 2: step 2000
|
| 110 |
+
2025-11-09 19:42:31,769 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
|
| 111 |
+
2025-11-09 19:42:31,769 - INFO - π€ Uploading checkpoint-2000 to Hub...
|
| 112 |
+
2025-11-09 19:42:36,341 - INFO - β
Checkpoint 2000 uploaded!
|
| 113 |
+
2025-11-09 19:42:36,342 - INFO - π https://huggingface.co/ranjan56cse/t5-base-xsum-lora
|
| 114 |
+
2025-11-09 19:42:36,342 - INFO - ============================================================
|
| 115 |
+
2025-11-09 19:43:23,118 - INFO - Step 2050/19128 | Loss: 1.4488 | LR: 2.75e-04 | GPU: 0.87GB
|
| 116 |
+
2025-11-09 19:44:09,811 - INFO - Step 2100/19128 | Loss: 1.4550 | LR: 2.75e-04 | GPU: 0.87GB
|
| 117 |
+
2025-11-09 19:44:56,495 - INFO - Step 2150/19128 | Loss: 1.4353 | LR: 2.74e-04 | GPU: 0.87GB
|
| 118 |
+
2025-11-09 19:45:43,252 - INFO - Step 2200/19128 | Loss: 1.4524 | LR: 2.73e-04 | GPU: 0.87GB
|
| 119 |
+
2025-11-09 19:46:30,038 - INFO - Step 2250/19128 | Loss: 1.4701 | LR: 2.72e-04 | GPU: 0.87GB
|
| 120 |
+
2025-11-09 19:47:16,729 - INFO - Step 2300/19128 | Loss: 1.4734 | LR: 2.71e-04 | GPU: 0.87GB
|
| 121 |
+
2025-11-09 19:48:03,415 - INFO - Step 2350/19128 | Loss: 1.5035 | LR: 2.71e-04 | GPU: 0.87GB
|
| 122 |
+
2025-11-09 19:48:50,056 - INFO - Step 2400/19128 | Loss: 1.4513 | LR: 2.70e-04 | GPU: 0.87GB
|
| 123 |
+
2025-11-09 19:49:36,603 - INFO - Step 2450/19128 | Loss: 1.4641 | LR: 2.69e-04 | GPU: 0.87GB
|
| 124 |
+
2025-11-09 19:50:23,155 - INFO - Step 2500/19128 | Loss: 1.4585 | LR: 2.68e-04 | GPU: 0.87GB
|
| 125 |
+
2025-11-09 19:51:09,800 - INFO - Step 2550/19128 | Loss: 1.4673 | LR: 2.67e-04 | GPU: 0.87GB
|
| 126 |
+
2025-11-09 19:51:56,482 - INFO - Step 2600/19128 | Loss: 1.4671 | LR: 2.67e-04 | GPU: 0.87GB
|
| 127 |
+
2025-11-09 19:52:43,089 - INFO - Step 2650/19128 | Loss: 1.4702 | LR: 2.66e-04 | GPU: 0.87GB
|
| 128 |
+
2025-11-09 19:53:29,716 - INFO - Step 2700/19128 | Loss: 1.4612 | LR: 2.65e-04 | GPU: 0.87GB
|
| 129 |
+
2025-11-09 19:54:16,277 - INFO - Step 2750/19128 | Loss: 1.4713 | LR: 2.64e-04 | GPU: 0.87GB
|
| 130 |
+
2025-11-09 19:55:02,907 - INFO - Step 2800/19128 | Loss: 1.4573 | LR: 2.64e-04 | GPU: 0.87GB
|
| 131 |
+
2025-11-09 19:55:49,565 - INFO - Step 2850/19128 | Loss: 1.4586 | LR: 2.63e-04 | GPU: 0.87GB
|
| 132 |
+
2025-11-09 19:56:36,226 - INFO - Step 2900/19128 | Loss: 1.4674 | LR: 2.62e-04 | GPU: 0.87GB
|
| 133 |
+
2025-11-09 19:57:22,928 - INFO - Step 2950/19128 | Loss: 1.4466 | LR: 2.61e-04 | GPU: 0.87GB
|
| 134 |
+
2025-11-09 19:58:09,596 - INFO - Step 3000/19128 | Loss: 1.4897 | LR: 2.60e-04 | GPU: 0.87GB
|
| 135 |
+
2025-11-09 19:59:44,409 - INFO - Step 3000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
|
| 136 |
+
2025-11-09 19:59:44,409 - INFO - ============================================================
|
| 137 |
+
2025-11-09 19:59:44,409 - INFO - π EVALUATION at step 3000
|
| 138 |
+
2025-11-09 19:59:44,410 - INFO - eval_loss: 1.2418
|
| 139 |
+
2025-11-09 19:59:44,410 - INFO - eval_runtime: 94.8105
|
| 140 |
+
2025-11-09 19:59:44,410 - INFO - eval_samples_per_second: 119.5230
|
| 141 |
+
2025-11-09 19:59:44,410 - INFO - eval_steps_per_second: 7.4780
|
| 142 |
+
2025-11-09 19:59:44,410 - INFO - epoch: 0.4700
|
| 143 |
+
2025-11-09 19:59:44,410 - INFO - gpu_memory_gb: 0.8662
|
| 144 |
+
2025-11-09 19:59:44,410 - INFO - system_memory_percent: 6.7000
|
| 145 |
+
2025-11-09 19:59:44,410 - INFO - ============================================================
|
| 146 |
+
2025-11-09 19:59:44,634 - INFO - ============================================================
|
| 147 |
+
2025-11-09 19:59:44,634 - INFO - πΎ Checkpoint 3: step 3000
|
| 148 |
+
2025-11-09 19:59:44,635 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
|
| 149 |
+
2025-11-09 19:59:44,635 - INFO - π€ Uploading checkpoint-3000 to Hub...
|