ranjan56cse committed · Commit 52cffa7 · verified · Parent: 4a8bfcc

Upload logs/training_log_step5000.log with huggingface_hub

2025-11-09 19:05:41,604 - INFO -
╔══════════════════════════════════════════════════════════╗
║                T5 TRAINING CONFIGURATION                 ║
╚══════════════════════════════════════════════════════════╝
Mode: FULL
Platform: vast
Repository: ranjan56cse/t5-base-xsum-lora
Epochs: 3
Samples: ALL (204k)
Batch size: 16
Gradient accum: 2
Effective batch: 32
Save every: 1000 steps
Expected time: ~8-10 hours

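As an aside (not part of the original log): the banner's "Effective batch: 32" and the later "Total steps: 19128" are mutually consistent given 204,045 training samples, batch size 16, gradient accumulation 2, and 3 epochs. A minimal sketch of that arithmetic, assuming one optimizer step per accumulation cycle and a dataloader that keeps the final partial batch:

```python
import math

def total_optimizer_steps(num_samples: int, batch_size: int,
                          grad_accum: int, epochs: int) -> int:
    """Optimizer steps for a HF-Trainer-style loop (hypothetical reconstruction)."""
    # Dataloader length: micro-batches per epoch, final partial batch kept.
    micro_batches_per_epoch = math.ceil(num_samples / batch_size)
    # One optimizer step per completed accumulation cycle.
    update_steps_per_epoch = micro_batches_per_epoch // grad_accum
    return update_steps_per_epoch * epochs

effective_batch = 16 * 2                         # 32, as in the banner
steps = total_optimizer_steps(204045, 16, 2, 3)  # 19128, matching "Total steps"
```

This is only one way the logged numbers can be reproduced; the actual trainer code behind this log is not shown.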
2025-11-09 19:05:41,604 - INFO - Creating repository: ranjan56cse/t5-base-xsum-lora
2025-11-09 19:05:41,807 - INFO - ✅ Repo: https://huggingface.co/ranjan56cse/t5-base-xsum-lora
2025-11-09 19:05:41,807 - INFO - Loading google-t5/t5-base...
2025-11-09 19:05:52,938 - INFO - ✅ Gradient checkpointing enabled
2025-11-09 19:05:52,938 - INFO - Applying LoRA...
2025-11-09 19:05:52,976 - INFO - Loading XSum dataset...
2025-11-09 19:05:56,588 - INFO - ✅ Dataset: 204045 train, 11332 val
2025-11-09 19:05:56,588 - INFO - Tokenizing...
2025-11-09 19:08:02,802 - INFO - ✅ Tokenization complete
2025-11-09 19:08:03,857 - INFO - ============================================================
2025-11-09 19:08:03,858 - INFO - 🚀 STARTING TRAINING (~8-10 hours)
2025-11-09 19:08:03,859 - INFO - Effective batch size: 32
2025-11-09 19:08:03,859 - INFO - GPU: 0.84GB allocated, 0.92GB reserved
2025-11-09 19:08:03,859 - INFO - System: 4.1% used (17.4GB / 503.7GB)
2025-11-09 19:08:03,859 - INFO - ============================================================
2025-11-09 19:08:03,990 - INFO - ============================================================
2025-11-09 19:08:03,990 - INFO - 🚀 Training started
2025-11-09 19:08:03,990 - INFO - Total steps: 19128
2025-11-09 19:08:03,990 - INFO - GPU: NVIDIA GeForce RTX 3090
2025-11-09 19:08:03,990 - INFO - GPU Memory: 0.84GB allocated, 0.92GB reserved
2025-11-09 19:08:03,990 - INFO - System Memory: 4.1% used (17.4GB / 503.7GB)
2025-11-09 19:08:03,991 - INFO - ============================================================
2025-11-09 19:08:51,266 - INFO - Step 50/19128 | Loss: 12.5022 | LR: 2.88e-05 | GPU: 0.87GB
2025-11-09 19:09:38,077 - INFO - Step 100/19128 | Loss: 10.3469 | LR: 5.82e-05 | GPU: 0.87GB
2025-11-09 19:10:24,938 - INFO - Step 150/19128 | Loss: 4.0200 | LR: 8.82e-05 | GPU: 0.87GB
2025-11-09 19:11:11,674 - INFO - Step 200/19128 | Loss: 0.9201 | LR: 1.18e-04 | GPU: 0.87GB
2025-11-09 19:11:58,405 - INFO - Step 250/19128 | Loss: 0.7357 | LR: 1.48e-04 | GPU: 0.87GB
2025-11-09 19:12:45,152 - INFO - Step 300/19128 | Loss: 0.6602 | LR: 1.77e-04 | GPU: 0.87GB
2025-11-09 19:13:31,815 - INFO - Step 350/19128 | Loss: 0.6121 | LR: 2.07e-04 | GPU: 0.87GB
2025-11-09 19:14:18,499 - INFO - Step 400/19128 | Loss: 0.5817 | LR: 2.37e-04 | GPU: 0.87GB
2025-11-09 19:15:05,185 - INFO - Step 450/19128 | Loss: 0.5916 | LR: 2.67e-04 | GPU: 0.87GB
2025-11-09 19:15:51,879 - INFO - Step 500/19128 | Loss: 0.5675 | LR: 2.97e-04 | GPU: 0.87GB
2025-11-09 19:16:38,691 - INFO - Step 550/19128 | Loss: 0.5700 | LR: 2.99e-04 | GPU: 0.87GB
2025-11-09 19:17:25,546 - INFO - Step 600/19128 | Loss: 0.5610 | LR: 2.98e-04 | GPU: 0.87GB
2025-11-09 19:18:12,459 - INFO - Step 650/19128 | Loss: 0.5669 | LR: 2.98e-04 | GPU: 0.87GB
2025-11-09 19:18:59,163 - INFO - Step 700/19128 | Loss: 0.5659 | LR: 2.97e-04 | GPU: 0.87GB
2025-11-09 19:19:45,942 - INFO - Step 750/19128 | Loss: 0.5673 | LR: 2.96e-04 | GPU: 0.87GB
2025-11-09 19:20:32,786 - INFO - Step 800/19128 | Loss: 0.5619 | LR: 2.95e-04 | GPU: 0.87GB
2025-11-09 19:21:19,739 - INFO - Step 850/19128 | Loss: 0.5719 | LR: 2.94e-04 | GPU: 0.87GB
2025-11-09 19:22:06,708 - INFO - Step 900/19128 | Loss: 0.5576 | LR: 2.94e-04 | GPU: 0.87GB
2025-11-09 19:22:53,641 - INFO - Step 950/19128 | Loss: 0.5567 | LR: 2.93e-04 | GPU: 0.87GB
2025-11-09 19:23:40,445 - INFO - Step 1000/19128 | Loss: 0.5597 | LR: 2.92e-04 | GPU: 0.87GB
2025-11-09 19:25:15,772 - INFO - Step 1000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
2025-11-09 19:25:15,772 - INFO - ============================================================
2025-11-09 19:25:15,772 - INFO - 📊 EVALUATION at step 1000
2025-11-09 19:25:15,772 - INFO - eval_loss: 0.5003
2025-11-09 19:25:15,772 - INFO - eval_runtime: 95.3235
2025-11-09 19:25:15,772 - INFO - eval_samples_per_second: 118.8790
2025-11-09 19:25:15,772 - INFO - eval_steps_per_second: 7.4380
2025-11-09 19:25:15,772 - INFO - epoch: 0.1600
2025-11-09 19:25:15,773 - INFO - gpu_memory_gb: 0.8662
2025-11-09 19:25:15,773 - INFO - system_memory_percent: 6.9000
2025-11-09 19:25:15,773 - INFO - ============================================================
2025-11-09 19:25:15,773 - INFO - 🏆 New best eval loss: 0.5003
2025-11-09 19:25:16,038 - INFO - ============================================================
2025-11-09 19:25:16,038 - INFO - 💾 Checkpoint 1: step 1000
2025-11-09 19:25:16,038 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
2025-11-09 19:25:16,038 - INFO - 📤 Uploading checkpoint-1000 to Hub...
2025-11-09 19:25:20,110 - INFO - ✅ Checkpoint 1000 uploaded!
2025-11-09 19:25:20,110 - INFO - 📂 https://huggingface.co/ranjan56cse/t5-base-xsum-lora
2025-11-09 19:25:20,110 - INFO - ============================================================
2025-11-09 19:26:07,015 - INFO - Step 1050/19128 | Loss: 0.5565 | LR: 2.91e-04 | GPU: 0.87GB
2025-11-09 19:26:53,807 - INFO - Step 1100/19128 | Loss: 0.5767 | LR: 2.91e-04 | GPU: 0.87GB
2025-11-09 19:27:40,531 - INFO - Step 1150/19128 | Loss: 0.5620 | LR: 2.90e-04 | GPU: 0.87GB
2025-11-09 19:28:27,359 - INFO - Step 1200/19128 | Loss: 0.5864 | LR: 2.89e-04 | GPU: 0.87GB
2025-11-09 19:29:14,182 - INFO - Step 1250/19128 | Loss: 0.6260 | LR: 2.88e-04 | GPU: 0.87GB
2025-11-09 19:30:01,074 - INFO - Step 1300/19128 | Loss: 0.7742 | LR: 2.87e-04 | GPU: 0.87GB
2025-11-09 19:30:48,073 - INFO - Step 1350/19128 | Loss: 1.1101 | LR: 2.87e-04 | GPU: 0.87GB
2025-11-09 19:31:34,986 - INFO - Step 1400/19128 | Loss: 1.3211 | LR: 2.86e-04 | GPU: 0.87GB
2025-11-09 19:32:21,930 - INFO - Step 1450/19128 | Loss: 1.4130 | LR: 2.85e-04 | GPU: 0.87GB
2025-11-09 19:33:08,830 - INFO - Step 1500/19128 | Loss: 1.4265 | LR: 2.84e-04 | GPU: 0.87GB
2025-11-09 19:33:55,803 - INFO - Step 1550/19128 | Loss: 1.4700 | LR: 2.83e-04 | GPU: 0.87GB
2025-11-09 19:34:42,910 - INFO - Step 1600/19128 | Loss: 1.4561 | LR: 2.83e-04 | GPU: 0.87GB
2025-11-09 19:35:29,939 - INFO - Step 1650/19128 | Loss: 1.4693 | LR: 2.82e-04 | GPU: 0.87GB
2025-11-09 19:36:16,685 - INFO - Step 1700/19128 | Loss: 1.4729 | LR: 2.81e-04 | GPU: 0.87GB
2025-11-09 19:37:03,396 - INFO - Step 1750/19128 | Loss: 1.4599 | LR: 2.80e-04 | GPU: 0.87GB
2025-11-09 19:37:50,039 - INFO - Step 1800/19128 | Loss: 1.4725 | LR: 2.79e-04 | GPU: 0.87GB
2025-11-09 19:38:36,721 - INFO - Step 1850/19128 | Loss: 1.4503 | LR: 2.79e-04 | GPU: 0.87GB
2025-11-09 19:39:23,367 - INFO - Step 1900/19128 | Loss: 1.4812 | LR: 2.78e-04 | GPU: 0.87GB
2025-11-09 19:40:10,030 - INFO - Step 1950/19128 | Loss: 1.4761 | LR: 2.77e-04 | GPU: 0.87GB
2025-11-09 19:40:56,713 - INFO - Step 2000/19128 | Loss: 1.4960 | LR: 2.76e-04 | GPU: 0.87GB
2025-11-09 19:42:31,551 - INFO - Step 2000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
2025-11-09 19:42:31,551 - INFO - ============================================================
2025-11-09 19:42:31,551 - INFO - 📊 EVALUATION at step 2000
2025-11-09 19:42:31,551 - INFO - eval_loss: 1.2512
2025-11-09 19:42:31,551 - INFO - eval_runtime: 94.8348
2025-11-09 19:42:31,551 - INFO - eval_samples_per_second: 119.4920
2025-11-09 19:42:31,551 - INFO - eval_steps_per_second: 7.4760
2025-11-09 19:42:31,551 - INFO - epoch: 0.3100
2025-11-09 19:42:31,551 - INFO - gpu_memory_gb: 0.8662
2025-11-09 19:42:31,551 - INFO - system_memory_percent: 13.2000
2025-11-09 19:42:31,551 - INFO - ============================================================
2025-11-09 19:42:31,768 - INFO - ============================================================
2025-11-09 19:42:31,768 - INFO - 💾 Checkpoint 2: step 2000
2025-11-09 19:42:31,769 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
2025-11-09 19:42:31,769 - INFO - 📤 Uploading checkpoint-2000 to Hub...
2025-11-09 19:42:36,341 - INFO - ✅ Checkpoint 2000 uploaded!
2025-11-09 19:42:36,342 - INFO - 📂 https://huggingface.co/ranjan56cse/t5-base-xsum-lora
2025-11-09 19:42:36,342 - INFO - ============================================================
2025-11-09 19:43:23,118 - INFO - Step 2050/19128 | Loss: 1.4488 | LR: 2.75e-04 | GPU: 0.87GB
2025-11-09 19:44:09,811 - INFO - Step 2100/19128 | Loss: 1.4550 | LR: 2.75e-04 | GPU: 0.87GB
2025-11-09 19:44:56,495 - INFO - Step 2150/19128 | Loss: 1.4353 | LR: 2.74e-04 | GPU: 0.87GB
2025-11-09 19:45:43,252 - INFO - Step 2200/19128 | Loss: 1.4524 | LR: 2.73e-04 | GPU: 0.87GB
2025-11-09 19:46:30,038 - INFO - Step 2250/19128 | Loss: 1.4701 | LR: 2.72e-04 | GPU: 0.87GB
2025-11-09 19:47:16,729 - INFO - Step 2300/19128 | Loss: 1.4734 | LR: 2.71e-04 | GPU: 0.87GB
2025-11-09 19:48:03,415 - INFO - Step 2350/19128 | Loss: 1.5035 | LR: 2.71e-04 | GPU: 0.87GB
2025-11-09 19:48:50,056 - INFO - Step 2400/19128 | Loss: 1.4513 | LR: 2.70e-04 | GPU: 0.87GB
2025-11-09 19:49:36,603 - INFO - Step 2450/19128 | Loss: 1.4641 | LR: 2.69e-04 | GPU: 0.87GB
2025-11-09 19:50:23,155 - INFO - Step 2500/19128 | Loss: 1.4585 | LR: 2.68e-04 | GPU: 0.87GB
2025-11-09 19:51:09,800 - INFO - Step 2550/19128 | Loss: 1.4673 | LR: 2.67e-04 | GPU: 0.87GB
2025-11-09 19:51:56,482 - INFO - Step 2600/19128 | Loss: 1.4671 | LR: 2.67e-04 | GPU: 0.87GB
2025-11-09 19:52:43,089 - INFO - Step 2650/19128 | Loss: 1.4702 | LR: 2.66e-04 | GPU: 0.87GB
2025-11-09 19:53:29,716 - INFO - Step 2700/19128 | Loss: 1.4612 | LR: 2.65e-04 | GPU: 0.87GB
2025-11-09 19:54:16,277 - INFO - Step 2750/19128 | Loss: 1.4713 | LR: 2.64e-04 | GPU: 0.87GB
2025-11-09 19:55:02,907 - INFO - Step 2800/19128 | Loss: 1.4573 | LR: 2.64e-04 | GPU: 0.87GB
2025-11-09 19:55:49,565 - INFO - Step 2850/19128 | Loss: 1.4586 | LR: 2.63e-04 | GPU: 0.87GB
2025-11-09 19:56:36,226 - INFO - Step 2900/19128 | Loss: 1.4674 | LR: 2.62e-04 | GPU: 0.87GB
2025-11-09 19:57:22,928 - INFO - Step 2950/19128 | Loss: 1.4466 | LR: 2.61e-04 | GPU: 0.87GB
2025-11-09 19:58:09,596 - INFO - Step 3000/19128 | Loss: 1.4897 | LR: 2.60e-04 | GPU: 0.87GB
2025-11-09 19:59:44,409 - INFO - Step 3000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
2025-11-09 19:59:44,409 - INFO - ============================================================
2025-11-09 19:59:44,409 - INFO - 📊 EVALUATION at step 3000
2025-11-09 19:59:44,410 - INFO - eval_loss: 1.2418
2025-11-09 19:59:44,410 - INFO - eval_runtime: 94.8105
2025-11-09 19:59:44,410 - INFO - eval_samples_per_second: 119.5230
2025-11-09 19:59:44,410 - INFO - eval_steps_per_second: 7.4780
2025-11-09 19:59:44,410 - INFO - epoch: 0.4700
2025-11-09 19:59:44,410 - INFO - gpu_memory_gb: 0.8662
2025-11-09 19:59:44,410 - INFO - system_memory_percent: 6.7000
2025-11-09 19:59:44,410 - INFO - ============================================================
2025-11-09 19:59:44,634 - INFO - ============================================================
2025-11-09 19:59:44,634 - INFO - 💾 Checkpoint 3: step 3000
2025-11-09 19:59:44,635 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
2025-11-09 19:59:44,635 - INFO - 📤 Uploading checkpoint-3000 to Hub...
2025-11-09 19:59:48,888 - INFO - ✅ Checkpoint 3000 uploaded!
2025-11-09 19:59:48,888 - INFO - 📂 https://huggingface.co/ranjan56cse/t5-base-xsum-lora
2025-11-09 19:59:48,888 - INFO - ============================================================
2025-11-09 20:00:35,640 - INFO - Step 3050/19128 | Loss: 1.4621 | LR: 2.60e-04 | GPU: 0.87GB
2025-11-09 20:01:22,207 - INFO - Step 3100/19128 | Loss: 1.4443 | LR: 2.59e-04 | GPU: 0.87GB
2025-11-09 20:02:08,745 - INFO - Step 3150/19128 | Loss: 1.4314 | LR: 2.58e-04 | GPU: 0.87GB
2025-11-09 20:02:55,306 - INFO - Step 3200/19128 | Loss: 1.4172 | LR: 2.57e-04 | GPU: 0.87GB
2025-11-09 20:03:41,847 - INFO - Step 3250/19128 | Loss: 1.4878 | LR: 2.56e-04 | GPU: 0.87GB
2025-11-09 20:04:28,392 - INFO - Step 3300/19128 | Loss: 1.4344 | LR: 2.56e-04 | GPU: 0.87GB
2025-11-09 20:05:14,921 - INFO - Step 3350/19128 | Loss: 1.4634 | LR: 2.55e-04 | GPU: 0.87GB
2025-11-09 20:06:01,450 - INFO - Step 3400/19128 | Loss: 1.4679 | LR: 2.54e-04 | GPU: 0.87GB
2025-11-09 20:06:48,065 - INFO - Step 3450/19128 | Loss: 1.4641 | LR: 2.53e-04 | GPU: 0.87GB
2025-11-09 20:07:34,593 - INFO - Step 3500/19128 | Loss: 1.4396 | LR: 2.52e-04 | GPU: 0.87GB
2025-11-09 20:08:21,159 - INFO - Step 3550/19128 | Loss: 1.4850 | LR: 2.52e-04 | GPU: 0.87GB
2025-11-09 20:09:07,759 - INFO - Step 3600/19128 | Loss: 1.4355 | LR: 2.51e-04 | GPU: 0.87GB
2025-11-09 20:09:54,480 - INFO - Step 3650/19128 | Loss: 1.4419 | LR: 2.50e-04 | GPU: 0.87GB
2025-11-09 20:10:41,194 - INFO - Step 3700/19128 | Loss: 1.4224 | LR: 2.49e-04 | GPU: 0.87GB
2025-11-09 20:11:27,870 - INFO - Step 3750/19128 | Loss: 1.4473 | LR: 2.48e-04 | GPU: 0.87GB
2025-11-09 20:12:14,633 - INFO - Step 3800/19128 | Loss: 1.4341 | LR: 2.48e-04 | GPU: 0.87GB
2025-11-09 20:13:01,358 - INFO - Step 3850/19128 | Loss: 1.4463 | LR: 2.47e-04 | GPU: 0.87GB
2025-11-09 20:13:47,961 - INFO - Step 3900/19128 | Loss: 1.4348 | LR: 2.46e-04 | GPU: 0.87GB
2025-11-09 20:14:34,584 - INFO - Step 3950/19128 | Loss: 1.4326 | LR: 2.45e-04 | GPU: 0.87GB
2025-11-09 20:15:21,213 - INFO - Step 4000/19128 | Loss: 1.4586 | LR: 2.44e-04 | GPU: 0.87GB
2025-11-09 20:16:56,031 - INFO - Step 4000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
2025-11-09 20:16:56,032 - INFO - ============================================================
2025-11-09 20:16:56,032 - INFO - 📊 EVALUATION at step 4000
2025-11-09 20:16:56,032 - INFO - eval_loss: 1.2330
2025-11-09 20:16:56,032 - INFO - eval_runtime: 94.8153
2025-11-09 20:16:56,032 - INFO - eval_samples_per_second: 119.5170
2025-11-09 20:16:56,032 - INFO - eval_steps_per_second: 7.4780
2025-11-09 20:16:56,032 - INFO - epoch: 0.6300
2025-11-09 20:16:56,032 - INFO - gpu_memory_gb: 0.8662
2025-11-09 20:16:56,032 - INFO - system_memory_percent: 6.9000
2025-11-09 20:16:56,032 - INFO - ============================================================
2025-11-09 20:16:56,240 - INFO - ============================================================
2025-11-09 20:16:56,241 - INFO - 💾 Checkpoint 4: step 4000
2025-11-09 20:16:56,241 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
2025-11-09 20:16:56,241 - INFO - 📤 Uploading checkpoint-4000 to Hub...
2025-11-09 20:17:00,190 - INFO - ✅ Checkpoint 4000 uploaded!
2025-11-09 20:17:00,190 - INFO - 📂 https://huggingface.co/ranjan56cse/t5-base-xsum-lora
2025-11-09 20:17:00,190 - INFO - ============================================================
2025-11-09 20:17:47,036 - INFO - Step 4050/19128 | Loss: 1.4624 | LR: 2.44e-04 | GPU: 0.87GB
2025-11-09 20:18:33,726 - INFO - Step 4100/19128 | Loss: 1.4550 | LR: 2.43e-04 | GPU: 0.87GB
2025-11-09 20:19:20,355 - INFO - Step 4150/19128 | Loss: 1.4294 | LR: 2.42e-04 | GPU: 0.87GB
2025-11-09 20:20:06,989 - INFO - Step 4200/19128 | Loss: 1.4675 | LR: 2.41e-04 | GPU: 0.87GB
2025-11-09 20:20:53,597 - INFO - Step 4250/19128 | Loss: 1.4320 | LR: 2.40e-04 | GPU: 0.87GB
2025-11-09 20:21:40,182 - INFO - Step 4300/19128 | Loss: 1.4357 | LR: 2.40e-04 | GPU: 0.87GB
2025-11-09 20:22:26,684 - INFO - Step 4350/19128 | Loss: 1.4419 | LR: 2.39e-04 | GPU: 0.87GB
2025-11-09 20:23:13,218 - INFO - Step 4400/19128 | Loss: 1.4272 | LR: 2.38e-04 | GPU: 0.87GB
2025-11-09 20:23:59,888 - INFO - Step 4450/19128 | Loss: 1.4133 | LR: 2.37e-04 | GPU: 0.87GB
2025-11-09 20:24:46,653 - INFO - Step 4500/19128 | Loss: 1.4340 | LR: 2.36e-04 | GPU: 0.87GB
2025-11-09 20:25:33,287 - INFO - Step 4550/19128 | Loss: 1.4218 | LR: 2.36e-04 | GPU: 0.87GB
2025-11-09 20:26:19,993 - INFO - Step 4600/19128 | Loss: 1.4682 | LR: 2.35e-04 | GPU: 0.87GB
2025-11-09 20:27:06,680 - INFO - Step 4650/19128 | Loss: 1.4333 | LR: 2.34e-04 | GPU: 0.87GB
2025-11-09 20:27:53,348 - INFO - Step 4700/19128 | Loss: 1.4359 | LR: 2.33e-04 | GPU: 0.87GB
2025-11-09 20:28:39,968 - INFO - Step 4750/19128 | Loss: 1.4054 | LR: 2.32e-04 | GPU: 0.87GB
2025-11-09 20:29:26,496 - INFO - Step 4800/19128 | Loss: 1.4215 | LR: 2.32e-04 | GPU: 0.87GB
2025-11-09 20:30:13,206 - INFO - Step 4850/19128 | Loss: 1.4471 | LR: 2.31e-04 | GPU: 0.87GB
2025-11-09 20:30:59,857 - INFO - Step 4900/19128 | Loss: 1.4238 | LR: 2.30e-04 | GPU: 0.87GB
2025-11-09 20:31:46,547 - INFO - Step 4950/19128 | Loss: 1.4218 | LR: 2.29e-04 | GPU: 0.87GB
2025-11-09 20:32:33,138 - INFO - Step 5000/19128 | Loss: 1.4419 | LR: 2.28e-04 | GPU: 0.87GB
2025-11-09 20:34:08,183 - INFO - Step 5000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
2025-11-09 20:34:08,183 - INFO - ============================================================
2025-11-09 20:34:08,183 - INFO - 📊 EVALUATION at step 5000
2025-11-09 20:34:08,183 - INFO - eval_loss: 1.2248
2025-11-09 20:34:08,183 - INFO - eval_runtime: 95.0420
2025-11-09 20:34:08,183 - INFO - eval_samples_per_second: 119.2310
2025-11-09 20:34:08,183 - INFO - eval_steps_per_second: 7.4600
2025-11-09 20:34:08,183 - INFO - epoch: 0.7800
2025-11-09 20:34:08,183 - INFO - gpu_memory_gb: 0.8662
2025-11-09 20:34:08,183 - INFO - system_memory_percent: 6.8000
2025-11-09 20:34:08,183 - INFO - ============================================================
2025-11-09 20:34:08,403 - INFO - ============================================================
2025-11-09 20:34:08,403 - INFO - 💾 Checkpoint 5: step 5000
2025-11-09 20:34:08,403 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
2025-11-09 20:34:08,403 - INFO - 📤 Uploading checkpoint-5000 to Hub...