DeepSeek-PRCT / training.log
erikbranmarino: Upload folder using huggingface_hub (commit da0075c, verified)
Loading pretrained model
Loading datasets
Training
Trainable parameters: 0.078% (11.469M/14770.034M)
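The trainable fraction reported above follows directly from the size of the LoRA adapter relative to the full model. A minimal sketch of the arithmetic, using the values from the log (the helper name is illustrative, not part of any training API):

```python
def trainable_fraction(trainable_m: float, total_m: float) -> float:
    """Percentage of parameters that are trainable (both inputs in millions)."""
    return 100.0 * trainable_m / total_m

# Values reported in the log: 11.469M LoRA parameters out of 14770.034M total.
pct = trainable_fraction(11.469, 14770.034)
print(f"{pct:.3f}%")  # prints "0.078%", matching the logged figure
```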
Starting training..., iters: 600
Calculating loss...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 25/25 [01:03<00:00, 2.55s/it]
Iter 1: Val loss 2.630, Val took 63.650s
Iter 10: Train loss 2.016, Learning Rate 1.000e-05, It/sec 0.246, Tokens/sec 239.823, Trained Tokens 9738, Peak mem 40.069 GB
Iter 20: Train loss 1.097, Learning Rate 1.000e-05, It/sec 0.206, Tokens/sec 223.454, Trained Tokens 20566, Peak mem 51.482 GB
Iter 30: Train loss 0.841, Learning Rate 1.000e-05, It/sec 0.243, Tokens/sec 243.944, Trained Tokens 30620, Peak mem 51.482 GB
Iter 40: Train loss 0.698, Learning Rate 1.000e-05, It/sec 0.270, Tokens/sec 260.294, Trained Tokens 40275, Peak mem 51.482 GB
Iter 50: Train loss 0.813, Learning Rate 1.000e-05, It/sec 0.229, Tokens/sec 246.430, Trained Tokens 51030, Peak mem 51.482 GB
Iter 60: Train loss 0.754, Learning Rate 1.000e-05, It/sec 0.255, Tokens/sec 254.753, Trained Tokens 61017, Peak mem 51.482 GB
Iter 70: Train loss 0.729, Learning Rate 1.000e-05, It/sec 0.251, Tokens/sec 250.801, Trained Tokens 71015, Peak mem 51.482 GB
Iter 80: Train loss 0.721, Learning Rate 1.000e-05, It/sec 0.254, Tokens/sec 254.204, Trained Tokens 81010, Peak mem 51.482 GB
Iter 90: Train loss 0.663, Learning Rate 1.000e-05, It/sec 0.273, Tokens/sec 262.878, Trained Tokens 90638, Peak mem 51.482 GB
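The per-iteration metric lines are internally consistent: dividing Tokens/sec by It/sec recovers the average number of tokens processed per optimizer step. A quick check against the Iter 10 line (values copied from the log):

```python
it_per_sec = 0.246        # It/sec from the Iter 10 line
tokens_per_sec = 239.823  # Tokens/sec from the same line

tokens_per_iter = tokens_per_sec / it_per_sec
print(round(tokens_per_iter))  # prints 975
```

This agrees with the cumulative counter: 9738 trained tokens over the first 10 iterations is roughly 974 tokens per step.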
Calculating loss...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 25/25 [01:04<00:00, 2.57s/it]
Iter 100: Val loss 0.655, Val took 64.142s
Iter 100: Train loss 0.570, Learning Rate 1.000e-05, It/sec 0.271, Tokens/sec 252.471, Trained Tokens 99944, Peak mem 51.482 GB
Iter 100: Saved adapter weights to models/lora/deepseek_lora_telegram_20251111_165211/adapters.safetensors and models/lora/deepseek_lora_telegram_20251111_165211/0000100_adapters.safetensors.
Iter 110: Train loss 0.656, Learning Rate 1.000e-05, It/sec 0.271, Tokens/sec 261.990, Trained Tokens 109594, Peak mem 51.482 GB
Iter 120: Train loss 0.670, Learning Rate 1.000e-05, It/sec 0.249, Tokens/sec 256.186, Trained Tokens 119873, Peak mem 51.482 GB
Iter 130: Train loss 0.610, Learning Rate 1.000e-05, It/sec 0.252, Tokens/sec 254.174, Trained Tokens 129955, Peak mem 51.482 GB
Iter 140: Train loss 0.454, Learning Rate 1.000e-05, It/sec 0.311, Tokens/sec 270.304, Trained Tokens 138638, Peak mem 51.482 GB
Iter 150: Train loss 0.595, Learning Rate 1.000e-05, It/sec 0.264, Tokens/sec 254.198, Trained Tokens 148254, Peak mem 51.482 GB
Iter 160: Train loss 0.571, Learning Rate 1.000e-05, It/sec 0.263, Tokens/sec 254.557, Trained Tokens 157921, Peak mem 51.482 GB
Iter 170: Train loss 0.552, Learning Rate 1.000e-05, It/sec 0.292, Tokens/sec 267.427, Trained Tokens 167073, Peak mem 51.482 GB
Iter 180: Train loss 0.571, Learning Rate 1.000e-05, It/sec 0.269, Tokens/sec 260.904, Trained Tokens 176788, Peak mem 51.482 GB
Iter 190: Train loss 0.712, Learning Rate 1.000e-05, It/sec 0.215, Tokens/sec 236.971, Trained Tokens 187826, Peak mem 52.133 GB
Calculating loss...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 25/25 [01:04<00:00, 2.58s/it]
Iter 200: Val loss 0.573, Val took 64.542s
Iter 200: Train loss 0.489, Learning Rate 1.000e-05, It/sec 0.281, Tokens/sec 264.627, Trained Tokens 197242, Peak mem 52.133 GB
Iter 200: Saved adapter weights to models/lora/deepseek_lora_telegram_20251111_165211/adapters.safetensors and models/lora/deepseek_lora_telegram_20251111_165211/0000200_adapters.safetensors.
Iter 210: Train loss 0.478, Learning Rate 1.000e-05, It/sec 0.295, Tokens/sec 265.899, Trained Tokens 206252, Peak mem 52.133 GB
Iter 220: Train loss 0.500, Learning Rate 1.000e-05, It/sec 0.288, Tokens/sec 268.788, Trained Tokens 215583, Peak mem 52.133 GB
Iter 230: Train loss 0.658, Learning Rate 1.000e-05, It/sec 0.258, Tokens/sec 253.876, Trained Tokens 225430, Peak mem 52.133 GB
Iter 240: Train loss 0.583, Learning Rate 1.000e-05, It/sec 0.277, Tokens/sec 263.746, Trained Tokens 234953, Peak mem 52.133 GB
Iter 250: Train loss 0.531, Learning Rate 1.000e-05, It/sec 0.273, Tokens/sec 258.514, Trained Tokens 244424, Peak mem 52.133 GB
Iter 260: Train loss 0.540, Learning Rate 1.000e-05, It/sec 0.275, Tokens/sec 263.070, Trained Tokens 254004, Peak mem 52.133 GB
Iter 270: Train loss 0.464, Learning Rate 1.000e-05, It/sec 0.275, Tokens/sec 257.103, Trained Tokens 263367, Peak mem 52.133 GB
Iter 280: Train loss 0.445, Learning Rate 1.000e-05, It/sec 0.269, Tokens/sec 254.944, Trained Tokens 272830, Peak mem 52.133 GB
Iter 290: Train loss 0.567, Learning Rate 1.000e-05, It/sec 0.272, Tokens/sec 256.184, Trained Tokens 282233, Peak mem 52.133 GB
Calculating loss...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 25/25 [01:04<00:00, 2.58s/it]
Iter 300: Val loss 0.551, Val took 64.368s
Iter 300: Train loss 0.578, Learning Rate 1.000e-05, It/sec 0.288, Tokens/sec 264.039, Trained Tokens 291397, Peak mem 52.133 GB
Iter 300: Saved adapter weights to models/lora/deepseek_lora_telegram_20251111_165211/adapters.safetensors and models/lora/deepseek_lora_telegram_20251111_165211/0000300_adapters.safetensors.
Iter 310: Train loss 0.490, Learning Rate 1.000e-05, It/sec 0.294, Tokens/sec 268.104, Trained Tokens 300506, Peak mem 52.133 GB
Iter 320: Train loss 0.494, Learning Rate 1.000e-05, It/sec 0.273, Tokens/sec 255.665, Trained Tokens 309868, Peak mem 52.133 GB
Iter 330: Train loss 0.553, Learning Rate 1.000e-05, It/sec 0.274, Tokens/sec 260.283, Trained Tokens 319352, Peak mem 52.133 GB
Iter 340: Train loss 0.510, Learning Rate 1.000e-05, It/sec 0.275, Tokens/sec 255.333, Trained Tokens 328629, Peak mem 52.133 GB
Iter 350: Train loss 0.754, Learning Rate 1.000e-05, It/sec 0.199, Tokens/sec 223.530, Trained Tokens 339841, Peak mem 53.360 GB
Iter 360: Train loss 0.582, Learning Rate 1.000e-05, It/sec 0.279, Tokens/sec 268.722, Trained Tokens 349481, Peak mem 53.360 GB
Iter 370: Train loss 0.637, Learning Rate 1.000e-05, It/sec 0.231, Tokens/sec 247.694, Trained Tokens 360226, Peak mem 53.360 GB
Iter 380: Train loss 0.558, Learning Rate 1.000e-05, It/sec 0.258, Tokens/sec 253.024, Trained Tokens 370048, Peak mem 53.360 GB
Iter 390: Train loss 0.568, Learning Rate 1.000e-05, It/sec 0.251, Tokens/sec 254.267, Trained Tokens 380194, Peak mem 53.360 GB
Calculating loss...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 25/25 [01:03<00:00, 2.54s/it]
Iter 400: Val loss 0.584, Val took 63.439s
Iter 400: Train loss 0.464, Learning Rate 1.000e-05, It/sec 0.273, Tokens/sec 261.717, Trained Tokens 389793, Peak mem 53.360 GB
Iter 400: Saved adapter weights to models/lora/deepseek_lora_telegram_20251111_165211/adapters.safetensors and models/lora/deepseek_lora_telegram_20251111_165211/0000400_adapters.safetensors.
Iter 410: Train loss 0.492, Learning Rate 1.000e-05, It/sec 0.253, Tokens/sec 255.097, Trained Tokens 399884, Peak mem 53.360 GB
Iter 420: Train loss 0.434, Learning Rate 1.000e-05, It/sec 0.284, Tokens/sec 268.317, Trained Tokens 409347, Peak mem 53.360 GB
Iter 430: Train loss 0.477, Learning Rate 1.000e-05, It/sec 0.275, Tokens/sec 263.682, Trained Tokens 418919, Peak mem 53.360 GB
Iter 440: Train loss 0.496, Learning Rate 1.000e-05, It/sec 0.263, Tokens/sec 253.249, Trained Tokens 428532, Peak mem 53.360 GB
Iter 450: Train loss 0.497, Learning Rate 1.000e-05, It/sec 0.263, Tokens/sec 263.457, Trained Tokens 438545, Peak mem 53.360 GB
Iter 460: Train loss 0.485, Learning Rate 1.000e-05, It/sec 0.278, Tokens/sec 261.986, Trained Tokens 447955, Peak mem 53.360 GB
Iter 470: Train loss 0.471, Learning Rate 1.000e-05, It/sec 0.274, Tokens/sec 261.977, Trained Tokens 457532, Peak mem 53.360 GB
Iter 480: Train loss 0.498, Learning Rate 1.000e-05, It/sec 0.245, Tokens/sec 249.742, Trained Tokens 467722, Peak mem 53.360 GB
Iter 490: Train loss 0.471, Learning Rate 1.000e-05, It/sec 0.282, Tokens/sec 265.978, Trained Tokens 477156, Peak mem 53.360 GB
Calculating loss...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 25/25 [01:00<00:00, 2.40s/it]
Iter 500: Val loss 0.516, Val took 60.113s
Iter 500: Train loss 0.482, Learning Rate 1.000e-05, It/sec 0.258, Tokens/sec 252.005, Trained Tokens 486939, Peak mem 53.360 GB
Iter 500: Saved adapter weights to models/lora/deepseek_lora_telegram_20251111_165211/adapters.safetensors and models/lora/deepseek_lora_telegram_20251111_165211/0000500_adapters.safetensors.
Iter 510: Train loss 0.618, Learning Rate 1.000e-05, It/sec 0.248, Tokens/sec 253.918, Trained Tokens 497197, Peak mem 53.360 GB
Iter 520: Train loss 0.454, Learning Rate 1.000e-05, It/sec 0.270, Tokens/sec 257.128, Trained Tokens 506732, Peak mem 53.360 GB
Iter 530: Train loss 0.564, Learning Rate 1.000e-05, It/sec 0.263, Tokens/sec 260.847, Trained Tokens 516645, Peak mem 53.360 GB
Iter 540: Train loss 0.401, Learning Rate 1.000e-05, It/sec 0.307, Tokens/sec 272.605, Trained Tokens 525534, Peak mem 53.360 GB
Iter 550: Train loss 0.472, Learning Rate 1.000e-05, It/sec 0.270, Tokens/sec 258.736, Trained Tokens 535129, Peak mem 53.360 GB
Iter 560: Train loss 0.661, Learning Rate 1.000e-05, It/sec 0.218, Tokens/sec 236.552, Trained Tokens 545967, Peak mem 53.360 GB
Iter 570: Train loss 0.491, Learning Rate 1.000e-05, It/sec 0.271, Tokens/sec 261.655, Trained Tokens 555617, Peak mem 53.360 GB
Iter 580: Train loss 0.465, Learning Rate 1.000e-05, It/sec 0.276, Tokens/sec 260.449, Trained Tokens 565065, Peak mem 53.360 GB
Iter 590: Train loss 0.447, Learning Rate 1.000e-05, It/sec 0.282, Tokens/sec 267.644, Trained Tokens 574556, Peak mem 53.360 GB
Calculating loss...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 25/25 [00:58<00:00, 2.35s/it]
Iter 600: Val loss 0.463, Val took 58.782s
Iter 600: Train loss 0.456, Learning Rate 1.000e-05, It/sec 0.292, Tokens/sec 265.446, Trained Tokens 583661, Peak mem 53.360 GB
Iter 600: Saved adapter weights to models/lora/deepseek_lora_telegram_20251111_165211/adapters.safetensors and models/lora/deepseek_lora_telegram_20251111_165211/0000600_adapters.safetensors.
Saved final weights to models/lora/deepseek_lora_telegram_20251111_165211/adapters.safetensors.
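Collecting the validation losses logged at each checkpoint shows a steady decline, with one small bump at iteration 400; the best validation loss lands at the final iteration. A quick sanity check over the values copied from the log:

```python
# Validation losses copied from the log (iteration -> val loss).
val_losses = {1: 2.630, 100: 0.655, 200: 0.573, 300: 0.551,
              400: 0.584, 500: 0.516, 600: 0.463}

best_iter = min(val_losses, key=val_losses.get)
print(best_iter, val_losses[best_iter])  # prints "600 0.463"
```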
Testing
Calculating loss...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 50/50 [02:02<00:00, 2.46s/it]
Test loss 0.526, Test ppl 1.693.
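The reported perplexity is the exponential of the mean test cross-entropy loss. Recomputing from the rounded loss above reproduces the logged figure up to rounding:

```python
import math

test_loss = 0.526  # mean test cross-entropy from the log
ppl = math.exp(test_loss)
print(f"{ppl:.3f}")  # prints "1.692"; the logged 1.693 comes from the unrounded loss
```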