🎯 Starting continuous training for 12 hours...
🤖 Autonomous evolution mode activated
Setting up training environment...
GPU: NVIDIA H200 NVL
💾 GPU Memory: 139.8 GB
🤖 Autonomous evolution mode: ENABLED
📦 Loading model and tokenizer...
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|██████████| 4/4 [00:03<00:00, 1.27it/s]
✅ Model loaded: qwen2
✅ Tokenizer vocab size: 151665
📥 Loading Elizabeth corpus data...
✅ Loaded 3000 high-quality security-focused conversations
✅ Formatted 3000 training texts
Map: 100%|██████████| 3000/3000 [00:00<00:00, 13459.72 examples/s]
✅ Tokenized dataset: 3000 examples
⚙️ Setting up training...
🔥 Starting training...
Batch size: 4
Gradient accumulation: 16
Effective batch size: 64
⏰ Continuous training mode: 12 hours autonomous evolution
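The log shows 3000 conversations being formatted into flat training texts before tokenization. The corpus schema is not shown, but for a qwen2-family model the formatting step typically serializes each conversation with the ChatML template. A minimal sketch, assuming a hypothetical `role`/`content` turn structure (the function name and example data are illustrative, not taken from the actual pipeline):

```python
# Hypothetical formatter: the real corpus schema is not visible in the log.
# Qwen2-style (ChatML) serialization of one conversation into a training text.
def format_conversation(turns):
    return "".join(
        f"<|im_start|>{t['role']}\n{t['content']}<|im_end|>\n" for t in turns
    )

example = [
    {"role": "user", "content": "What is a buffer overflow?"},
    {"role": "assistant", "content": "A write past the end of an allocated buffer."},
]
text = format_conversation(example)
```

Each formatted string is then tokenized in the `Map` pass shown below; the `<|im_start|>`/`<|im_end|>` markers are the special tokens the earlier warning refers to.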
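The reported numbers are internally consistent. A quick sketch of the arithmetic behind the effective batch size and the 92-step schedule seen in the progress bar (single GPU and two epochs are assumptions inferred from the final `epoch: 1.96`):

```python
per_device_batch_size = 4         # "Batch size: 4"
gradient_accumulation_steps = 16  # "Gradient accumulation: 16"
num_examples = 3000
num_epochs = 2

# One optimizer step consumes batch_size * accumulation micro-batches.
effective_batch_size = per_device_batch_size * gradient_accumulation_steps  # 64

# 3000 // 64 = 46 optimizer steps per epoch (trailing partial step dropped),
# so two epochs give the 92 steps shown in the progress bar below.
steps_per_epoch = num_examples // effective_batch_size
total_steps = steps_per_epoch * num_epochs  # 92
```

Note also that 92 steps x 64 samples / 3000 examples = 1.96, matching the final epoch counter in the summary line.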
  0%|          | 0/92 [00:00<?, ?it/s]
/home/x/.local/lib/python3.12/site-packages/torch/utils/checkpoint.py:460: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
  warnings.warn(
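The UserWarning above is benign but worth silencing: gradient checkpointing is defaulting to the legacy reentrant implementation. A minimal sketch of passing the flag explicitly at the `torch.utils.checkpoint` level (the toy `block` function stands in for a transformer layer; if this run goes through the Hugging Face Trainer, the equivalent is `gradient_checkpointing_kwargs={"use_reentrant": False}`, assuming a transformers version recent enough to support it):

```python
import torch
from torch.utils.checkpoint import checkpoint

def block(x):
    # Stand-in for a transformer layer; its activations are recomputed
    # during the backward pass instead of being stored.
    return torch.relu(x) ** 2

x = torch.randn(8, requires_grad=True)
# Passing use_reentrant=False explicitly silences the warning seen in the log
# and selects the recommended non-reentrant implementation.
y = checkpoint(block, x, use_reentrant=False)
y.sum().backward()
```

With `use_reentrant=False`, checkpointing composes correctly with features the reentrant variant does not support well (e.g. keyword arguments and early-stopping of backward), which is why the warning recommends it.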
{'loss': 2.6982, 'grad_norm': 34.75, 'learning_rate': 2e-05, 'epoch': 0.21}
{'loss': 0.5565, 'grad_norm': 9.25, 'learning_rate': 1.927502451102095e-05, 'epoch': 0.43}
{'loss': 0.1705, 'grad_norm': 4.59375, 'learning_rate': 1.720521593600787e-05, 'epoch': 0.64}
{'loss': 0.0486, 'grad_norm': 1.703125, 'learning_rate': 1.4090686371713403e-05, 'epoch': 0.85}
{'loss': 0.0266, 'grad_norm': 0.8984375, 'learning_rate': 1.0383027336900356e-05, 'epoch': 1.07}
{'loss': 0.0228, 'grad_norm': 0.54296875, 'learning_rate': 6.619831215914974e-06, 'epoch': 1.28}
{'loss': 0.022, 'grad_norm': 0.474609375, 'learning_rate': 3.3467429983443477e-06, 'epoch': 1.49}
{'loss': 0.0211, 'grad_norm': 0.55859375, 'learning_rate': 1.0383444303894453e-06, 'epoch': 1.71}
{'loss': 0.0215, 'grad_norm': 0.625, 'learning_rate': 2.9341988162595593e-08, 'epoch': 1.92}
{'train_runtime': 230.3452, 'train_samples_per_second': 26.048, 'train_steps_per_second': 0.399, 'train_loss': 0.3904266621836502, 'epoch': 1.96}
100%|██████████| 92/92 [03:50<00:00, 2.50s/it]
✅ Training completed in 0.07 hours
Training pipeline completed successfully!
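The final metrics are self-consistent; a quick check of the reported throughput against the runtime (all values copied from the summary line above, with ~2 epochs over 3000 examples):

```python
train_runtime = 230.3452  # seconds, from the summary line
samples_seen = 3000 * 2   # 3000 examples for ~2 epochs
optimizer_steps = 92

samples_per_second = samples_seen / train_runtime   # ~26.05; log reports 26.048
steps_per_second = optimizer_steps / train_runtime  # ~0.399, matching the log
```

The 230 s runtime is ~0.064 hours; the "0.07 hours" figure presumably rounds up or includes setup time outside the Trainer loop.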