| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - === Tokenizer Training Started === |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - Experiment Name: BTCUSDT_4h_finetune |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - Log Directory: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_4h_finetune/logs |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - Rank: 0 |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - Timestamp: 2025-10-23 15:04:40 |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - Loading pretrained tokenizer... |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - Tokenizer parameters: 3,958,042 |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - === Training Configuration === |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - Data path: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/configs/config_btcusdt_4h.yaml |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - Lookback window: 512 |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - Predict window: 48 |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - Batch size: 32 |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - Learning rate: 0.0002 |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - Training epochs: 30 |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - Device: cuda:0 |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - Distributed training: False |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - Starting tokenizer fine-tuning training... |
| 2025-10-23 15:04:40 - tokenizer_training_rank_0 - INFO - Starting tokenizer training... |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - === Tokenizer Training Started === |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - Experiment Name: BTCUSDT_4h_finetune |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - Log Directory: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_4h_finetune/logs |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - Rank: 0 |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - Timestamp: 2025-10-23 15:10:59 |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - Loading pretrained tokenizer... |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - Tokenizer parameters: 3,958,042 |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - === Training Configuration === |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - Data path: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/data/BTCUSDT_4h.csv |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - Lookback window: 512 |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - Predict window: 48 |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - Batch size: 32 |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - Learning rate: 0.0002 |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - Training epochs: 30 |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - Device: cuda:0 |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - Distributed training: False |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - Starting tokenizer fine-tuning training... |
| 2025-10-23 15:10:59 - tokenizer_training_rank_0 - INFO - Starting tokenizer training... |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - === Tokenizer Training Started === |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - Experiment Name: BTCUSDT_4h_finetune |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - Log Directory: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_4h_finetune/logs |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - Rank: 0 |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - Timestamp: 2025-10-23 15:11:24 |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - Loading pretrained tokenizer... |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - Tokenizer parameters: 3,958,042 |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - === Training Configuration === |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - Data path: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/data/BTCUSDT_4h_20251023_145012.csv |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - Lookback window: 512 |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - Predict window: 48 |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - Batch size: 32 |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - Learning rate: 0.0002 |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - Training epochs: 30 |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - Device: cuda:0 |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - Distributed training: False |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - Starting tokenizer fine-tuning training... |
| 2025-10-23 15:11:24 - tokenizer_training_rank_0 - INFO - Starting tokenizer training... |
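In the step lines that follow, the learning rate climbs from 0.000045 (step 50) to the configured peak of 0.0002 around step 200 of epoch 1, then decays gradually over the remaining epochs. The log does not name the scheduler; a common pattern consistent with this shape is linear warmup followed by cosine decay. The sketch below is a hypothetical reconstruction: `warmup_steps` and the decay horizon are assumptions, not values read from the log.

```python
import math

def lr_at(step, peak_lr=2e-4, warmup_steps=160, total_steps=30 * 228):
    """Hypothetical schedule: linear warmup to peak_lr, then cosine decay to ~0.

    peak_lr matches the logged "Learning rate: 0.0002"; warmup_steps and
    total_steps (epochs x steps-per-epoch) are assumptions for illustration.
    """
    if step < warmup_steps:
        # Linear ramp from near zero up to the peak learning rate.
        return peak_lr * (step + 1) / warmup_steps
    # Cosine decay from peak_lr down toward zero over the remaining steps.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

This reproduces the qualitative shape (fast rise, slow fall) rather than the exact logged values, since the true warmup length and decay law are not recorded.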
| 2025-10-23 15:11:28 - tokenizer_training_rank_0 - INFO - [Epoch 1/30, Step 50/228] LR: 0.000045, Loss: -0.0279 |
| 2025-10-23 15:11:28 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0700 |
| - Recon Loss Pre: 0.0090 |
| - Recon Loss All: 0.0051 |
| 2025-10-23 15:11:31 - tokenizer_training_rank_0 - INFO - [Epoch 1/30, Step 100/228] LR: 0.000107, Loss: -0.0285 |
| 2025-10-23 15:11:31 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0700 |
| - Recon Loss Pre: 0.0083 |
| - Recon Loss All: 0.0047 |
| 2025-10-23 15:11:34 - tokenizer_training_rank_0 - INFO - [Epoch 1/30, Step 150/228] LR: 0.000170, Loss: -0.0291 |
| 2025-10-23 15:11:34 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0701 |
| - Recon Loss Pre: 0.0077 |
| - Recon Loss All: 0.0043 |
| 2025-10-23 15:11:37 - tokenizer_training_rank_0 - INFO - [Epoch 1/30, Step 200/228] LR: 0.000200, Loss: -0.0293 |
| 2025-10-23 15:11:37 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0701 |
| - Recon Loss Pre: 0.0073 |
| - Recon Loss All: 0.0042 |
| 2025-10-23 15:11:39 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 1/30 Summary --- |
| Validation Loss: 0.0056 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:11:39 - tokenizer_training_rank_0 - INFO - Best model saved to: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_4h_finetune/tokenizer/best_model (validation loss: 0.0056) |
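The "Best model saved" lines appear only when validation loss improves: epoch 3 (0.0051, tied with epoch 2 at the logged precision) triggers no save, while epoch 6 (0.0048, tied with epoch 5 at the logged precision) does, suggesting a strict full-precision comparison behind the rounded display values. A minimal sketch of that checkpoint-tracking logic (the class name is hypothetical, not from the source):

```python
class BestCheckpointTracker:
    """Tracks the lowest validation loss seen; reports when a new best occurs.

    The comparison uses full precision, which explains why two epochs that
    both print a rounded 0.0048 can differ in whether they trigger a save.
    """

    def __init__(self):
        self.best = float("inf")

    def update(self, val_loss):
        # Strict improvement required; on success the caller would
        # checkpoint the model to the best_model directory.
        if val_loss < self.best:
            self.best = val_loss
            return True
        return False
```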
| 2025-10-23 15:11:41 - tokenizer_training_rank_0 - INFO - [Epoch 2/30, Step 22/228] LR: 0.000200, Loss: -0.0297 |
| 2025-10-23 15:11:41 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0702 |
| - Recon Loss Pre: 0.0069 |
| - Recon Loss All: 0.0039 |
| 2025-10-23 15:11:44 - tokenizer_training_rank_0 - INFO - [Epoch 2/30, Step 72/228] LR: 0.000200, Loss: -0.0295 |
| 2025-10-23 15:11:44 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0702 |
| - Recon Loss Pre: 0.0073 |
| - Recon Loss All: 0.0040 |
| 2025-10-23 15:11:47 - tokenizer_training_rank_0 - INFO - [Epoch 2/30, Step 122/228] LR: 0.000200, Loss: -0.0300 |
| 2025-10-23 15:11:47 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0704 |
| - Recon Loss Pre: 0.0065 |
| - Recon Loss All: 0.0038 |
| 2025-10-23 15:11:50 - tokenizer_training_rank_0 - INFO - [Epoch 2/30, Step 172/228] LR: 0.000200, Loss: -0.0294 |
| 2025-10-23 15:11:50 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0702 |
| - Recon Loss Pre: 0.0073 |
| - Recon Loss All: 0.0040 |
| 2025-10-23 15:11:53 - tokenizer_training_rank_0 - INFO - [Epoch 2/30, Step 222/228] LR: 0.000199, Loss: -0.0294 |
| 2025-10-23 15:11:53 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0703 |
| - Recon Loss Pre: 0.0073 |
| - Recon Loss All: 0.0042 |
| 2025-10-23 15:11:54 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 2/30 Summary --- |
| Validation Loss: 0.0051 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:11:54 - tokenizer_training_rank_0 - INFO - Best model saved to: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_4h_finetune/tokenizer/best_model (validation loss: 0.0051) |
| 2025-10-23 15:11:56 - tokenizer_training_rank_0 - INFO - [Epoch 3/30, Step 44/228] LR: 0.000199, Loss: -0.0298 |
| 2025-10-23 15:11:56 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0704 |
| - Recon Loss Pre: 0.0070 |
| - Recon Loss All: 0.0038 |
| 2025-10-23 15:11:59 - tokenizer_training_rank_0 - INFO - [Epoch 3/30, Step 94/228] LR: 0.000199, Loss: -0.0299 |
| 2025-10-23 15:11:59 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0705 |
| - Recon Loss Pre: 0.0069 |
| - Recon Loss All: 0.0039 |
| 2025-10-23 15:12:02 - tokenizer_training_rank_0 - INFO - [Epoch 3/30, Step 144/228] LR: 0.000198, Loss: -0.0297 |
| 2025-10-23 15:12:02 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0704 |
| - Recon Loss Pre: 0.0071 |
| - Recon Loss All: 0.0039 |
| 2025-10-23 15:12:05 - tokenizer_training_rank_0 - INFO - [Epoch 3/30, Step 194/228] LR: 0.000198, Loss: -0.0302 |
| 2025-10-23 15:12:05 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0705 |
| - Recon Loss Pre: 0.0064 |
| - Recon Loss All: 0.0037 |
| 2025-10-23 15:12:08 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 3/30 Summary --- |
| Validation Loss: 0.0051 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:12:09 - tokenizer_training_rank_0 - INFO - [Epoch 4/30, Step 16/228] LR: 0.000197, Loss: -0.0305 |
| 2025-10-23 15:12:09 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0705 |
| - Recon Loss Pre: 0.0060 |
| - Recon Loss All: 0.0035 |
| 2025-10-23 15:12:12 - tokenizer_training_rank_0 - INFO - [Epoch 4/30, Step 66/228] LR: 0.000197, Loss: -0.0305 |
| 2025-10-23 15:12:12 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0706 |
| - Recon Loss Pre: 0.0063 |
| - Recon Loss All: 0.0034 |
| 2025-10-23 15:12:15 - tokenizer_training_rank_0 - INFO - [Epoch 4/30, Step 116/228] LR: 0.000196, Loss: -0.0303 |
| 2025-10-23 15:12:15 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0706 |
| - Recon Loss Pre: 0.0064 |
| - Recon Loss All: 0.0036 |
| 2025-10-23 15:12:18 - tokenizer_training_rank_0 - INFO - [Epoch 4/30, Step 166/228] LR: 0.000195, Loss: -0.0303 |
| 2025-10-23 15:12:18 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0705 |
| - Recon Loss Pre: 0.0063 |
| - Recon Loss All: 0.0036 |
| 2025-10-23 15:12:21 - tokenizer_training_rank_0 - INFO - [Epoch 4/30, Step 216/228] LR: 0.000195, Loss: -0.0306 |
| 2025-10-23 15:12:21 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0706 |
| - Recon Loss Pre: 0.0060 |
| - Recon Loss All: 0.0034 |
| 2025-10-23 15:12:22 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 4/30 Summary --- |
| Validation Loss: 0.0049 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:12:22 - tokenizer_training_rank_0 - INFO - Best model saved to: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_4h_finetune/tokenizer/best_model (validation loss: 0.0049) |
| 2025-10-23 15:12:25 - tokenizer_training_rank_0 - INFO - [Epoch 5/30, Step 38/228] LR: 0.000194, Loss: -0.0303 |
| 2025-10-23 15:12:25 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0706 |
| - Recon Loss Pre: 0.0065 |
| - Recon Loss All: 0.0036 |
| 2025-10-23 15:12:28 - tokenizer_training_rank_0 - INFO - [Epoch 5/30, Step 88/228] LR: 0.000193, Loss: -0.0302 |
| 2025-10-23 15:12:28 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0705 |
| - Recon Loss Pre: 0.0064 |
| - Recon Loss All: 0.0037 |
| 2025-10-23 15:12:31 - tokenizer_training_rank_0 - INFO - [Epoch 5/30, Step 138/228] LR: 0.000192, Loss: -0.0303 |
| 2025-10-23 15:12:31 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0706 |
| - Recon Loss Pre: 0.0065 |
| - Recon Loss All: 0.0036 |
| 2025-10-23 15:12:34 - tokenizer_training_rank_0 - INFO - [Epoch 5/30, Step 188/228] LR: 0.000191, Loss: -0.0305 |
| 2025-10-23 15:12:34 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707 |
| - Recon Loss Pre: 0.0062 |
| - Recon Loss All: 0.0035 |
| 2025-10-23 15:12:37 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 5/30 Summary --- |
| Validation Loss: 0.0048 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:12:37 - tokenizer_training_rank_0 - INFO - Best model saved to: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_4h_finetune/tokenizer/best_model (validation loss: 0.0048) |
| 2025-10-23 15:12:37 - tokenizer_training_rank_0 - INFO - [Epoch 6/30, Step 10/228] LR: 0.000190, Loss: -0.0301 |
| 2025-10-23 15:12:37 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0706 |
| - Recon Loss Pre: 0.0066 |
| - Recon Loss All: 0.0038 |
| 2025-10-23 15:12:41 - tokenizer_training_rank_0 - INFO - [Epoch 6/30, Step 60/228] LR: 0.000189, Loss: -0.0307 |
| 2025-10-23 15:12:41 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0706 |
| - Recon Loss Pre: 0.0059 |
| - Recon Loss All: 0.0034 |
| 2025-10-23 15:12:44 - tokenizer_training_rank_0 - INFO - [Epoch 6/30, Step 110/228] LR: 0.000188, Loss: -0.0306 |
| 2025-10-23 15:12:44 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707 |
| - Recon Loss Pre: 0.0060 |
| - Recon Loss All: 0.0034 |
| 2025-10-23 15:12:47 - tokenizer_training_rank_0 - INFO - [Epoch 6/30, Step 160/228] LR: 0.000187, Loss: -0.0305 |
| 2025-10-23 15:12:47 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0706 |
| - Recon Loss Pre: 0.0060 |
| - Recon Loss All: 0.0035 |
| 2025-10-23 15:12:50 - tokenizer_training_rank_0 - INFO - [Epoch 6/30, Step 210/228] LR: 0.000186, Loss: -0.0302 |
| 2025-10-23 15:12:50 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0705 |
| - Recon Loss Pre: 0.0064 |
| - Recon Loss All: 0.0036 |
| 2025-10-23 15:12:51 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 6/30 Summary --- |
| Validation Loss: 0.0048 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:12:51 - tokenizer_training_rank_0 - INFO - Best model saved to: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_4h_finetune/tokenizer/best_model (validation loss: 0.0048) |
| 2025-10-23 15:12:53 - tokenizer_training_rank_0 - INFO - [Epoch 7/30, Step 32/228] LR: 0.000184, Loss: -0.0305 |
| 2025-10-23 15:12:54 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0706 |
| - Recon Loss Pre: 0.0062 |
| - Recon Loss All: 0.0033 |
| 2025-10-23 15:12:56 - tokenizer_training_rank_0 - INFO - [Epoch 7/30, Step 82/228] LR: 0.000183, Loss: -0.0302 |
| 2025-10-23 15:12:56 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0706 |
| - Recon Loss Pre: 0.0064 |
| - Recon Loss All: 0.0037 |
| 2025-10-23 15:12:59 - tokenizer_training_rank_0 - INFO - [Epoch 7/30, Step 132/228] LR: 0.000182, Loss: -0.0307 |
| 2025-10-23 15:12:59 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0706 |
| - Recon Loss Pre: 0.0059 |
| - Recon Loss All: 0.0034 |
| 2025-10-23 15:13:02 - tokenizer_training_rank_0 - INFO - [Epoch 7/30, Step 182/228] LR: 0.000180, Loss: -0.0304 |
| 2025-10-23 15:13:02 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0706 |
| - Recon Loss Pre: 0.0061 |
| - Recon Loss All: 0.0036 |
| 2025-10-23 15:13:06 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 7/30 Summary --- |
| Validation Loss: 0.0050 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:13:06 - tokenizer_training_rank_0 - INFO - [Epoch 8/30, Step 4/228] LR: 0.000179, Loss: -0.0308 |
| 2025-10-23 15:13:06 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707 |
| - Recon Loss Pre: 0.0057 |
| - Recon Loss All: 0.0033 |
| 2025-10-23 15:13:09 - tokenizer_training_rank_0 - INFO - [Epoch 8/30, Step 54/228] LR: 0.000177, Loss: -0.0307 |
| 2025-10-23 15:13:09 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0706 |
| - Recon Loss Pre: 0.0059 |
| - Recon Loss All: 0.0034 |
| 2025-10-23 15:13:12 - tokenizer_training_rank_0 - INFO - [Epoch 8/30, Step 104/228] LR: 0.000176, Loss: -0.0304 |
| 2025-10-23 15:13:12 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0706 |
| - Recon Loss Pre: 0.0062 |
| - Recon Loss All: 0.0035 |
| 2025-10-23 15:13:15 - tokenizer_training_rank_0 - INFO - [Epoch 8/30, Step 154/228] LR: 0.000174, Loss: -0.0304 |
| 2025-10-23 15:13:15 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707 |
| - Recon Loss Pre: 0.0062 |
| - Recon Loss All: 0.0036 |
| 2025-10-23 15:13:18 - tokenizer_training_rank_0 - INFO - [Epoch 8/30, Step 204/228] LR: 0.000173, Loss: -0.0306 |
| 2025-10-23 15:13:18 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707 |
| - Recon Loss Pre: 0.0059 |
| - Recon Loss All: 0.0036 |
| 2025-10-23 15:13:20 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 8/30 Summary --- |
| Validation Loss: 0.0050 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:13:22 - tokenizer_training_rank_0 - INFO - [Epoch 9/30, Step 26/228] LR: 0.000171, Loss: -0.0306 |
| 2025-10-23 15:13:22 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707 |
| - Recon Loss Pre: 0.0059 |
| - Recon Loss All: 0.0035 |
| 2025-10-23 15:13:25 - tokenizer_training_rank_0 - INFO - [Epoch 9/30, Step 76/228] LR: 0.000169, Loss: -0.0310 |
| 2025-10-23 15:13:25 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707 |
| - Recon Loss Pre: 0.0055 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:13:28 - tokenizer_training_rank_0 - INFO - [Epoch 9/30, Step 126/228] LR: 0.000168, Loss: -0.0310 |
| 2025-10-23 15:13:28 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707 |
| - Recon Loss Pre: 0.0055 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:13:31 - tokenizer_training_rank_0 - INFO - [Epoch 9/30, Step 176/228] LR: 0.000166, Loss: -0.0309 |
| 2025-10-23 15:13:31 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707 |
| - Recon Loss Pre: 0.0057 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:13:34 - tokenizer_training_rank_0 - INFO - [Epoch 9/30, Step 226/228] LR: 0.000164, Loss: -0.0307 |
| 2025-10-23 15:13:34 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707 |
| - Recon Loss Pre: 0.0059 |
| - Recon Loss All: 0.0035 |
| 2025-10-23 15:13:35 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 9/30 Summary --- |
| Validation Loss: 0.0050 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:13:38 - tokenizer_training_rank_0 - INFO - [Epoch 10/30, Step 48/228] LR: 0.000162, Loss: -0.0309 |
| 2025-10-23 15:13:38 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707 |
| - Recon Loss Pre: 0.0056 |
| - Recon Loss All: 0.0033 |
| 2025-10-23 15:13:41 - tokenizer_training_rank_0 - INFO - [Epoch 10/30, Step 98/228] LR: 0.000160, Loss: -0.0305 |
| 2025-10-23 15:13:41 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0706 |
| - Recon Loss Pre: 0.0061 |
| - Recon Loss All: 0.0036 |
| 2025-10-23 15:13:43 - tokenizer_training_rank_0 - INFO - [Epoch 10/30, Step 148/228] LR: 0.000159, Loss: -0.0306 |
| 2025-10-23 15:13:44 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707 |
| - Recon Loss Pre: 0.0059 |
| - Recon Loss All: 0.0035 |
| 2025-10-23 15:13:47 - tokenizer_training_rank_0 - INFO - [Epoch 10/30, Step 198/228] LR: 0.000157, Loss: -0.0306 |
| 2025-10-23 15:13:47 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707 |
| - Recon Loss Pre: 0.0061 |
| - Recon Loss All: 0.0035 |
| 2025-10-23 15:13:49 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 10/30 Summary --- |
| Validation Loss: 0.0049 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:13:50 - tokenizer_training_rank_0 - INFO - [Epoch 11/30, Step 20/228] LR: 0.000155, Loss: -0.0308 |
| 2025-10-23 15:13:50 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707 |
| - Recon Loss Pre: 0.0058 |
| - Recon Loss All: 0.0033 |
| 2025-10-23 15:13:53 - tokenizer_training_rank_0 - INFO - [Epoch 11/30, Step 70/228] LR: 0.000153, Loss: -0.0309 |
| 2025-10-23 15:13:53 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0708 |
| - Recon Loss Pre: 0.0057 |
| - Recon Loss All: 0.0033 |
| 2025-10-23 15:13:56 - tokenizer_training_rank_0 - INFO - [Epoch 11/30, Step 120/228] LR: 0.000151, Loss: -0.0307 |
| 2025-10-23 15:13:56 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0708 |
| - Recon Loss Pre: 0.0060 |
| - Recon Loss All: 0.0034 |
| 2025-10-23 15:13:59 - tokenizer_training_rank_0 - INFO - [Epoch 11/30, Step 170/228] LR: 0.000149, Loss: -0.0311 |
| 2025-10-23 15:13:59 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0708 |
| - Recon Loss Pre: 0.0054 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:14:02 - tokenizer_training_rank_0 - INFO - [Epoch 11/30, Step 220/228] LR: 0.000147, Loss: -0.0308 |
| 2025-10-23 15:14:02 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707 |
| - Recon Loss Pre: 0.0058 |
| - Recon Loss All: 0.0033 |
| 2025-10-23 15:14:03 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 11/30 Summary --- |
| Validation Loss: 0.0051 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:14:06 - tokenizer_training_rank_0 - INFO - [Epoch 12/30, Step 42/228] LR: 0.000144, Loss: -0.0307 |
| 2025-10-23 15:14:06 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707 |
| - Recon Loss Pre: 0.0059 |
| - Recon Loss All: 0.0035 |
| 2025-10-23 15:14:09 - tokenizer_training_rank_0 - INFO - [Epoch 12/30, Step 92/228] LR: 0.000142, Loss: -0.0312 |
| 2025-10-23 15:14:09 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0708 |
| - Recon Loss Pre: 0.0054 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:14:12 - tokenizer_training_rank_0 - INFO - [Epoch 12/30, Step 142/228] LR: 0.000140, Loss: -0.0311 |
| 2025-10-23 15:14:12 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0708 |
| - Recon Loss Pre: 0.0055 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:14:15 - tokenizer_training_rank_0 - INFO - [Epoch 12/30, Step 192/228] LR: 0.000138, Loss: -0.0311 |
| 2025-10-23 15:14:15 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0054 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:14:17 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 12/30 Summary --- |
| Validation Loss: 0.0050 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:14:18 - tokenizer_training_rank_0 - INFO - [Epoch 13/30, Step 14/228] LR: 0.000136, Loss: -0.0313 |
| 2025-10-23 15:14:18 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0053 |
| - Recon Loss All: 0.0030 |
| 2025-10-23 15:14:21 - tokenizer_training_rank_0 - INFO - [Epoch 13/30, Step 64/228] LR: 0.000134, Loss: -0.0313 |
| 2025-10-23 15:14:21 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0053 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:14:24 - tokenizer_training_rank_0 - INFO - [Epoch 13/30, Step 114/228] LR: 0.000131, Loss: -0.0308 |
| 2025-10-23 15:14:24 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0058 |
| - Recon Loss All: 0.0034 |
| 2025-10-23 15:14:27 - tokenizer_training_rank_0 - INFO - [Epoch 13/30, Step 164/228] LR: 0.000129, Loss: -0.0311 |
| 2025-10-23 15:14:27 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0708 |
| - Recon Loss Pre: 0.0055 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:14:30 - tokenizer_training_rank_0 - INFO - [Epoch 13/30, Step 214/228] LR: 0.000127, Loss: -0.0313 |
| 2025-10-23 15:14:30 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0052 |
| - Recon Loss All: 0.0030 |
| 2025-10-23 15:14:31 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 13/30 Summary --- |
| Validation Loss: 0.0050 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:14:34 - tokenizer_training_rank_0 - INFO - [Epoch 14/30, Step 36/228] LR: 0.000124, Loss: -0.0310 |
| 2025-10-23 15:14:34 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0708 |
| - Recon Loss Pre: 0.0055 |
| - Recon Loss All: 0.0033 |
| 2025-10-23 15:14:37 - tokenizer_training_rank_0 - INFO - [Epoch 14/30, Step 86/228] LR: 0.000122, Loss: -0.0317 |
| 2025-10-23 15:14:37 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0047 |
| - Recon Loss All: 0.0029 |
| 2025-10-23 15:14:40 - tokenizer_training_rank_0 - INFO - [Epoch 14/30, Step 136/228] LR: 0.000120, Loss: -0.0311 |
| 2025-10-23 15:14:40 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0054 |
| - Recon Loss All: 0.0033 |
| 2025-10-23 15:14:43 - tokenizer_training_rank_0 - INFO - [Epoch 14/30, Step 186/228] LR: 0.000118, Loss: -0.0308 |
| 2025-10-23 15:14:43 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0058 |
| - Recon Loss All: 0.0034 |
| 2025-10-23 15:14:46 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 14/30 Summary --- |
| Validation Loss: 0.0051 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:14:46 - tokenizer_training_rank_0 - INFO - [Epoch 15/30, Step 8/228] LR: 0.000115, Loss: -0.0313 |
| 2025-10-23 15:14:46 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0053 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:14:49 - tokenizer_training_rank_0 - INFO - [Epoch 15/30, Step 58/228] LR: 0.000113, Loss: -0.0310 |
| 2025-10-23 15:14:49 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0056 |
| - Recon Loss All: 0.0033 |
| 2025-10-23 15:14:52 - tokenizer_training_rank_0 - INFO - [Epoch 15/30, Step 108/228] LR: 0.000110, Loss: -0.0314 |
| 2025-10-23 15:14:52 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0051 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:14:55 - tokenizer_training_rank_0 - INFO - [Epoch 15/30, Step 158/228] LR: 0.000108, Loss: -0.0314 |
| 2025-10-23 15:14:55 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0050 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:14:58 - tokenizer_training_rank_0 - INFO - [Epoch 15/30, Step 208/228] LR: 0.000106, Loss: -0.0310 |
| 2025-10-23 15:14:58 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0708 |
| - Recon Loss Pre: 0.0056 |
| - Recon Loss All: 0.0033 |
| 2025-10-23 15:15:00 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 15/30 Summary --- |
| Validation Loss: 0.0051 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:15:02 - tokenizer_training_rank_0 - INFO - [Epoch 16/30, Step 30/228] LR: 0.000103, Loss: -0.0313 |
| 2025-10-23 15:15:02 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0052 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:15:05 - tokenizer_training_rank_0 - INFO - [Epoch 16/30, Step 80/228] LR: 0.000101, Loss: -0.0313 |
| 2025-10-23 15:15:05 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0051 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:15:08 - tokenizer_training_rank_0 - INFO - [Epoch 16/30, Step 130/228] LR: 0.000099, Loss: -0.0312 |
| 2025-10-23 15:15:08 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0054 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:15:11 - tokenizer_training_rank_0 - INFO - [Epoch 16/30, Step 180/228] LR: 0.000096, Loss: -0.0312 |
| 2025-10-23 15:15:11 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0053 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:15:14 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 16/30 Summary --- |
| Validation Loss: 0.0050 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:15:15 - tokenizer_training_rank_0 - INFO - [Epoch 17/30, Step 2/228] LR: 0.000094, Loss: -0.0313 |
| 2025-10-23 15:15:15 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0053 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:15:18 - tokenizer_training_rank_0 - INFO - [Epoch 17/30, Step 52/228] LR: 0.000092, Loss: -0.0313 |
| 2025-10-23 15:15:18 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0052 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:15:21 - tokenizer_training_rank_0 - INFO - [Epoch 17/30, Step 102/228] LR: 0.000089, Loss: -0.0313 |
| 2025-10-23 15:15:21 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0052 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:15:24 - tokenizer_training_rank_0 - INFO - [Epoch 17/30, Step 152/228] LR: 0.000087, Loss: -0.0315 |
| 2025-10-23 15:15:24 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0050 |
| - Recon Loss All: 0.0030 |
| 2025-10-23 15:15:27 - tokenizer_training_rank_0 - INFO - [Epoch 17/30, Step 202/228] LR: 0.000085, Loss: -0.0311 |
| 2025-10-23 15:15:27 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0055 |
| - Recon Loss All: 0.0033 |
| 2025-10-23 15:15:29 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 17/30 Summary --- |
| Validation Loss: 0.0050 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:15:31 - tokenizer_training_rank_0 - INFO - [Epoch 18/30, Step 24/228] LR: 0.000082, Loss: -0.0313 |
| 2025-10-23 15:15:31 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0053 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:15:34 - tokenizer_training_rank_0 - INFO - [Epoch 18/30, Step 74/228] LR: 0.000080, Loss: -0.0312 |
| 2025-10-23 15:15:34 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0053 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:15:37 - tokenizer_training_rank_0 - INFO - [Epoch 18/30, Step 124/228] LR: 0.000078, Loss: -0.0317 |
| 2025-10-23 15:15:37 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711 |
| - Recon Loss Pre: 0.0047 |
| - Recon Loss All: 0.0029 |
| 2025-10-23 15:15:40 - tokenizer_training_rank_0 - INFO - [Epoch 18/30, Step 174/228] LR: 0.000075, Loss: -0.0315 |
| 2025-10-23 15:15:40 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0049 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:15:43 - tokenizer_training_rank_0 - INFO - [Epoch 18/30, Step 224/228] LR: 0.000073, Loss: -0.0312 |
| 2025-10-23 15:15:43 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0053 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:15:43 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 18/30 Summary --- |
| Validation Loss: 0.0049 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:15:46 - tokenizer_training_rank_0 - INFO - [Epoch 19/30, Step 46/228] LR: 0.000071, Loss: -0.0313 |
| 2025-10-23 15:15:46 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0052 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:15:49 - tokenizer_training_rank_0 - INFO - [Epoch 19/30, Step 96/228] LR: 0.000068, Loss: -0.0313 |
| 2025-10-23 15:15:49 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0052 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:15:52 - tokenizer_training_rank_0 - INFO - [Epoch 19/30, Step 146/228] LR: 0.000066, Loss: -0.0313 |
| 2025-10-23 15:15:52 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0052 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:15:55 - tokenizer_training_rank_0 - INFO - [Epoch 19/30, Step 196/228] LR: 0.000064, Loss: -0.0313 |
| 2025-10-23 15:15:55 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0053 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:15:58 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 19/30 Summary --- |
| Validation Loss: 0.0050 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:15:59 - tokenizer_training_rank_0 - INFO - [Epoch 20/30, Step 18/228] LR: 0.000062, Loss: -0.0316 |
| 2025-10-23 15:15:59 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0048 |
| - Recon Loss All: 0.0030 |
| 2025-10-23 15:16:02 - tokenizer_training_rank_0 - INFO - [Epoch 20/30, Step 68/228] LR: 0.000060, Loss: -0.0311 |
| 2025-10-23 15:16:02 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0055 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:16:05 - tokenizer_training_rank_0 - INFO - [Epoch 20/30, Step 118/228] LR: 0.000057, Loss: -0.0316 |
| 2025-10-23 15:16:05 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0049 |
| - Recon Loss All: 0.0030 |
| 2025-10-23 15:16:08 - tokenizer_training_rank_0 - INFO - [Epoch 20/30, Step 168/228] LR: 0.000055, Loss: -0.0310 |
| 2025-10-23 15:16:08 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0055 |
| - Recon Loss All: 0.0034 |
| 2025-10-23 15:16:11 - tokenizer_training_rank_0 - INFO - [Epoch 20/30, Step 218/228] LR: 0.000053, Loss: -0.0314 |
| 2025-10-23 15:16:11 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0052 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:16:12 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 20/30 Summary --- |
| Validation Loss: 0.0049 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:16:15 - tokenizer_training_rank_0 - INFO - [Epoch 21/30, Step 40/228] LR: 0.000051, Loss: -0.0315 |
| 2025-10-23 15:16:15 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0050 |
| - Recon Loss All: 0.0030 |
| 2025-10-23 15:16:18 - tokenizer_training_rank_0 - INFO - [Epoch 21/30, Step 90/228] LR: 0.000049, Loss: -0.0315 |
| 2025-10-23 15:16:18 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0050 |
| - Recon Loss All: 0.0030 |
| 2025-10-23 15:16:21 - tokenizer_training_rank_0 - INFO - [Epoch 21/30, Step 140/228] LR: 0.000047, Loss: -0.0313 |
| 2025-10-23 15:16:21 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0051 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:16:24 - tokenizer_training_rank_0 - INFO - [Epoch 21/30, Step 190/228] LR: 0.000045, Loss: -0.0314 |
| 2025-10-23 15:16:24 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0051 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:16:27 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 21/30 Summary --- |
| Validation Loss: 0.0050 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:16:28 - tokenizer_training_rank_0 - INFO - [Epoch 22/30, Step 12/228] LR: 0.000043, Loss: -0.0314 |
| 2025-10-23 15:16:28 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0051 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:16:31 - tokenizer_training_rank_0 - INFO - [Epoch 22/30, Step 62/228] LR: 0.000041, Loss: -0.0313 |
| 2025-10-23 15:16:31 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0052 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:16:34 - tokenizer_training_rank_0 - INFO - [Epoch 22/30, Step 112/228] LR: 0.000039, Loss: -0.0314 |
| 2025-10-23 15:16:34 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0050 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:16:37 - tokenizer_training_rank_0 - INFO - [Epoch 22/30, Step 162/228] LR: 0.000037, Loss: -0.0316 |
| 2025-10-23 15:16:37 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0049 |
| - Recon Loss All: 0.0029 |
| 2025-10-23 15:16:40 - tokenizer_training_rank_0 - INFO - [Epoch 22/30, Step 212/228] LR: 0.000036, Loss: -0.0316 |
| 2025-10-23 15:16:40 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0048 |
| - Recon Loss All: 0.0030 |
| 2025-10-23 15:16:41 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 22/30 Summary --- |
| Validation Loss: 0.0050 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:16:43 - tokenizer_training_rank_0 - INFO - [Epoch 23/30, Step 34/228] LR: 0.000034, Loss: -0.0313 |
| 2025-10-23 15:16:43 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0051 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:16:46 - tokenizer_training_rank_0 - INFO - [Epoch 23/30, Step 84/228] LR: 0.000032, Loss: -0.0315 |
| 2025-10-23 15:16:46 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0050 |
| - Recon Loss All: 0.0029 |
| 2025-10-23 15:16:49 - tokenizer_training_rank_0 - INFO - [Epoch 23/30, Step 134/228] LR: 0.000030, Loss: -0.0314 |
| 2025-10-23 15:16:49 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0050 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:16:53 - tokenizer_training_rank_0 - INFO - [Epoch 23/30, Step 184/228] LR: 0.000029, Loss: -0.0315 |
| 2025-10-23 15:16:53 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0050 |
| - Recon Loss All: 0.0030 |
| 2025-10-23 15:16:56 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 23/30 Summary --- |
| Validation Loss: 0.0050 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:16:57 - tokenizer_training_rank_0 - INFO - [Epoch 24/30, Step 6/228] LR: 0.000027, Loss: -0.0315 |
| 2025-10-23 15:16:57 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0049 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:17:00 - tokenizer_training_rank_0 - INFO - [Epoch 24/30, Step 56/228] LR: 0.000025, Loss: -0.0313 |
| 2025-10-23 15:17:00 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0052 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:17:03 - tokenizer_training_rank_0 - INFO - [Epoch 24/30, Step 106/228] LR: 0.000024, Loss: -0.0313 |
| 2025-10-23 15:17:03 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0052 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:17:06 - tokenizer_training_rank_0 - INFO - [Epoch 24/30, Step 156/228] LR: 0.000022, Loss: -0.0315 |
| 2025-10-23 15:17:06 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0049 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:17:09 - tokenizer_training_rank_0 - INFO - [Epoch 24/30, Step 206/228] LR: 0.000021, Loss: -0.0313 |
| 2025-10-23 15:17:09 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0051 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:17:11 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 24/30 Summary --- |
| Validation Loss: 0.0049 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:17:12 - tokenizer_training_rank_0 - INFO - [Epoch 25/30, Step 28/228] LR: 0.000019, Loss: -0.0316 |
| 2025-10-23 15:17:12 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711 |
| - Recon Loss Pre: 0.0049 |
| - Recon Loss All: 0.0029 |
| 2025-10-23 15:17:15 - tokenizer_training_rank_0 - INFO - [Epoch 25/30, Step 78/228] LR: 0.000018, Loss: -0.0314 |
| 2025-10-23 15:17:15 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0051 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:17:18 - tokenizer_training_rank_0 - INFO - [Epoch 25/30, Step 128/228] LR: 0.000017, Loss: -0.0314 |
| 2025-10-23 15:17:18 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0051 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:17:21 - tokenizer_training_rank_0 - INFO - [Epoch 25/30, Step 178/228] LR: 0.000015, Loss: -0.0314 |
| 2025-10-23 15:17:21 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0050 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:17:24 - tokenizer_training_rank_0 - INFO - [Epoch 25/30, Step 228/228] LR: 0.000014, Loss: -0.0315 |
| 2025-10-23 15:17:24 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0051 |
| - Recon Loss All: 0.0030 |
| 2025-10-23 15:17:25 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 25/30 Summary --- |
| Validation Loss: 0.0050 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:17:28 - tokenizer_training_rank_0 - INFO - [Epoch 26/30, Step 50/228] LR: 0.000013, Loss: -0.0312 |
| 2025-10-23 15:17:28 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0053 |
| - Recon Loss All: 0.0033 |
| 2025-10-23 15:17:31 - tokenizer_training_rank_0 - INFO - [Epoch 26/30, Step 100/228] LR: 0.000012, Loss: -0.0317 |
| 2025-10-23 15:17:31 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0047 |
| - Recon Loss All: 0.0029 |
| 2025-10-23 15:17:34 - tokenizer_training_rank_0 - INFO - [Epoch 26/30, Step 150/228] LR: 0.000011, Loss: -0.0315 |
| 2025-10-23 15:17:34 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0049 |
| - Recon Loss All: 0.0030 |
| 2025-10-23 15:17:37 - tokenizer_training_rank_0 - INFO - [Epoch 26/30, Step 200/228] LR: 0.000010, Loss: -0.0312 |
| 2025-10-23 15:17:37 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0052 |
| - Recon Loss All: 0.0033 |
| 2025-10-23 15:17:40 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 26/30 Summary --- |
| Validation Loss: 0.0049 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:17:41 - tokenizer_training_rank_0 - INFO - [Epoch 27/30, Step 22/228] LR: 0.000009, Loss: -0.0316 |
| 2025-10-23 15:17:41 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0050 |
| - Recon Loss All: 0.0029 |
| 2025-10-23 15:17:44 - tokenizer_training_rank_0 - INFO - [Epoch 27/30, Step 72/228] LR: 0.000008, Loss: -0.0314 |
| 2025-10-23 15:17:44 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0051 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:17:47 - tokenizer_training_rank_0 - INFO - [Epoch 27/30, Step 122/228] LR: 0.000007, Loss: -0.0315 |
| 2025-10-23 15:17:47 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0049 |
| - Recon Loss All: 0.0030 |
| 2025-10-23 15:17:50 - tokenizer_training_rank_0 - INFO - [Epoch 27/30, Step 172/228] LR: 0.000006, Loss: -0.0314 |
| 2025-10-23 15:17:50 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0052 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:17:53 - tokenizer_training_rank_0 - INFO - [Epoch 27/30, Step 222/228] LR: 0.000005, Loss: -0.0316 |
| 2025-10-23 15:17:53 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0048 |
| - Recon Loss All: 0.0030 |
| 2025-10-23 15:17:54 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 27/30 Summary --- |
| Validation Loss: 0.0049 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:17:57 - tokenizer_training_rank_0 - INFO - [Epoch 28/30, Step 44/228] LR: 0.000005, Loss: -0.0312 |
| 2025-10-23 15:17:57 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0052 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:18:00 - tokenizer_training_rank_0 - INFO - [Epoch 28/30, Step 94/228] LR: 0.000004, Loss: -0.0315 |
| 2025-10-23 15:18:00 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0050 |
| - Recon Loss All: 0.0030 |
| 2025-10-23 15:18:03 - tokenizer_training_rank_0 - INFO - [Epoch 28/30, Step 144/228] LR: 0.000003, Loss: -0.0313 |
| 2025-10-23 15:18:03 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0051 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:18:06 - tokenizer_training_rank_0 - INFO - [Epoch 28/30, Step 194/228] LR: 0.000003, Loss: -0.0311 |
| 2025-10-23 15:18:06 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0054 |
| - Recon Loss All: 0.0033 |
| 2025-10-23 15:18:08 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 28/30 Summary --- |
| Validation Loss: 0.0049 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:18:10 - tokenizer_training_rank_0 - INFO - [Epoch 29/30, Step 16/228] LR: 0.000002, Loss: -0.0312 |
| 2025-10-23 15:18:10 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0054 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:18:13 - tokenizer_training_rank_0 - INFO - [Epoch 29/30, Step 66/228] LR: 0.000002, Loss: -0.0315 |
| 2025-10-23 15:18:13 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0050 |
| - Recon Loss All: 0.0030 |
| 2025-10-23 15:18:16 - tokenizer_training_rank_0 - INFO - [Epoch 29/30, Step 116/228] LR: 0.000001, Loss: -0.0314 |
| 2025-10-23 15:18:16 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0050 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:18:19 - tokenizer_training_rank_0 - INFO - [Epoch 29/30, Step 166/228] LR: 0.000001, Loss: -0.0315 |
| 2025-10-23 15:18:19 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0050 |
| - Recon Loss All: 0.0030 |
| 2025-10-23 15:18:22 - tokenizer_training_rank_0 - INFO - [Epoch 29/30, Step 216/228] LR: 0.000001, Loss: -0.0313 |
| 2025-10-23 15:18:22 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0052 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:18:23 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 29/30 Summary --- |
| Validation Loss: 0.0049 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:18:25 - tokenizer_training_rank_0 - INFO - [Epoch 30/30, Step 38/228] LR: 0.000000, Loss: -0.0314 |
| 2025-10-23 15:18:25 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0051 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:18:28 - tokenizer_training_rank_0 - INFO - [Epoch 30/30, Step 88/228] LR: 0.000000, Loss: -0.0317 |
| 2025-10-23 15:18:28 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0047 |
| - Recon Loss All: 0.0029 |
| 2025-10-23 15:18:31 - tokenizer_training_rank_0 - INFO - [Epoch 30/30, Step 138/228] LR: 0.000000, Loss: -0.0312 |
| 2025-10-23 15:18:31 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709 |
| - Recon Loss Pre: 0.0053 |
| - Recon Loss All: 0.0032 |
| 2025-10-23 15:18:34 - tokenizer_training_rank_0 - INFO - [Epoch 30/30, Step 188/228] LR: 0.000000, Loss: -0.0313 |
| 2025-10-23 15:18:34 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710 |
| - Recon Loss Pre: 0.0053 |
| - Recon Loss All: 0.0031 |
| 2025-10-23 15:18:37 - tokenizer_training_rank_0 - INFO - |
| --- Epoch 30/30 Summary --- |
| Validation Loss: 0.0049 |
| Epoch Time: 0:00:14 |
| Total Training Time: 0:00:14 |
| |
| 2025-10-23 15:18:37 - tokenizer_training_rank_0 - INFO - Tokenizer training completed! Best validation loss: 0.0048 |
| Training time: 7.23 minutes |
| Model saved to: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_4h_finetune/tokenizer |
| |
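
The per-step records above follow a fixed line format (`[Epoch E/30, Step S/228] LR: …, Loss: …`), which makes them easy to mine after the fact, for example to plot the learning-rate decay or the loss trajectory. Below is a minimal illustrative sketch of such a parser; the function name `parse_step_lines` and the regex are our own assumptions derived from the line format seen in this log, not part of the Kronos training code.

```python
import re

# Matches the bracketed step header plus the LR and Loss fields, e.g.
# "[Epoch 30/30, Step 38/228] LR: 0.000000, Loss: -0.0314"
STEP_RE = re.compile(
    r"\[Epoch (\d+)/\d+, Step (\d+)/\d+\] LR: ([\d.]+), Loss: (-?[\d.]+)"
)

def parse_step_lines(lines):
    """Extract (epoch, step, lr, loss) tuples from raw log lines."""
    records = []
    for line in lines:
        m = STEP_RE.search(line)
        if m:
            epoch, step = int(m.group(1)), int(m.group(2))
            lr, loss = float(m.group(3)), float(m.group(4))
            records.append((epoch, step, lr, loss))
    return records

sample = [
    "2025-10-23 15:18:25 - tokenizer_training_rank_0 - INFO - "
    "[Epoch 30/30, Step 38/228] LR: 0.000000, Loss: -0.0314",
]
print(parse_step_lines(sample))  # [(30, 38, 0.0, -0.0314)]
```

Feeding the full log through this yields one record per logged step, which is enough to confirm the roughly linear LR decay from 0.000115 down to 0.000000 visible in the entries above.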