ROBOMIMIC WARNING(
    No private macro file found!
    It is recommended to use a private macro file
    To setup, run: python /home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/robomimic/scripts/setup_macros.py
)
Warning: unknown parameter wm_ckpt
Warning: unknown parameter target_modules
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Currently logged in as: yukizhang0527 (yukizhang0527-harbin-institute-of-technology). Use `wandb login --relogin` to force relogin
wandb: Waiting for wandb.init()...
wandb: Tracking run with wandb version 0.18.3
wandb: Run data is saved locally in /home/zxn/Forewarn/wandb/run-20260409_101228-fk4jc4vy
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run vlm_lora_move_hanger
wandb: ⭐️ View project at https://wandb.ai/yukizhang0527-harbin-institute-of-technology/llama_recipes
wandb: 🚀 View run at https://wandb.ai/yukizhang0527-harbin-institute-of-technology/llama_recipes/runs/fk4jc4vy
Warning: custom_dataset does not accept parameter: custom_dataset.task_name

Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 6.25it/s]
--> Model /data/colosseum_dataset/wm_data/mllama_base/Llama-3.2-11B-Vision-Instruct/custom

--> /data/colosseum_dataset/wm_data/mllama_base/Llama-3.2-11B-Vision-Instruct/custom has 9808.885776 Million params
loading world model from ckpt path /home/zxn/Forewarn/logs/dreamer_cont/move_hanger/0408/102201/best_pretrain_joint_0_00.pt
Updating the agent for 3200 every 100 env steps
Observation shapes: {'cam_front_view_image': (64, 64, 3), 'cam_wrist_view_image': (64, 64, 3), 'discount': (1,), 'state': (8,)}
Encoder CNN shapes: {'cam_front_view_image': (64, 64, 3), 'cam_wrist_view_image': (64, 64, 3)}
Encoder MLP shapes: {'state': (8,)}
Decoder CNN shapes: {'cam_front_view_image': (64, 64, 3), 'cam_wrist_view_image': (64, 64, 3)}
Decoder MLP shapes: {'state': (8,)}
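The CNN/MLP split printed above follows the usual Dreamer-style routing rule: image-shaped observations (H, W, C) go to the convolutional encoder/decoder, while low-dimensional vectors go to the MLP. A minimal sketch of that rule, with hypothetical names and assuming `discount` is excluded from the MLP heads as the log suggests:

```python
# Hypothetical sketch: route each observation key by shape, mirroring the
# "Encoder CNN shapes" / "Encoder MLP shapes" split printed in the log.
obs_shapes = {
    "cam_front_view_image": (64, 64, 3),
    "cam_wrist_view_image": (64, 64, 3),
    "discount": (1,),
    "state": (8,),
}

def split_shapes(shapes):
    # 3-D (H, W, C) observations are treated as images for the CNN;
    # 1-D vectors other than 'discount' are handled by the MLP.
    cnn = {k: s for k, s in shapes.items() if len(s) == 3}
    mlp = {k: s for k, s in shapes.items() if len(s) == 1 and k != "discount"}
    return cnn, mlp

cnn_shapes, mlp_shapes = split_shapes(obs_shapes)
```

Applied to the shapes above, this reproduces the encoder/decoder shape dictionaries in the log.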
Optimizer model_opt has 18626830 variables.
Freezing embeddings from encoder during training
Optimizer pretrain has 17410062 variables.
trainable params: 26,251,520 || all params: 9,853,764,126 || trainable%: 0.2664
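The trainable% figure in the PEFT summary line is just the ratio of the two raw counts, and can be verified directly:

```python
# Reproduce the "trainable%" value from the two raw parameter counts
# printed in the PEFT summary line.
trainable_params = 26_251_520
all_params = 9_853_764_126

trainable_pct = 100 * trainable_params / all_params
print(f"trainable%: {trainable_pct:.4f}")  # trainable%: 0.2664
```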
dataset config 1
Repo card metadata block was not found. Setting CardData to empty.
--> Training Set Length = 235
Repo card metadata block was not found. Setting CardData to empty.
--> Validation Set Length = 145
length of dataset_train 235
custom_data_collator is used
--> Num of Training Set Batches loaded = 117
--> Num of Validation Set Batches loaded = 145
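The batch counts are consistent with a training batch size of 2 with the last partial batch dropped, and a validation batch size of 1. These batch sizes are inferred from the numbers in the log, not read from the actual run config:

```python
# Assumed batch sizes (inferred, not from the config): train=2 with
# drop_last, validation=1. Floor division reproduces the logged counts.
train_len, val_len = 235, 145

train_batches = train_len // 2  # 235 // 2 = 117 (last partial batch dropped)
val_batches = val_len // 1      # 145 // 1 = 145
```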
Can not find the custom data_collator in the dataset.py file (/home/zxn/Forewarn/vlm/llama-recipes/recipes/quickstart/finetuning/datasets/colosseum_dataset_latent.py).
Using the default data_collator instead.
Starting epoch 0/5
train_config.max_train_step: 0
/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/cuda/memory.py:330: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
  warnings.warn(
Training Epoch: 1: 0%|          | 0/23 [00:00<?, ?it/s]
/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/_tensor.py:893: UserWarning: non-inplace resize is deprecated
  warnings.warn("non-inplace resize is deprecated")
Training Epoch: 1/5, step 0/117 completed (loss: 0.34938228130340576): 0%|          | 0/23 [00:00<?, ?it/s]
[... per-step tqdm progress updates omitted ...]
Training Epoch: 1/5, step 116/117 completed (loss: 0.01699228771030903): : 24it [00:39, 1.65s/it]
Max CUDA memory allocated was 19 GB
Max CUDA memory reserved was 19 GB
Peak active CUDA memory was 19 GB
CUDA Malloc retries : 0
CPU Total Peak Memory consumed during the train (max): 2 GB
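Summary lines like these are typically produced by reading a peak byte count from the CUDA memory stats (e.g. `torch.cuda.max_memory_allocated()`) and truncating to whole gigabytes, which is why all three GPU figures round to the same 19 GB. A hedged sketch of that conversion, with an example byte count that is not from this run:

```python
# Hedged sketch: convert a peak byte count (as returned by e.g.
# torch.cuda.max_memory_allocated()) to whole gigabytes, floor-truncated.
def bytes_to_gb(num_bytes: int) -> int:
    return int(num_bytes / 2**30)

peak = 20_900_000_000  # example peak byte count, not from this run
print(f"Max CUDA memory allocated was {bytes_to_gb(peak)} GB")
```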

evaluating Epoch: 0%|          | 0/145 [00:00<?, ?it/s]
[... per-step tqdm progress updates omitted ...]
evaluating Epoch: 100%|██████████| 145/145 [00:20<00:00, 7.03it/s]
eval_ppl=tensor(1.1342, device='cuda:0') eval_epoch_loss=tensor(0.1259, device='cuda:0')
we are about to save the PEFT modules
PEFT modules are saved in /data/colosseum_dataset/wm_data/finetuned_models/vlm_move_hanger directory
best eval loss on epoch 1 is 0.1259218156337738
Epoch 1: train_perplexity=1.2393, train_epoch_loss=0.2145, epoch time 40.05342884827405s
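The reported perplexities are the exponential of the corresponding mean cross-entropy loss; plugging in the full-precision best eval loss printed above reproduces `eval_ppl`:

```python
import math

# Perplexity = exp(mean cross-entropy loss). The full-precision eval loss
# from the log reproduces the eval_ppl tensor value.
eval_loss = 0.1259218156337738
eval_ppl = math.exp(eval_loss)
print(round(eval_ppl, 4))  # 1.1342
```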
Starting epoch 1/5
train_config.max_train_step: 0
|
Training Epoch: 2/5, step 116/117 completed (loss: 0.011097948998212814): 24it [00:38, 1.61s/it] |
| Max CUDA memory allocated was 19 GB |
| Max CUDA memory reserved was 19 GB |
| Peak active CUDA memory was 19 GB |
| CUDA Malloc retries : 0 |
| CPU Total Peak Memory consumed during the train (max): 2 GB |
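The CUDA memory lines above read like whole-gigabyte summaries of PyTorch's peak-memory counters (`torch.cuda.max_memory_allocated` and `torch.cuda.max_memory_reserved` both return bytes). A minimal sketch of the unit conversion, with the counter reads left as comments since they require a CUDA build of PyTorch; the `to_gb` helper is illustrative, not the script's actual code:

```python
def to_gb(num_bytes: int) -> int:
    """Round a byte count down to whole gigabytes, as in the log lines above."""
    return num_bytes // (1024 ** 3)

# Sketch of reading the counters (requires PyTorch with CUDA; not run here):
# import torch
# print(f"Max CUDA memory allocated was {to_gb(torch.cuda.max_memory_allocated())} GB")
# print(f"Max CUDA memory reserved was {to_gb(torch.cuda.max_memory_reserved())} GB")

print(to_gb(19 * 1024 ** 3 + 500 * 1024 ** 2))  # → 19
```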
|
evaluating Epoch: 100%|██████████| 145/145 [00:20<00:00, 7.06it/s] |
| eval_ppl=tensor(1.1182, device='cuda:0') eval_epoch_loss=tensor(0.1117, device='cuda:0') |
| we are about to save the PEFT modules |
| PEFT modules are saved in /data/colosseum_dataset/wm_data/finetuned_models/vlm_move_hanger directory |
| best eval loss on epoch 2 is 0.11167849600315094 |
| Epoch 2: train_perplexity=1.0610, train_epoch_loss=0.0592, epoch time 39.146529966033995s |
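The repeated "PEFT modules are saved" messages follow a save-on-improvement pattern: adapters are rewritten into the same output directory whenever eval loss beats the previous best (0.1259 after epoch 1, then 0.1117 after epoch 2). A minimal sketch of that bookkeeping, where `save_adapter` is a hypothetical save hook standing in for the script's actual PEFT save call:

```python
def track_best(losses, save_adapter):
    """Save whenever eval loss improves; return the best epoch and loss."""
    best = float("inf")
    best_epoch = None
    for epoch, loss in enumerate(losses, start=1):
        if loss < best:
            best, best_epoch = loss, epoch
            save_adapter(epoch)  # overwrite adapters in the output directory
    return best_epoch, best

saved = []
print(track_best([0.1259218156337738, 0.11167849600315094], saved.append))
# → (2, 0.11167849600315094); saves fire after epoch 1 and again after epoch 2
```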
| Starting epoch 2/5 |
| train_config.max_train_step: 0 |
|
Training Epoch: 3/5, step 44/117 completed (loss: 0.0002863140543922782): 39%|████      | 9/23 [00:15<00:24, 1.74s/it]
Training Epoch: 3/5, step 45/117 completed (loss: 0.0004301618319004774): 39%|[34mββββ [0m| 9/23 [00:15<00:24, 1.74s/it]
Training Epoch: 3/5, step 46/117 completed (loss: 0.0001226658932864666): 39%|[34mββββ [0m| 9/23 [00:16<00:24, 1.74s/it]
Training Epoch: 3/5, step 47/117 completed (loss: 0.0005487588932737708): 39%|[34mββββ [0m| 9/23 [00:16<00:24, 1.74s/it]
Training Epoch: 3/5, step 48/117 completed (loss: 0.00012896701809950173): 39%|[34mββββ [0m| 9/23 [00:17<00:24, 1.74s/it]
Training Epoch: 3/5, step 48/117 completed (loss: 0.00012896701809950173): 43%|[34mβββββ [0m| 10/23 [00:17<00:24, 1.88s/it]
Training Epoch: 3/5, step 49/117 completed (loss: 0.00012923222675453871): 43%|[34mβββββ [0m| 10/23 [00:17<00:24, 1.88s/it]
Training Epoch: 3/5, step 50/117 completed (loss: 0.0004276486288290471): 43%|[34mβββββ [0m| 10/23 [00:17<00:24, 1.88s/it]
Training Epoch: 3/5, step 51/117 completed (loss: 5.3222724091028795e-05): 43%|[34mβββββ [0m| 10/23 [00:18<00:24, 1.88s/it]
Training Epoch: 3/5, step 52/117 completed (loss: 0.00036716487375088036): 43%|[34mβββββ [0m| 10/23 [00:18<00:24, 1.88s/it]
Training Epoch: 3/5, step 53/117 completed (loss: 0.0005704248906113207): 43%|[34mβββββ [0m| 10/23 [00:19<00:24, 1.88s/it]
Training Epoch: 3/5, step 53/117 completed (loss: 0.0005704248906113207): 48%|[34mβββββ [0m| 11/23 [00:19<00:22, 1.86s/it]
Training Epoch: 3/5, step 54/117 completed (loss: 0.00036939760320819914): 48%|[34mβββββ [0m| 11/23 [00:19<00:22, 1.86s/it]
Training Epoch: 3/5, step 55/117 completed (loss: 0.00030495933606289327): 48%|[34mβββββ [0m| 11/23 [00:19<00:22, 1.86s/it]
Training Epoch: 3/5, step 56/117 completed (loss: 2.906302506744396e-05): 48%|[34mβββββ [0m| 11/23 [00:20<00:22, 1.86s/it]
Training Epoch: 3/5, step 57/117 completed (loss: 2.7121319362777285e-05): 48%|[34mβββββ [0m| 11/23 [00:20<00:22, 1.86s/it]
Training Epoch: 3/5, step 58/117 completed (loss: 0.0003442694141995162): 48%|[34mβββββ [0m| 11/23 [00:20<00:22, 1.86s/it]
Training Epoch: 3/5, step 58/117 completed (loss: 0.0003442694141995162): 52%|[34mββββββ [0m| 12/23 [00:21<00:19, 1.81s/it]
Training Epoch: 3/5, step 59/117 completed (loss: 2.567354567872826e-05): 52%|[34mββββββ [0m| 12/23 [00:21<00:19, 1.81s/it]
Training Epoch: 3/5, step 60/117 completed (loss: 0.00013036768359597772): 52%|[34mββββββ [0m| 12/23 [00:21<00:19, 1.81s/it]
Training Epoch: 3/5, step 61/117 completed (loss: 0.00018634586012922227): 52%|[34mββββββ [0m| 12/23 [00:21<00:19, 1.81s/it]
Training Epoch: 3/5, step 62/117 completed (loss: 1.6092020814539865e-05): 52%|[34mββββββ [0m| 12/23 [00:22<00:19, 1.81s/it]
Training Epoch: 3/5, step 63/117 completed (loss: 1.7654876501183026e-05): 52%|[34mββββββ [0m| 12/23 [00:22<00:19, 1.81s/it]
Training Epoch: 3/5, step 63/117 completed (loss: 1.7654876501183026e-05): 57%|[34mββββββ [0m| 13/23 [00:22<00:17, 1.73s/it]
Training Epoch: 3/5, step 64/117 completed (loss: 0.00016531739674974233): 57%|[34mββββββ [0m| 13/23 [00:22<00:17, 1.73s/it]
Training Epoch: 3/5, step 65/117 completed (loss: 1.148603860201547e-05): 57%|[34mββββββ [0m| 13/23 [00:22<00:17, 1.73s/it]
Training Epoch: 3/5, step 66/117 completed (loss: 1.1022500075341668e-05): 57%|[34mββββββ [0m| 13/23 [00:23<00:17, 1.73s/it]
Training Epoch: 3/5, step 67/117 completed (loss: 1.0451721209392417e-05): 57%|[34mββββββ [0m| 13/23 [00:23<00:17, 1.73s/it]
Training Epoch: 3/5, step 68/117 completed (loss: 4.668060500989668e-05): 57%|[34mββββββ [0m| 13/23 [00:23<00:17, 1.73s/it]
Training Epoch: 3/5, step 68/117 completed (loss: 4.668060500989668e-05): 61%|[34mββββββ [0m| 14/23 [00:24<00:15, 1.67s/it]
Training Epoch: 3/5, step 69/117 completed (loss: 9.138718451140448e-05): 61%|[34mββββββ [0m| 14/23 [00:24<00:15, 1.67s/it]
Training Epoch: 3/5, step 70/117 completed (loss: 6.085361383156851e-05): 61%|[34mββββββ [0m| 14/23 [00:24<00:15, 1.67s/it]
Training Epoch: 3/5, step 71/117 completed (loss: 7.64649757911684e-06): 61%|[34mββββββ [0m| 14/23 [00:24<00:15, 1.67s/it]
Training Epoch: 3/5, step 72/117 completed (loss: 8.405644621234387e-06): 61%|[34mββββββ [0m| 14/23 [00:25<00:15, 1.67s/it]
Training Epoch: 3/5, step 73/117 completed (loss: 7.690583515795879e-06): 61%|[34mββββββ [0m| 14/23 [00:25<00:15, 1.67s/it]
Training Epoch: 3/5, step 73/117 completed (loss: 7.690583515795879e-06): 65%|[34mβββββββ [0m| 15/23 [00:25<00:13, 1.63s/it]
Training Epoch: 3/5, step 74/117 completed (loss: 4.278756387066096e-05): 65%|[34mβββββββ [0m| 15/23 [00:25<00:13, 1.63s/it]
Training Epoch: 3/5, step 75/117 completed (loss: 2.1285362890921533e-05): 65%|[34mβββββββ [0m| 15/23 [00:26<00:13, 1.63s/it]
Training Epoch: 3/5, step 76/117 completed (loss: 5.9934441196674015e-06): 65%|[34mβββββββ [0m| 15/23 [00:26<00:13, 1.63s/it]
Training Epoch: 3/5, step 77/117 completed (loss: 5.657315341522917e-06): 65%|[34mβββββββ [0m| 15/23 [00:26<00:13, 1.63s/it]
Training Epoch: 3/5, step 78/117 completed (loss: 1.9154480469296686e-05): 65%|[34mβββββββ [0m| 15/23 [00:26<00:13, 1.63s/it]
Training Epoch: 3/5, step 78/117 completed (loss: 1.9154480469296686e-05): 70%|[34mβββββββ [0m| 16/23 [00:27<00:11, 1.64s/it]
Training Epoch: 3/5, step 79/117 completed (loss: 6.3581524045730475e-06): 70%|[34mβββββββ [0m| 16/23 [00:27<00:11, 1.64s/it]
Training Epoch: 3/5, step 80/117 completed (loss: 5.209163191466359e-06): 70%|[34mβββββββ [0m| 16/23 [00:27<00:11, 1.64s/it]
Training Epoch: 3/5, step 81/117 completed (loss: 5.163875357538927e-06): 70%|[34mβββββββ [0m| 16/23 [00:28<00:11, 1.64s/it]
Training Epoch: 3/5, step 82/117 completed (loss: 1.1578474186535459e-05): 70%|[34mβββββββ [0m| 16/23 [00:28<00:11, 1.64s/it]
Training Epoch: 3/5, step 83/117 completed (loss: 1.052457719197264e-05): 70%|[34mβββββββ [0m| 16/23 [00:28<00:11, 1.64s/it]
Training Epoch: 3/5, step 83/117 completed (loss: 1.052457719197264e-05): 74%|[34mββββββββ [0m| 17/23 [00:29<00:09, 1.65s/it]
Training Epoch: 3/5, step 84/117 completed (loss: 1.083702409232501e-05): 74%|[34mββββββββ [0m| 17/23 [00:29<00:09, 1.65s/it]
Training Epoch: 3/5, step 85/117 completed (loss: 4.412947873788653e-06): 74%|[34mββββββββ [0m| 17/23 [00:29<00:09, 1.65s/it]
Training Epoch: 3/5, step 86/117 completed (loss: 4.4975781747780275e-06): 74%|[34mββββββββ [0m| 17/23 [00:29<00:09, 1.65s/it]
Training Epoch: 3/5, step 87/117 completed (loss: 4.746696049551247e-06): 74%|[34mββββββββ [0m| 17/23 [00:29<00:09, 1.65s/it]
Training Epoch: 3/5, step 88/117 completed (loss: 4.703784270532196e-06): 74%|[34mββββββββ [0m| 17/23 [00:30<00:09, 1.65s/it]
Training Epoch: 3/5, step 88/117 completed (loss: 4.703784270532196e-06): 78%|[34mββββββββ [0m| 18/23 [00:30<00:07, 1.59s/it]
Training Epoch: 3/5, step 89/117 completed (loss: 7.98092605691636e-06): 78%|[34mββββββββ [0m| 18/23 [00:30<00:07, 1.59s/it]
Training Epoch: 3/5, step 90/117 completed (loss: 8.669480848766398e-06): 78%|[34mββββββββ [0m| 18/23 [00:30<00:07, 1.59s/it]
Training Epoch: 3/5, step 91/117 completed (loss: 8.187465027731378e-06): 78%|[34mββββββββ [0m| 18/23 [00:31<00:07, 1.59s/it]
Training Epoch: 3/5, step 92/117 completed (loss: 6.6250067902728915e-06): 78%|[34mββββββββ [0m| 18/23 [00:31<00:07, 1.59s/it]
Training Epoch: 3/5, step 93/117 completed (loss: 3.9850215216574725e-06): 78%|[34mββββββββ [0m| 18/23 [00:31<00:07, 1.59s/it]
Training Epoch: 3/5, step 93/117 completed (loss: 3.9850215216574725e-06): 83%|[34mβββββββββ [0m| 19/23 [00:32<00:06, 1.60s/it]
Training Epoch: 3/5, step 94/117 completed (loss: 7.375794666586444e-06): 83%|[34mβββββββββ [0m| 19/23 [00:32<00:06, 1.60s/it]
Training Epoch: 3/5, step 95/117 completed (loss: 3.716822902788408e-06): 83%|[34mβββββββββ [0m| 19/23 [00:32<00:06, 1.60s/it]
Training Epoch: 3/5, step 96/117 completed (loss: 6.5891990743693896e-06): 83%|[34mβββββββββ [0m| 19/23 [00:32<00:06, 1.60s/it]
Training Epoch: 3/5, step 97/117 completed (loss: 4.0410477595287375e-06): 83%|[34mβββββββββ [0m| 19/23 [00:33<00:06, 1.60s/it]
Training Epoch: 3/5, step 98/117 completed (loss: 3.6655635540228104e-06): 83%|[34mβββββββββ [0m| 19/23 [00:33<00:06, 1.60s/it]
Training Epoch: 3/5, step 98/117 completed (loss: 3.6655635540228104e-06): 87%|[34mβββββββββ [0m| 20/23 [00:33<00:04, 1.61s/it]
Training Epoch: 3/5, step 99/117 completed (loss: 4.097071268915897e-06): 87%|[34mβββββββββ [0m| 20/23 [00:33<00:04, 1.61s/it]
Training Epoch: 3/5, step 100/117 completed (loss: 6.615740403503878e-06): 87%|[34mβββββββββ [0m| 20/23 [00:34<00:04, 1.61s/it]
Training Epoch: 3/5, step 101/117 completed (loss: 6.366683464875678e-06): 87%|[34mβββββββββ [0m| 20/23 [00:34<00:04, 1.61s/it]
Training Epoch: 3/5, step 102/117 completed (loss: 4.052962594869314e-06): 87%|[34mβββββββββ [0m| 20/23 [00:34<00:04, 1.61s/it]
Training Epoch: 3/5, step 103/117 completed (loss: 3.5415912407188443e-06): 87%|[34mβββββββββ [0m| 20/23 [00:35<00:04, 1.61s/it]
Training Epoch: 3/5, step 103/117 completed (loss: 3.5415912407188443e-06): 91%|[34mββββββββββ[0m| 21/23 [00:35<00:03, 1.66s/it]
Training Epoch: 3/5, step 104/117 completed (loss: 3.6321853258414194e-06): 91%|[34mββββββββββ[0m| 21/23 [00:35<00:03, 1.66s/it]
Training Epoch: 3/5, step 105/117 completed (loss: 7.379757789749419e-06): 91%|[34mββββββββββ[0m| 21/23 [00:35<00:03, 1.66s/it]
Training Epoch: 3/5, step 106/117 completed (loss: 7.051412012515357e-06): 91%|[34mββββββββββ[0m| 21/23 [00:36<00:03, 1.66s/it]
Training Epoch: 3/5, step 107/117 completed (loss: 6.8553927121683955e-06): 91%|[34mββββββββββ[0m| 21/23 [00:36<00:03, 1.66s/it]
Training Epoch: 3/5, step 108/117 completed (loss: 3.6500666737993015e-06): 91%|[34mββββββββββ[0m| 21/23 [00:36<00:03, 1.66s/it]
Training Epoch: 3/5, step 108/117 completed (loss: 3.6500666737993015e-06): 96%|[34mββββββββββ[0m| 22/23 [00:37<00:01, 1.68s/it]
Training Epoch: 3/5, step 109/117 completed (loss: 3.399740990062128e-06): 96%|[34mββββββββββ[0m| 22/23 [00:37<00:01, 1.68s/it]
Training Epoch: 3/5, step 110/117 completed (loss: 7.406358690786874e-06): 96%|[34mββββββββββ[0m| 22/23 [00:37<00:01, 1.68s/it]
Training Epoch: 3/5, step 111/117 completed (loss: 6.696484433632577e-06): 96%|[34mββββββββββ[0m| 22/23 [00:37<00:01, 1.68s/it]
Training Epoch: 3/5, step 112/117 completed (loss: 7.080530394887319e-06): 96%|[34mββββββββββ[0m| 22/23 [00:38<00:01, 1.68s/it]
Training Epoch: 3/5, step 113/117 completed (loss: 7.30966849005199e-06): 96%|[34mββββββββββ[0m| 22/23 [00:38<00:01, 1.68s/it]
Training Epoch: 3/5, step 113/117 completed (loss: 7.30966849005199e-06): 100%|[34mββββββββββ[0m| 23/23 [00:39<00:00, 1.74s/it]
Training Epoch: 3/5, step 114/117 completed (loss: 5.085571956442436e-06): 100%|[34mββββββββββ[0m| 23/23 [00:39<00:00, 1.74s/it]
Training Epoch: 3/5, step 115/117 completed (loss: 3.3437147521908628e-06): 100%|[34mββββββββββ[0m| 23/23 [00:39<00:00, 1.74s/it]
Training Epoch: 3/5, step 115/117 completed (loss: 3.3437147521908628e-06): : 24it [00:39, 1.40s/it]
Training Epoch: 3/5, step 116/117 completed (loss: 3.3115295536845224e-06): : 24it [00:39, 1.40s/it]
Training Epoch: 3/5, step 116/117 completed (loss: 3.3115295536845224e-06): : 24it [00:39, 1.66s/it] |
| Max CUDA memory allocated was 19 GB |
| Max CUDA memory reserved was 19 GB |
| Peak active CUDA memory was 19 GB |
| CUDA Malloc retries : 0 |
| CPU Total Peak Memory consumed during the train (max): 2 GB |
|
evaluating Epoch: 100%|██████████| 145/145 [00:21<00:00, 6.85it/s] |
| eval_ppl=tensor(1.5009, device='cuda:0') eval_epoch_loss=tensor(0.4061, device='cuda:0') |
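The perplexities reported in this log appear to be the exponential of the corresponding mean cross-entropy losses (as llama-recipes computes them). A minimal sketch checking that relationship against the logged values, assuming the printed losses are rounded:

```python
import math

# Perplexity is exp(mean cross-entropy loss).
eval_epoch_loss = 0.4061   # logged eval_epoch_loss
train_epoch_loss = 0.0046  # logged train_epoch_loss (epoch 3 summary)

eval_ppl = math.exp(eval_epoch_loss)    # ~1.501, consistent with the logged 1.5009 given rounding
train_ppl = math.exp(train_epoch_loss)  # ~1.0046, matching the logged train_perplexity

print(f"eval_ppl={eval_ppl:.4f} train_ppl={train_ppl:.4f}")
```

The large gap between train loss (~0.005) and eval loss (~0.406) here is a typical overfitting signature for a LoRA run whose training loss has collapsed toward zero.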
| we are about to save the PEFT modules |
| PEFT modules are saved in /data/colosseum_dataset/wm_data/finetuned_models/vlm_move_hanger directory |
| Epoch 3: train_perplexity=1.0046, train_epoch_loss=0.0046, epoch time 40.177380909677595s |
| Starting epoch 3/5 |
| train_config.max_train_step: 0 |
|
Training Epoch: 4: 0%| | 0/23 [00:00<?, ?it/s]
Training Epoch: 4/5, step 0/117 completed (loss: 3.3985486425081035e-06): 0%| | 0/23 [00:00<?, ?it/s]
Training Epoch: 4/5, steps 1-70/117 completed (per-step loss holding between ~3e-06 and ~1e-05): 61%|██████ | 14/23 [00:22<00:14, 1.67s/it]
Training Epoch: 4/5, step 71/117 completed (loss: 9.28135068534175e-06): 61%|██████ | 14/23 [00:23<00:14, 1.67s/it]
Training Epoch: 4/5, step 72/117 completed (loss: 3.1279457743949024e-06): 61%|[34mββββββ [0m| 14/23 [00:23<00:14, 1.67s/it]
Training Epoch: 4/5, step 73/117 completed (loss: 3.0754956696910085e-06): 61%|[34mββββββ [0m| 14/23 [00:23<00:14, 1.67s/it]
Training Epoch: 4/5, step 73/117 completed (loss: 3.0754956696910085e-06): 65%|[34mβββββββ [0m| 15/23 [00:24<00:13, 1.65s/it]
Training Epoch: 4/5, step 74/117 completed (loss: 9.107917321671266e-06): 65%|[34mβββββββ [0m| 15/23 [00:24<00:13, 1.65s/it]
Training Epoch: 4/5, step 75/117 completed (loss: 3.2149660000868607e-06): 65%|[34mβββββββ [0m| 15/23 [00:24<00:13, 1.65s/it]
Training Epoch: 4/5, step 76/117 completed (loss: 3.2614480005577207e-06): 65%|[34mβββββββ [0m| 15/23 [00:24<00:13, 1.65s/it]
Training Epoch: 4/5, step 77/117 completed (loss: 8.429919944319408e-06): 65%|[34mβββββββ [0m| 15/23 [00:24<00:13, 1.65s/it]
Training Epoch: 4/5, step 78/117 completed (loss: 7.954459761094768e-06): 65%|[34mβββββββ [0m| 15/23 [00:25<00:13, 1.65s/it]
Training Epoch: 4/5, step 78/117 completed (loss: 7.954459761094768e-06): 70%|[34mβββββββ [0m| 16/23 [00:25<00:11, 1.63s/it]
Training Epoch: 4/5, step 79/117 completed (loss: 2.969409479192109e-06): 70%|[34mβββββββ [0m| 16/23 [00:25<00:11, 1.63s/it]
Training Epoch: 4/5, step 80/117 completed (loss: 8.220708878070582e-06): 70%|[34mβββββββ [0m| 16/23 [00:25<00:11, 1.63s/it]
Training Epoch: 4/5, step 81/117 completed (loss: 3.1398581086250488e-06): 70%|[34mβββββββ [0m| 16/23 [00:26<00:11, 1.63s/it]
Training Epoch: 4/5, step 82/117 completed (loss: 2.9741781872871798e-06): 70%|[34mβββββββ [0m| 16/23 [00:26<00:11, 1.63s/it]
Training Epoch: 4/5, step 83/117 completed (loss: 8.424550287600141e-06): 70%|[34mβββββββ [0m| 16/23 [00:26<00:11, 1.63s/it]
Training Epoch: 4/5, step 83/117 completed (loss: 8.424550287600141e-06): 74%|[34mββββββββ [0m| 17/23 [00:27<00:09, 1.61s/it]
Training Epoch: 4/5, step 84/117 completed (loss: 2.913382786573493e-06): 74%|[34mββββββββ [0m| 17/23 [00:27<00:09, 1.61s/it]
Training Epoch: 4/5, step 85/117 completed (loss: 8.52658831718145e-06): 74%|[34mββββββββ [0m| 17/23 [00:27<00:09, 1.61s/it]
Training Epoch: 4/5, step 86/117 completed (loss: 2.968217813759111e-06): 74%|[34mββββββββ [0m| 17/23 [00:27<00:09, 1.61s/it]
Training Epoch: 4/5, step 87/117 completed (loss: 8.451009307464119e-06): 74%|[34mββββββββ [0m| 17/23 [00:28<00:09, 1.61s/it]
Training Epoch: 4/5, step 88/117 completed (loss: 7.651141459064092e-06): 74%|[34mββββββββ [0m| 17/23 [00:28<00:09, 1.61s/it]
Training Epoch: 4/5, step 88/117 completed (loss: 7.651141459064092e-06): 78%|[34mββββββββ [0m| 18/23 [00:28<00:08, 1.62s/it]
Training Epoch: 4/5, step 89/117 completed (loss: 8.521160452801269e-06): 78%|[34mββββββββ [0m| 18/23 [00:28<00:08, 1.62s/it]
Training Epoch: 4/5, step 90/117 completed (loss: 3.107680868197349e-06): 78%|[34mββββββββ [0m| 18/23 [00:29<00:08, 1.62s/it]
Training Epoch: 4/5, step 91/117 completed (loss: 2.9169591471145395e-06): 78%|[34mββββββββ [0m| 18/23 [00:29<00:08, 1.62s/it]
Training Epoch: 4/5, step 92/117 completed (loss: 2.995635213665082e-06): 78%|[34mββββββββ [0m| 18/23 [00:29<00:08, 1.62s/it]
Training Epoch: 4/5, step 93/117 completed (loss: 3.152971203235211e-06): 78%|[34mββββββββ [0m| 18/23 [00:30<00:08, 1.62s/it]
Training Epoch: 4/5, step 93/117 completed (loss: 3.152971203235211e-06): 83%|[34mβββββββββ [0m| 19/23 [00:30<00:06, 1.60s/it]
Training Epoch: 4/5, step 94/117 completed (loss: 5.336368758435128e-06): 83%|[34mβββββββββ [0m| 19/23 [00:30<00:06, 1.60s/it]
Training Epoch: 4/5, step 95/117 completed (loss: 3.023044655492413e-06): 83%|[34mβββββββββ [0m| 19/23 [00:30<00:06, 1.60s/it]
Training Epoch: 4/5, step 96/117 completed (loss: 2.937224053312093e-06): 83%|[34mβββββββββ [0m| 19/23 [00:30<00:06, 1.60s/it]
Training Epoch: 4/5, step 97/117 completed (loss: 3.0719195365236374e-06): 83%|[34mβββββββββ [0m| 19/23 [00:31<00:06, 1.60s/it]
Training Epoch: 4/5, step 98/117 completed (loss: 8.420564881816972e-06): 83%|[34mβββββββββ [0m| 19/23 [00:31<00:06, 1.60s/it]
Training Epoch: 4/5, step 98/117 completed (loss: 8.420564881816972e-06): 87%|[34mβββββββββ [0m| 20/23 [00:31<00:04, 1.58s/it]
Training Epoch: 4/5, step 99/117 completed (loss: 2.8716610813717125e-06): 87%|[34mβββββββββ [0m| 20/23 [00:31<00:04, 1.58s/it]
Training Epoch: 4/5, step 100/117 completed (loss: 3.1601309729012428e-06): 87%|[34mβββββββββ [0m| 20/23 [00:32<00:04, 1.58s/it]
Training Epoch: 4/5, step 101/117 completed (loss: 2.937224508059444e-06): 87%|[34mβββββββββ [0m| 20/23 [00:32<00:04, 1.58s/it]
Training Epoch: 4/5, step 102/117 completed (loss: 2.9193436148489127e-06): 87%|[34mβββββββββ [0m| 20/23 [00:32<00:04, 1.58s/it]
Training Epoch: 4/5, step 103/117 completed (loss: 7.3532296482881065e-06): 87%|[34mβββββββββ [0m| 20/23 [00:33<00:04, 1.58s/it]
Training Epoch: 4/5, step 103/117 completed (loss: 7.3532296482881065e-06): 91%|[34mββββββββββ[0m| 21/23 [00:33<00:03, 1.57s/it]
Training Epoch: 4/5, step 104/117 completed (loss: 2.877621227526106e-06): 91%|[34mββββββββββ[0m| 21/23 [00:33<00:03, 1.57s/it]
Training Epoch: 4/5, step 105/117 completed (loss: 7.38236940378556e-06): 91%|[34mββββββββββ[0m| 21/23 [00:33<00:03, 1.57s/it]
Training Epoch: 4/5, step 106/117 completed (loss: 7.500235369661823e-06): 91%|[34mββββββββββ[0m| 21/23 [00:34<00:03, 1.57s/it]
Training Epoch: 4/5, step 107/117 completed (loss: 7.3890173553081695e-06): 91%|[34mββββββββββ[0m| 21/23 [00:34<00:03, 1.57s/it]
Training Epoch: 4/5, step 108/117 completed (loss: 4.734199137601536e-06): 91%|[34mββββββββββ[0m| 21/23 [00:34<00:03, 1.57s/it]
Training Epoch: 4/5, step 108/117 completed (loss: 4.734199137601536e-06): 96%|[34mββββββββββ[0m| 22/23 [00:35<00:01, 1.60s/it]
Training Epoch: 4/5, step 109/117 completed (loss: 2.8955023481103126e-06): 96%|[34mββββββββββ[0m| 22/23 [00:35<00:01, 1.60s/it]
Training Epoch: 4/5, step 110/117 completed (loss: 7.120167992979987e-06): 96%|[34mββββββββββ[0m| 22/23 [00:35<00:01, 1.60s/it]
Training Epoch: 4/5, step 111/117 completed (loss: 3.077879682678031e-06): 96%|[34mββββββββββ[0m| 22/23 [00:35<00:01, 1.60s/it]
Training Epoch: 4/5, step 112/117 completed (loss: 2.8287468012422323e-06): 96%|[34mββββββββββ[0m| 22/23 [00:36<00:01, 1.60s/it]
Training Epoch: 4/5, step 113/117 completed (loss: 2.834706492649275e-06): 96%|[34mββββββββββ[0m| 22/23 [00:36<00:01, 1.60s/it]
Training Epoch: 4/5, step 113/117 completed (loss: 2.834706492649275e-06): 100%|[34mββββββββββ[0m| 23/23 [00:36<00:00, 1.65s/it]
Training Epoch: 4/5, step 114/117 completed (loss: 7.357145932473941e-06): 100%|[34mββββββββββ[0m| 23/23 [00:36<00:00, 1.65s/it]
Training Epoch: 4/5, step 115/117 completed (loss: 6.329567440843675e-06): 100%|[34mββββββββββ[0m| 23/23 [00:37<00:00, 1.65s/it]
Training Epoch: 4/5, step 115/117 completed (loss: 6.329567440843675e-06): : 24it [00:37, 1.39s/it]
Training Epoch: 4/5, step 116/117 completed (loss: 5.031552518630633e-06): : 24it [00:37, 1.39s/it]
Training Epoch: 4/5, step 116/117 completed (loss: 5.031552518630633e-06): : 24it [00:37, 1.57s/it] |
Max CUDA memory allocated was 19 GB
Max CUDA memory reserved was 19 GB
Peak active CUDA memory was 19 GB
CUDA Malloc retries : 0
CPU Total Peak Memory consumed during the train (max): 2 GB

evaluating Epoch: 0%|          | 0/145 [00:00<?, ?it/s]
[... tqdm progress-bar redraws trimmed; throughput held steady at roughly 6-7 it/s ...]
evaluating Epoch: 100%|██████████| 145/145 [00:21<00:00, 6.75it/s]
eval_ppl=tensor(1.5901, device='cuda:0') eval_epoch_loss=tensor(0.4638, device='cuda:0')
we are about to save the PEFT modules
PEFT modules are saved in /data/colosseum_dataset/wm_data/finetuned_models/vlm_move_hanger directory
Epoch 4: train_perplexity=1.0000, train_epoch_loss=0.0000, epoch time 38.136865402106196s
Starting epoch 4/5
train_config.max_train_step: 0
|
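The perplexity values in the log are simply the exponential of the corresponding mean cross-entropy loss, so they can be sanity-checked by hand. A minimal sketch, using values read off the log above (not recomputed from the model):

```python
import math

# Values reported in the log above.
eval_epoch_loss = 0.4638
train_epoch_loss = 3e-06  # per-step training losses hover around 3e-06, printed as 0.0000

# Perplexity is exp(mean cross-entropy loss).
eval_ppl = math.exp(eval_epoch_loss)    # matches the logged eval_ppl=1.5901
train_ppl = math.exp(train_epoch_loss)  # matches the logged train_perplexity=1.0000

print(f"eval_ppl  ~ {eval_ppl:.4f}")
print(f"train_ppl ~ {train_ppl:.4f}")
```

This also explains why `train_perplexity=1.0000` looks degenerate: with losses on the order of 1e-06, exp(loss) rounds to 1.0000 at four decimal places.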
Training Epoch: 5: 0%|          | 0/23 [00:00<?, ?it/s]
[... tqdm progress-bar redraws for steps 0-96 trimmed; per-step losses ranged from roughly 2.5e-06 to 7.0e-06 ...]
Training Epoch: 5/5, step 97/117 completed (loss: 4.48604760094895e-06): 83%|████████  | 19/23 [00:33<00:06, 1.65s/it]
Training Epoch: 5/5, step 98/117 completed (loss: 4.470148269319907e-06): 83%|[34mβββββββββ [0m| 19/23 [00:33<00:06, 1.65s/it]
Training Epoch: 5/5, step 98/117 completed (loss: 4.470148269319907e-06): 87%|[34mβββββββββ [0m| 20/23 [00:33<00:04, 1.65s/it]
Training Epoch: 5/5, step 99/117 completed (loss: 4.626451755029848e-06): 87%|[34mβββββββββ [0m| 20/23 [00:33<00:04, 1.65s/it]
Training Epoch: 5/5, step 100/117 completed (loss: 2.623717364258482e-06): 87%|[34mβββββββββ [0m| 20/23 [00:34<00:04, 1.65s/it]
Training Epoch: 5/5, step 101/117 completed (loss: 2.4878279418771854e-06): 87%|[34mβββββββββ [0m| 20/23 [00:34<00:04, 1.65s/it]
Training Epoch: 5/5, step 102/117 completed (loss: 2.6880823043029523e-06): 87%|[34mβββββββββ [0m| 20/23 [00:34<00:04, 1.65s/it]
Training Epoch: 5/5, step 103/117 completed (loss: 2.6404065920360154e-06): 87%|[34mβββββββββ [0m| 20/23 [00:35<00:04, 1.65s/it]
Training Epoch: 5/5, step 103/117 completed (loss: 2.6404065920360154e-06): 91%|[34mββββββββββ[0m| 21/23 [00:35<00:03, 1.65s/it]
Training Epoch: 5/5, step 104/117 completed (loss: 2.480675675542443e-06): 91%|[34mββββββββββ[0m| 21/23 [00:35<00:03, 1.65s/it]
Training Epoch: 5/5, step 105/117 completed (loss: 4.52182348453789e-06): 91%|[34mββββββββββ[0m| 21/23 [00:35<00:03, 1.65s/it]
Training Epoch: 5/5, step 106/117 completed (loss: 2.474715756761725e-06): 91%|[34mββββββββββ[0m| 21/23 [00:36<00:03, 1.65s/it]
Training Epoch: 5/5, step 107/117 completed (loss: 4.535041625786107e-06): 91%|[34mββββββββββ[0m| 21/23 [00:36<00:03, 1.65s/it]
Training Epoch: 5/5, step 108/117 completed (loss: 2.480675675542443e-06): 91%|[34mββββββββββ[0m| 21/23 [00:36<00:03, 1.65s/it]
Training Epoch: 5/5, step 108/117 completed (loss: 2.480675675542443e-06): 96%|[34mββββββββββ[0m| 22/23 [00:37<00:01, 1.65s/it]
Training Epoch: 5/5, step 109/117 completed (loss: 4.537697805062635e-06): 96%|[34mββββββββββ[0m| 22/23 [00:37<00:01, 1.65s/it]
Training Epoch: 5/5, step 110/117 completed (loss: 4.516516128205694e-06): 96%|[34mββββββββββ[0m| 22/23 [00:37<00:01, 1.65s/it]
Training Epoch: 5/5, step 111/117 completed (loss: 2.6511343094171025e-06): 96%|[34mββββββββββ[0m| 22/23 [00:37<00:01, 1.65s/it]
Training Epoch: 5/5, step 112/117 completed (loss: 2.568888930909452e-06): 96%|[34mββββββββββ[0m| 22/23 [00:38<00:01, 1.65s/it]
Training Epoch: 5/5, step 113/117 completed (loss: 2.7000098725693533e-06): 96%|[34mββββββββββ[0m| 22/23 [00:38<00:01, 1.65s/it]
Training Epoch: 5/5, step 113/117 completed (loss: 2.7000098725693533e-06): 100%|[34mββββββββββ[0m| 23/23 [00:38<00:00, 1.67s/it]
Training Epoch: 5/5, step 114/117 completed (loss: 4.2357341953902505e-06): 100%|[34mββββββββββ[0m| 23/23 [00:38<00:00, 1.67s/it]
Training Epoch: 5/5, step 115/117 completed (loss: 2.6058362436742755e-06): 100%|[34mββββββββββ[0m| 23/23 [00:39<00:00, 1.67s/it]
Training Epoch: 5/5, step 115/117 completed (loss: 2.6058362436742755e-06): : 24it [00:39, 1.36s/it]
Training Epoch: 5/5, step 116/117 completed (loss: 2.6141806301893666e-06): : 24it [00:39, 1.36s/it]
Training Epoch: 5/5, step 116/117 completed (loss: 2.6141806301893666e-06): : 24it [00:39, 1.65s/it] |
| Max CUDA memory allocated was 19 GB |
| Max CUDA memory reserved was 19 GB |
| Peak active CUDA memory was 19 GB |
| CUDA Malloc retries : 0 |
| CPU Total Peak Memory consumed during the train (max): 2 GB |
|
evaluating Epoch: 100%|██████████| 145/145 [00:21<00:00, 6.89it/s] |
| eval_ppl=tensor(1.5961, device='cuda:0') eval_epoch_loss=tensor(0.4675, device='cuda:0') |
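The reported eval perplexity is consistent with exponentiating the mean eval cross-entropy loss on the line above. A minimal sketch of that relation (the function name is illustrative, not from the training script):

```python
import math

def perplexity(mean_ce_loss: float) -> float:
    """Perplexity is the exponential of the mean cross-entropy loss."""
    return math.exp(mean_ce_loss)

# The log reports eval_epoch_loss=0.4675 and eval_ppl=1.5961;
# exp(0.4675) reproduces the reported perplexity to ~4 significant digits.
print(perplexity(0.4675))
```

Note this identity holds per reported loss/ppl pair; the `avg_*` summary keys below are averages over epochs, so `avg_train_prep` is not simply `exp(avg_train_loss)`.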
| we are about to save the PEFT modules |
| PEFT modules are saved in /data/colosseum_dataset/wm_data/finetuned_models/vlm_move_hanger directory |
| Epoch 5: train_perplexity=1.0000, train_epoch_loss=0.0000, epoch time 40.07555335666984s |
| Key: avg_train_prep, Value: 1.060967993736267 |
| Key: avg_train_loss, Value: 0.05565963617045781 |
| Key: avg_eval_prep, Value: 1.3878916501998901 |
| Key: avg_eval_loss, Value: 0.31500654518604276 |
| Key: avg_epoch_time, Value: 39.51795169655234 |
| Key: avg_checkpoint_time, Value: 0.2974603595212102 |
| we are about to save the PEFT modules |
| PEFT modules are saved in /data/colosseum_dataset/wm_data/finetuned_models/vlm_move_hanger directory |
| Traceback (most recent call last): |
| File "/home/zxn/Forewarn/vlm/llama-recipes/recipes/quickstart/finetuning/finetuning_wm.py", line 9, in <module> |
| fire.Fire(main) |
| File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/fire/core.py", line 135, in Fire |
| component_trace = _Fire(component, args, parsed_flag_args, context, name) |
| File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/fire/core.py", line 468, in _Fire |
| component, remaining_args = _CallAndUpdateTrace( |
| File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/fire/core.py", line 684, in _CallAndUpdateTrace |
| component = fn(*varargs, **kwargs) |
| File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/llama_recipes/finetuning_wm.py", line 694, in main |
| results, overall_acc = eval(model, train_dataloader, device_id, processor, dataset_config, split ='train') |
| UnboundLocalError: local variable 'device_id' referenced before assignment |
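An `UnboundLocalError` like this typically means the variable is assigned only on one conditional branch of the function (e.g. only when distributed training is enabled) and then read unconditionally. A minimal reproduction of the pattern and a fix sketch; the `enable_fsdp` flag and function names are illustrative, not the actual `llama_recipes` code:

```python
def eval_entry(enable_fsdp: bool):
    """Buggy pattern: device_id is bound only on one branch."""
    if enable_fsdp:
        device_id = 0  # e.g. derived from the local rank
    # When enable_fsdp is False, this raises UnboundLocalError.
    return device_id


def eval_entry_fixed(enable_fsdp: bool):
    """Fix sketch: bind a default before the conditional."""
    device_id = 0  # default device for the single-GPU path
    if enable_fsdp:
        device_id = 0  # overwritten from the local rank when distributed
    return device_id
```

Under this reading, the crash happens only at the post-training `eval(...)` call in `main`, which is why all five epochs and the PEFT checkpoint save complete before the process fails.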
| wandb: 🚀 View run vlm_lora_move_hanger at: https://wandb.ai/yukizhang0527-harbin-institute-of-technology/llama_recipes/runs/fk4jc4vy |
| wandb: Find logs at: wandb/run-20260409_101228-fk4jc4vy/logs |
| E0409 10:18:01.123575 140112953571136 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 0 (pid: 659266) of binary: /home/zxn/anaconda3/envs/dreamer/bin/python3.10 |
| Traceback (most recent call last): |
| File "/home/zxn/anaconda3/envs/dreamer/bin/torchrun", line 6, in <module> |
| sys.exit(main()) |
| File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper |
| return f(*args, **kwargs) |
| File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main |
| run(args) |
| File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run |
| elastic_launch( |
| File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__ |
| return launch_agent(self._config, self._entrypoint, list(args)) |
| File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent |
| raise ChildFailedError( |
| torch.distributed.elastic.multiprocessing.errors.ChildFailedError: |
| ============================================================ |
| /home/zxn/Forewarn/vlm/llama-recipes/recipes/quickstart/finetuning/finetuning_wm.py FAILED |
| ------------------------------------------------------------ |
| Failures: |
| <NO_OTHER_FAILURES> |
| ------------------------------------------------------------ |
| Root Cause (first observed failure): |
| [0]: |
| time : 2026-04-09_10:18:01 |
| host : node0029 |
| rank : 0 (local_rank: 0) |
| exitcode : 1 (pid: 659266) |
| error_file: <N/A> |
| traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html |
| ============================================================ |