[12/28 07:27:29][INFO] [Exp Name]: finetune_
[12/28 07:27:29][INFO] [GPU x Batch] = 1 x 32
[12/28 07:27:38][INFO] [Exp Name]: finetune_
[12/28 07:27:38][INFO] [GPU x Batch] = 1 x 32
[12/28 18:25:12][INFO] [Exp Name]: finetune_
[12/28 18:25:12][INFO] [GPU x Batch] = 1 x 32
[12/28 18:26:34][INFO] [Exp Name]: finetune_
[12/28 18:26:34][INFO] [GPU x Batch] = 1 x 32
[12/28 18:26:41][INFO] [Exp Name]: finetune_
[12/28 18:26:41][INFO] [GPU x Batch] = 1 x 32
[12/28 18:29:59][INFO] [Exp Name]: finetune_
[12/28 18:29:59][INFO] [GPU x Batch] = 1 x 32
[12/28 18:30:02][INFO] [AMASS] Loading from inputs/AMASS/hmr4d_support/smplxpose_v2.pth ...
[12/28 18:32:12][INFO] [Exp Name]: finetune_
[12/28 18:32:12][INFO] [GPU x Batch] = 1 x 32
[12/28 18:32:14][INFO] [AMASS] Loading from inputs/AMASS/hmr4d_support/smplxpose_v2.pth ...
[12/28 18:35:21][INFO] [Exp Name]: finetune_
[12/28 18:35:21][INFO] [GPU x Batch] = 1 x 32
[12/28 18:35:23][INFO] [AMASS] Loading from inputs/AMASS/hmr4d_support/smplxpose_v2.pth ...
[12/28 18:35:23][WARNING] [Train Dataset] Skipping amass_train_v11 due to error: Error in call to target 'genmo.datasets.pure_motion.amass.AmassDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.train.amass_train_v11
[12/28 18:35:24][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 18:35:24][WARNING] [Train Dataset] Skipping humanml3d_static_train due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.train.humanml3d_static_train
[12/28 18:35:25][INFO] [BEDLAM] Loading from inputs/BEDLAM/hmr4d_support
[12/28 18:35:25][WARNING] [Train Dataset] Skipping bedlam_v2 due to error: Error in call to target 'genmo.datasets.bedlam.bedlam.BedlamDatasetV2': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.train.bedlam_v2
[12/28 18:35:25][INFO] [H36M] Loading from inputs/H36M/hmr4d_support/smplxpose_v1.pt ...
[12/28 18:35:25][WARNING] [Train Dataset] Skipping h36m_v1 due to error: Error in call to target 'genmo.datasets.h36m.h36m.H36mSmplDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.train.h36m_v1
[12/28 18:35:25][WARNING] [Train Dataset] Skipping 3dpw_v1 due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_train.ThreedpwSmplDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.train.3dpw_v1
[12/28 18:35:25][WARNING] [Train Dataset] Skipping 3dpw_occ_v1 due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_train.ThreedpwOccSmplDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.train.3dpw_occ_v1
[12/28 18:35:25][WARNING] [Train Dataset] Skipping aistpp_train due to error: Error locating target 'genmo.datasets.aistplusplus.aistplusplus.AISTPlusPlusSmplDataset', set env var HYDRA_FULL_ERROR=1 to see chained exception. full_key: dataset_opts.train.aistpp_train
[12/28 18:35:25][WARNING] [Train Dataset] Skipping beat2_static_train due to error: Error locating target 'genmo.datasets.beat2.beat2.BEAT2SmplDataset', set env var HYDRA_FULL_ERROR=1 to see chained exception. full_key: dataset_opts.train.beat2_static_train
[12/28 18:35:25][WARNING] [Train Dataset] Skipping unity due to error: Error locating target 'genmo.datasets.unity_dataset.UnityDataset', set env var HYDRA_FULL_ERROR=1 to see chained exception. full_key: dataset_opts.train.unity
[12/28 18:38:07][INFO] [Exp Name]: finetune_
[12/28 18:38:07][INFO] [GPU x Batch] = 1 x 32
[12/28 18:38:07][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 18:38:07][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 18:38:07][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/28 18:38:07][INFO]
[12/28 18:38:10][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 18:38:10][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 18:38:10][INFO] [EMDB] Full sequence, split=1
[12/28 18:38:10][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 18:38:10][INFO] [EMDB] Full sequence, split=2
[12/28 18:38:10][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 18:38:10][INFO] [RICH] Full sequence, Test
[12/28 18:38:10][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 18:38:10][INFO] [3DPW] Full sequence
[12/28 18:38:10][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 18:38:10][INFO] [3DPW_OCC] Full sequence
[12/28 18:38:10][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 18:38:10][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 18:38:10][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 18:38:10][INFO]
[12/28 18:38:14][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 18:38:37][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_0/checkpoints'
[12/28 18:40:27][INFO] [Exp Name]: finetune_
[12/28 18:40:27][INFO] [GPU x Batch] = 1 x 32
[12/28 18:40:27][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 18:40:27][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 18:40:27][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/28 18:40:27][INFO]
[12/28 18:40:30][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 18:40:30][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 18:40:30][INFO] [EMDB] Full sequence, split=1
[12/28 18:40:30][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 18:40:30][INFO] [EMDB] Full sequence, split=2
[12/28 18:40:30][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 18:40:30][INFO] [RICH] Full sequence, Test
[12/28 18:40:30][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 18:40:30][INFO] [3DPW] Full sequence
[12/28 18:40:30][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 18:40:30][INFO] [3DPW_OCC] Full sequence
[12/28 18:40:30][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 18:40:30][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 18:40:30][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 18:40:30][INFO]
[12/28 18:40:34][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 18:40:58][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_1/checkpoints'
[12/28 18:46:06][INFO] [Exp Name]: finetune_
[12/28 18:46:06][INFO] [GPU x Batch] = 1 x 32
[12/28 18:46:06][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 18:46:06][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 18:46:06][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/28 18:46:06][INFO]
[12/28 18:46:09][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 18:46:09][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 18:46:09][INFO] [EMDB] Full sequence, split=1
[12/28 18:46:09][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 18:46:09][INFO] [EMDB] Full sequence, split=2
[12/28 18:46:09][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 18:46:09][INFO] [RICH] Full sequence, Test
[12/28 18:46:09][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 18:46:09][INFO] [3DPW] Full sequence
[12/28 18:46:09][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 18:46:09][INFO] [3DPW_OCC] Full sequence
[12/28 18:46:09][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 18:46:09][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 18:46:09][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 18:46:09][INFO]
[12/28 18:46:13][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 18:46:26][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_2/checkpoints'
[12/28 18:47:12][INFO] [Exp Name]: finetune_
[12/28 18:47:12][INFO] [GPU x Batch] = 1 x 32
[12/28 18:47:12][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 18:47:12][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 18:47:12][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/28 18:47:12][INFO]
[12/28 18:47:15][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 18:47:15][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 18:47:15][INFO] [EMDB] Full sequence, split=1
[12/28 18:47:15][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 18:47:15][INFO] [EMDB] Full sequence, split=2
[12/28 18:47:15][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 18:47:15][INFO] [RICH] Full sequence, Test
[12/28 18:47:15][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 18:47:15][INFO] [3DPW] Full sequence
[12/28 18:47:15][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 18:47:15][INFO] [3DPW_OCC] Full sequence
[12/28 18:47:15][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 18:47:15][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 18:47:15][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 18:47:15][INFO]
[12/28 18:47:19][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 18:47:25][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_3/checkpoints'
[12/28 18:49:40][INFO] [Exp Name]: finetune_
[12/28 18:49:40][INFO] [GPU x Batch] = 1 x 32
[12/28 18:49:40][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 18:49:40][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 18:49:40][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/28 18:49:40][INFO]
[12/28 18:49:42][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 18:49:42][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 18:49:42][INFO] [EMDB] Full sequence, split=1
[12/28 18:49:42][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 18:49:42][INFO] [EMDB] Full sequence, split=2
[12/28 18:49:42][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 18:49:42][INFO] [RICH] Full sequence, Test
[12/28 18:49:42][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 18:49:42][INFO] [3DPW] Full sequence
[12/28 18:49:42][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 18:49:42][INFO] [3DPW_OCC] Full sequence
[12/28 18:49:42][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 18:49:42][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 18:49:42][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 18:49:42][INFO]
[12/28 18:49:47][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 18:49:53][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_4/checkpoints'
[12/28 18:55:06][INFO] [Exp Name]: finetune_
[12/28 18:55:06][INFO] [GPU x Batch] = 1 x 32
[12/28 18:55:06][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 18:55:06][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 18:55:06][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/28 18:55:06][INFO]
[12/28 18:55:09][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 18:55:09][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 18:55:09][INFO] [EMDB] Full sequence, split=1
[12/28 18:55:09][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 18:55:09][INFO] [EMDB] Full sequence, split=2
[12/28 18:55:09][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 18:55:09][INFO] [RICH] Full sequence, Test
[12/28 18:55:09][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 18:55:09][INFO] [3DPW] Full sequence
[12/28 18:55:09][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 18:55:09][INFO] [3DPW_OCC] Full sequence
[12/28 18:55:09][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 18:55:09][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 18:55:09][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 18:55:09][INFO]
[12/28 18:55:13][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 18:55:27][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_5/checkpoints'
[12/28 18:58:14][INFO] [Exp Name]: finetune_
[12/28 18:58:14][INFO] [GPU x Batch] = 1 x 32
[12/28 18:58:14][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 18:58:14][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 18:58:14][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/28 18:58:14][INFO]
[12/28 18:58:16][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 18:58:16][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 18:58:16][INFO] [EMDB] Full sequence, split=1
[12/28 18:58:16][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 18:58:16][INFO] [EMDB] Full sequence, split=2
[12/28 18:58:16][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 18:58:16][INFO] [RICH] Full sequence, Test
[12/28 18:58:16][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 18:58:16][INFO] [3DPW] Full sequence
[12/28 18:58:16][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 18:58:16][INFO] [3DPW_OCC] Full sequence
[12/28 18:58:16][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 18:58:16][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 18:58:16][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 18:58:16][INFO]
[12/28 18:58:21][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 18:58:30][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_6/checkpoints'
[12/28 19:00:20][INFO] [Exp Name]: finetune_
[12/28 19:00:20][INFO] [GPU x Batch] = 1 x 32
[12/28 19:00:20][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 19:00:20][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 19:00:20][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/28 19:00:20][INFO]
[12/28 19:00:23][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 19:00:23][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 19:00:23][INFO] [EMDB] Full sequence, split=1
[12/28 19:00:23][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 19:00:23][INFO] [EMDB] Full sequence, split=2
[12/28 19:00:23][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 19:00:23][INFO] [RICH] Full sequence, Test
[12/28 19:00:23][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 19:00:23][INFO] [3DPW] Full sequence
[12/28 19:00:23][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 19:00:23][INFO] [3DPW_OCC] Full sequence
[12/28 19:00:23][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 19:00:23][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 19:00:23][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 19:00:23][INFO]
[12/28 19:00:28][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 19:00:41][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_7/checkpoints'
[12/28 19:05:28][INFO] [Exp Name]: finetune_
[12/28 19:05:28][INFO] [GPU x Batch] = 1 x 32
[12/28 19:05:28][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 19:05:28][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 19:05:28][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/28 19:05:28][INFO]
[12/28 19:05:31][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 19:05:31][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 19:05:31][INFO] [EMDB] Full sequence, split=1
[12/28 19:05:31][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 19:05:31][INFO] [EMDB] Full sequence, split=2
[12/28 19:05:31][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 19:05:31][INFO] [RICH] Full sequence, Test
[12/28 19:05:31][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 19:05:31][INFO] [3DPW] Full sequence
[12/28 19:05:31][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 19:05:31][INFO] [3DPW_OCC] Full sequence
[12/28 19:05:31][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 19:05:31][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 19:05:31][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 19:05:31][INFO]
[12/28 19:05:35][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 19:05:53][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_8/checkpoints'
[12/28 19:07:04][INFO] [Exp Name]: finetune_
[12/28 19:07:04][INFO] [GPU x Batch] = 1 x 32
[12/28 19:07:04][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 19:07:04][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 19:07:04][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/28 19:07:04][INFO]
[12/28 19:07:07][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 19:07:07][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 19:07:08][INFO] [EMDB] Full sequence, split=1
[12/28 19:07:08][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 19:07:08][INFO] [EMDB] Full sequence, split=2
[12/28 19:07:08][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 19:07:08][INFO] [RICH] Full sequence, Test
[12/28 19:07:08][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 19:07:08][INFO] [3DPW] Full sequence
[12/28 19:07:08][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 19:07:08][INFO] [3DPW_OCC] Full sequence
[12/28 19:07:08][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 19:07:08][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 19:07:08][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 19:07:08][INFO]
[12/28 19:07:13][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 19:07:28][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_9/checkpoints'
[12/28 19:11:11][INFO] [Exp Name]: finetune_
[12/28 19:11:11][INFO] [GPU x Batch] = 1 x 32
[12/28 19:11:11][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 19:11:11][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 19:11:11][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/28 19:11:11][INFO]
[12/28 19:11:14][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 19:11:14][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 19:11:14][INFO] [EMDB] Full sequence, split=1
[12/28 19:11:14][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 19:11:14][INFO] [EMDB] Full sequence, split=2
[12/28 19:11:14][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 19:11:14][INFO] [RICH] Full sequence, Test
[12/28 19:11:14][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 19:11:14][INFO] [3DPW] Full sequence
[12/28 19:11:14][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 19:11:14][INFO] [3DPW_OCC] Full sequence
[12/28 19:11:14][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 19:11:14][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 19:11:14][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 19:11:14][INFO]
[12/28 19:11:20][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 19:11:31][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_10/checkpoints'
[12/28 19:13:44][INFO] [Exp Name]: finetune_
[12/28 19:13:44][INFO] [GPU x Batch] = 1 x 32
[12/28 19:13:44][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train
[12/28 19:13:44][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/28 19:13:44][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/28 19:13:44][INFO]
[12/28 19:13:47][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 19:13:47][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval [12/28 19:13:47][INFO] [EMDB] Full sequence, split=1 [12/28 19:13:47][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest [12/28 19:13:47][INFO] [EMDB] Full sequence, split=2 [12/28 19:13:47][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest [12/28 19:13:47][INFO] [RICH] Full sequence, Test [12/28 19:13:47][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test [12/28 19:13:47][INFO] [3DPW] Full sequence [12/28 19:13:47][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest [12/28 19:13:47][INFO] [3DPW_OCC] Full sequence [12/28 19:13:47][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest [12/28 19:13:47][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train [12/28 19:13:47][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset [12/28 
19:13:47][INFO] [12/28 19:13:52][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt [12/28 19:14:11][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_11/checkpoints' [12/28 19:16:44][INFO] [Exp Name]: finetune_ [12/28 19:16:44][INFO] [GPU x Batch] = 1 x 32 [12/28 19:16:44][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train [12/28 19:16:44][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset [12/28 19:16:44][INFO] [Train Dataset][All]: ConcatDataset size=1 [12/28 19:16:44][INFO] [12/28 19:16:46][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ... [12/28 19:16:46][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval [12/28 19:16:46][INFO] [EMDB] Full sequence, split=1 [12/28 19:16:46][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest [12/28 19:16:46][INFO] [EMDB] Full sequence, split=2 [12/28 19:16:46][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest [12/28 19:16:46][INFO] [RICH] Full sequence, Test [12/28 19:16:46][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test [12/28 19:16:46][INFO] [3DPW] Full sequence [12/28 19:16:46][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to 
target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest [12/28 19:16:46][INFO] [3DPW_OCC] Full sequence [12/28 19:16:46][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest [12/28 19:16:46][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train [12/28 19:16:46][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset [12/28 19:16:46][INFO] [12/28 19:16:50][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt [12/28 19:17:07][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_12/checkpoints' [12/28 19:18:17][INFO] [Exp Name]: finetune_ [12/28 19:18:17][INFO] [GPU x Batch] = 1 x 32 [12/28 19:18:17][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train [12/28 19:18:17][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset [12/28 19:18:17][INFO] [Train Dataset][All]: ConcatDataset size=1 [12/28 19:18:17][INFO] [12/28 19:18:20][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ... 
[12/28 19:18:20][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval [12/28 19:18:20][INFO] [EMDB] Full sequence, split=1 [12/28 19:18:20][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest [12/28 19:18:20][INFO] [EMDB] Full sequence, split=2 [12/28 19:18:20][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest [12/28 19:18:20][INFO] [RICH] Full sequence, Test [12/28 19:18:20][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test [12/28 19:18:20][INFO] [3DPW] Full sequence [12/28 19:18:20][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest [12/28 19:18:20][INFO] [3DPW_OCC] Full sequence [12/28 19:18:20][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest [12/28 19:18:20][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train [12/28 19:18:20][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset [12/28 
19:18:20][INFO] [12/28 19:18:24][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt [12/28 19:18:41][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_13/checkpoints' [12/28 19:19:09][INFO] Start Fitting... [12/28 19:19:19][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead. [12/28 19:19:19][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/data.py:106: Total length of `DataLoader` across ranks is zero. Please make sure this was your intention. [12/28 19:19:19][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/data.py:106: Total length of `CombinedLoader` across ranks is zero. Please make sure this was your intention. [12/28 19:19:21][INFO] End of script. [12/28 19:20:37][INFO] [Exp Name]: finetune_ [12/28 19:20:37][INFO] [GPU x Batch] = 1 x 1 [12/28 19:20:37][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train [12/28 19:20:37][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset [12/28 19:20:37][INFO] [Train Dataset][All]: ConcatDataset size=1 [12/28 19:20:37][INFO] [12/28 19:20:41][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ... 
[12/28 19:20:41][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval [12/28 19:20:41][INFO] [EMDB] Full sequence, split=1 [12/28 19:20:41][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest [12/28 19:20:41][INFO] [EMDB] Full sequence, split=2 [12/28 19:20:41][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest [12/28 19:20:41][INFO] [RICH] Full sequence, Test [12/28 19:20:41][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test [12/28 19:20:41][INFO] [3DPW] Full sequence [12/28 19:20:41][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest [12/28 19:20:41][INFO] [3DPW_OCC] Full sequence [12/28 19:20:41][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest [12/28 19:20:41][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train [12/28 19:20:41][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset [12/28 
19:20:41][INFO] [12/28 19:20:46][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt [12/28 19:20:59][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_14/checkpoints' [12/28 19:21:27][INFO] Start Fitting... [12/28 19:21:50][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead. [12/28 19:23:35][INFO] [Exp Name]: finetune_ [12/28 19:23:35][INFO] [GPU x Batch] = 1 x 1 [12/28 19:23:35][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train [12/28 19:23:35][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset [12/28 19:23:35][INFO] [Train Dataset][All]: ConcatDataset size=1 [12/28 19:23:35][INFO] [12/28 19:23:38][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ... 
[12/28 19:23:38][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval [12/28 19:23:38][INFO] [EMDB] Full sequence, split=1 [12/28 19:23:38][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest [12/28 19:23:38][INFO] [EMDB] Full sequence, split=2 [12/28 19:23:38][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest [12/28 19:23:38][INFO] [RICH] Full sequence, Test [12/28 19:23:38][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test [12/28 19:23:38][INFO] [3DPW] Full sequence [12/28 19:23:38][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest [12/28 19:23:38][INFO] [3DPW_OCC] Full sequence [12/28 19:23:38][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest [12/28 19:23:38][INFO] [UnityDataset] Initialized with root=/root/miko/puni/train/GVHMR/processed_dataset, split=train [12/28 19:23:38][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset [12/28 
19:23:38][INFO] [12/28 19:23:43][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt [12/28 19:23:59][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_15/checkpoints' [12/28 19:24:26][INFO] Start Fitting... [12/28 19:24:38][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead. [12/28 19:24:38][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_ [12/28 19:30:56][INFO] [Exp Name]: finetune_ [12/28 19:30:56][INFO] [GPU x Batch] = 1 x 1 [12/28 19:30:56][INFO] [UnityDataset] Initialized with 1 sequences from root=/root/miko/puni/train/GVHMR/processed_dataset/gvhmr, split=train [12/28 19:30:56][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset [12/28 19:30:56][INFO] [Train Dataset][All]: ConcatDataset size=1 [12/28 19:30:56][INFO] [12/28 19:30:59][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ... 
[12/28 19:30:59][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval [12/28 19:30:59][INFO] [EMDB] Full sequence, split=1 [12/28 19:30:59][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest [12/28 19:30:59][INFO] [EMDB] Full sequence, split=2 [12/28 19:30:59][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest [12/28 19:30:59][INFO] [RICH] Full sequence, Test [12/28 19:30:59][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test [12/28 19:30:59][INFO] [3DPW] Full sequence [12/28 19:30:59][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest [12/28 19:30:59][INFO] [3DPW_OCC] Full sequence [12/28 19:30:59][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest [12/28 19:30:59][INFO] [UnityDataset] Initialized with 1 sequences from root=/root/miko/puni/train/GVHMR/processed_dataset/gvhmr, split=train [12/28 19:30:59][INFO] [Val Dataset][7/7]: name=unity_val, size=1, 
genmo.datasets.unity_dataset.UnityDataset [12/28 19:30:59][INFO] [12/28 19:31:04][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt [12/28 19:31:24][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_16/checkpoints' [12/28 19:31:51][INFO] Start Fitting... [12/28 19:32:00][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead. [12/28 19:32:00][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_ [12/28 19:43:06][INFO] [Exp Name]: finetune_ [12/28 19:43:06][INFO] [GPU x Batch] = 1 x 1 [12/28 19:43:06][INFO] [UnityDataset] Found 5 sequences. [12/28 19:43:06][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset [12/28 19:43:06][INFO] [Train Dataset][All]: ConcatDataset size=5 [12/28 19:43:06][INFO] [12/28 19:43:09][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ... 
[12/28 19:43:09][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval [12/28 19:43:09][INFO] [EMDB] Full sequence, split=1 [12/28 19:43:09][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest [12/28 19:43:09][INFO] [EMDB] Full sequence, split=2 [12/28 19:43:09][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest [12/28 19:43:09][INFO] [RICH] Full sequence, Test [12/28 19:43:09][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test [12/28 19:43:09][INFO] [3DPW] Full sequence [12/28 19:43:09][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest [12/28 19:43:09][INFO] [3DPW_OCC] Full sequence [12/28 19:43:09][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest [12/28 19:43:09][INFO] [UnityDataset] Found 5 sequences. 
[12/28 19:43:09][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset [12/28 19:43:09][INFO] [12/28 19:43:15][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt [12/28 19:43:35][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_17/checkpoints' [12/28 19:43:58][INFO] Start Fitting... [12/28 19:44:11][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead. [12/28 19:44:11][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_ [12/28 19:53:21][INFO] [Exp Name]: finetune_ [12/28 19:53:21][INFO] [GPU x Batch] = 1 x 1 [12/28 19:53:21][INFO] [UnityDataset] Found 5 sequences. [12/28 19:53:21][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset [12/28 19:53:21][INFO] [Train Dataset][All]: ConcatDataset size=5 [12/28 19:53:21][INFO] [12/28 19:53:24][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ... 
[12/28 19:53:24][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval [12/28 19:53:24][INFO] [EMDB] Full sequence, split=1 [12/28 19:53:24][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest [12/28 19:53:24][INFO] [EMDB] Full sequence, split=2 [12/28 19:53:24][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest [12/28 19:53:24][INFO] [RICH] Full sequence, Test [12/28 19:53:24][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test [12/28 19:53:24][INFO] [3DPW] Full sequence [12/28 19:53:24][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest [12/28 19:53:24][INFO] [3DPW_OCC] Full sequence [12/28 19:53:24][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest [12/28 19:53:24][INFO] [UnityDataset] Found 5 sequences. 
[12/28 19:53:24][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset [12/28 19:53:24][INFO] [12/28 19:53:31][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt [12/28 19:53:49][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_18/checkpoints' [12/28 19:54:14][INFO] Start Fitting... [12/28 19:54:25][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead. [12/28 19:54:25][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_ [12/28 19:54:47][INFO] [Exp Name]: finetune_ [12/28 19:54:47][INFO] [GPU x Batch] = 1 x 1 [12/28 19:54:48][INFO] [UnityDataset] Found 5 sequences. [12/28 19:54:48][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset [12/28 19:54:48][INFO] [Train Dataset][All]: ConcatDataset size=5 [12/28 19:54:48][INFO] [12/28 19:54:50][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ... 
[12/28 19:54:50][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval [12/28 19:54:50][INFO] [EMDB] Full sequence, split=1 [12/28 19:54:50][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest [12/28 19:54:50][INFO] [EMDB] Full sequence, split=2 [12/28 19:54:50][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest [12/28 19:54:50][INFO] [RICH] Full sequence, Test [12/28 19:54:50][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test [12/28 19:54:50][INFO] [3DPW] Full sequence [12/28 19:54:50][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest [12/28 19:54:50][INFO] [3DPW_OCC] Full sequence [12/28 19:54:50][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest [12/28 19:54:50][INFO] [UnityDataset] Found 5 sequences. 
[12/28 19:54:50][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset [12/28 19:54:50][INFO] [12/28 19:54:55][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt [12/28 19:55:15][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_19/checkpoints' [12/28 19:55:39][INFO] Start Fitting... [12/28 19:55:51][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead. [12/28 19:55:51][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_ [12/28 19:58:54][INFO] [Exp Name]: finetune_ [12/28 19:58:54][INFO] [GPU x Batch] = 1 x 1 [12/28 19:58:54][INFO] [UnityDataset] Found 5 sequences. [12/28 19:58:54][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset [12/28 19:58:54][INFO] [Train Dataset][All]: ConcatDataset size=5 [12/28 19:58:54][INFO] [12/28 19:58:57][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ... 
[12/28 19:58:57][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval [12/28 19:58:57][INFO] [EMDB] Full sequence, split=1 [12/28 19:58:57][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest [12/28 19:58:57][INFO] [EMDB] Full sequence, split=2 [12/28 19:58:57][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest [12/28 19:58:57][INFO] [RICH] Full sequence, Test [12/28 19:58:57][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test [12/28 19:58:57][INFO] [3DPW] Full sequence [12/28 19:58:57][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest [12/28 19:58:57][INFO] [3DPW_OCC] Full sequence [12/28 19:58:57][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest [12/28 19:58:57][INFO] [UnityDataset] Found 5 sequences. 
[12/28 19:58:57][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset [12/28 19:58:57][INFO] [12/28 19:59:02][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt [12/28 19:59:20][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_20/checkpoints' [12/28 19:59:43][INFO] Start Fitting... [12/28 19:59:54][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead. [12/28 19:59:55][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_ [12/28 20:00:16][INFO] [Exp Name]: finetune_ [12/28 20:00:16][INFO] [GPU x Batch] = 1 x 1 [12/28 20:00:16][INFO] [UnityDataset] Found 5 sequences. [12/28 20:00:16][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset [12/28 20:00:16][INFO] [Train Dataset][All]: ConcatDataset size=5 [12/28 20:00:16][INFO] [12/28 20:00:19][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ... 
[12/28 20:00:19][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval [12/28 20:00:19][INFO] [EMDB] Full sequence, split=1 [12/28 20:00:19][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest [12/28 20:00:19][INFO] [EMDB] Full sequence, split=2 [12/28 20:00:19][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest [12/28 20:00:19][INFO] [RICH] Full sequence, Test [12/28 20:00:19][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test [12/28 20:00:19][INFO] [3DPW] Full sequence [12/28 20:00:19][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest [12/28 20:00:19][INFO] [3DPW_OCC] Full sequence [12/28 20:00:19][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest [12/28 20:00:19][INFO] [UnityDataset] Found 5 sequences. 
[12/28 20:00:19][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:00:19][INFO]
[12/28 20:00:24][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 20:00:40][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_21/checkpoints'
[12/28 20:01:09][INFO] Start Fitting...
[12/28 20:01:21][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 20:01:21][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 20:04:52][INFO] [Exp Name]: finetune_
[12/28 20:04:52][INFO] [GPU x Batch] = 1 x 1
[12/28 20:04:52][INFO] [UnityDataset] Found 5 sequences.
[12/28 20:04:52][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:04:52][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 20:04:52][INFO]
[12/28 20:04:55][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 20:04:55][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 20:04:55][INFO] [EMDB] Full sequence, split=1
[12/28 20:04:55][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 20:04:55][INFO] [EMDB] Full sequence, split=2
[12/28 20:04:55][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 20:04:55][INFO] [RICH] Full sequence, Test
[12/28 20:04:55][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 20:04:55][INFO] [3DPW] Full sequence
[12/28 20:04:55][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 20:04:55][INFO] [3DPW_OCC] Full sequence
[12/28 20:04:55][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 20:04:55][INFO] [UnityDataset] Found 5 sequences.
[12/28 20:04:55][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:04:55][INFO]
[12/28 20:05:00][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 20:05:15][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_22/checkpoints'
[12/28 20:05:38][INFO] Start Fitting...
[12/28 20:05:51][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 20:05:51][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 20:07:48][INFO] [Exp Name]: finetune_
[12/28 20:07:48][INFO] [GPU x Batch] = 1 x 1
[12/28 20:07:48][INFO] [UnityDataset] Found 5 sequences.
[12/28 20:07:48][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:07:48][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 20:07:48][INFO]
[12/28 20:07:51][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 20:07:51][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 20:07:51][INFO] [EMDB] Full sequence, split=1
[12/28 20:07:51][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 20:07:51][INFO] [EMDB] Full sequence, split=2
[12/28 20:07:51][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 20:07:51][INFO] [RICH] Full sequence, Test
[12/28 20:07:51][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 20:07:51][INFO] [3DPW] Full sequence
[12/28 20:07:51][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 20:07:51][INFO] [3DPW_OCC] Full sequence
[12/28 20:07:51][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 20:07:51][INFO] [UnityDataset] Found 5 sequences.
[12/28 20:07:51][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:07:51][INFO]
[12/28 20:07:56][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 20:08:15][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_23/checkpoints'
[12/28 20:08:40][INFO] Start Fitting...
[12/28 20:08:50][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 20:08:50][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 20:08:52][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return F.conv1d(input, weight, bias, self.stride,
[12/28 20:08:54][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/28 20:08:58][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 20:08:58][INFO] ✅[FIT][Epoch 0] finished! 00:08→14:29 | loss_epoch=132
[12/28 20:08:58][INFO] 🚀[FIT][Epoch 1] Data: unity Experiment: finetune_
[12/28 20:09:02][INFO] ✅[FIT][Epoch 1] finished! 00:12→10:31 | loss_epoch=125
[12/28 20:09:02][INFO] 🚀[FIT][Epoch 2] Data: unity Experiment: finetune_
[12/28 20:09:07][INFO] ✅[FIT][Epoch 2] finished! 00:17→09:34 | loss_epoch=445
[12/28 20:09:07][INFO] 🚀[FIT][Epoch 3] Data: unity Experiment: finetune_
[12/28 20:09:14][INFO] ✅[FIT][Epoch 3] finished! 00:25→10:09 | loss_epoch=52.7
[12/28 20:09:14][INFO] 🚀[FIT][Epoch 4] Data: unity Experiment: finetune_
[12/28 20:09:19][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 20:28:58][INFO] [Exp Name]: finetune_
[12/28 20:28:58][INFO] [GPU x Batch] = 1 x 1
[12/28 20:28:58][INFO] [UnityDataset] Found 5 sequences.
[12/28 20:28:58][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:28:58][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 20:28:58][INFO]
[12/28 20:29:02][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
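The lr_scheduler warning above indicates the training loop calls the scheduler's step() before the optimizer's. A minimal sketch of the order PyTorch expects follows; the model, optimizer, and scheduler here are illustrative placeholders, not the GENMO fine-tuning setup:

```python
import torch

# Placeholder model/optimizer/scheduler, only to demonstrate the call order.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

for _ in range(2):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 4)).pow(2).mean()
    loss.backward()
    optimizer.step()   # update the weights first...
    scheduler.step()   # ...then advance the LR schedule
```

Calling scheduler.step() first would silently skip the first value of the LR schedule, which is exactly what the warning flags.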
[12/28 20:29:02][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 20:29:02][INFO] [EMDB] Full sequence, split=1
[12/28 20:29:02][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 20:29:02][INFO] [EMDB] Full sequence, split=2
[12/28 20:29:02][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 20:29:02][INFO] [RICH] Full sequence, Test
[12/28 20:29:02][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 20:29:02][INFO] [3DPW] Full sequence
[12/28 20:29:02][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 20:29:02][INFO] [3DPW_OCC] Full sequence
[12/28 20:29:02][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 20:29:02][INFO] [UnityDataset] Found 5 sequences.
[12/28 20:29:02][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:29:02][INFO]
[12/28 20:29:07][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 20:29:32][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_24/checkpoints'
[12/28 20:29:56][INFO] Start Fitting...
[12/28 20:30:07][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 20:30:07][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 20:30:08][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return F.conv1d(input, weight, bias, self.stride,
[12/28 20:30:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/28 20:30:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 20:30:14][INFO] ✅[FIT][Epoch 0] finished! 00:07→12:32 | loss_epoch=132
[12/28 20:30:14][INFO] 🚀[FIT][Epoch 1] Data: unity Experiment: finetune_
[12/28 20:30:18][INFO] ✅[FIT][Epoch 1] finished! 00:12→10:06 | loss_epoch=125
[12/28 20:30:18][INFO] 🚀[FIT][Epoch 2] Data: unity Experiment: finetune_
[12/28 20:30:24][INFO] ✅[FIT][Epoch 2] finished! 00:18→09:46 | loss_epoch=445
[12/28 20:30:24][INFO] 🚀[FIT][Epoch 3] Data: unity Experiment: finetune_
[12/28 20:30:30][INFO] ✅[FIT][Epoch 3] finished! 00:23→09:26 | loss_epoch=52.7
[12/28 20:30:30][INFO] 🚀[FIT][Epoch 4] Data: unity Experiment: finetune_
[12/28 20:30:35][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 20:34:09][INFO] [Exp Name]: finetune_
[12/28 20:34:09][INFO] [GPU x Batch] = 1 x 1
[12/28 20:34:09][INFO] [UnityDataset] Found 5 sequences.
[12/28 20:34:09][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:34:09][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 20:34:09][INFO]
[12/28 20:34:13][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
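The recurring 'val_dataloader does not have many workers' warning suggests raising num_workers on the validation DataLoader. A generic sketch, assuming a stand-in dataset (11 is simply the CPU count Lightning reports for this machine, and num_workers=0 is used below only so the sketch runs anywhere):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in validation dataset; in the actual run this would be unity_val.
val_ds = TensorDataset(torch.arange(10, dtype=torch.float32).unsqueeze(1))

# Per the warning, num_workers=11 would be the value to try on this machine;
# 0 is kept here so the example does not depend on multiprocessing support.
val_loader = DataLoader(val_ds, batch_size=2, num_workers=0, pin_memory=True)

n_batches = sum(1 for _ in val_loader)
```

More workers overlap host-side data loading with GPU compute; the right count is workload-dependent, so treat 11 as a starting point rather than a rule.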
[12/28 20:34:13][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 20:34:13][INFO] [EMDB] Full sequence, split=1
[12/28 20:34:13][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 20:34:13][INFO] [EMDB] Full sequence, split=2
[12/28 20:34:13][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 20:34:13][INFO] [RICH] Full sequence, Test
[12/28 20:34:13][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 20:34:13][INFO] [3DPW] Full sequence
[12/28 20:34:13][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 20:34:13][INFO] [3DPW_OCC] Full sequence
[12/28 20:34:13][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 20:34:13][INFO] [UnityDataset] Found 5 sequences.
[12/28 20:34:13][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:34:13][INFO]
[12/28 20:34:18][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 20:34:43][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_0/checkpoints'
[12/28 20:35:08][INFO] Start Fitting...
[12/28 20:35:21][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 20:35:21][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 20:35:23][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return F.conv1d(input, weight, bias, self.stride,
[12/28 20:35:25][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/28 20:35:29][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 20:35:29][INFO] ✅[FIT][Epoch 0] finished! 00:08→14:20 | loss_epoch=132
[12/28 20:35:29][INFO] 🚀[FIT][Epoch 1] Data: unity Experiment: finetune_
[12/28 20:35:33][INFO] ✅[FIT][Epoch 1] finished! 00:13→10:52 | loss_epoch=125
[12/28 20:35:33][INFO] 🚀[FIT][Epoch 2] Data: unity Experiment: finetune_
[12/28 20:35:39][INFO] ✅[FIT][Epoch 2] finished! 00:19→10:27 | loss_epoch=445
[12/28 20:35:39][INFO] 🚀[FIT][Epoch 3] Data: unity Experiment: finetune_
[12/28 20:35:45][INFO] ✅[FIT][Epoch 3] finished! 00:24→09:58 | loss_epoch=52.7
[12/28 20:35:45][INFO] 🚀[FIT][Epoch 4] Data: unity Experiment: finetune_
[12/28 20:35:50][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 20:36:16][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:36:16][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:36:16][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:36:16][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:36:16][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:36:16][INFO] ✅[FIT][Epoch 4] finished! 00:55→17:40 | loss_epoch=53.2
[12/28 20:36:16][INFO] 🚀[FIT][Epoch 5] Data: unity Experiment: finetune_
[12/28 20:36:21][INFO] ✅[FIT][Epoch 5] finished! 01:01→15:58 | loss_epoch=24.8
[12/28 20:36:21][INFO] 🚀[FIT][Epoch 6] Data: unity Experiment: finetune_
[12/28 20:36:27][INFO] ✅[FIT][Epoch 6] finished! 01:06→14:46 | loss_epoch=25
[12/28 20:36:27][INFO] 🚀[FIT][Epoch 7] Data: unity Experiment: finetune_
[12/28 20:36:32][INFO] ✅[FIT][Epoch 7] finished! 01:11→13:47 | loss_epoch=35.1
[12/28 20:36:32][INFO] 🚀[FIT][Epoch 8] Data: unity Experiment: finetune_
[12/28 20:36:38][INFO] ✅[FIT][Epoch 8] finished! 01:18→13:09 | loss_epoch=19.4
[12/28 20:36:38][INFO] 🚀[FIT][Epoch 9] Data: unity Experiment: finetune_
[12/28 20:37:10][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:37:10][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:37:10][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:37:10][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:37:10][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:37:10][INFO] ✅[FIT][Epoch 9] finished! 01:49→16:27 | loss_epoch=19.2
[12/28 20:37:10][INFO] 🚀[FIT][Epoch 10] Data: unity Experiment: finetune_
[12/28 20:37:16][INFO] ✅[FIT][Epoch 10] finished! 01:55→15:36 | loss_epoch=23.1
[12/28 20:37:16][INFO] 🚀[FIT][Epoch 11] Data: unity Experiment: finetune_
[12/28 20:37:22][INFO] ✅[FIT][Epoch 11] finished! 02:01→14:53 | loss_epoch=20.9
[12/28 20:37:22][INFO] 🚀[FIT][Epoch 12] Data: unity Experiment: finetune_
[12/28 20:37:28][INFO] ✅[FIT][Epoch 12] finished! 02:07→14:16 | loss_epoch=23.8
[12/28 20:37:28][INFO] 🚀[FIT][Epoch 13] Data: unity Experiment: finetune_
[12/28 20:37:34][INFO] ✅[FIT][Epoch 13] finished! 02:13→13:42 | loss_epoch=20.3
[12/28 20:37:34][INFO] 🚀[FIT][Epoch 14] Data: unity Experiment: finetune_
[12/28 20:38:04][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:38:04][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:38:04][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:38:04][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:38:04][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:38:04][INFO] ✅[FIT][Epoch 14] finished! 02:44→15:31 | loss_epoch=24.1
[12/28 20:38:04][INFO] 🚀[FIT][Epoch 15] Data: unity Experiment: finetune_
[12/28 20:38:11][INFO] ✅[FIT][Epoch 15] finished! 02:50→14:56 | loss_epoch=19.2
[12/28 20:38:11][INFO] 🚀[FIT][Epoch 16] Data: unity Experiment: finetune_
[12/28 20:38:18][INFO] ✅[FIT][Epoch 16] finished! 02:57→14:26 | loss_epoch=15.9
[12/28 20:38:18][INFO] 🚀[FIT][Epoch 17] Data: unity Experiment: finetune_
[12/28 20:38:24][INFO] ✅[FIT][Epoch 17] finished! 03:03→13:57 | loss_epoch=30
[12/28 20:38:24][INFO] 🚀[FIT][Epoch 18] Data: unity Experiment: finetune_
[12/28 20:38:30][INFO] ✅[FIT][Epoch 18] finished! 03:09→13:29 | loss_epoch=19.6
[12/28 20:38:30][INFO] 🚀[FIT][Epoch 19] Data: unity Experiment: finetune_
[12/28 20:39:02][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:39:02][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:39:02][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:39:02][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:39:02][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:39:02][INFO] ✅[FIT][Epoch 19] finished! 03:42→14:49 | loss_epoch=20.5
[12/28 20:39:02][INFO] 🚀[FIT][Epoch 20] Data: unity Experiment: finetune_
[12/28 20:39:09][INFO] ✅[FIT][Epoch 20] finished! 03:48→14:20 | loss_epoch=22.6
[12/28 20:39:09][INFO] 🚀[FIT][Epoch 21] Data: unity Experiment: finetune_
[12/28 20:39:15][INFO] ✅[FIT][Epoch 21] finished! 03:55→13:54 | loss_epoch=14.1
[12/28 20:39:15][INFO] 🚀[FIT][Epoch 22] Data: unity Experiment: finetune_
[12/28 20:39:22][INFO] ✅[FIT][Epoch 22] finished! 04:01→13:28 | loss_epoch=14.6
[12/28 20:39:22][INFO] 🚀[FIT][Epoch 23] Data: unity Experiment: finetune_
[12/28 20:39:28][INFO] ✅[FIT][Epoch 23] finished! 04:07→13:03 | loss_epoch=18.5
[12/28 20:39:28][INFO] 🚀[FIT][Epoch 24] Data: unity Experiment: finetune_
[12/28 20:40:00][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:40:00][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:40:00][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:40:00][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:40:00][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:40:00][INFO] ✅[FIT][Epoch 24] finished! 04:40→14:00 | loss_epoch=18.3
[12/28 20:40:00][INFO] 🚀[FIT][Epoch 25] Data: unity Experiment: finetune_
[12/28 20:40:07][INFO] ✅[FIT][Epoch 25] finished! 04:46→13:36 | loss_epoch=14.8
[12/28 20:40:07][INFO] 🚀[FIT][Epoch 26] Data: unity Experiment: finetune_
[12/28 20:40:13][INFO] ✅[FIT][Epoch 26] finished! 04:53→13:13 | loss_epoch=12.4
[12/28 20:40:13][INFO] 🚀[FIT][Epoch 27] Data: unity Experiment: finetune_
[12/28 20:40:20][INFO] ✅[FIT][Epoch 27] finished! 04:59→12:50 | loss_epoch=16.6
[12/28 20:40:20][INFO] 🚀[FIT][Epoch 28] Data: unity Experiment: finetune_
[12/28 20:40:26][INFO] ✅[FIT][Epoch 28] finished! 05:05→12:28 | loss_epoch=13.8
[12/28 20:40:26][INFO] 🚀[FIT][Epoch 29] Data: unity Experiment: finetune_
[12/28 20:41:18][INFO] [Exp Name]: finetune_
[12/28 20:41:18][INFO] [GPU x Batch] = 1 x 1
[12/28 20:41:18][INFO] [UnityDataset] Found 5 sequences.
[12/28 20:41:18][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:41:18][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 20:41:18][INFO]
[12/28 20:41:21][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
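The recurring cuDNN "Plan failed ... CUDNN_STATUS_NOT_SUPPORTED" warnings during the conv1d forward and backward passes can often be silenced by changing how PyTorch selects cuDNN convolution algorithms. A hedged sketch of the two usual knobs follows; whether either actually helps for this particular conv shape and GPU is an assumption, not something the log confirms:

```python
import torch

# Option 1: let cuDNN benchmark several conv algorithms per input shape and
# cache the fastest, which can route around plans that fail to finalize.
torch.backends.cudnn.benchmark = True

# Option 2 (last resort): bypass cuDNN convolutions entirely and fall back to
# PyTorch's native kernels, trading some speed for robustness.
# torch.backends.cudnn.enabled = False
```

Benchmark mode pays a one-time search cost per new input shape, so it suits training loops with fixed shapes like this one.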
[12/28 20:41:21][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 20:41:21][INFO] [EMDB] Full sequence, split=1
[12/28 20:41:21][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 20:41:21][INFO] [EMDB] Full sequence, split=2
[12/28 20:41:21][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 20:41:22][INFO] [RICH] Full sequence, Test
[12/28 20:41:22][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 20:41:22][INFO] [3DPW] Full sequence
[12/28 20:41:22][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 20:41:22][INFO] [3DPW_OCC] Full sequence
[12/28 20:41:22][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 20:41:22][INFO] [UnityDataset] Found 5 sequences.
[12/28 20:41:22][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:41:22][INFO]
[12/28 20:41:26][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 20:41:50][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_1/checkpoints'
[12/28 20:42:15][INFO] Start Fitting...
[12/28 20:42:27][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 20:42:27][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 20:42:29][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return F.conv1d(input, weight, bias, self.stride,
[12/28 20:42:31][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/28 20:42:35][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 20:42:35][INFO] ✅[FIT][Epoch 0] finished! 00:09→00:36 | loss_epoch=132
[12/28 20:42:35][INFO] 🚀[FIT][Epoch 1] Data: unity Experiment: finetune_
[12/28 20:42:39][INFO] ✅[FIT][Epoch 1] finished! 00:13→00:20 | loss_epoch=125
[12/28 20:42:39][INFO] 🚀[FIT][Epoch 2] Data: unity Experiment: finetune_
[12/28 20:42:44][INFO] ✅[FIT][Epoch 2] finished! 00:18→00:12 | loss_epoch=445
[12/28 20:42:44][INFO] 🚀[FIT][Epoch 3] Data: unity Experiment: finetune_
[12/28 20:42:49][INFO] ✅[FIT][Epoch 3] finished! 00:23→00:05 | loss_epoch=52.7
[12/28 20:42:49][INFO] 🚀[FIT][Epoch 4] Data: unity Experiment: finetune_
[12/28 20:42:54][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 20:43:22][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:43:22][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:43:22][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:43:22][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:43:22][INFO] 0 sequences evaluated in MetricMocap
[12/28 20:43:22][INFO] ✅[FIT][Epoch 4] finished! 00:56→00:00 | loss_epoch=53.2
[12/28 20:43:27][INFO] End of script.
[12/28 20:51:09][INFO] [Exp Name]: finetune_
[12/28 20:51:09][INFO] [GPU x Batch] = 1 x 1
[12/28 20:51:09][INFO] [UnityDataset] Found 5 sequences.
[12/28 20:51:09][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:51:09][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 20:51:09][INFO]
[12/28 20:51:12][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 20:51:12][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 20:51:12][INFO] [EMDB] Full sequence, split=1
[12/28 20:51:12][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 20:51:12][INFO] [EMDB] Full sequence, split=2
[12/28 20:51:12][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 20:51:12][INFO] [RICH] Full sequence, Test
[12/28 20:51:12][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 20:51:12][INFO] [3DPW] Full sequence
[12/28 20:51:12][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 20:51:12][INFO] [3DPW_OCC] Full sequence
[12/28 20:51:12][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 20:51:12][INFO] [UnityDataset] Found 5 sequences.
[12/28 20:51:12][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:51:12][INFO]
[12/28 20:51:17][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 20:51:42][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_2/checkpoints'
[12/28 20:52:07][INFO] Start Fitting...
[12/28 20:52:34][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 20:52:34][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 20:52:34][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 20:52:36][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/28 20:52:38][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/28 20:52:42][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 20:54:55][INFO] [Exp Name]: finetune_
[12/28 20:54:55][INFO] [GPU x Batch] = 1 x 1
[12/28 20:54:55][INFO] [UnityDataset] Found 5 sequences.
[12/28 20:54:55][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:54:55][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 20:54:55][INFO]
[12/28 20:54:58][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
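Note on the recurring `lr_scheduler.step()` warning: since PyTorch 1.1.0 the scheduler must be stepped after the optimizer, or the first value of the LR schedule is silently skipped. A minimal sketch of the correct ordering, with a dummy model and data (not this codebase's training loop):

```python
import torch

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Halve the learning rate after every step.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

for _ in range(3):  # dummy training steps
    loss = model(torch.randn(2, 4)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()   # 1) update the weights first
    scheduler.step()   # 2) then advance the LR schedule
```

Under PyTorch Lightning the loop is driven by the trainer, so the equivalent fix would presumably live in the scheduler config returned from `configure_optimizers` rather than in a manual loop.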
[12/28 20:54:58][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 20:54:58][INFO] [EMDB] Full sequence, split=1
[12/28 20:54:58][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 20:54:58][INFO] [EMDB] Full sequence, split=2
[12/28 20:54:58][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 20:54:58][INFO] [RICH] Full sequence, Test
[12/28 20:54:58][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 20:54:58][INFO] [3DPW] Full sequence
[12/28 20:54:58][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 20:54:58][INFO] [3DPW_OCC] Full sequence
[12/28 20:54:58][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 20:54:58][INFO] [UnityDataset] Found 5 sequences.
[12/28 20:54:58][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:54:58][INFO]
[12/28 20:55:03][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 20:55:27][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_3/checkpoints'
[12/28 20:55:51][INFO] Start Fitting...
[12/28 20:56:04][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 20:56:04][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 20:56:04][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 20:56:07][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/28 20:56:09][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/28 20:56:12][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 20:56:46][INFO] [Exp Name]: finetune_
[12/28 20:56:46][INFO] [GPU x Batch] = 1 x 1
[12/28 20:56:47][INFO] [UnityDataset] Found 5 sequences.
[12/28 20:56:47][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:56:47][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 20:56:47][INFO]
[12/28 20:56:50][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 20:56:50][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 20:56:50][INFO] [EMDB] Full sequence, split=1
[12/28 20:56:50][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 20:56:50][INFO] [EMDB] Full sequence, split=2
[12/28 20:56:50][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 20:56:50][INFO] [RICH] Full sequence, Test
[12/28 20:56:50][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 20:56:50][INFO] [3DPW] Full sequence
[12/28 20:56:50][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 20:56:50][INFO] [3DPW_OCC] Full sequence
[12/28 20:56:50][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 20:56:50][INFO] [UnityDataset] Found 5 sequences.
[12/28 20:56:50][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 20:56:50][INFO]
[12/28 20:56:54][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 20:57:18][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_4/checkpoints'
[12/28 20:57:50][INFO] Start Fitting...
[12/28 20:58:01][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 20:58:01][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 20:58:01][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 20:58:04][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/28 20:58:06][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/28 20:58:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 21:00:47][INFO] [Exp Name]: finetune_
[12/28 21:00:47][INFO] [GPU x Batch] = 1 x 1
[12/28 21:00:47][INFO] [UnityDataset] Found 5 sequences.
[12/28 21:00:47][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 21:00:47][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 21:00:47][INFO]
[12/28 21:00:50][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 21:00:50][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 21:00:50][INFO] [EMDB] Full sequence, split=1
[12/28 21:00:50][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 21:00:50][INFO] [EMDB] Full sequence, split=2
[12/28 21:00:50][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 21:00:50][INFO] [RICH] Full sequence, Test
[12/28 21:00:50][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 21:00:50][INFO] [3DPW] Full sequence
[12/28 21:00:50][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 21:00:50][INFO] [3DPW_OCC] Full sequence
[12/28 21:00:50][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 21:00:50][INFO] [UnityDataset] Found 5 sequences.
[12/28 21:00:50][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 21:00:50][INFO]
[12/28 21:00:56][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 21:01:18][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_5/checkpoints'
[12/28 21:01:44][INFO] Start Fitting...
[12/28 21:01:58][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 21:01:58][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 21:01:58][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 21:03:57][INFO] [Exp Name]: finetune_
[12/28 21:03:57][INFO] [GPU x Batch] = 1 x 1
[12/28 21:03:57][INFO] [UnityDataset] Found 5 sequences.
[12/28 21:03:57][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 21:03:57][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 21:03:57][INFO]
[12/28 21:04:00][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 21:04:00][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 21:04:00][INFO] [EMDB] Full sequence, split=1
[12/28 21:04:00][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 21:04:00][INFO] [EMDB] Full sequence, split=2
[12/28 21:04:00][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 21:04:00][INFO] [RICH] Full sequence, Test
[12/28 21:04:00][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 21:04:00][INFO] [3DPW] Full sequence
[12/28 21:04:00][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 21:04:00][INFO] [3DPW_OCC] Full sequence
[12/28 21:04:00][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 21:04:00][INFO] [UnityDataset] Found 5 sequences.
[12/28 21:04:00][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 21:04:00][INFO]
[12/28 21:04:05][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 21:04:21][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_6/checkpoints'
[12/28 21:04:46][INFO] Start Fitting...
[12/28 21:05:02][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 21:05:02][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 21:05:02][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 21:05:04][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/28 21:05:06][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/28 21:05:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 21:05:42][INFO] 5 sequences evaluated in MetricMocap
[12/28 21:05:42][INFO] monitored metric mpjpe per sequence
  756.7 : 100_biboo_birthday_speech_explosion_1
  719.1 : 105_biboo_birthday_speech_explosion_6
  711.3 : 101_biboo_birthday_speech_explosion_2
  669.2 : 102_biboo_birthday_speech_explosion_3
  619.6 : 103_biboo_birthday_speech_explosion_4
  ------
[12/28 21:05:42][INFO] [Metrics] Unity: pa_mpjpe: 234.3 mpjpe: 695.2 pve: 812.0 accel: 8.1
  ------
[12/28 21:05:42][INFO] 5 sequences evaluated in MetricMocap
[12/28 21:05:42][INFO] monitored metric wa2_mpjpe per sequence
  5629.3 : 100_biboo_birthday_speech_explosion_1
  3964.5 : 102_biboo_birthday_speech_explosion_3
  3530.3 : 101_biboo_birthday_speech_explosion_2
  3050.9 : 103_biboo_birthday_speech_explosion_4
  2628.2 : 105_biboo_birthday_speech_explosion_6
  ------
[12/28 21:05:42][INFO] [Metrics] Unity: wa2_mpjpe: 3760.7 waa_mpjpe: 422.7 rte: 419.6 jitter: 960.2 fs: 249.6
  ------
[12/28 21:05:42][INFO] 0 sequences evaluated in MetricMocap
[12/28 21:05:42][INFO] 0 sequences evaluated in MetricMocap
[12/28 21:05:42][INFO] 0 sequences evaluated in MetricMocap
[12/28 21:05:42][INFO] ✅[FIT][Epoch 0] finished! 00:41→00:00 | loss_epoch=132
[12/28 21:05:44][INFO] End of script.
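Note on the per-sequence numbers above: `mpjpe` is the mean per-joint position error in millimetres, i.e. the Euclidean distance between predicted and ground-truth joints averaged over joints and frames, and `pa_mpjpe` is the same error after Procrustes alignment. A pure-Python sketch of the unaligned metric (the joint layout here is illustrative, not this codebase's implementation):

```python
import math

def mpjpe(pred, gt):
    """Mean per-joint position error between two [frame][joint][xyz] arrays."""
    total, count = 0.0, 0
    for frame_p, frame_g in zip(pred, gt):
        for joint_p, joint_g in zip(frame_p, frame_g):
            total += math.dist(joint_p, joint_g)  # Euclidean distance per joint
            count += 1
    return total / count

# One frame, two joints, each predicted 10 mm off along one axis.
pred = [[(10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]]
gt = [[(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]]
```

Values around 700 mm, as logged here, usually indicate a coordinate-frame or scale mismatch between prediction and ground truth rather than a merely weak model.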
[12/28 21:09:50][INFO] [Exp Name]: finetune_
[12/28 21:09:50][INFO] [GPU x Batch] = 1 x 1
[12/28 21:09:50][INFO] [UnityDataset] Found 5 sequences.
[12/28 21:09:50][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 21:09:50][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 21:09:50][INFO]
[12/28 21:09:53][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 21:09:53][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 21:09:53][INFO] [EMDB] Full sequence, split=1
[12/28 21:09:53][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 21:09:53][INFO] [EMDB] Full sequence, split=2
[12/28 21:09:53][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 21:09:53][INFO] [RICH] Full sequence, Test
[12/28 21:09:53][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 21:09:53][INFO] [3DPW] Full sequence
[12/28 21:09:53][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 21:09:53][INFO] [3DPW_OCC] Full sequence
[12/28 21:09:53][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 21:09:53][INFO] [UnityDataset] Found 5 sequences.
[12/28 21:09:53][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 21:09:53][INFO]
[12/28 21:09:59][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 21:10:19][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_7/checkpoints'
[12/28 21:10:46][INFO] Start Fitting...
[12/28 21:10:57][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 21:10:57][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 21:10:57][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 21:10:59][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/28 21:11:01][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/28 21:11:05][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 21:11:37][INFO] 5 sequences evaluated in MetricMocap
[12/28 21:11:37][INFO] monitored metric mpjpe per sequence
  756.7 : 100_biboo_birthday_speech_explosion_1
  719.1 : 105_biboo_birthday_speech_explosion_6
  711.3 : 101_biboo_birthday_speech_explosion_2
  669.2 : 102_biboo_birthday_speech_explosion_3
  619.6 : 103_biboo_birthday_speech_explosion_4
  ------
[12/28 21:11:37][INFO] [Metrics] EMDB_1: pa_mpjpe: 234.3 mpjpe: 695.2 pve: 812.0 accel: 8.1
  ------
[12/28 21:11:37][INFO] 5 sequences evaluated in MetricMocap
[12/28 21:11:37][INFO] monitored metric wa2_mpjpe per sequence
  5629.3 : 100_biboo_birthday_speech_explosion_1
  3964.5 : 102_biboo_birthday_speech_explosion_3
  3530.3 : 101_biboo_birthday_speech_explosion_2
  3050.9 : 103_biboo_birthday_speech_explosion_4
  2628.2 : 105_biboo_birthday_speech_explosion_6
  ------
[12/28 21:11:37][INFO] [Metrics] EMDB_2: wa2_mpjpe: 3760.7 waa_mpjpe: 422.7 rte: 419.6 jitter: 960.2 fs: 249.6
  ------
[12/28 21:11:37][INFO] 0 sequences evaluated in MetricMocap
[12/28 21:11:37][INFO] 0 sequences evaluated in MetricMocap
[12/28 21:11:37][INFO] 0 sequences evaluated in MetricMocap
[12/28 21:11:37][INFO] ✅[FIT][Epoch 0] finished! 00:41→00:00 | loss_epoch=132
[12/28 21:11:38][INFO] End of script.
[12/28 21:24:49][INFO] [Exp Name]: finetune_
[12/28 21:24:49][INFO] [GPU x Batch] = 1 x 1
[12/28 21:24:49][INFO] [UnityDataset] Found 5 sequences.
[12/28 21:24:49][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 21:24:49][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 21:24:49][INFO]
[12/28 21:24:52][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 21:24:52][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 21:24:52][INFO] [EMDB] Full sequence, split=1
[12/28 21:24:52][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 21:24:52][INFO] [EMDB] Full sequence, split=2
[12/28 21:24:52][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 21:24:52][INFO] [RICH] Full sequence, Test
[12/28 21:24:52][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 21:24:52][INFO] [3DPW] Full sequence
[12/28 21:24:52][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 21:24:52][INFO] [3DPW_OCC] Full sequence
[12/28 21:24:52][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 21:24:52][INFO] [UnityDataset] Found 5 sequences.
[12/28 21:24:52][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 21:24:52][INFO]
[12/28 21:24:57][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 21:25:18][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_8/checkpoints'
[12/28 21:25:48][INFO] Start Fitting...
[12/28 21:25:59][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 21:25:59][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 21:25:59][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 21:26:02][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/28 21:26:04][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/28 21:26:07][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 21:26:40][INFO] 5 sequences evaluated in MetricMocap
[12/28 21:26:40][INFO] monitored metric mpjpe per sequence
  756.7 : 100_biboo_birthday_speech_explosion_1
  719.1 : 105_biboo_birthday_speech_explosion_6
  711.3 : 101_biboo_birthday_speech_explosion_2
  669.2 : 102_biboo_birthday_speech_explosion_3
  619.6 : 103_biboo_birthday_speech_explosion_4
  ------
[12/28 21:26:40][INFO] [Metrics] EMDB_1: pa_mpjpe: 234.3 mpjpe: 695.2 pve: 812.0 accel: 8.1
  ------
[12/28 21:26:40][INFO] 5 sequences evaluated in MetricMocap
[12/28 21:26:40][INFO] monitored metric wa2_mpjpe per sequence
  5629.3 : 100_biboo_birthday_speech_explosion_1
  3964.5 : 102_biboo_birthday_speech_explosion_3
  3530.3 : 101_biboo_birthday_speech_explosion_2
  3050.9 : 103_biboo_birthday_speech_explosion_4
  2628.2 : 105_biboo_birthday_speech_explosion_6
  ------
[12/28 21:26:40][INFO] [Metrics] EMDB_2: wa2_mpjpe: 3760.7 waa_mpjpe: 422.7 rte: 419.6 jitter: 960.2 fs: 249.6
  ------
[12/28 21:26:40][INFO] 0 sequences evaluated in MetricMocap
[12/28 21:26:40][INFO] 0 sequences evaluated in MetricMocap
[12/28 21:26:40][INFO] 0 sequences evaluated in MetricMocap
[12/28 21:26:40][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_1/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 21:26:40][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_1/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 21:26:40][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_1/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 21:26:40][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_1/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 21:26:40][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/wa2_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 21:26:40][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/waa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 21:26:40][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/rte', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 21:26:40][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/jitter', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 21:26:40][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/fs', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 21:26:40][INFO] ✅[FIT][Epoch 0] finished! 00:42→00:00 | loss_epoch=132
[12/28 21:26:42][INFO] End of script.
[12/28 21:33:06][INFO] [Exp Name]: finetune_
[12/28 21:33:06][INFO] [GPU x Batch] = 1 x 1
[12/28 21:33:06][INFO] [UnityDataset] Found 5 sequences.
[12/28 21:33:06][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 21:33:06][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 21:33:06][INFO]
[12/28 21:33:10][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
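Note on the recurring `val_dataloader` worker warning above: raising `num_workers` on the validation `DataLoader` is the fix the warning itself suggests. A minimal sketch with a toy stand-in dataset (names below are illustrative, not the GENMO loader code):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for the 5-sequence unity_val dataset seen in the log.
val_set = TensorDataset(torch.zeros(5, 3))

# The warning suggests num_workers=11 on that machine; kept small here.
# Worker processes prefetch batches so the GPU is not starved on data.
val_loader = DataLoader(val_set, batch_size=1, num_workers=2)

print(len(val_loader))  # 5 batches of size 1
```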
[12/28 21:33:10][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 21:33:10][INFO] [EMDB] Full sequence, split=1
[12/28 21:33:10][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 21:33:10][INFO] [EMDB] Full sequence, split=2
[12/28 21:33:10][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 21:33:10][INFO] [RICH] Full sequence, Test
[12/28 21:33:10][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 21:33:10][INFO] [3DPW] Full sequence
[12/28 21:33:10][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 21:33:10][INFO] [3DPW_OCC] Full sequence
[12/28 21:33:10][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 21:33:10][INFO] [UnityDataset] Found 5 sequences.
[12/28 21:33:10][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 21:33:10][INFO]
[12/28 21:33:15][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 21:33:38][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_9/checkpoints'
[12/28 21:34:07][INFO] Start Fitting...
[12/28 21:34:42][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 21:34:42][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 21:34:42][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 21:34:44][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/28 21:34:46][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/28 21:34:50][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 21:35:25][INFO] 5 sequences evaluated in MetricMocap
[12/28 21:35:25][INFO] monitored metric mpjpe per sequence
756.7 : 100_biboo_birthday_speech_explosion_1
719.1 : 105_biboo_birthday_speech_explosion_6
711.3 : 101_biboo_birthday_speech_explosion_2
669.2 : 102_biboo_birthday_speech_explosion_3
619.6 : 103_biboo_birthday_speech_explosion_4
------
[12/28 21:35:25][INFO] [Metrics] EMDB_1: pa_mpjpe: 234.3 mpjpe: 695.2 pve: 812.0 accel: 8.1
------
[12/28 21:35:25][INFO] 5 sequences evaluated in MetricMocap
[12/28 21:35:25][INFO] monitored metric wa2_mpjpe per sequence
5629.3 : 100_biboo_birthday_speech_explosion_1
3964.5 : 102_biboo_birthday_speech_explosion_3
3530.3 : 101_biboo_birthday_speech_explosion_2
3050.9 : 103_biboo_birthday_speech_explosion_4
2628.2 : 105_biboo_birthday_speech_explosion_6
------
[12/28 21:35:25][INFO] [Metrics] EMDB_2: wa2_mpjpe: 3760.7 waa_mpjpe: 422.7 rte: 419.6 jitter: 960.2 fs: 249.6
------
[12/28 21:35:25][INFO] 0 sequences evaluated in MetricMocap
[12/28 21:35:25][INFO] 0 sequences evaluated in MetricMocap
[12/28 21:35:25][INFO] 0 sequences evaluated in MetricMocap
[12/28 21:35:25][INFO] ✅[FIT][Epoch 0] finished! 00:44→00:00 | loss_epoch=132
[12/28 21:36:06][INFO] Manually saved checkpoint to /root/miko/puni/train/GENMO/checkpoints/last_manual.ckpt
[12/28 21:36:07][INFO] End of script.
[12/28 22:03:16][INFO] [Exp Name]: finetune_
[12/28 22:03:16][INFO] [GPU x Batch] = 1 x 1
[12/28 22:03:16][INFO] [UnityDataset] Found 5 sequences.
[12/28 22:03:16][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 22:03:16][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 22:03:16][INFO]
[12/28 22:03:19][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 22:03:19][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 22:03:19][INFO] [EMDB] Full sequence, split=1
[12/28 22:03:19][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 22:03:19][INFO] [EMDB] Full sequence, split=2
[12/28 22:03:19][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 22:03:19][INFO] [RICH] Full sequence, Test
[12/28 22:03:19][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 22:03:19][INFO] [3DPW] Full sequence
[12/28 22:03:19][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 22:03:19][INFO] [3DPW_OCC] Full sequence
[12/28 22:03:19][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 22:03:19][INFO] [UnityDataset] Found 5 sequences.
[12/28 22:03:19][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 22:03:19][INFO]
[12/28 22:03:24][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 22:03:35][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_0/checkpoints'
[12/28 22:04:02][INFO] Start Fitting...
[12/28 22:04:13][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 22:04:13][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 22:04:13][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 22:04:16][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/28 22:04:18][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/28 22:04:22][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 22:04:52][INFO] 5 sequences evaluated in MetricMocap
[12/28 22:04:52][INFO] monitored metric mpjpe per sequence
756.7 : 100_biboo_birthday_speech_explosion_1
719.1 : 105_biboo_birthday_speech_explosion_6
711.3 : 101_biboo_birthday_speech_explosion_2
669.2 : 102_biboo_birthday_speech_explosion_3
619.6 : 103_biboo_birthday_speech_explosion_4
------
[12/28 22:04:52][INFO] [Metrics] EMDB_1: pa_mpjpe: 234.3 mpjpe: 695.2 pve: 812.0 accel: 8.1
------
[12/28 22:04:52][INFO] 5 sequences evaluated in MetricMocap
[12/28 22:04:52][INFO] monitored metric wa2_mpjpe per sequence
5629.3 : 100_biboo_birthday_speech_explosion_1
3964.5 : 102_biboo_birthday_speech_explosion_3
3530.3 : 101_biboo_birthday_speech_explosion_2
3050.9 : 103_biboo_birthday_speech_explosion_4
2628.2 : 105_biboo_birthday_speech_explosion_6
------
[12/28 22:04:52][INFO] [Metrics] EMDB_2: wa2_mpjpe: 3760.7 waa_mpjpe: 422.7 rte: 419.6 jitter: 960.2 fs: 249.6
------
[12/28 22:04:52][INFO] 0 sequences evaluated in MetricMocap
[12/28 22:04:52][INFO] 0 sequences evaluated in MetricMocap
[12/28 22:04:52][INFO] 0 sequences evaluated in MetricMocap
[12/28 22:04:52][INFO] ✅[FIT][Epoch 0] finished! 00:40→00:00 | loss_epoch=132
[12/28 22:05:00][INFO] Manually saved checkpoint to /root/miko/puni/train/GENMO/checkpoints/last_manual.ckpt
[12/28 22:05:02][INFO] End of script.
[12/28 22:14:01][INFO] [Exp Name]: finetune_
[12/28 22:14:01][INFO] [GPU x Batch] = 1 x 1
[12/28 22:14:01][INFO] [UnityDataset] Found 5 sequences.
[12/28 22:14:01][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 22:14:01][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 22:14:01][INFO]
[12/28 22:14:04][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
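Note on the `lr_scheduler.step()` warning that appears in each run above: the scheduler is being stepped before the optimizer, which makes PyTorch skip the first value of the learning-rate schedule. A minimal sketch of the order PyTorch expects (toy parameter and schedule, not the actual training loop):

```python
import torch

# Toy parameter standing in for the model weights.
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.SGD([param], lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

for _ in range(2):
    optimizer.zero_grad()
    loss = (param ** 2).sum()
    loss.backward()
    optimizer.step()   # update the weights first...
    scheduler.step()   # ...then advance the learning-rate schedule

print(optimizer.param_groups[0]["lr"])  # halved twice: 0.1 -> 0.025
```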
[12/28 22:14:04][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 22:14:04][INFO] [EMDB] Full sequence, split=1
[12/28 22:14:04][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 22:14:04][INFO] [EMDB] Full sequence, split=2
[12/28 22:14:04][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 22:14:04][INFO] [RICH] Full sequence, Test
[12/28 22:14:04][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 22:14:04][INFO] [3DPW] Full sequence
[12/28 22:14:04][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 22:14:04][INFO] [3DPW_OCC] Full sequence
[12/28 22:14:04][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 22:14:04][INFO] [UnityDataset] Found 5 sequences.
[12/28 22:14:04][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 22:14:04][INFO]
[12/28 22:14:09][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 22:14:21][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_1/checkpoints'
[12/28 22:14:49][INFO] Start Fitting...
[12/28 22:15:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 22:15:11][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 22:15:11][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 22:15:13][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/28 22:15:15][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/28 22:15:19][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 22:15:50][INFO] 5 sequences evaluated in MetricMocap
[12/28 22:15:50][INFO] monitored metric mpjpe per sequence
756.7 : 100_biboo_birthday_speech_explosion_1
719.1 : 105_biboo_birthday_speech_explosion_6
711.3 : 101_biboo_birthday_speech_explosion_2
669.2 : 102_biboo_birthday_speech_explosion_3
619.6 : 103_biboo_birthday_speech_explosion_4
------
[12/28 22:15:50][INFO] [Metrics] EMDB_1: pa_mpjpe: 234.3 mpjpe: 695.2 pve: 812.0 accel: 8.1
------
[12/28 22:15:50][INFO] 5 sequences evaluated in MetricMocap
[12/28 22:15:50][INFO] monitored metric wa2_mpjpe per sequence
5629.3 : 100_biboo_birthday_speech_explosion_1
3964.5 : 102_biboo_birthday_speech_explosion_3
3530.3 : 101_biboo_birthday_speech_explosion_2
3050.9 : 103_biboo_birthday_speech_explosion_4
2628.2 : 105_biboo_birthday_speech_explosion_6
------
[12/28 22:15:50][INFO] [Metrics] EMDB_2: wa2_mpjpe: 3760.7 waa_mpjpe: 422.7 rte: 419.6 jitter: 960.2 fs: 249.6
------
[12/28 22:15:50][INFO] 0 sequences evaluated in MetricMocap
[12/28 22:15:50][INFO] 0 sequences evaluated in MetricMocap
[12/28 22:15:50][INFO] 0 sequences evaluated in MetricMocap
[12/28 22:15:50][INFO] ✅[FIT][Epoch 0] finished! 00:40→00:00 | loss_epoch=132
[12/28 22:17:03][INFO] Manually saved checkpoint to /root/miko/puni/train/GENMO/checkpoints/last_manual.ckpt
[12/28 22:17:04][INFO] End of script.
[12/28 22:20:51][INFO] [Exp Name]: finetune_
[12/28 22:20:51][INFO] [GPU x Batch] = 1 x 1
[12/28 22:20:51][INFO] [UnityDataset] Found 5 sequences.
[12/28 22:20:51][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 22:20:51][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 22:20:51][INFO]
[12/28 22:20:54][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 22:20:54][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 22:20:54][INFO] [EMDB] Full sequence, split=1
[12/28 22:20:54][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 22:20:54][INFO] [EMDB] Full sequence, split=2
[12/28 22:20:54][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 22:20:54][INFO] [RICH] Full sequence, Test
[12/28 22:20:54][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 22:20:54][INFO] [3DPW] Full sequence
[12/28 22:20:54][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 22:20:54][INFO] [3DPW_OCC] Full sequence
[12/28 22:20:54][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 22:20:54][INFO] [UnityDataset] Found 5 sequences.
[12/28 22:20:54][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 22:20:54][INFO]
[12/28 22:20:59][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 22:21:10][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_2/checkpoints'
[12/28 22:21:38][INFO] Start Fitting...
[12/28 22:21:50][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 22:21:51][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 22:21:51][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 22:21:53][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/28 22:21:55][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/28 22:21:59][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 22:22:29][INFO] 5 sequences evaluated in MetricMocap
[12/28 22:22:29][INFO] monitored metric mpjpe per sequence
756.7 : 100_biboo_birthday_speech_explosion_1
719.1 : 105_biboo_birthday_speech_explosion_6
711.3 : 101_biboo_birthday_speech_explosion_2
669.2 : 102_biboo_birthday_speech_explosion_3
619.6 : 103_biboo_birthday_speech_explosion_4
------
[12/28 22:22:29][INFO] [Metrics] EMDB_1: pa_mpjpe: 234.3 mpjpe: 695.2 pve: 812.0 accel: 8.1
------
[12/28 22:22:29][INFO] 5 sequences evaluated in MetricMocap
[12/28 22:22:29][INFO] monitored metric wa2_mpjpe per sequence
5629.3 : 100_biboo_birthday_speech_explosion_1
3964.5 : 102_biboo_birthday_speech_explosion_3
3530.3 : 101_biboo_birthday_speech_explosion_2
3050.9 : 103_biboo_birthday_speech_explosion_4
2628.2 : 105_biboo_birthday_speech_explosion_6
------
[12/28 22:22:29][INFO] [Metrics] EMDB_2: wa2_mpjpe: 3760.7 waa_mpjpe: 422.7 rte: 419.6 jitter: 960.2 fs: 249.6
------
[12/28 22:22:29][INFO] 0 sequences evaluated in MetricMocap
[12/28 22:22:29][INFO] 0 sequences evaluated in MetricMocap
[12/28 22:22:29][INFO] 0 sequences evaluated in MetricMocap
[12/28 22:22:29][INFO] ✅[FIT][Epoch 0] finished! 00:39→00:00 | loss_epoch=132
[12/28 22:23:27][INFO] Manually saved checkpoint to /root/miko/puni/train/GENMO/checkpoints/manual_epoch_0.ckpt
[12/28 22:23:29][INFO] End of script.
[12/28 22:25:04][INFO] [Exp Name]: finetune_
[12/28 22:25:04][INFO] [GPU x Batch] = 1 x 1
[12/28 22:25:04][INFO] [UnityDataset] Found 5 sequences.
[12/28 22:25:04][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 22:25:04][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 22:25:04][INFO]
[12/28 22:25:08][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
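Note on the manual checkpoint saves above (`last_manual.ckpt`, `manual_epoch_0.ckpt`): conceptually such a save is a `torch.save` of a weights dict. A minimal sketch with a toy module and a temporary path (the real checkpoint also carries optimizer and trainer state):

```python
import os
import tempfile

import torch

# Toy module standing in for the fine-tuned model.
model = torch.nn.Linear(3, 2)

# Save the weights as a dict, mirroring the manual checkpoint in the log.
ckpt_path = os.path.join(tempfile.mkdtemp(), "last_manual.ckpt")
torch.save({"state_dict": model.state_dict()}, ckpt_path)

# Reload and restore into a fresh module of the same shape.
restored = torch.nn.Linear(3, 2)
restored.load_state_dict(torch.load(ckpt_path)["state_dict"])
```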
[12/28 22:25:08][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 22:25:08][INFO] [EMDB] Full sequence, split=1
[12/28 22:25:08][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 22:25:08][INFO] [EMDB] Full sequence, split=2
[12/28 22:25:08][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 22:25:08][INFO] [RICH] Full sequence, Test
[12/28 22:25:08][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 22:25:08][INFO] [3DPW] Full sequence
[12/28 22:25:08][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 22:25:08][INFO] [3DPW_OCC] Full sequence
[12/28 22:25:08][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 22:25:08][INFO] [UnityDataset] Found 5 sequences.
[12/28 22:25:08][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 22:25:08][INFO]
[12/28 22:25:12][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 22:25:19][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_3/checkpoints'
[12/28 22:25:48][INFO] Start Fitting...
[12/28 22:26:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 22:26:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 22:26:14][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 22:26:17][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/28 22:26:19][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/28 22:26:22][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 22:26:55][INFO] 5 sequences evaluated in MetricMocap
[12/28 22:26:55][INFO] monitored metric mpjpe per sequence
756.7 : 100_biboo_birthday_speech_explosion_1
719.1 : 105_biboo_birthday_speech_explosion_6
711.3 : 101_biboo_birthday_speech_explosion_2
669.2 : 102_biboo_birthday_speech_explosion_3
619.6 : 103_biboo_birthday_speech_explosion_4
------
[12/28 22:26:55][INFO] [Metrics] EMDB_1: pa_mpjpe: 234.3 mpjpe: 695.2 pve: 812.0 accel: 8.1
------
[12/28 22:26:55][INFO] 5 sequences evaluated in MetricMocap
[12/28 22:26:55][INFO] monitored metric wa2_mpjpe per sequence
5629.3 : 100_biboo_birthday_speech_explosion_1
3964.5 : 102_biboo_birthday_speech_explosion_3
3530.3 : 101_biboo_birthday_speech_explosion_2
3050.9 : 103_biboo_birthday_speech_explosion_4
2628.2 : 105_biboo_birthday_speech_explosion_6
------
[12/28 22:26:55][INFO] [Metrics] EMDB_2: wa2_mpjpe: 3760.7 waa_mpjpe: 422.7 rte: 419.6 jitter: 960.2 fs: 249.6
------
[12/28 22:26:55][INFO] 0 sequences evaluated in MetricMocap
[12/28 22:26:55][INFO] 0 sequences evaluated in MetricMocap
[12/28 22:26:55][INFO] 0 sequences evaluated in MetricMocap
[12/28 22:26:55][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_1/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 22:26:55][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_1/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 22:26:55][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_1/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 22:26:55][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_1/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 22:26:55][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/wa2_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 22:26:55][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/waa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 22:26:55][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/rte', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/28 22:26:55][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/jitter', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/28 22:26:55][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/fs', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/28 22:26:55][INFO] ✅[FIT][Epoch 0] finished! 00:42→00:00 | loss_epoch=132 [12/28 22:26:59][INFO] End of script. [12/28 22:27:36][INFO] [Exp Name]: finetune_ [12/28 22:27:36][INFO] [GPU x Batch] = 1 x 1 [12/28 22:27:36][INFO] [UnityDataset] Found 5 sequences. [12/28 22:27:36][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset [12/28 22:27:36][INFO] [Train Dataset][All]: ConcatDataset size=5 [12/28 22:27:36][INFO] [12/28 22:27:39][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ... 
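The recurring `lr_scheduler.step()` warning above has a concrete fix: step the optimizer before the scheduler inside the training loop. A minimal sketch of the recommended order (the model, optimizer, and scheduler here are illustrative placeholders, not the GENMO trainer):

```python
import torch

# Illustrative setup, not the GENMO code: what matters is the call order.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

for _ in range(2):
    loss = model(torch.randn(8, 4)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()   # update the parameters first ...
    scheduler.step()   # ... then advance the LR schedule
```

Note that under 16-mixed precision this warning can also fire spuriously on the first iteration, when the grad scaler skips `optimizer.step()` because of inf/NaN gradients; in that case it is usually benign.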
[12/28 22:27:39][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 22:27:39][INFO] [EMDB] Full sequence, split=1
[12/28 22:27:39][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 22:27:39][INFO] [EMDB] Full sequence, split=2
[12/28 22:27:39][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 22:27:39][INFO] [RICH] Full sequence, Test
[12/28 22:27:39][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 22:27:39][INFO] [3DPW] Full sequence
[12/28 22:27:39][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 22:27:39][INFO] [3DPW_OCC] Full sequence
[12/28 22:27:39][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 22:27:39][INFO] [UnityDataset] Found 5 sequences.
[12/28 22:27:39][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 22:27:39][INFO]
[12/28 22:27:44][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 22:28:03][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_4/checkpoints'
[12/28 22:28:31][INFO] Start Fitting...
[12/28 22:29:35][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 22:29:35][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 22:29:35][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 22:29:37][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/28 22:29:39][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/28 22:29:43][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 22:30:14][INFO] 5 sequences evaluated in MetricMocap
[12/28 22:30:14][INFO] monitored metric mpjpe per sequence
756.7 : 100_biboo_birthday_speech_explosion_1
719.1 : 105_biboo_birthday_speech_explosion_6
711.3 : 101_biboo_birthday_speech_explosion_2
669.2 : 102_biboo_birthday_speech_explosion_3
619.6 : 103_biboo_birthday_speech_explosion_4
------
[12/28 22:30:14][INFO] [Metrics] EMDB_1: pa_mpjpe: 234.3 mpjpe: 695.2 pve: 812.0 accel: 8.1
------
[12/28 22:30:14][INFO] 5 sequences evaluated in MetricMocap
[12/28 22:30:14][INFO] monitored metric wa2_mpjpe per sequence
5629.3 : 100_biboo_birthday_speech_explosion_1
3964.5 : 102_biboo_birthday_speech_explosion_3
3530.3 : 101_biboo_birthday_speech_explosion_2
3050.9 : 103_biboo_birthday_speech_explosion_4
2628.2 : 105_biboo_birthday_speech_explosion_6
------
[12/28 22:30:14][INFO] [Metrics] EMDB_2: wa2_mpjpe: 3760.7 waa_mpjpe: 422.7 rte: 419.6 jitter: 960.2 fs: 249.6
------
[12/28 22:30:14][INFO] 0 sequences evaluated in MetricMocap
[12/28 22:30:14][INFO] 0 sequences evaluated in MetricMocap
[12/28 22:30:14][INFO] 0 sequences evaluated in MetricMocap
[12/28 22:30:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_1/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 22:30:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_1/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 22:30:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_1/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 22:30:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_1/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 22:30:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/wa2_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 22:30:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/waa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 22:30:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/rte', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 22:30:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/jitter', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 22:30:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/fs', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 22:30:14][INFO] ✅[FIT][Epoch 0] finished! 00:40→00:00 | loss_epoch=132
[12/28 22:31:02][INFO] Manually saved checkpoint to /root/miko/puni/train/GENMO/checkpoints/manual_epoch_0.ckpt
[12/28 22:31:04][INFO] End of script.
[12/28 23:12:52][INFO] [Exp Name]: finetune_
[12/28 23:12:52][INFO] [GPU x Batch] = 1 x 1
[12/28 23:12:52][INFO] [UnityDataset] Found 5 sequences.
[12/28 23:12:52][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 23:12:52][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 23:12:52][INFO]
[12/28 23:12:56][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 23:12:56][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 23:12:56][INFO] [EMDB] Full sequence, split=1
[12/28 23:12:56][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 23:12:56][INFO] [EMDB] Full sequence, split=2
[12/28 23:12:56][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 23:12:56][INFO] [RICH] Full sequence, Test
[12/28 23:12:56][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 23:12:56][INFO] [3DPW] Full sequence
[12/28 23:12:56][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 23:12:56][INFO] [3DPW_OCC] Full sequence
[12/28 23:12:56][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 23:12:56][INFO] [UnityDataset] Found 5 sequences.
[12/28 23:12:56][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 23:12:56][INFO]
[12/28 23:13:02][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 23:13:35][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_5/checkpoints'
[12/28 23:14:02][INFO] Start Fitting...
[12/28 23:14:12][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/28 23:14:13][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
[12/28 23:14:13][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/28 23:14:15][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/28 23:14:18][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/28 23:14:21][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/28 23:14:54][INFO] 5 sequences evaluated in MetricMocap
[12/28 23:14:54][INFO] monitored metric mpjpe per sequence
756.7 : 100_biboo_birthday_speech_explosion_1
719.1 : 105_biboo_birthday_speech_explosion_6
711.3 : 101_biboo_birthday_speech_explosion_2
669.2 : 102_biboo_birthday_speech_explosion_3
619.6 : 103_biboo_birthday_speech_explosion_4
------
[12/28 23:14:54][INFO] [Metrics] EMDB_1: pa_mpjpe: 234.3 mpjpe: 695.2 pve: 812.0 accel: 8.1
------
[12/28 23:14:54][INFO] 5 sequences evaluated in MetricMocap
[12/28 23:14:54][INFO] monitored metric wa2_mpjpe per sequence
5629.3 : 100_biboo_birthday_speech_explosion_1
3964.5 : 102_biboo_birthday_speech_explosion_3
3530.3 : 101_biboo_birthday_speech_explosion_2
3050.9 : 103_biboo_birthday_speech_explosion_4
2628.2 : 105_biboo_birthday_speech_explosion_6
------
[12/28 23:14:54][INFO] [Metrics] EMDB_2: wa2_mpjpe: 3760.7 waa_mpjpe: 422.7 rte: 419.6 jitter: 960.2 fs: 249.6
------
[12/28 23:14:54][INFO] 0 sequences evaluated in MetricMocap
[12/28 23:14:54][INFO] 0 sequences evaluated in MetricMocap
[12/28 23:14:54][INFO] 0 sequences evaluated in MetricMocap
[12/28 23:14:54][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_1/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 23:14:54][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_1/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 23:14:54][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_1/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 23:14:54][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_1/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 23:14:54][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/wa2_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 23:14:54][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/waa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 23:14:54][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/rte', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 23:14:54][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/jitter', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 23:14:54][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_EMDB_2/fs', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/28 23:14:54][INFO] ✅[FIT][Epoch 0] finished! 00:42→00:00 | loss_epoch=132
[12/28 23:15:44][INFO] Manually saved checkpoint to /root/miko/puni/train/GENMO/checkpoints/manual_epoch_0.ckpt
[12/28 23:15:47][INFO] End of script.
[12/28 23:29:49][INFO] [Exp Name]: finetune_
[12/28 23:29:49][INFO] [GPU x Batch] = 1 x 1
[12/28 23:29:49][INFO] [UnityDataset] Found 5 sequences.
[12/28 23:29:49][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 23:29:49][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 23:29:49][INFO]
[12/28 23:29:52][INFO] [HumanML3D] Loading from inputs/HumanML3D_SMPL/hmr4d_support/humanml3d_smplhpose_train.pth ...
[12/28 23:29:52][WARNING] [val] Skipping humanml3d_eval due to error: Error in call to target 'genmo.datasets.pure_motion.humanml3d.Humanml3dDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.humanml3d_eval
[12/28 23:29:52][INFO] [EMDB] Full sequence, split=1
[12/28 23:29:52][WARNING] [val] Skipping emdb1_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb1_fliptest
[12/28 23:29:52][INFO] [EMDB] Full sequence, split=2
[12/28 23:29:52][WARNING] [val] Skipping emdb2_fliptest due to error: Error in call to target 'genmo.datasets.emdb.emdb_motion_test.EmdbSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.emdb2_fliptest
[12/28 23:29:52][INFO] [RICH] Full sequence, Test
[12/28 23:29:52][WARNING] [val] Skipping rich_test due to error: Error in call to target 'genmo.datasets.rich.rich_motion_test.RichSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.rich_test
[12/28 23:29:52][INFO] [3DPW] Full sequence
[12/28 23:29:52][WARNING] [val] Skipping 3dpw_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_motion_test.ThreedpwSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_fliptest
[12/28 23:29:52][INFO] [3DPW_OCC] Full sequence
[12/28 23:29:52][WARNING] [val] Skipping 3dpw_occ_fliptest due to error: Error in call to target 'genmo.datasets.threedpw.threedpw_occ_motion_test.ThreedpwOccSmplFullSeqDataset': FileNotFoundError(2, 'No such file or directory') full_key: dataset_opts.val.3dpw_occ_fliptest
[12/28 23:29:52][INFO] [UnityDataset] Found 5 sequences.
[12/28 23:29:52][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 23:29:52][INFO]
[12/28 23:29:57][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 23:30:31][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_6/checkpoints'
[12/28 23:58:20][INFO] [Exp Name]: finetune_
[12/28 23:58:20][INFO] [GPU x Batch] = 1 x 128
[12/28 23:58:20][INFO] [UnityDataset] Found 5 sequences.
[12/28 23:58:20][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 23:58:20][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/28 23:58:20][INFO]
[12/28 23:58:20][INFO] [UnityDataset] Found 5 sequences.
[12/28 23:58:20][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/28 23:58:20][INFO]
[12/28 23:58:26][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/28 23:58:41][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_7/checkpoints'
[12/28 23:58:41][INFO] Start Fitting...
[12/28 23:59:01][INFO] End of script.
[12/29 00:01:50][INFO] [Exp Name]: finetune_
[12/29 00:01:50][INFO] [GPU x Batch] = 1 x 1
[12/29 00:01:50][INFO] [UnityDataset] Found 5 sequences.
[12/29 00:01:50][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 00:01:50][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/29 00:01:50][INFO]
[12/29 00:01:50][INFO] [UnityDataset] Found 5 sequences.
[12/29 00:01:50][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 00:01:50][INFO]
[12/29 00:01:57][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/29 00:02:07][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_8/checkpoints'
[12/29 00:02:07][INFO] Start Fitting...
[12/29 00:02:21][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/29 00:02:54][INFO] ✅[FIT][Epoch 0] finished! 00:34→00:00 | loss_epoch=132
[12/29 00:03:41][INFO] Manually saved checkpoint to /root/miko/puni/train/GENMO/checkpoints/manual_epoch_0.ckpt
[12/29 00:03:44][INFO] End of script.
[12/29 00:05:19][INFO] [Exp Name]: finetune_
[12/29 00:05:19][INFO] [GPU x Batch] = 1 x 1
[12/29 00:05:19][INFO] [UnityDataset] Found 5 sequences.
[12/29 00:05:19][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 00:05:19][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/29 00:05:19][INFO]
[12/29 00:05:19][INFO] [UnityDataset] Found 5 sequences.
[12/29 00:05:19][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 00:05:19][INFO]
[12/29 00:05:26][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/29 00:05:43][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_9/checkpoints'
[12/29 00:05:43][INFO] Start Fitting...
[12/29 00:06:02][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/29 00:06:34][INFO] ✅[FIT][Epoch 0] finished! 00:34→00:00 | loss_epoch=132
[12/29 00:06:36][INFO] End of script.
[12/29 00:08:20][INFO] [Exp Name]: finetune_
[12/29 00:08:20][INFO] [GPU x Batch] = 1 x 1
[12/29 00:08:55][INFO] [Exp Name]: finetune_
[12/29 00:08:55][INFO] [GPU x Batch] = 1 x 1
[12/29 00:08:55][INFO] [UnityDataset] Found 5 sequences.
[12/29 00:08:55][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 00:08:55][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/29 00:08:55][INFO]
[12/29 00:08:55][INFO] [UnityDataset] Found 5 sequences.
[12/29 00:08:55][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 00:08:55][INFO]
[12/29 00:09:01][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/29 00:09:16][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_10/checkpoints'
[12/29 00:09:17][INFO] Start Fitting...
[12/29 00:09:29][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/29 00:10:03][INFO] ✅[FIT][Epoch 0] finished! 00:35→00:00 | loss_epoch=132
[12/29 00:12:38][INFO] End of script.
[12/29 00:25:51][INFO] [Exp Name]: finetune_
[12/29 00:25:51][INFO] [GPU x Batch] = 1 x 1
[12/29 00:25:51][INFO] [UnityDataset] Found 5 sequences.
[12/29 00:25:51][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 00:25:51][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/29 00:25:51][INFO]
[12/29 00:25:51][INFO] [UnityDataset] Found 5 sequences.
[12/29 00:25:51][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 00:25:51][INFO]
[12/29 00:25:58][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/29 00:26:20][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_0/checkpoints'
[12/29 00:26:32][INFO] Start Fitting...
[12/29 00:26:33][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/29 00:26:34][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
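The repeated `val_dataloader` warning above points at a standard fix: give the validation `DataLoader` worker processes. A minimal sketch, with a hypothetical `ToyDataset` standing in for the log's `UnityDataset` (5 sequences):

```python
from torch.utils.data import DataLoader, Dataset

class ToyDataset(Dataset):
    """Hypothetical stand-in for the log's UnityDataset (5 sequences)."""
    def __len__(self):
        return 5
    def __getitem__(self, idx):
        return idx

# The warning suggests num_workers=11 (one per CPU on that machine); any
# value > 0 moves sample loading into worker processes so the GPU is not
# starved while validation batches are prepared. persistent_workers keeps
# the workers alive between validation epochs.
val_loader = DataLoader(ToyDataset(), batch_size=1, num_workers=2,
                        persistent_workers=True)
```

With only 5 short validation sequences, as here, the warning is mostly noise and worker startup overhead can even outweigh the gain; it matters for larger validation sets.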
[12/29 00:26:34][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/29 00:26:34][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/29 00:26:35][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/29 00:26:39][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/29 00:27:08][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 00:27:08][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 00:27:08][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 00:27:08][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 00:27:08][INFO] ✅[FIT][Epoch 0] finished! 00:36→00:00 | loss_epoch=132
[12/29 00:29:24][INFO] End of script.
[12/29 00:41:56][INFO] [Exp Name]: finetune_
[12/29 00:41:56][INFO] [GPU x Batch] = 1 x 1
[12/29 00:41:56][INFO] [UnityDataset] Found 5 sequences.
[12/29 00:41:56][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 00:41:56][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/29 00:41:56][INFO]
[12/29 00:41:56][INFO] [UnityDataset] Found 5 sequences.
[12/29 00:41:56][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 00:41:56][INFO]
[12/29 00:42:03][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/29 00:42:26][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_1/checkpoints'
[12/29 00:42:35][INFO] Start Fitting...
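The `sync_dist=True` recommendations above all follow one pattern: epoch-level validation metrics should be reduced across devices before logging. A sketch of that pattern (`pl_module` stands for a LightningModule; the metric names mirror the log, but the helper itself is hypothetical):

```python
# Epoch-level validation metrics, as named in the log.
EPOCH_METRICS = ("pa_mpjpe", "mpjpe", "pve", "accel")

def log_val_metrics(pl_module, prefix, values):
    """Log epoch-level metrics the way the Lightning warning recommends."""
    for name in EPOCH_METRICS:
        # sync_dist=True reduces the value across devices before it is
        # recorded, so every rank logs the same accumulated metric.
        pl_module.log(f"val_metric_{prefix}/{name}", values[name],
                      on_epoch=True, sync_dist=True)
```

On a single-GPU run like this one (`[GPU x Batch] = 1 x 1`) the warning is harmless, since there is nothing to accumulate across devices; adding `sync_dist=True` simply makes the logging correct if the same code is later launched with DDP.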
[12/29 00:42:36][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/29 00:42:36][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/29 00:42:36][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/29 00:42:38][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return F.conv1d(input, weight, bias, self.stride,
[12/29 00:42:39][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/29 00:42:42][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/29 00:44:35][INFO] [Exp Name]: finetune_
[12/29 00:44:35][INFO] [GPU x Batch] = 1 x 1
[12/29 00:44:35][INFO] [UnityDataset] Found 5 sequences.
[12/29 00:44:35][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 00:44:35][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/29 00:44:35][INFO]
[12/29 00:44:35][INFO] [UnityDataset] Found 5 sequences.
[12/29 00:44:35][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 00:44:35][INFO]
[12/29 00:44:41][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/29 00:45:02][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_2/checkpoints'
[12/29 00:45:14][INFO] Start Fitting...
[12/29 00:45:16][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/29 00:45:17][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/29 00:45:17][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/29 00:45:19][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return F.conv1d(input, weight, bias, self.stride,
[12/29 00:45:21][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/29 00:45:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/29 00:46:45][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 00:46:45][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 00:46:45][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 00:46:45][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 00:46:45][INFO] ✅[FIT][Epoch 0] finished! 01:29→00:00 | loss_epoch=132
[12/29 00:53:28][INFO] [Exp Name]: finetune_
[12/29 00:53:28][INFO] [GPU x Batch] = 1 x 1
[12/29 00:53:28][INFO] [UnityDataset] Found 5 sequences.
[12/29 00:53:28][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 00:53:28][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/29 00:53:28][INFO]
[12/29 00:53:28][INFO] [UnityDataset] Found 5 sequences.
[12/29 00:53:28][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 00:53:28][INFO]
[12/29 00:53:36][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/29 00:53:59][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_3/checkpoints'
[12/29 00:54:10][INFO] Start Fitting...
[12/29 00:54:12][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/29 00:54:12][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/29 00:54:12][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/29 00:54:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return F.conv1d(input, weight, bias, self.stride,
[12/29 00:54:16][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/29 00:54:20][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/29 00:55:39][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 00:55:39][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 00:55:39][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 00:55:39][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 00:55:39][INFO] ✅[FIT][Epoch 0] finished! 01:28→00:00 | loss_epoch=132
[12/29 00:57:13][INFO] End of script.
[12/29 01:03:29][INFO] [Exp Name]: finetune_
[12/29 01:03:29][INFO] [GPU x Batch] = 1 x 1
[12/29 01:03:29][INFO] [UnityDataset] Found 5 sequences.
[12/29 01:03:29][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 01:03:29][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/29 01:03:29][INFO]
[12/29 01:03:29][INFO] [UnityDataset] Found 5 sequences.
[12/29 01:03:29][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 01:03:29][INFO]
[12/29 01:03:37][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/29 01:04:00][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_4/checkpoints'
[12/29 01:04:09][INFO] Start Fitting...
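The `data_connector.py:434` bottleneck warning repeats on every run above; it asks for more worker processes on the validation `DataLoader`. A minimal sketch with a dummy `TensorDataset` (the dataset and the worker count of 11 are taken from the warning text, not from the project's dataloader config):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(8, 3))

# num_workers > 0 spawns that many background processes that prefetch
# batches in parallel; the warning suggests 11 for this machine.
# Workers are only started when the loader is iterated.
val_loader = DataLoader(dataset, batch_size=1, shuffle=False, num_workers=11)

assert val_loader.num_workers == 11
```

In this codebase the value presumably comes from a Hydra/dataloader option rather than a literal, so the practical fix is overriding that option rather than editing the `DataLoader` call.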
[12/29 01:04:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/29 01:04:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/29 01:04:10][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/29 01:04:11][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return F.conv1d(input, weight, bias, self.stride,
[12/29 01:04:12][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/29 01:04:16][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/29 01:05:36][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 01:05:36][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 01:05:36][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 01:05:36][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 01:05:36][INFO] ✅[FIT][Epoch 0] finished! 01:26→00:00 | loss_epoch=132
[12/29 01:11:36][INFO] [Exp Name]: finetune_
[12/29 01:11:36][INFO] [GPU x Batch] = 1 x 1
[12/29 01:11:36][INFO] [UnityDataset] Found 5 sequences.
[12/29 01:11:36][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 01:11:36][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/29 01:11:36][INFO]
[12/29 01:11:36][INFO] [UnityDataset] Found 5 sequences.
[12/29 01:11:36][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/29 01:11:36][INFO]
[12/29 01:11:43][INFO] [PL-Trainer] Loading ckpt: /root/miko/puni/train/GENMO/s050000.ckpt
[12/29 01:12:08][INFO] [Simple Ckpt Saver]: Save to `outputs/unity_finetune_v1/version_5/checkpoints'
[12/29 01:12:16][INFO] Start Fitting...
[12/29 01:12:17][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/29 01:12:17][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/29 01:12:18][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/29 01:12:18][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return F.conv1d(input, weight, bias, self.stride,
[12/29 01:12:19][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/29 01:12:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/29 01:13:44][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 01:13:44][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 01:13:44][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 01:13:44][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 01:13:44][INFO] ✅[FIT][Epoch 0] finished! 01:27→00:00 | loss_epoch=132
[12/29 01:15:56][INFO] End of script.
[12/29 19:01:58][INFO] [Exp Name]: finetune_
[12/29 19:01:58][INFO] [GPU x Batch] = 1 x 1
[12/29 19:01:58][INFO] [UnityDataset] Found 2 sequences.
[12/29 19:01:58][INFO] [Train Dataset][9/9]: name=unity, size=2, genmo.datasets.unity_dataset.UnityDataset
[12/29 19:01:58][INFO] [Train Dataset][All]: ConcatDataset size=2
[12/29 19:01:58][INFO]
[12/29 19:01:58][INFO] [UnityDataset] Found 2 sequences.
[12/29 19:01:58][INFO] [Val Dataset][7/7]: name=unity_val, size=2, genmo.datasets.unity_dataset.UnityDataset
[12/29 19:01:58][INFO]
[12/29 19:02:06][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/29 19:02:27][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_0/checkpoints'
[12/29 19:02:40][INFO] Start Fitting...
[12/29 19:02:42][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/29 19:02:42][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/29 19:02:42][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/29 19:02:44][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return F.conv1d(input, weight, bias, self.stride,
[12/29 19:02:45][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/29 19:02:46][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/29 19:02:58][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 19:02:58][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
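The paired `conv.py:306` / `graph.py:744` cuDNN warnings fire on every forward and backward pass above. In PyTorch's cuDNN v8 frontend this UserWarning is emitted when one candidate execution plan is rejected and the framework falls back to another, so the convolution still runs; to my understanding it is log noise rather than a failure. If the noise is unwanted, one option (an assumption about the desired behavior, not a project setting) is to filter exactly that warning:

```python
import warnings

# The cuDNN v8 "Plan failed with a cudnnException" UserWarning is raised when
# one candidate conv execution plan fails and PyTorch falls back to another;
# the op itself still executes. Suppress just that message:
warnings.filterwarnings(
    "ignore",
    message=r"Plan failed with a cudnnException",
    category=UserWarning,
)

# A heavier alternative is bypassing cuDNN for convolutions entirely
# (typically slower, so only worth it if the fallback itself misbehaves):
# torch.backends.cudnn.enabled = False
```

The filter matches on the start of the message, so other UserWarnings are unaffected.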
[12/29 19:02:58][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/29 19:02:58][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/29 19:02:58][INFO] ✅[FIT][Epoch 0] finished! 00:16→00:00 | loss_epoch=168 [12/29 19:04:38][INFO] End of script. [12/29 19:10:14][INFO] [Exp Name]: finetune_ [12/29 19:10:14][INFO] [GPU x Batch] = 1 x 1 [12/29 19:10:14][INFO] [UnityDataset] Found 2 sequences. [12/29 19:10:14][INFO] [Train Dataset][9/9]: name=unity, size=2, genmo.datasets.unity_dataset.UnityDataset [12/29 19:10:14][INFO] [Train Dataset][All]: ConcatDataset size=2 [12/29 19:10:14][INFO] [12/29 19:10:14][INFO] [UnityDataset] Found 2 sequences. [12/29 19:10:14][INFO] [Val Dataset][7/7]: name=unity_val, size=2, genmo.datasets.unity_dataset.UnityDataset [12/29 19:10:14][INFO] [12/29 19:10:20][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt [12/29 19:10:54][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_1/checkpoints' [12/29 19:11:05][INFO] Start Fitting... [12/29 19:11:08][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead. [12/29 19:11:09][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. 
Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance. [12/29 19:11:09][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_ [12/29 19:11:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return F.conv1d(input, weight, bias, self.stride, [12/29 19:11:11][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass [12/29 19:11:12][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " [12/29 19:11:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. 
[12/29 19:11:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/29 19:11:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/29 19:11:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/29 19:11:24][INFO] ✅[FIT][Epoch 0] finished! 00:18→00:00 | loss_epoch=168 [12/29 19:13:38][INFO] End of script. [12/29 19:17:28][INFO] [Exp Name]: finetune_ [12/29 19:17:28][INFO] [GPU x Batch] = 1 x 1 [12/29 19:17:28][INFO] [UnityDataset] Found 2 sequences. [12/29 19:17:28][INFO] [Train Dataset][9/9]: name=unity, size=2, genmo.datasets.unity_dataset.UnityDataset [12/29 19:17:28][INFO] [Train Dataset][All]: ConcatDataset size=2 [12/29 19:17:28][INFO] [12/29 19:17:28][INFO] [UnityDataset] Found 2 sequences. [12/29 19:17:28][INFO] [Val Dataset][7/7]: name=unity_val, size=2, genmo.datasets.unity_dataset.UnityDataset [12/29 19:17:28][INFO] [12/29 19:17:36][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt [12/29 19:18:14][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_2/checkpoints' [12/29 19:18:22][INFO] Start Fitting... 
[12/29 19:18:23][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead. [12/29 19:18:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance. [12/29 19:18:24][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_ [12/29 19:18:25][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return F.conv1d(input, weight, bias, self.stride, [12/29 19:18:26][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass [12/29 19:18:27][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. 
See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " [12/29 19:19:32][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/29 19:19:32][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/29 19:19:32][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/29 19:19:32][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/29 19:19:32][INFO] ✅[FIT][Epoch 0] finished! 01:09→00:00 | loss_epoch=168 [12/29 19:20:28][INFO] End of script. [12/29 19:27:24][INFO] [Exp Name]: finetune_ [12/29 19:27:24][INFO] [GPU x Batch] = 1 x 1 [12/29 19:27:24][INFO] [UnityDataset] Found 2 sequences. [12/29 19:27:24][INFO] [Train Dataset][9/9]: name=unity, size=2, genmo.datasets.unity_dataset.UnityDataset [12/29 19:27:24][INFO] [Train Dataset][All]: ConcatDataset size=2 [12/29 19:27:24][INFO] [12/29 19:27:24][INFO] [UnityDataset] Found 2 sequences. 
[12/29 19:27:24][INFO] [Val Dataset][7/7]: name=unity_val, size=2, genmo.datasets.unity_dataset.UnityDataset
[12/29 19:27:24][INFO]
[12/29 19:27:31][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/29 19:28:08][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_3/checkpoints'
[12/29 19:28:17][INFO] Start Fitting...
[12/29 19:28:17][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/29 19:28:18][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/29 19:28:18][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/29 19:28:19][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return F.conv1d(input, weight, bias, self.stride,
[12/29 19:28:20][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/29 19:28:21][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/29 19:29:20][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 19:29:20][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 19:29:20][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 19:29:20][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 19:29:20][INFO] ✅[FIT][Epoch 0] finished! 01:03→00:00 | loss_epoch=31.9
[12/29 19:31:32][INFO] End of script.
[12/29 19:39:33][INFO] [Exp Name]: finetune_
[12/29 19:39:33][INFO] [GPU x Batch] = 1 x 1
[12/29 19:39:33][INFO] [UnityDataset] Found 2 sequences.
[12/29 19:39:33][INFO] [Train Dataset][9/9]: name=unity, size=2, genmo.datasets.unity_dataset.UnityDataset
[12/29 19:39:33][INFO] [Train Dataset][All]: ConcatDataset size=2
[12/29 19:39:33][INFO]
[12/29 19:39:33][INFO] [UnityDataset] Found 2 sequences.
[12/29 19:39:33][INFO] [Val Dataset][7/7]: name=unity_val, size=2, genmo.datasets.unity_dataset.UnityDataset
[12/29 19:39:33][INFO]
[12/29 19:39:42][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/29 19:40:16][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_4/checkpoints'
[12/29 19:40:23][INFO] Start Fitting...
[12/29 19:40:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/29 19:40:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/29 19:40:24][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/29 19:40:25][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return F.conv1d(input, weight, bias, self.stride,
[12/29 19:40:26][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/29 19:40:27][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/29 19:41:28][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 19:41:28][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 19:41:28][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 19:41:28][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 19:41:28][INFO] ✅[FIT][Epoch 0] finished! 01:04→00:00 | loss_epoch=31.9
[12/29 19:42:58][INFO] End of script.
[12/29 19:47:07][INFO] [Exp Name]: finetune_
[12/29 19:47:07][INFO] [GPU x Batch] = 1 x 1
[12/29 19:47:07][INFO] [UnityDataset] Found 1 sequences.
[12/29 19:47:07][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/29 19:47:07][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/29 19:47:07][INFO]
[12/29 19:47:07][INFO] [UnityDataset] Found 1 sequences.
[12/29 19:47:07][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/29 19:47:07][INFO]
[12/29 19:47:14][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/29 19:47:50][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_5/checkpoints'
[12/29 19:47:58][INFO] Start Fitting...
[12/29 19:47:59][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/29 19:48:00][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/29 19:48:00][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/29 19:48:00][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return F.conv1d(input, weight, bias, self.stride,
[12/29 19:48:02][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/29 19:48:02][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/29 19:49:03][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 19:49:03][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 19:49:03][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 19:49:03][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/29 19:49:03][INFO] ✅[FIT][Epoch 0] finished! 01:04→00:00 | loss_epoch=31.6
[12/29 19:50:03][INFO] End of script.
[12/30 05:08:55][INFO] [Exp Name]: finetune_
[12/30 05:08:55][INFO] [GPU x Batch] = 1 x 1
[12/30 05:08:55][INFO] [UnityDataset] Found 3 sequences.
[12/30 05:08:55][INFO] [Train Dataset][9/9]: name=unity, size=3, genmo.datasets.unity_dataset.UnityDataset
[12/30 05:08:55][INFO] [Train Dataset][All]: ConcatDataset size=3
[12/30 05:08:55][INFO]
[12/30 05:08:55][INFO] [UnityDataset] Found 3 sequences.
[12/30 05:08:55][INFO] [Val Dataset][7/7]: name=unity_val, size=3, genmo.datasets.unity_dataset.UnityDataset
[12/30 05:08:55][INFO]
[12/30 05:09:01][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 05:09:50][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_6/checkpoints'
[12/30 05:10:09][INFO] Start Fitting...
[12/30 05:10:11][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 05:10:12][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 05:10:12][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 05:14:42][INFO] [Exp Name]: finetune_
[12/30 05:14:42][INFO] [GPU x Batch] = 1 x 1
[12/30 05:14:42][INFO] [UnityDataset] Found 3 sequences.
[12/30 05:14:42][INFO] [Train Dataset][9/9]: name=unity, size=3, genmo.datasets.unity_dataset.UnityDataset
[12/30 05:14:42][INFO] [Train Dataset][All]: ConcatDataset size=3
[12/30 05:14:42][INFO]
[12/30 05:14:42][INFO] [UnityDataset] Found 3 sequences.
[12/30 05:14:42][INFO] [Val Dataset][7/7]: name=unity_val, size=3, genmo.datasets.unity_dataset.UnityDataset
[12/30 05:14:42][INFO]
[12/30 05:14:49][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 05:15:30][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_7/checkpoints'
[12/30 05:15:42][INFO] Start Fitting...
[12/30 05:15:45][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 05:15:45][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 05:15:45][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 05:15:47][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return F.conv1d(input, weight, bias, self.stride,
[12/30 05:15:49][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/30 05:15:51][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/30 05:16:05][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9820 pred=+2.3235 delta(pred-gt)=+1.3415
[12/30 05:17:00][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 05:17:00][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 05:17:00][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 05:17:00][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 05:17:00][INFO] ✅[FIT][Epoch 0] finished! 01:16→00:00 | loss_epoch=37
[12/30 05:19:15][INFO] End of script.
[12/30 05:33:19][INFO] [Exp Name]: finetune_
[12/30 05:33:19][INFO] [GPU x Batch] = 1 x 1
[12/30 05:33:19][INFO] [UnityDataset] Found 1 sequences.
[12/30 05:33:19][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/30 05:33:19][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/30 05:33:19][INFO]
[12/30 05:33:19][INFO] [UnityDataset] Found 1 sequences.
[12/30 05:33:19][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/30 05:33:19][INFO]
[12/30 05:33:26][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 05:34:04][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_8/checkpoints'
[12/30 05:34:15][INFO] Start Fitting...
[12/30 05:34:17][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 05:34:17][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 05:34:17][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 05:35:55][INFO] [Exp Name]: finetune_
[12/30 05:35:55][INFO] [GPU x Batch] = 1 x 1
[12/30 05:35:56][INFO] [UnityDataset] Found 1 sequences.
[12/30 05:35:56][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/30 05:35:56][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/30 05:35:56][INFO]
[12/30 05:35:56][INFO] [UnityDataset] Found 1 sequences.
[12/30 05:35:56][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/30 05:35:56][INFO]
[12/30 05:36:01][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 05:36:41][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_9/checkpoints'
[12/30 05:36:54][INFO] Start Fitting...
[12/30 05:36:56][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 05:36:56][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 05:36:56][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 05:36:58][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return F.conv1d(input, weight, bias, self.stride,
[12/30 05:37:00][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/30 05:37:01][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/30 05:37:15][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9875 pred=+0.9816 delta(pred-gt)=-0.0059
[12/30 05:38:00][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 05:38:00][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 05:38:00][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 05:38:00][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 05:38:00][INFO] ✅[FIT][Epoch 0] finished! 01:05→00:00 | loss_epoch=14.1
[12/30 05:39:34][INFO] End of script.
[12/30 05:57:45][INFO] [Exp Name]: finetune_
[12/30 05:57:45][INFO] [GPU x Batch] = 1 x 1
[12/30 05:57:46][INFO] [UnityDataset] Found 5 sequences.
[12/30 05:57:46][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/30 05:57:46][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/30 05:57:46][INFO]
[12/30 05:57:46][INFO] [UnityDataset] Found 5 sequences.
[12/30 05:57:46][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/30 05:57:46][INFO]
[12/30 05:57:55][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 05:58:35][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_10/checkpoints'
[12/30 05:58:47][INFO] Start Fitting...
[12/30 05:58:49][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 05:58:49][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 05:58:49][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 05:58:51][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return F.conv1d(input, weight, bias, self.stride,
[12/30 05:58:54][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/30 05:58:58][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/30 05:59:11][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9894 pred=+0.9799 delta(pred-gt)=-0.0094
[12/30 06:00:21][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 06:00:21][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 06:00:21][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 06:00:21][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 06:00:21][INFO] ✅[FIT][Epoch 0] finished! 01:33→00:00 | loss_epoch=38.8
[12/30 06:01:56][INFO] End of script.
[12/30 06:03:41][INFO] [Exp Name]: finetune_
[12/30 06:03:41][INFO] [GPU x Batch] = 1 x 1
[12/30 06:03:41][INFO] [UnityDataset] Found 5 sequences.
[12/30 06:03:41][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/30 06:03:41][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/30 06:03:41][INFO]
[12/30 06:03:41][INFO] [UnityDataset] Found 5 sequences.
[12/30 06:03:41][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/30 06:03:41][INFO]
[12/30 06:03:50][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 06:04:17][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_11/checkpoints'
[12/30 06:04:29][INFO] Start Fitting...
[12/30 06:04:30][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 06:04:31][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 06:04:31][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 06:04:32][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return F.conv1d(input, weight, bias, self.stride,
[12/30 06:04:33][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/30 06:04:37][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/30 06:04:50][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9894 pred=+0.9799 delta(pred-gt)=-0.0094
[12/30 06:05:58][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 06:05:58][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 06:05:58][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 06:05:58][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 06:05:58][INFO] ✅[FIT][Epoch 0] finished! 01:29→05:56 | loss_epoch=38.8
[12/30 06:07:32][INFO] 🚀[FIT][Epoch 1] Data: unity Experiment: finetune_
[12/30 06:07:48][INFO] [VisUnityVal] e001_0_biboo_birthday_speech root_y0: gt=+0.9954 pred=+0.8699 delta(pred-gt)=-0.1255
[12/30 06:09:05][INFO] ✅[FIT][Epoch 1] finished! 04:35→06:53 | loss_epoch=82.5
[12/30 06:12:43][INFO] [Exp Name]: finetune_
[12/30 06:12:43][INFO] [GPU x Batch] = 1 x 1
[12/30 06:12:43][INFO] [UnityDataset] Found 4 sequences.
[12/30 06:12:43][INFO] [Train Dataset][9/9]: name=unity, size=4, genmo.datasets.unity_dataset.UnityDataset
[12/30 06:12:43][INFO] [Train Dataset][All]: ConcatDataset size=4
[12/30 06:12:43][INFO]
[12/30 06:12:43][INFO] [UnityDataset] Found 4 sequences.
[12/30 06:12:43][INFO] [Val Dataset][7/7]: name=unity_val, size=4, genmo.datasets.unity_dataset.UnityDataset
[12/30 06:12:43][INFO]
[12/30 06:12:51][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 06:13:16][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_12/checkpoints'
[12/30 06:13:27][INFO] Start Fitting...
[12/30 06:13:29][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 06:13:29][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 06:13:29][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 06:13:29][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 06:13:31][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
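The `data_connector.py` warnings above suggest raising `num_workers` on the `DataLoader`. A minimal sketch with a toy dataset (the real loaders are built by the genmo/GVHMR config, so the dataset and batch size here are placeholders):

```python
import os
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset standing in for the Unity sequences.
ds = TensorDataset(torch.arange(16, dtype=torch.float32).unsqueeze(1))

# The warning suggests num_workers=11 on this machine; a common rule of
# thumb is to start near os.cpu_count() and tune down if workers starve.
loader = DataLoader(
    ds,
    batch_size=4,
    num_workers=min(11, os.cpu_count() or 1),
    persistent_workers=True,  # keep workers alive across epochs
)
```

With 16 samples and `batch_size=4` this loader yields 4 batches per epoch; the `num_workers` choice only affects throughput, not the batches themselves.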
  return F.conv1d(input, weight, bias, self.stride,
[12/30 06:13:33][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/30 06:13:36][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/30 06:13:48][INFO] [VisUnityVal] e000_100_biboo_birthday_speech_explosion_1 root_y0: gt=+0.9883 pred=+0.5209 delta(pred-gt)=-0.4674
[12/30 06:14:49][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 06:14:49][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 06:14:49][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 06:14:49][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 06:14:49][INFO] ✅[FIT][Epoch 0] finished! 01:21→05:24 | loss_epoch=46.4
[12/30 06:17:01][INFO] 🚀[FIT][Epoch 1] Data: unity Experiment: finetune_
[12/30 06:17:14][INFO] [VisUnityVal] e001_100_biboo_birthday_speech_explosion_1 root_y0: gt=+0.9894 pred=+0.9280 delta(pred-gt)=-0.0614
[12/30 06:18:24][INFO] ✅[FIT][Epoch 1] finished! 04:55→07:23 | loss_epoch=70.8
[12/30 07:06:34][INFO] [Exp Name]: finetune_
[12/30 07:06:34][INFO] [GPU x Batch] = 1 x 1
[12/30 07:06:34][WARNING] [Train Dataset] Skipping unity due to error: Error in call to target 'genmo.datasets.unity_dataset.UnityDataset': FileNotFoundError('Feature dir not found: third_party/GVHMR/processed_dataset/genmo_features') full_key: dataset_opts.train.unity
[12/30 07:22:07][INFO] [Exp Name]: finetune_
[12/30 07:22:07][INFO] [GPU x Batch] = 1 x 1
[12/30 07:22:07][INFO] [UnityDataset] Found 1 sequences.
[12/30 07:22:07][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/30 07:22:07][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/30 07:22:07][INFO]
[12/30 07:22:07][INFO] [UnityDataset] Found 1 sequences.
[12/30 07:22:07][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/30 07:22:07][INFO]
[12/30 07:22:13][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 07:22:36][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_0/checkpoints'
[12/30 07:22:46][INFO] Start Fitting...
[12/30 07:22:47][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 07:22:47][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 07:22:48][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 07:22:48][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 07:22:49][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/30 07:22:51][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/30 07:22:51][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/30 07:23:03][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9875 pred=+0.9816 delta(pred-gt)=-0.0059
[12/30 07:23:03][INFO] [VisUnityVal] e000_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=+148.05
[12/30 07:23:47][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 07:23:47][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 07:23:47][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 07:23:47][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 07:23:47][INFO] ✅[FIT][Epoch 0] finished! 01:00→04:02 | loss_epoch=14.1
[12/30 07:25:32][INFO] 🚀[FIT][Epoch 1] Data: unity Experiment: finetune_
[12/30 07:25:42][INFO] [VisUnityVal] e001_0_biboo_birthday_speech root_y0: gt=+0.9948 pred=+0.9890 delta(pred-gt)=-0.0058
[12/30 07:25:42][INFO] [VisUnityVal] e001_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=+99.30
[12/30 07:26:26][INFO] ✅[FIT][Epoch 1] finished! 03:39→05:29 | loss_epoch=21.5
[12/30 07:28:44][INFO] [Exp Name]: finetune_
[12/30 07:28:44][INFO] [GPU x Batch] = 1 x 1
[12/30 07:28:44][INFO] [UnityDataset] Found 1 sequences.
[12/30 07:28:44][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/30 07:28:44][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/30 07:28:44][INFO]
[12/30 07:28:44][INFO] [UnityDataset] Found 1 sequences.
[12/30 07:28:44][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/30 07:28:44][INFO]
[12/30 07:28:51][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 07:29:13][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_1/checkpoints'
[12/30 07:29:22][INFO] Start Fitting...
[12/30 07:29:23][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 07:29:23][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 07:29:23][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 07:29:23][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 07:29:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/30 07:29:26][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/30 07:29:26][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/30 07:29:38][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9875 pred=+0.9816 delta(pred-gt)=-0.0059
[12/30 07:29:38][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_aa(gt)=[ 0.01689476 -0.20703591  0.01797612] global_orient0_aa(pred)=[-0.02222293 -2.7995787  -0.05008147]
[12/30 07:29:38][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_yxz_deg gt=(-11.85,+1.07,+0.92) pred=(-160.44,-2.14,+0.54) pred_vs_gt=(-148.56,-3.06,-1.03)
[12/30 07:29:38][INFO] [VisUnityVal] e000_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=+148.05
[12/30 07:30:22][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 07:30:22][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
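The `global_orient0_aa` entries above report axis-angle root orientations, and `yaw0_deg` their heading error. A minimal NumPy sketch of extracting a heading angle from an axis-angle vector; the heading convention here (rotated z-axis projected onto the ground plane) is one plausible choice and may not match the log's exact yxz decomposition:

```python
import numpy as np

def axis_angle_to_matrix(aa):
    """Rodrigues' formula: axis-angle vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(aa)
    if theta < 1e-8:
        return np.eye(3)
    k = aa / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def yaw_deg(aa):
    """Heading about the world y-axis, from where the rotated z-axis points."""
    R = axis_angle_to_matrix(np.asarray(aa, dtype=np.float64))
    fwd = R @ np.array([0.0, 0.0, 1.0])
    return float(np.degrees(np.arctan2(fwd[0], fwd[2])))
```

For the gt orientation logged above, `yaw_deg([0.01689476, -0.20703591, 0.01797612])` comes out near the -11.85° the log reports, since that vector is dominated by a rotation about y.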
[12/30 07:30:22][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 07:30:22][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 07:30:22][INFO] ✅[FIT][Epoch 0] finished! 01:00→04:00 | loss_epoch=14.1
[12/30 07:32:42][INFO] 🚀[FIT][Epoch 1] Data: unity Experiment: finetune_
[12/30 07:32:51][INFO] [VisUnityVal] e001_0_biboo_birthday_speech root_y0: gt=+0.9948 pred=+0.9890 delta(pred-gt)=-0.0058
[12/30 07:32:51][INFO] [VisUnityVal] e001_0_biboo_birthday_speech global_orient0_aa(gt)=[ 0.03199771 -0.16761649  0.02280195] global_orient0_aa(pred)=[-0.01023587 -1.876039    0.0269159 ]
[12/30 07:32:51][INFO] [VisUnityVal] e001_0_biboo_birthday_speech global_orient0_yxz_deg gt=(-9.59,+1.93,+1.15) pred=(-107.49,+0.77,+1.19) pred_vs_gt=(-97.90,-1.15,-0.15)
[12/30 07:32:51][INFO] [VisUnityVal] e001_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=+99.30
[12/30 08:07:02][INFO] [Exp Name]: finetune_
[12/30 08:07:02][INFO] [GPU x Batch] = 1 x 1
[12/30 08:07:02][INFO] [UnityDataset] Found 8 sequences.
[12/30 08:07:02][INFO] [Train Dataset][9/9]: name=unity, size=8, genmo.datasets.unity_dataset.UnityDataset
[12/30 08:07:02][INFO] [Train Dataset][All]: ConcatDataset size=8
[12/30 08:07:02][INFO]
[12/30 08:07:02][INFO] [UnityDataset] Found 8 sequences.
[12/30 08:07:02][INFO] [Val Dataset][7/7]: name=unity_val, size=8, genmo.datasets.unity_dataset.UnityDataset
[12/30 08:07:02][INFO]
[12/30 08:07:09][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 08:07:32][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_2/checkpoints'
[12/30 08:07:43][INFO] Start Fitting...
[12/30 08:07:45][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 08:07:45][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 08:07:45][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 08:07:45][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 08:07:47][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/30 08:07:49][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/30 08:08:09][INFO] [VisUnityVal] e000_10_biboo_birthday_speech_poke_large_object root_y0: gt=+0.9749 pred=+0.8970 delta(pred-gt)=-0.0778
[12/30 08:08:09][INFO] [VisUnityVal] e000_10_biboo_birthday_speech_poke_large_object global_orient0_aa(gt)=[ 0.01488011  1.3818389  -0.01262669] global_orient0_aa(pred)=[-0.6875641   2.6614892  -0.57625073]
[12/30 08:08:09][INFO] [VisUnityVal] e000_10_biboo_birthday_speech_poke_large_object global_orient0_yxz_deg gt=(+79.18,+1.03,-0.01) pred=(+154.73,+17.35,-32.89) pred_vs_gt=(+87.12,-27.10,-24.98)
[12/30 08:08:09][INFO] [VisUnityVal] e000_10_biboo_birthday_speech_poke_large_object yaw0_deg(pred_vs_gt)=-87.08
[12/30 08:09:33][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 08:09:33][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 08:09:33][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 08:09:33][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 08:09:33][INFO] ✅[FIT][Epoch 0] finished! 01:49→07:16 | loss_epoch=52.3
[12/30 22:16:05][INFO] [Exp Name]: finetune_
[12/30 22:16:05][INFO] [GPU x Batch] = 1 x 1
[12/30 22:16:05][INFO] [UnityDataset] Found 5 sequences.
[12/30 22:16:05][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/30 22:16:05][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/30 22:16:05][INFO]
[12/30 22:16:05][INFO] [UnityDataset] Found 5 sequences.
[12/30 22:16:05][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/30 22:16:05][INFO]
[12/30 22:26:06][INFO] [Exp Name]: finetune_
[12/30 22:26:06][INFO] [GPU x Batch] = 1 x 1
[12/30 22:26:06][INFO] [UnityDataset] Found 5 sequences.
[12/30 22:26:06][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/30 22:26:06][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/30 22:26:06][INFO]
[12/30 22:26:06][INFO] [UnityDataset] Found 5 sequences.
[12/30 22:26:06][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/30 22:26:06][INFO]
[12/30 22:26:11][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 22:26:42][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_3/checkpoints'
[12/30 22:26:54][INFO] Start Fitting...
[12/30 22:26:56][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 22:26:56][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 22:26:56][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 22:26:56][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 22:27:28][INFO] [Exp Name]: finetune_
[12/30 22:27:28][INFO] [GPU x Batch] = 1 x 1
[12/30 22:27:28][INFO] [UnityDataset] Found 5 sequences.
[12/30 22:27:28][INFO] [Train Dataset][9/9]: name=unity, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/30 22:27:28][INFO] [Train Dataset][All]: ConcatDataset size=5
[12/30 22:27:28][INFO]
[12/30 22:27:28][INFO] [UnityDataset] Found 5 sequences.
[12/30 22:27:28][INFO] [Val Dataset][7/7]: name=unity_val, size=5, genmo.datasets.unity_dataset.UnityDataset
[12/30 22:27:28][INFO]
[12/30 22:27:37][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 22:27:56][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_4/checkpoints'
[12/30 22:28:08][INFO] Start Fitting...
[12/30 22:28:11][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary.
Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 22:28:11][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 22:28:11][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 22:28:11][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 22:29:56][INFO] [Exp Name]: finetune_
[12/30 22:29:56][INFO] [GPU x Batch] = 1 x 1
[12/30 22:29:56][INFO] [UnityDataset] Found 2 sequences.
[12/30 22:29:56][INFO] [Train Dataset][9/9]: name=unity, size=2, genmo.datasets.unity_dataset.UnityDataset
[12/30 22:29:56][INFO] [Train Dataset][All]: ConcatDataset size=2
[12/30 22:29:56][INFO]
[12/30 22:29:56][INFO] [UnityDataset] Found 2 sequences.
[12/30 22:29:56][INFO] [Val Dataset][7/7]: name=unity_val, size=2, genmo.datasets.unity_dataset.UnityDataset
[12/30 22:29:56][INFO]
[12/30 22:30:02][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 22:30:17][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_5/checkpoints'
[12/30 22:30:30][INFO] Start Fitting...
[12/30 22:30:31][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 22:30:31][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 22:30:31][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 22:30:31][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 22:56:38][INFO] [Exp Name]: finetune_
[12/30 22:56:38][INFO] [GPU x Batch] = 1 x 1
[12/30 22:56:38][INFO] [UnityDataset] Found 6 sequences.
[12/30 22:56:38][INFO] [Train Dataset][9/9]: name=unity, size=6, genmo.datasets.unity_dataset.UnityDataset
[12/30 22:56:38][INFO] [Train Dataset][All]: ConcatDataset size=6
[12/30 22:56:38][INFO]
[12/30 22:56:38][INFO] [UnityDataset] Found 6 sequences.
[12/30 22:56:38][INFO] [Val Dataset][7/7]: name=unity_val, size=6, genmo.datasets.unity_dataset.UnityDataset
[12/30 22:56:38][INFO]
[12/30 22:56:44][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 22:57:07][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_6/checkpoints'
[12/30 22:57:27][INFO] Start Fitting...
[12/30 22:57:30][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 22:57:30][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck.
Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 22:57:31][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 22:57:31][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 22:57:34][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/30 22:57:36][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/30 22:57:40][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`.
" [12/30 22:57:47][WARNING] [VisUnityVal] Failed to read image: third_party/GVHMR/processed_dataset/images/0_biboo_birthday_speech/img_00699.jpg [12/30 22:58:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/30 22:58:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/30 22:58:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/30 22:58:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/30 22:58:14][INFO] ✅[FIT][Epoch 0] finished! 00:46→03:06 | loss_epoch=28 [12/30 23:01:18][INFO] [Exp Name]: finetune_ [12/30 23:01:18][INFO] [GPU x Batch] = 1 x 1 [12/30 23:01:18][INFO] [UnityDataset] Found 6 sequences. [12/30 23:01:18][INFO] [Train Dataset][9/9]: name=unity, size=6, genmo.datasets.unity_dataset.UnityDataset [12/30 23:01:18][INFO] [Train Dataset][All]: ConcatDataset size=6 [12/30 23:01:18][INFO] [12/30 23:01:18][INFO] [UnityDataset] Found 6 sequences. 
[12/30 23:01:18][INFO] [Val Dataset][7/7]: name=unity_val, size=6, genmo.datasets.unity_dataset.UnityDataset
[12/30 23:01:18][INFO]
[12/30 23:01:26][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 23:01:45][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_0/checkpoints'
[12/30 23:01:57][INFO] Start Fitting...
[12/30 23:01:59][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 23:01:59][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 23:01:59][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 23:01:59][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 23:02:01][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return F.conv1d(input, weight, bias, self.stride, [12/30 23:02:03][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass [12/30 23:02:07][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " [12/30 23:02:13][WARNING] [VisUnityVal] Failed to read image: third_party/GVHMR/processed_dataset/images/0_biboo_birthday_speech/img_00699.jpg [12/30 23:02:41][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. [12/30 23:02:41][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices. 
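The `lr_scheduler.step()` warning above means the first value of the learning-rate schedule is silently skipped whenever the scheduler steps before the optimizer in a training iteration. A hedged sketch of the order PyTorch expects, using trivial stubs instead of real `torch.optim` objects so the call sequence is visible:

```python
calls = []

class OptimizerStub:
    """Records when step() is called; stands in for a torch optimizer."""
    def step(self):
        calls.append("optimizer.step")

class SchedulerStub:
    """Records when step() is called; stands in for a torch lr scheduler."""
    def step(self):
        calls.append("scheduler.step")

optimizer, scheduler = OptimizerStub(), SchedulerStub()
for _ in range(3):  # three training steps
    # PyTorch >= 1.1.0: optimizer.step() must come before lr_scheduler.step(),
    # otherwise the first scheduled learning rate is skipped (per the warning)
    optimizer.step()
    scheduler.step()
```

In a Lightning setup this ordering is normally handled by the trainer, so the warning usually points at a manual `scheduler.step()` in the module's own hooks.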
[12/30 23:02:41][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 23:02:41][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 23:02:41][INFO] ✅[FIT][Epoch 0] finished! 00:43→02:53 | loss_epoch=28
[12/30 23:09:40][INFO] [Exp Name]: finetune_
[12/30 23:09:40][INFO] [GPU x Batch] = 1 x 1
[12/30 23:09:41][INFO] [UnityDataset] Found 6 sequences.
[12/30 23:09:41][INFO] [Train Dataset][9/9]: name=unity, size=6, genmo.datasets.unity_dataset.UnityDataset
[12/30 23:09:41][INFO] [Train Dataset][All]: ConcatDataset size=6
[12/30 23:09:41][INFO]
[12/30 23:09:41][INFO] [UnityDataset] Found 6 sequences.
[12/30 23:09:41][INFO] [Val Dataset][7/7]: name=unity_val, size=6, genmo.datasets.unity_dataset.UnityDataset
[12/30 23:09:41][INFO]
[12/30 23:09:49][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 23:10:08][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_1/checkpoints'
[12/30 23:10:17][INFO] Start Fitting...
[12/30 23:10:18][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 23:10:18][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 23:10:18][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 23:10:18][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 23:10:19][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/30 23:10:20][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/30 23:10:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/30 23:10:44][INFO] [Exp Name]: finetune_
[12/30 23:10:44][INFO] [GPU x Batch] = 1 x 1
[12/30 23:10:44][INFO] [UnityDataset] Found 6 sequences.
[12/30 23:10:44][INFO] [Train Dataset][9/9]: name=unity, size=6, genmo.datasets.unity_dataset.UnityDataset
[12/30 23:10:44][INFO] [Train Dataset][All]: ConcatDataset size=6
[12/30 23:10:44][INFO]
[12/30 23:10:44][INFO] [UnityDataset] Found 6 sequences.
[12/30 23:10:44][INFO] [Val Dataset][7/7]: name=unity_val, size=6, genmo.datasets.unity_dataset.UnityDataset
[12/30 23:10:44][INFO]
[12/30 23:10:52][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 23:11:04][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_2/checkpoints'
[12/30 23:11:11][INFO] Start Fitting...
[12/30 23:11:13][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 23:11:13][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 23:11:13][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 23:11:13][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 23:11:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/30 23:11:15][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/30 23:11:19][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/30 23:11:27][WARNING] [VisUnityVal] Failed to read image: third_party/GVHMR/processed_dataset/images/0_biboo_birthday_speech/img_00699.jpg
[12/30 23:11:53][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
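The `[VisUnityVal] Failed to read image` warning above fires for the same missing frame (`img_00699.jpg`) in every run, so the visualization code evidently skips absent frames rather than crashing. A minimal sketch of that kind of guard, assuming nothing about the actual GVHMR reader (`read_frame_or_none` is an illustrative helper; a real pipeline would decode with cv2 or PIL):

```python
import logging
from pathlib import Path

log = logging.getLogger("VisUnityVal")

def read_frame_or_none(path: str):
    """Return raw image bytes if the frame exists, else warn and return None."""
    p = Path(path)
    if not p.is_file():
        # mirrors the log: warn and let the caller skip this frame
        log.warning("[VisUnityVal] Failed to read image: %s", path)
        return None
    return p.read_bytes()  # a real reader would decode to an array here

missing = read_frame_or_none(
    "third_party/GVHMR/processed_dataset/images/0_biboo_birthday_speech/img_00699.jpg"
)
```

Since the same frame index fails every time, the likely root cause is an off-by-one between the sequence length in the annotations and the number of extracted frames on disk.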
[12/30 23:11:53][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 23:11:53][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 23:11:53][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 23:11:53][INFO] ✅[FIT][Epoch 0] finished! 00:40→02:43 | loss_epoch=28
[12/30 23:29:33][INFO] [Exp Name]: finetune_
[12/30 23:29:33][INFO] [GPU x Batch] = 1 x 1
[12/30 23:29:33][INFO] [UnityDataset] Found 2 sequences.
[12/30 23:29:33][INFO] [Train Dataset][9/9]: name=unity, size=2, genmo.datasets.unity_dataset.UnityDataset
[12/30 23:29:33][INFO] [Train Dataset][All]: ConcatDataset size=2
[12/30 23:29:33][INFO]
[12/30 23:29:33][INFO] [UnityDataset] Found 2 sequences.
[12/30 23:29:33][INFO] [Val Dataset][7/7]: name=unity_val, size=2, genmo.datasets.unity_dataset.UnityDataset
[12/30 23:29:33][INFO]
[12/30 23:29:39][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 23:30:02][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_3/checkpoints'
[12/30 23:30:13][INFO] Start Fitting...
[12/30 23:30:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 23:30:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 23:30:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 23:30:14][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 23:30:15][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/30 23:30:17][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/30 23:30:18][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/30 23:30:30][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9930 pred=+0.9643 delta(pred-gt)=-0.0287
[12/30 23:30:30][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_aa(gt)=[ 0.03876931 -0.17480041 0.02509396] global_orient0_aa(pred)=[-0.1090048 -1.7763788 -0.15125035]
[12/30 23:30:30][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_yxz_deg gt=(-9.99,+2.34,+1.24) pred=(-101.99,-9.31,-0.53) pred_vs_gt=(-91.73,-11.16,-3.80)
[12/30 23:30:30][INFO] [VisUnityVal] e000_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=+94.23
[12/30 23:46:28][INFO] [Exp Name]: finetune_
[12/30 23:46:28][INFO] [GPU x Batch] = 1 x 1
[12/30 23:46:28][INFO] [UnityDataset] Found 2 sequences.
[12/30 23:46:28][INFO] [Train Dataset][9/9]: name=unity, size=2, genmo.datasets.unity_dataset.UnityDataset
[12/30 23:46:28][INFO] [Train Dataset][All]: ConcatDataset size=2
[12/30 23:46:28][INFO]
[12/30 23:46:28][INFO] [UnityDataset] Found 2 sequences.
[12/30 23:46:28][INFO] [Val Dataset][7/7]: name=unity_val, size=2, genmo.datasets.unity_dataset.UnityDataset
[12/30 23:46:28][INFO]
[12/30 23:46:34][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 23:46:54][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_4/checkpoints'
[12/30 23:47:05][INFO] Start Fitting...
[12/30 23:47:07][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
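The `[VisUnityVal]` orientation entries above report per-axis deltas like `pred_vs_gt=(-91.73,-11.16,-3.80)` alongside a separate `yaw0_deg(pred_vs_gt)`; a signed heading difference is only meaningful after wrapping into a half-open 360° interval. A small sketch of that wrap, which does not claim to reproduce the exact yaw convention used by the logger (`wrap_deg` is an illustrative helper):

```python
def wrap_deg(angle: float) -> float:
    """Wrap an angle difference in degrees into [-180, 180)."""
    return (angle + 180.0) % 360.0 - 180.0

# e.g. a raw pred-minus-gt heading difference of 190 degrees is really -170:
wrapped = wrap_deg(190.0)
```

Whatever the exact convention, the reported yaw errors of roughly 94° to 174° on a near-frontal ground truth indicate the predicted root orientation is turned far away from the ground truth at frame 0, which is the substantive signal in these runs.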
[12/30 23:47:07][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 23:47:07][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 23:47:07][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 23:47:09][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/30 23:47:11][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/30 23:47:12][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/30 23:47:26][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9930 pred=+0.9643 delta(pred-gt)=-0.0287
[12/30 23:47:26][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_aa(gt)=[ 0.03876931 -0.17480041 0.02509396] global_orient0_aa(pred)=[-0.1090048 -1.7763788 -0.15125035]
[12/30 23:47:26][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_yxz_deg gt=(-9.99,+2.34,+1.24) pred=(-101.99,-9.31,-0.53) pred_vs_gt=(-91.73,-11.16,-3.80)
[12/30 23:47:26][INFO] [VisUnityVal] e000_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=+94.23
[12/30 23:51:42][INFO] [Exp Name]: finetune_
[12/30 23:51:42][INFO] [GPU x Batch] = 1 x 1
[12/30 23:51:42][INFO] [UnityDataset] Found 2 sequences.
[12/30 23:51:42][INFO] [Train Dataset][9/9]: name=unity, size=2, genmo.datasets.unity_dataset.UnityDataset
[12/30 23:51:42][INFO] [Train Dataset][All]: ConcatDataset size=2
[12/30 23:51:42][INFO]
[12/30 23:51:42][INFO] [UnityDataset] Found 2 sequences.
[12/30 23:51:42][INFO] [Val Dataset][7/7]: name=unity_val, size=2, genmo.datasets.unity_dataset.UnityDataset
[12/30 23:51:42][INFO]
[12/30 23:51:48][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 23:52:04][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_5/checkpoints'
[12/30 23:52:15][INFO] Start Fitting...
[12/30 23:52:16][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 23:52:16][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 23:52:16][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 23:52:16][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 23:52:18][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/30 23:52:20][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/30 23:52:21][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/30 23:52:35][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9930 pred=+0.9643 delta(pred-gt)=-0.0287
[12/30 23:52:35][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_aa(gt)=[ 0.03876931 -0.17480041 0.02509396] global_orient0_aa(pred)=[-0.1090048 -1.7763788 -0.15125035]
[12/30 23:52:35][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_yxz_deg gt=(-9.99,+2.34,+1.24) pred=(-101.99,-9.31,-0.53) pred_vs_gt=(-91.73,-11.16,-3.80)
[12/30 23:52:35][INFO] [VisUnityVal] e000_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=+94.23
[12/30 23:53:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 23:53:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 23:53:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 23:53:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 23:53:24][INFO] ✅[FIT][Epoch 0] finished! 01:08→04:34 | loss_epoch=24.5
[12/30 23:55:59][INFO] [Exp Name]: finetune_
[12/30 23:55:59][INFO] [GPU x Batch] = 1 x 1
[12/30 23:55:59][INFO] [UnityDataset] Found 2 sequences.
[12/30 23:55:59][INFO] [Train Dataset][9/9]: name=unity, size=2, genmo.datasets.unity_dataset.UnityDataset
[12/30 23:55:59][INFO] [Train Dataset][All]: ConcatDataset size=2
[12/30 23:55:59][INFO]
[12/30 23:55:59][INFO] [UnityDataset] Found 2 sequences.
[12/30 23:55:59][INFO] [Val Dataset][7/7]: name=unity_val, size=2, genmo.datasets.unity_dataset.UnityDataset
[12/30 23:55:59][INFO]
[12/30 23:56:06][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 23:56:23][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_6/checkpoints'
[12/30 23:56:35][INFO] Start Fitting...
[12/30 23:56:37][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 23:56:37][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 23:56:37][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 23:56:37][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 23:56:39][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/30 23:56:41][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/30 23:56:42][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/30 23:56:54][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9930 pred=+0.9643 delta(pred-gt)=-0.0287
[12/30 23:56:54][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_aa(gt)=[ 0.03876931 -0.17480041 0.02509396] global_orient0_aa(pred)=[-0.1090048 -1.7763788 -0.15125035]
[12/30 23:56:54][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_yxz_deg gt=(-9.99,+2.34,+1.24) pred=(-101.99,-9.31,-0.53) pred_vs_gt=(-91.73,-11.16,-3.80)
[12/30 23:56:54][INFO] [VisUnityVal] e000_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=+94.23
[12/30 23:57:45][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 23:57:45][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 23:57:45][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 23:57:45][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/30 23:57:45][INFO] ✅[FIT][Epoch 0] finished! 01:09→04:38 | loss_epoch=24.5
[12/30 23:58:35][INFO] [Exp Name]: finetune_
[12/30 23:58:35][INFO] [GPU x Batch] = 1 x 1
[12/30 23:58:35][INFO] [UnityDataset] Found 1 sequences.
[12/30 23:58:35][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/30 23:58:35][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/30 23:58:35][INFO]
[12/30 23:58:35][INFO] [UnityDataset] Found 1 sequences.
[12/30 23:58:35][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/30 23:58:35][INFO]
[12/30 23:58:44][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/30 23:59:06][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_7/checkpoints'
[12/30 23:59:18][INFO] Start Fitting...
[12/30 23:59:20][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/30 23:59:20][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 23:59:20][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/30 23:59:20][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/30 23:59:22][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/30 23:59:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/30 23:59:24][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/30 23:59:36][INFO] [VisUnityVal] e000_100_biboo_birthday_speech_explosion_1 root_y0: gt=+0.9944 pred=+0.9685 delta(pred-gt)=-0.0259
[12/30 23:59:36][INFO] [VisUnityVal] e000_100_biboo_birthday_speech_explosion_1 global_orient0_aa(gt)=[0.02056097 0.18737577 0.01068786] global_orient0_aa(pred)=[ 0.0337113 -2.8594027 -0.01747983]
[12/30 23:59:36][INFO] [VisUnityVal] e000_100_biboo_birthday_speech_explosion_1 global_orient0_yxz_deg gt=(+10.74,+1.11,+0.72) pred=(-163.84,-0.50,-1.42) pred_vs_gt=(-174.54,-1.98,-1.80)
[12/30 23:59:36][INFO] [VisUnityVal] e000_100_biboo_birthday_speech_explosion_1 yaw0_deg(pred_vs_gt)=+174.27
[12/31 02:50:01][INFO] [Exp Name]: finetune_
[12/31 02:50:01][INFO] [GPU x Batch] = 1 x 1
[12/31 02:50:01][INFO] [UnityDataset] Found 1 sequences.
[12/31 02:50:01][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 02:50:01][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/31 02:50:01][INFO]
[12/31 02:50:01][INFO] [UnityDataset] Found 1 sequences.
[12/31 02:50:01][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 02:50:01][INFO]
[12/31 02:50:07][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/31 02:50:28][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_8/checkpoints'
[12/31 02:50:41][INFO] Start Fitting...
[12/31 02:50:42][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/31 02:50:42][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 02:50:42][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 02:50:42][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/31 02:50:43][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return F.conv1d(input, weight, bias, self.stride,
[12/31 02:50:45][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/31 02:50:45][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/31 02:50:55][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9875 pred=+0.9726 delta(pred-gt)=-0.0149
[12/31 02:50:55][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_aa(gt)=[ 0.01689476 -0.20703591 0.01797612] global_orient0_aa(pred)=[-0.0321125 -2.8486555 -0.07525362]
[12/31 02:50:55][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_yxz_deg gt=(-11.85,+1.07,+0.92) pred=(-163.30,-3.15,+0.83) pred_vs_gt=(-151.41,-4.11,-0.96)
[12/31 02:50:55][INFO] [VisUnityVal] e000_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=+151.99
[12/31 02:51:41][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 02:51:41][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 02:51:41][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 02:51:41][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 02:51:41][INFO] ✅[FIT][Epoch 0] finished! 01:00→04:01 | loss_epoch=12.6
[12/31 03:10:22][INFO] [Exp Name]: finetune_
[12/31 03:10:22][INFO] [GPU x Batch] = 1 x 1
[12/31 03:10:22][INFO] [UnityDataset] Found 1 sequences.
[12/31 03:10:22][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 03:10:22][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/31 03:10:22][INFO]
[12/31 03:10:22][INFO] [UnityDataset] Found 1 sequences.
[12/31 03:10:22][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 03:10:22][INFO]
[12/31 03:10:28][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/31 03:10:51][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_9/checkpoints'
[12/31 03:11:01][INFO] Start Fitting...
[12/31 03:11:03][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/31 03:11:03][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 03:11:03][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
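The recurring `data_connector.py:434` warnings come from Lightning noticing that the train/val `DataLoader`s run with `num_workers=0` (loading in the main process). A hedged sketch of the suggested change, with a toy dataset standing in for `UnityDataset` and a worker count chosen for illustration rather than the `num_workers=11` the warning recommends:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for UnityDataset: 32 samples of (input, target).
ds = TensorDataset(torch.randn(32, 4), torch.randn(32, 2))

# num_workers > 0 collates batches in background worker processes;
# persistent_workers=True keeps them alive across epochs instead of
# respawning them each time, which matters for many short epochs.
loader = DataLoader(ds, batch_size=8, num_workers=2, persistent_workers=True)

n_batches = sum(1 for _ in loader)  # 32 samples / batch_size 8 = 4 batches
```

With a 1-sequence dataset like the runs above, extra workers buy little; the warning mostly matters once the dataset grows.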
[12/31 03:11:03][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/31 03:11:04][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return F.conv1d(input, weight, bias, self.stride,
[12/31 03:11:05][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/31 03:11:05][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/31 03:11:16][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9875 pred=+0.9726 delta(pred-gt)=-0.0149
[12/31 03:11:16][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_aa(gt)=[ 0.02646996 2.9343371 -0.02487765] global_orient0_aa(pred)=[-0.03202499 -2.848779 -0.0755955 ]
[12/31 03:11:16][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_yxz_deg gt=(+168.15,+1.07,+0.92) pred=(-163.31,-3.16,+0.82) pred_vs_gt=(+28.58,+4.12,+0.97)
[12/31 03:11:16][INFO] [VisUnityVal] e000_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=-28.01
[12/31 03:12:02][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 03:12:02][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 03:12:02][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 03:12:02][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 03:12:02][INFO] ✅[FIT][Epoch 0] finished! 01:00→04:01 | loss_epoch=14.2
[12/31 03:16:57][INFO] [Exp Name]: finetune_
[12/31 03:16:57][INFO] [GPU x Batch] = 1 x 1
[12/31 03:16:57][INFO] [UnityDataset] Found 1 sequences.
[12/31 03:16:57][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 03:16:57][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/31 03:16:57][INFO]
[12/31 03:16:57][INFO] [UnityDataset] Found 1 sequences.
[12/31 03:16:57][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 03:16:57][INFO]
[12/31 03:17:04][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/31 03:17:24][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_10/checkpoints'
[12/31 03:17:36][INFO] Start Fitting...
[12/31 03:17:38][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/31 03:17:38][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 03:17:38][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 03:17:38][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/31 03:17:40][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return F.conv1d(input, weight, bias, self.stride,
[12/31 03:17:41][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/31 03:17:41][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/31 03:17:52][INFO] [VisUnityVal] e000_100_biboo_birthday_speech_explosion_1 root_y0: gt=+0.9944 pred=+0.9686 delta(pred-gt)=-0.0258
[12/31 03:17:52][INFO] [VisUnityVal] e000_100_biboo_birthday_speech_explosion_1 global_orient0_aa(gt)=[-0.01583315 -2.9540217 0.03045931] global_orient0_aa(pred)=[ 0.03382589 -2.8592563 -0.01758517]
[12/31 03:17:52][INFO] [VisUnityVal] e000_100_biboo_birthday_speech_explosion_1 global_orient0_yxz_deg gt=(-169.26,+1.11,+0.72) pred=(-163.83,-0.50,-1.43) pred_vs_gt=(+5.47,+1.99,+1.81)
[12/31 03:17:52][INFO] [VisUnityVal] e000_100_biboo_birthday_speech_explosion_1 yaw0_deg(pred_vs_gt)=-5.75
[12/31 03:18:36][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 03:18:36][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 03:18:36][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 03:18:36][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 03:18:36][INFO] ✅[FIT][Epoch 0] finished! 00:58→03:55 | loss_epoch=23
[12/31 06:06:15][INFO] [Exp Name]: finetune_
[12/31 06:06:15][INFO] [GPU x Batch] = 1 x 1
[12/31 06:06:15][INFO] [UnityDataset] Found 3 sequences.
[12/31 06:06:15][INFO] [Train Dataset][9/9]: name=unity, size=3, genmo.datasets.unity_dataset.UnityDataset
[12/31 06:06:15][INFO] [Train Dataset][All]: ConcatDataset size=3
[12/31 06:06:15][INFO]
[12/31 06:06:15][INFO] [UnityDataset] Found 3 sequences.
[12/31 06:06:15][INFO] [Val Dataset][7/7]: name=unity_val, size=3, genmo.datasets.unity_dataset.UnityDataset
[12/31 06:06:15][INFO]
[12/31 06:06:21][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/31 06:06:49][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_11/checkpoints'
[12/31 06:07:02][INFO] Start Fitting...
[12/31 06:07:04][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/31 06:07:04][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 06:07:04][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 06:07:04][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/31 06:07:07][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return F.conv1d(input, weight, bias, self.stride,
[12/31 06:07:09][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/31 06:07:11][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/31 06:07:22][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9820 pred=+0.9698 delta(pred-gt)=-0.0123
[12/31 06:07:22][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_aa(gt)=[ 0.03590106 -0.17807975 0.02012725] global_orient0_aa(pred)=[-0.08420898 -2.6493108 -0.07150012]
[12/31 06:07:22][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_yxz_deg gt=(-10.19,+2.15,+0.96) pred=(-151.99,-3.76,+2.70) pred_vs_gt=(-141.82,-6.13,+0.67)
[12/31 06:07:22][INFO] [VisUnityVal] e000_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=+144.65
[12/31 06:08:17][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 06:08:17][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 06:08:17][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 06:08:17][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 06:08:17][INFO] ✅[FIT][Epoch 0] finished! 01:14→04:57 | loss_epoch=41.9
[12/31 06:13:34][INFO] [Exp Name]: finetune_
[12/31 06:13:34][INFO] [GPU x Batch] = 1 x 1
[12/31 06:13:34][INFO] [UnityDataset] Found 3 sequences.
[12/31 06:13:34][INFO] [Train Dataset][9/9]: name=unity, size=3, genmo.datasets.unity_dataset.UnityDataset
[12/31 06:13:34][INFO] [Train Dataset][All]: ConcatDataset size=3
[12/31 06:13:34][INFO]
[12/31 06:13:34][INFO] [UnityDataset] Found 3 sequences.
[12/31 06:13:34][INFO] [Val Dataset][7/7]: name=unity_val, size=3, genmo.datasets.unity_dataset.UnityDataset
[12/31 06:13:34][INFO]
[12/31 06:13:43][INFO] [Exp Name]: finetune_
[12/31 06:13:43][INFO] [GPU x Batch] = 1 x 1
[12/31 06:13:43][INFO] [UnityDataset] Found 1 sequences.
[12/31 06:13:43][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 06:13:43][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/31 06:13:43][INFO]
[12/31 06:13:43][INFO] [UnityDataset] Found 1 sequences.
[12/31 06:13:43][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 06:13:43][INFO]
[12/31 06:13:48][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/31 06:14:11][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_12/checkpoints'
[12/31 06:14:22][INFO] Start Fitting...
[12/31 06:14:26][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/31 06:14:26][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 06:14:26][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 06:14:26][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/31 06:14:28][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return F.conv1d(input, weight, bias, self.stride,
[12/31 06:14:30][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/31 06:14:30][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/31 06:14:41][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9875 pred=+0.9726 delta(pred-gt)=-0.0149
[12/31 06:14:41][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_aa(gt)=[ 0.01689476 -0.20703594 0.01797612] global_orient0_aa(pred)=[-0.03202499 -2.848779 -0.0755955 ]
[12/31 06:14:41][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_yxz_deg gt=(-11.85,+1.07,+0.92) pred=(-163.31,-3.16,+0.82) pred_vs_gt=(-151.42,-4.12,-0.97)
[12/31 06:14:41][INFO] [VisUnityVal] e000_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=+151.99
[12/31 06:15:23][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 06:15:23][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 06:15:23][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 06:15:23][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 06:15:23][INFO] ✅[FIT][Epoch 0] finished! 01:00→04:00 | loss_epoch=14.3
[12/31 06:19:20][INFO] [Exp Name]: finetune_
[12/31 06:19:20][INFO] [GPU x Batch] = 1 x 1
[12/31 06:19:20][INFO] [UnityDataset] Found 1 sequences.
[12/31 06:19:20][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 06:19:20][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/31 06:19:20][INFO]
[12/31 06:19:20][INFO] [UnityDataset] Found 1 sequences.
[12/31 06:19:20][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 06:19:20][INFO]
[12/31 06:19:26][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/31 06:19:48][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_13/checkpoints'
[12/31 06:19:59][INFO] Start Fitting...
[12/31 06:20:01][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/31 06:20:01][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 06:20:01][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 06:20:01][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/31 06:20:04][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return F.conv1d(input, weight, bias, self.stride,
[12/31 06:20:06][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[12/31 06:20:06][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/31 06:20:16][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9875 pred=+0.9726 delta(pred-gt)=-0.0149
[12/31 06:20:16][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_aa(gt)=[ 0.02646996 2.9343371 -0.02487765] global_orient0_aa(pred)=[-0.03202499 -2.848779 -0.0755955 ]
[12/31 06:20:16][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_yxz_deg gt=(+168.15,+1.07,+0.92) pred=(-163.31,-3.16,+0.82) pred_vs_gt=(+28.58,+4.12,+0.97)
[12/31 06:20:16][INFO] [VisUnityVal] e000_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=-28.01
[12/31 06:20:59][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 06:20:59][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 06:20:59][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 06:20:59][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 06:20:59][INFO] ✅[FIT][Epoch 0] finished! 00:59→03:56 | loss_epoch=14.2
[12/31 17:32:47][INFO] [Exp Name]: finetune_
[12/31 17:32:47][INFO] [GPU x Batch] = 1 x 1
[12/31 17:32:47][INFO] [UnityDataset] Found 1 sequences.
[12/31 17:32:47][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 17:32:47][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/31 17:32:47][INFO]
[12/31 17:32:47][INFO] [UnityDataset] Found 1 sequences.
[12/31 17:32:47][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 17:32:47][INFO]
[12/31 17:33:00][INFO] [Exp Name]: finetune_
[12/31 17:33:00][INFO] [GPU x Batch] = 1 x 1
[12/31 17:33:00][INFO] [UnityDataset] Found 1 sequences.
[12/31 17:33:00][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 17:33:00][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/31 17:33:00][INFO]
[12/31 17:33:00][INFO] [UnityDataset] Found 1 sequences.
[12/31 17:33:00][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 17:33:00][INFO]
[12/31 17:33:06][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/31 17:33:29][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_14/checkpoints'
[12/31 17:33:41][INFO] Start Fitting...
[12/31 17:33:43][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/31 17:33:43][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 17:33:43][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 17:34:11][INFO] [Exp Name]: finetune_
[12/31 17:34:11][INFO] [GPU x Batch] = 1 x 1
[12/31 17:34:11][INFO] [UnityDataset] Found 1 sequences.
[12/31 17:34:11][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 17:34:11][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/31 17:34:11][INFO]
[12/31 17:34:11][INFO] [UnityDataset] Found 1 sequences.
[12/31 17:34:11][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 17:34:11][INFO]
[12/31 17:34:17][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/31 17:34:39][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_15/checkpoints'
[12/31 17:34:51][INFO] Start Fitting...
[12/31 17:34:52][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/31 17:34:52][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 17:34:52][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 17:34:52][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/31 17:34:54][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/31 17:34:56][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/31 17:34:56][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/31 17:35:08][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9875 pred=+0.9747 delta(pred-gt)=-0.0128
[12/31 17:35:08][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_aa(gt)=[ 0.02646996 2.9343371 -0.02487765] global_orient0_aa(pred)=[ 0.04138837 -2.8503516 -0.16602226]
[12/31 17:35:08][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_yxz_deg gt=(+168.15,+1.07,+0.92) pred=(-163.44,-6.29,-2.58) pred_vs_gt=(+28.54,+6.48,+4.95)
[12/31 17:35:08][INFO] [VisUnityVal] e000_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=-23.65
[12/31 17:35:53][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 17:35:53][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 17:35:53][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 17:35:53][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 17:35:53][INFO] ✅[FIT][Epoch 0] finished! 01:01→04:05 | loss_epoch=17.9
[12/31 17:46:55][INFO] [Exp Name]: finetune_
[12/31 17:46:55][INFO] [GPU x Batch] = 1 x 1
[12/31 17:46:55][INFO] [UnityDataset] Found 1 sequences.
[12/31 17:46:55][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 17:46:55][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/31 17:46:55][INFO]
[12/31 17:46:55][INFO] [UnityDataset] Found 1 sequences.
[12/31 17:46:55][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 17:46:55][INFO]
[12/31 17:47:03][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/31 17:47:20][INFO] [Exp Name]: finetune_
[12/31 17:47:20][INFO] [GPU x Batch] = 1 x 1
[12/31 17:47:20][INFO] [UnityDataset] Found 1 sequences.
[12/31 17:47:20][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 17:47:20][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/31 17:47:20][INFO]
[12/31 17:47:20][INFO] [UnityDataset] Found 1 sequences.
[12/31 17:47:20][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 17:47:20][INFO]
[12/31 17:47:27][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/31 17:47:39][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_0/checkpoints'
[12/31 17:47:50][INFO] Start Fitting...
[12/31 17:47:52][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/31 17:47:52][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
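The `sync_dist` warnings above ask that epoch-level validation metrics be logged with `self.log(..., sync_dist=True)`, so Lightning reduces them across devices instead of reporting only rank 0's value. Conceptually the reduction is just a mean over per-rank values; a plain-Python sketch with made-up per-GPU numbers:

```python
def sync_dist_mean(per_rank_values):
    """Mimic what sync_dist=True does for an epoch-level metric:
    all-reduce (here: average) the per-device values into one number."""
    return sum(per_rank_values) / len(per_rank_values)

# Two hypothetical GPUs reporting val_metric_Unity/mpjpe, in millimetres.
per_rank_mpjpe = [52.0, 48.0]
epoch_mpjpe = sync_dist_mean(per_rank_mpjpe)  # 50.0
```

On these single-GPU runs the warning is cosmetic, but adding the flag now keeps the metrics correct if the fine-tune later scales out.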
[12/31 17:47:52][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 17:47:52][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/31 17:47:54][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/31 17:47:55][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/31 17:47:55][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/31 17:48:07][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9875 pred=+0.9747 delta(pred-gt)=-0.0128
[12/31 17:48:07][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_aa(gt)=[ 0.02646996 2.9343371 -0.02487765] global_orient0_aa(pred)=[ 0.04149618 -2.8503237 -0.16659868]
[12/31 17:48:07][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_yxz_deg gt=(+168.15,+1.07,+0.92) pred=(-163.44,-6.32,-2.59) pred_vs_gt=(+28.54,+6.50,+4.96)
[12/31 17:48:07][INFO] [VisUnityVal] e000_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=-23.65
[12/31 17:48:52][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 17:48:52][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 17:48:52][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 17:48:52][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 17:48:52][INFO] ✅[FIT][Epoch 0] finished! 01:01→04:06 | loss_epoch=17.9
[12/31 17:54:30][INFO] [Exp Name]: finetune_
[12/31 17:54:30][INFO] [GPU x Batch] = 1 x 1
[12/31 17:54:30][INFO] [UnityDataset] Found 1 sequences.
[12/31 17:54:30][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 17:54:30][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/31 17:54:30][INFO]
[12/31 17:54:30][INFO] [UnityDataset] Found 1 sequences.
[12/31 17:54:30][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 17:54:30][INFO]
[12/31 17:54:36][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/31 17:54:56][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_1/checkpoints'
[12/31 17:55:08][INFO] Start Fitting...
[12/31 17:55:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/31 17:55:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 17:55:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
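The recurring `lr_scheduler.step()` warning above is about call order. A minimal sketch of the order PyTorch expects, with a toy parameter and toy LR values (not the finetune settings):

```python
import torch

param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.SGD([param], lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

for _ in range(2):
    optimizer.zero_grad()
    loss = (param ** 2).sum()
    loss.backward()
    optimizer.step()   # optimizer first ...
    scheduler.step()   # ... then the scheduler, so the first LR value is not skipped
```

Note that under 16-mixed precision this warning can also fire spuriously when the grad scaler skips the very first `optimizer.step()` on an inf/NaN check, which would match its single appearance per run here.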
[12/31 17:55:10][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/31 17:55:12][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/31 17:55:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/31 17:55:14][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/31 17:55:26][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9875 pred=+0.9747 delta(pred-gt)=-0.0128
[12/31 17:55:26][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_aa(gt)=[ 0.02646996 2.9343371 -0.02487765] global_orient0_aa(pred)=[ 0.04149618 -2.8503237 -0.16659868]
[12/31 17:55:26][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_yxz_deg gt=(+168.15,+1.07,+0.92) pred=(-163.44,-6.32,-2.59) pred_vs_gt=(+28.54,+6.50,+4.96)
[12/31 17:55:26][INFO] [VisUnityVal] e000_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=-23.65
[12/31 17:56:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 17:56:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 17:56:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 17:56:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 17:56:10][INFO] ✅[FIT][Epoch 0] finished! 01:01→04:07 | loss_epoch=17.9
[12/31 18:11:41][INFO] [Exp Name]: finetune_
[12/31 18:11:41][INFO] [GPU x Batch] = 1 x 1
[12/31 18:11:41][INFO] [UnityDataset] Found 1 sequences.
[12/31 18:11:41][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 18:11:41][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/31 18:11:41][INFO]
[12/31 18:11:41][INFO] [UnityDataset] Found 1 sequences.
[12/31 18:11:41][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 18:11:41][INFO]
[12/31 18:11:47][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/31 18:12:11][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_2/checkpoints'
[12/31 18:12:22][INFO] Start Fitting...
[12/31 18:12:23][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/31 18:12:23][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 18:12:23][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
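The yaw numbers in the `[VisUnityVal]` lines above only make sense with wrap-around: e.g. gt yaw +168.15° vs pred yaw -163.44° differ by ~28°, not ~330°. A sketch of that wrapping, assuming the `pred_vs_gt` yaw component is a wrapped difference of the Euler yaw angles (the logger derives its deltas from full rotations, so the result matches the logged +28.54 only approximately; the separately logged `yaw0_deg` evidently uses a different convention):

```python
def wrap_deg(angle):
    """Wrap an angle in degrees to the interval [-180, 180)."""
    return (angle + 180.0) % 360.0 - 180.0

gt_yaw, pred_yaw = 168.15, -163.44   # yaw components from the VisUnityVal lines
delta = wrap_deg(pred_yaw - gt_yaw)  # ~ +28.4 degrees, i.e. pred heading is ~28 deg off
```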
[12/31 18:12:23][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/31 18:12:25][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/31 18:12:26][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/31 18:12:26][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/31 18:12:37][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9875 pred=+0.9747 delta(pred-gt)=-0.0128
[12/31 18:12:37][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_aa(gt)=[ 0.02646996 2.9343371 -0.02487765] global_orient0_aa(pred)=[ 0.04149618 -2.8503237 -0.16659868]
[12/31 18:12:37][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_yxz_deg gt=(+168.15,+1.07,+0.92) pred=(-163.44,-6.32,-2.59) pred_vs_gt=(+28.54,+6.50,+4.96)
[12/31 18:12:37][INFO] [VisUnityVal] e000_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=-23.65
[12/31 18:13:22][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 18:13:22][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 18:13:22][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 18:13:22][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 18:13:22][INFO] ✅[FIT][Epoch 0] finished! 01:00→04:00 | loss_epoch=17.9
[12/31 18:39:25][INFO] [Exp Name]: finetune_
[12/31 18:39:25][INFO] [GPU x Batch] = 1 x 1
[12/31 18:39:25][INFO] [UnityDataset] Found 1 sequences.
[12/31 18:39:25][INFO] [Train Dataset][9/9]: name=unity, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 18:39:25][INFO] [Train Dataset][All]: ConcatDataset size=1
[12/31 18:39:25][INFO]
[12/31 18:39:25][INFO] [UnityDataset] Found 1 sequences.
[12/31 18:39:25][INFO] [Val Dataset][7/7]: name=unity_val, size=1, genmo.datasets.unity_dataset.UnityDataset
[12/31 18:39:25][INFO]
[12/31 18:39:32][INFO] [PL-Trainer] Loading ckpt: ./s050000.ckpt
[12/31 18:39:57][INFO] [Simple Ckpt Saver]: Save to `outputs/unity/finetune_/version_3/checkpoints'
[12/31 18:40:08][INFO] Start Fitting...
[12/31 18:40:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/utilities/model_summary/model_summary.py:242: Precision 16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[12/31 18:40:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
[12/31 18:40:10][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:434: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=11` in the `DataLoader` to improve performance.
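The paired cuDNN warnings above indicate the cuDNN v8 plan API failed to finalize an execution plan for `conv1d` and its backward, after which PyTorch falls back to another path; training continues, so they are likely benign. If they needed silencing, one knob in this PyTorch generation is the `TORCH_CUDNN_V8_API_DISABLED` environment variable; whether it actually removes the warning on this setup is untested, so treat this as an assumption:

```python
import os

# Hedged workaround sketch: ask PyTorch to skip the cuDNN v8 plan API
# (the code path in Conv_v8.cpp emitting CUDNN_STATUS_NOT_SUPPORTED above).
# Must be set before torch initializes cuDNN, e.g. at the top of the train script.
os.environ.setdefault("TORCH_CUDNN_V8_API_DISABLED", "1")
```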
[12/31 18:40:10][INFO] 🚀[FIT][Epoch 0] Data: unity Experiment: finetune_
[12/31 18:40:11][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv1d(input, weight, bias, self.stride,
[12/31 18:40:13][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[12/31 18:40:13][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[12/31 18:40:25][INFO] [VisUnityVal] e000_0_biboo_birthday_speech root_y0: gt=+0.9875 pred=+0.9747 delta(pred-gt)=-0.0128
[12/31 18:40:25][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_aa(gt)=[ 0.02646996 2.9343371 -0.02487765] global_orient0_aa(pred)=[ 0.0413059 -2.8506136 -0.16606733]
[12/31 18:40:25][INFO] [VisUnityVal] e000_0_biboo_birthday_speech global_orient0_yxz_deg gt=(+168.15,+1.07,+0.92) pred=(-163.45,-6.30,-2.58) pred_vs_gt=(+28.52,+6.48,+4.95)
[12/31 18:40:25][INFO] [VisUnityVal] e000_0_biboo_birthday_speech yaw0_deg(pred_vs_gt)=-23.63
[12/31 18:41:09][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pa_mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 18:41:09][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/mpjpe', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 18:41:09][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/pve', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 18:41:09][WARNING] /root/miniconda3/envs/gvhmr/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:433: It is recommended to use `self.log('val_metric_Unity/accel', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
[12/31 18:41:09][INFO] ✅[FIT][Epoch 0] finished! 01:00→04:01 | loss_epoch=17.9