[2025-10-21 21:51:19,889][INFO] Args: {
  "accelerator": "gpu",
  "batch_size": 16,
  "callbacks": {
    "learning_rate_monitor": {},
    "model_checkpoint": {
      "dirpath": "out/distill_vit_lightly/checkpoints",
      "filename": "best-model-{epoch}-{validation_loss:.4f}",
      "mode": "min",
      "monitor": "val_loss",
      "save_top_k": 1
    }
  },
  "checkpoint": null,
  "data": [
    "/kaggle/input/dsp-pre-final/processed_taco_coco/train2017",
    "/kaggle/input/dsp-pre-final/processed_taco_coco/val2017"
  ],
  "devices": 2,
  "embed_dim": null,
  "epochs": 50,
  "float32_matmul_precision": "auto",
  "loader_args": null,
  "loggers": {
    "wandb": {
      "name": "run_ddp_2025-10-21_21-51_lr0.0001_bs8",
      "project": "Distill-RTDETR-Distill-VIT"
    }
  },
  "method": "distillationv1",
  "method_args": {
    "teacher": "dinov3/vitb16",
    "teacher_url": "https://dinov3.llamameta.net/dinov3_vitb16/dinov3_vitb16_pretrain_lvd1689m-73cec8be.pth?Policy=eyJTdGF0ZW1lbnQiOlt7InVuaXF1ZV9oYXNoIjoiZXlkcGk1cTRjN3Fla3VmYWgzdzBsNzU2IiwiUmVzb3VyY2UiOiJodHRwczpcL1wvZGlub3YzLmxsYW1hbWV0YS5uZXRcLyoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE3NjExOTUyOTN9fX1dfQ__&Signature=F3Nke5NE0guqjGNTgSLvIG-6kwaz4E-NqruE9IUPIfRQ-l6wvmNCdcZc%7E1ZlD2OeRD0I95IMzvGIsllVrC900w57yi0NBp6Lf0hp5tOztTt%7EafEkP1Grh3EnW8nsd0bE--rQj2jCTDQR5fnorLd6ZBKYJKWkJDq58Kcen65gv5EHnfITyGGBnXCLbvf1i%7EzrfttueEIkOydXFnIIOl%7Etr33p2pGxO-uEIhWzmS%7E57sB43gA1UQAB6Miv1dhLr3HVAU5W-EL6IxTQbgtAJUnNFLFaVIXvwZsCaNrRBvVyS%7ECwG5depOU2w6NDGgS8FjxqdbLl%7EhPznLsBe2Vv73iYhQ__&Key-Pair-Id=K15QRJLYKIFSLZ&Download-Request-ID=787063460892617"
  },
  "model": "RTDETRModelWrapper",
  "model_args": null,
  "num_nodes": 1,
  "num_workers": 2,
  "optim": "adamw",
  "optim_args": {
    "lr": 0.0001,
    "weight_decay": 1e-05
  },
  "out": "out/distill_vit_lightly",
  "overwrite": false,
  "precision": "auto",
  "resume": null,
  "resume_interrupted": false,
  "seed": 0,
  "strategy": "ddp_find_unused_parameters_true",
  "trainer_args": null,
  "transform_args": null
}
[2025-10-21 21:51:19,890][INFO] Using output directory '/kaggle/working/out/distill_vit_lightly'.
[2025-10-21 21:51:20,016][DEBUG] '/usr/local/lib/python3.11/dist-packages/lightly_train' is not a git repository.
[2025-10-21 21:51:20,019][DEBUG] '/kaggle/working' is not a git repository.
[2025-10-21 21:51:20,019][DEBUG] Platform: Linux-6.6.56+-x86_64-with-glibc2.35
[2025-10-21 21:51:20,019][DEBUG] Python: 3.11.13
[2025-10-21 21:51:20,019][DEBUG] LightlyTrain: 0.11.4
[2025-10-21 21:51:20,019][DEBUG] LightlyTrain Git Information:
[2025-10-21 21:51:20,019][DEBUG] LightlyTrain is not installed from a git repository.
[2025-10-21 21:51:20,019][DEBUG] Run directory Git Information:
[2025-10-21 21:51:20,019][DEBUG] The code is not running from a git repository.
[2025-10-21 21:51:20,019][DEBUG] Dependencies:
[2025-10-21 21:51:20,020][DEBUG] - torch 2.5.1
[2025-10-21 21:51:20,020][DEBUG] - torchvision 0.20.1
[2025-10-21 21:51:20,020][DEBUG] - pytorch-lightning 2.5.5
[2025-10-21 21:51:20,020][DEBUG] - Pillow 12.0.0
[2025-10-21 21:51:20,020][DEBUG] - pillow-simd x
[2025-10-21 21:51:20,020][DEBUG] Optional dependencies:
[2025-10-21 21:51:20,020][DEBUG] - super-gradients x
[2025-10-21 21:51:20,020][DEBUG] - timm 1.0.19
[2025-10-21 21:51:20,020][DEBUG] - ultralytics x
[2025-10-21 21:51:20,020][DEBUG] - wandb 0.21.0
[2025-10-21 21:51:20,020][DEBUG] CPUs: 4
[2025-10-21 21:51:20,020][DEBUG] GPUs: 2
[2025-10-21 21:51:20,020][DEBUG] - Tesla T4 7.5 (15828320256)
[2025-10-21 21:51:20,020][DEBUG] - Tesla T4 7.5 (15828320256)
[2025-10-21 21:51:20,020][DEBUG] Environment variables:
[2025-10-21 21:51:20,022][DEBUG] Getting transform args for method 'distillationv1'.
[2025-10-21 21:51:20,022][DEBUG] Using additional transform arguments None.
[2025-10-21 21:51:20,022][DEBUG] Getting transform for method 'distillationv1'.
[2025-10-21 21:51:20,029][DEBUG] Creating temporary file '/root/.cache/lightly-train/data/verify-out/19ee72d6fbf2b88f109220ade3122f120b30dad7c4cec4520f0e2c19e0a2160c' to verify out path.
[2025-10-21 21:51:20,030][DEBUG] Writing filepaths to '/root/.cache/lightly-train/data/19ee72d6fbf2b88f109220ade3122f120b30dad7c4cec4520f0e2c19e0a2160c.5c76640f4a2ddcd25e3bf06cbcb6fa73461414820e583f3add69395f71b0da0a.temp' (chunk_size=10000)
[2025-10-21 21:51:21,360][DEBUG] Creating memory mapped sequence with 1498 '['filenames']'.
[2025-10-21 21:51:21,361][DEBUG] Found dataset size 1498.
[2025-10-21 21:51:21,361][DEBUG] Using provided epochs 50.
[2025-10-21 21:51:21,361][DEBUG] Getting embedding model with embedding dimension None.
[2025-10-21 21:51:21,361][DEBUG] Using jsonl logger with args flush_logs_every_n_steps=100
[2025-10-21 21:51:21,387][DEBUG] Using tensorboard logger with args name='' version='' log_graph=False default_hp_metric=True prefix='' sub_dir=None
[2025-10-21 21:51:21,388][DEBUG] Using wandb logger with args name='run_ddp_2025-10-21_21-51_lr0.0001_bs8' version=None offline=False anonymous=None project='Distill-RTDETR-Distill-VIT' log_model=False prefix='' checkpoint_name=None
[2025-10-21 21:51:21,389][DEBUG] Using loggers ['JSONLLogger', 'TensorBoardLogger', 'WandbLogger'].
[2025-10-21 21:51:21,391][DEBUG] Getting accelerator for 'gpu'.
[2025-10-21 21:51:21,391][DEBUG] Using provided strategy 'ddp_find_unused_parameters_true'.
[2025-10-21 21:51:21,518][DEBUG] Using precision 'bf16-mixed'.
[2025-10-21 21:51:21,518][DEBUG] Getting trainer.
[2025-10-21 21:51:21,519][DEBUG] Using sync_batchnorm 'True'.
[2025-10-21 21:51:21,520][INFO] Using bfloat16 Automatic Mixed Precision (AMP)
[2025-10-21 21:51:21,572][INFO] GPU available: True (cuda), used: True
[2025-10-21 21:51:21,573][INFO] TPU available: False, using: 0 TPU cores
[2025-10-21 21:51:21,573][INFO] HPU available: False, using: 0 HPUs
[2025-10-21 21:51:21,573][DEBUG] Detected 1 nodes and 2 devices per node.
[2025-10-21 21:51:21,573][DEBUG] Total number of devices: 2.
[2025-10-21 21:51:21,573][DEBUG] Detected dataset size 1498.
[2025-10-21 21:51:21,574][DEBUG] Using batch size per device 8.
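The per-device batch size reported above follows from the configured global batch size and the device count. A quick sanity check of the arithmetic; whether the loader drops the last partial batch is an assumption, since the log does not show it:

```python
import math

# Values taken from the log entries above.
dataset_size = 1498       # "Detected dataset size 1498."
global_batch_size = 16    # "batch_size": 16 (the global batch, split across GPUs)
devices = 2               # "Total number of devices: 2."

# "Using batch size per device 8." = global batch / number of devices.
per_device_batch_size = global_batch_size // devices

# Steps per epoch; both variants shown because the drop-last setting
# is not visible in the log.
steps_keep_last = math.ceil(dataset_size / global_batch_size)
steps_drop_last = dataset_size // global_batch_size

print(per_device_batch_size, steps_keep_last, steps_drop_last)  # 8 94 93
```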
[2025-10-21 21:51:21,574][DEBUG] Using optimizer 'OptimizerType.ADAMW'.
[2025-10-21 21:51:21,574][DEBUG] Getting method args for 'Distillation'
[2025-10-21 21:51:21,574][DEBUG] Getting method for 'Distillation'
[2025-10-21 21:51:25,780][INFO] Resolved configuration: {
  "accelerator": "CUDAAccelerator",
  "batch_size": 16,
  "callbacks": {
    "device_stats_monitor": null,
    "early_stopping": {
      "check_finite": true,
      "monitor": "train_loss",
      "patience": 1000000000000
    },
    "learning_rate_monitor": {},
    "model_checkpoint": {
      "auto_insert_metric_name": true,
      "dirpath": "out/distill_vit_lightly/checkpoints",
      "enable_version_counter": false,
      "every_n_epochs": null,
      "every_n_train_steps": null,
      "filename": "best-model-{epoch}-{validation_loss:.4f}",
      "mode": "min",
      "monitor": "val_loss",
      "save_last": true,
      "save_on_train_epoch_end": null,
      "save_top_k": 1,
      "save_weights_only": false,
      "train_time_interval": null,
      "verbose": false
    },
    "model_export": {
      "every_n_epochs": 1
    }
  },
  "checkpoint": null,
  "data": [
    "/kaggle/input/dsp-pre-final/processed_taco_coco/train2017",
    "/kaggle/input/dsp-pre-final/processed_taco_coco/val2017"
  ],
  "devices": 2,
  "embed_dim": null,
  "epochs": 50,
  "float32_matmul_precision": "highest",
  "loader_args": null,
  "loggers": {
    "jsonl": {
      "flush_logs_every_n_steps": 100
    },
    "mlflow": null,
    "tensorboard": {
      "default_hp_metric": true,
      "log_graph": false,
      "name": "",
      "prefix": "",
      "sub_dir": null,
      "version": ""
    },
    "wandb": {
      "anonymous": null,
      "checkpoint_name": null,
      "log_model": false,
      "name": "run_ddp_2025-10-21_21-51_lr0.0001_bs8",
      "offline": false,
      "prefix": "",
      "project": "Distill-RTDETR-Distill-VIT",
      "version": null
    }
  },
  "method": "distillationv1",
  "method_args": {
    "lr_scale_method": "sqrt",
    "queue_size": 512,
    "reference_batch_size": 1536,
    "teacher": "dinov3/vitb16",
    "teacher_url": "https://dinov3.llamameta.net/dinov3_vitb16/dinov3_vitb16_pretrain_lvd1689m-73cec8be.pth?Policy=eyJTdGF0ZW1lbnQiOlt7InVuaXF1ZV9oYXNoIjoiZXlkcGk1cTRjN3Fla3VmYWgzdzBsNzU2IiwiUmVzb3VyY2UiOiJodHRwczpcL1wvZGlub3YzLmxsYW1hbWV0YS5uZXRcLyoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE3NjExOTUyOTN9fX1dfQ__&Signature=F3Nke5NE0guqjGNTgSLvIG-6kwaz4E-NqruE9IUPIfRQ-l6wvmNCdcZc%7E1ZlD2OeRD0I95IMzvGIsllVrC900w57yi0NBp6Lf0hp5tOztTt%7EafEkP1Grh3EnW8nsd0bE--rQj2jCTDQR5fnorLd6ZBKYJKWkJDq58Kcen65gv5EHnfITyGGBnXCLbvf1i%7EzrfttueEIkOydXFnIIOl%7Etr33p2pGxO-uEIhWzmS%7E57sB43gA1UQAB6Miv1dhLr3HVAU5W-EL6IxTQbgtAJUnNFLFaVIXvwZsCaNrRBvVyS%7ECwG5depOU2w6NDGgS8FjxqdbLl%7EhPznLsBe2Vv73iYhQ__&Key-Pair-Id=K15QRJLYKIFSLZ&Download-Request-ID=787063460892617",
    "teacher_weights": null,
    "temperature": 0.07
  },
  "model": "RTDETRModelWrapper",
  "model_args": null,
  "num_nodes": 1,
  "num_workers": 2,
  "optim": "adamw",
  "optim_args": {
    "betas": [
      0.9,
      0.999
    ],
    "eps": 1e-08,
    "lr": 0.0001,
    "weight_decay": 1e-05
  },
  "out": "out/distill_vit_lightly",
  "overwrite": false,
  "precision": "bf16-mixed",
  "resume": null,
  "resume_interrupted": false,
  "seed": 0,
  "strategy": "DDPStrategy",
  "trainer_args": null,
  "transform_args": {
    "channel_drop": null,
    "color_jitter": {
      "brightness": 0.8,
      "contrast": 0.8,
      "hue": 0.2,
      "prob": 0.8,
      "saturation": 0.4,
      "strength": 0.5
    },
    "gaussian_blur": {
      "blur_limit": 0,
      "prob": 1.0,
      "sigmas": [
        0.0,
        0.1
      ]
    },
    "image_size": [
      224,
      224
    ],
    "normalize": {
      "mean": [
        0.485,
        0.456,
        0.406
      ],
      "std": [
        0.229,
        0.224,
        0.225
      ]
    },
    "num_channels": 3,
    "random_flip": {
      "horizontal_prob": 0.5,
      "vertical_prob": 0.0
    },
    "random_gray_scale": 0.2,
    "random_resize": {
      "max_scale": 1.0,
      "min_scale": 0.14
    },
    "random_rotation": null,
    "solarize": null
  }
}
[2025-10-21 21:51:27,716][INFO] ----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 2 processes
----------------------------------------------------------------------------------------------------
[2025-10-21 21:51:27,733][INFO] LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]
[2025-10-21 21:51:27,900][INFO] Loading `train_dataloader` to estimate number of stepping batches.
[2025-10-21 21:51:28,075][WARNING] /usr/local/lib/python3.11/dist-packages/pytorch_lightning/utilities/model_summary/model_summary.py:231: Precision bf16-mixed is not supported by the model summary. Estimated model size in MB will not be accurate. Using 32 bits instead.
[2025-10-21 21:51:28,092][INFO]
  | Name                    | Type                  | Params | Mode
--------------------------------------------------------------------------
0 | teacher_embedding_model | DinoVisionTransformer | 85.7 M | eval
1 | student_embedding_model | EmbeddingModel        | 23.5 M | train
2 | flatten                 | Flatten               | 0      | train
3 | student_projection_head | Linear                | 1.6 M  | train
4 | criterion               | DistillationLoss      | 0      | train
--------------------------------------------------------------------------
110 M     Trainable params
28.5 K    Non-trainable params
110 M     Total params
442.869   Total estimated model params size (MB)
8         Modules in train mode
481       Modules in eval mode
[2025-10-21 21:51:28,102][WARNING] /usr/local/lib/python3.11/dist-packages/pytorch_lightning/loops/fit_loop.py:527: Found 481 module(s) in eval mode at the start of training. This may lead to unexpected behavior during training. If this is intentional, you can ignore this warning.
[2025-10-21 21:52:12,533][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
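The resolved configuration reports `"lr_scale_method": "sqrt"` with `"reference_batch_size": 1536`. Assuming this denotes the usual square-root scaling rule, scaled lr = base lr × sqrt(global batch / reference batch) — an assumption, since the exact formula is internal to LightlyTrain — the effective learning rate for this run would sit well below the configured 1e-4:

```python
import math

base_lr = 1e-4               # "lr": 0.0001 from optim_args
global_batch_size = 16       # "batch_size": 16
reference_batch_size = 1536  # "reference_batch_size": 1536

# Hypothetical sqrt scaling rule; the real formula lives inside
# LightlyTrain and may differ in detail.
scaled_lr = base_lr * math.sqrt(global_batch_size / reference_batch_size)
print(f"{scaled_lr:.3e}")  # 1.021e-05
```

If this run trains slowly, the small effective rate implied by the tiny global batch relative to the 1536 reference is one place to look.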
[2025-10-21 21:52:12,882][WARNING] /usr/local/lib/python3.11/dist-packages/pytorch_lightning/callbacks/model_checkpoint.py:467: `ModelCheckpoint(monitor='val_loss')` could not find the monitored key in the returned metrics: ['train_loss', 'profiling/batch_time', 'profiling/data_time', 'lr-AdamW/params', 'lr-AdamW/params_no_weight_decay', 'epoch', 'step']. HINT: Did you call `log('val_loss', value)` in the `LightningModule`?
[2025-10-21 21:52:59,457][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 21:53:46,303][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 21:54:34,023][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 21:55:21,018][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 21:56:08,154][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 21:56:55,194][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 21:57:42,256][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 21:58:29,413][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 21:59:16,521][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
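The `ModelCheckpoint` warning above means no "best" checkpoint is ever selected in this run: the monitored key `val_loss` never appears among the logged metrics, and the filename template references yet another key, `validation_loss`. A minimal check against the metric names printed verbatim in the warning:

```python
# Metric keys listed verbatim in the ModelCheckpoint warning above.
logged_metrics = {
    "train_loss", "profiling/batch_time", "profiling/data_time",
    "lr-AdamW/params", "lr-AdamW/params_no_weight_decay", "epoch", "step",
}

monitor = "val_loss"              # model_checkpoint.monitor
filename_key = "validation_loss"  # key used in the filename template

# Neither key is ever logged, so save_top_k never fires and only the
# 'last' checkpoint / exported_last.pt are written. The fix (not shown
# here) would be to log the monitored key, e.g. self.log("val_loss",
# loss) in a validation step, and to use one consistent name in both
# `monitor` and `filename`.
assert monitor not in logged_metrics
assert filename_key not in logged_metrics
print("val_loss never logged -> no best-model checkpoint is saved")
```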
[2025-10-21 22:00:03,825][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:00:51,987][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:01:39,467][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:02:26,853][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:03:13,960][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:04:01,041][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:04:48,085][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:05:35,092][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:06:23,833][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:07:11,042][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:07:58,279][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:08:45,339][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:09:32,676][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:10:19,997][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:11:07,186][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:11:54,383][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:12:42,423][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:13:29,533][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:14:16,443][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:15:03,427][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:15:50,329][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:16:37,150][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:17:24,431][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:18:12,003][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:19:00,569][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:19:48,146][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:20:35,960][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:21:23,759][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:22:11,453][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:22:59,060][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:23:46,990][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:24:34,964][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:25:23,885][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:26:11,735][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:26:59,592][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:27:47,598][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:28:35,523][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:29:23,525][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:30:11,786][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:31:00,765][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:31:04,103][INFO] `Trainer.fit` stopped: `max_epochs=50` reached.
[2025-10-21 22:31:04,928][INFO] Training completed.
[2025-10-21 22:31:04,931][DEBUG] Exporting model to '/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt' in format 'ModelFormat.PACKAGE_DEFAULT'.
[2025-10-21 22:31:05,419][INFO] Example: How to use the exported model
----------------------------------------------------------------------------------------
import RTDETR # Import the model that was used here
import torch

# Load the pretrained model
model = RTDETR(...)
model.load_state_dict(torch.load('/kaggle/working/out/distill_vit_lightly/exported_models/exported_last.pt', weights_only=True))

# Finetune or evaluate the model
...
----------------------------------------------------------------------------------------
[2025-10-21 22:31:05,420][INFO] Model exported.
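The usage example printed by LightlyTrain is a template: `RTDETR(...)` is a placeholder and `import RTDETR` is not literal Python. The load pattern itself can be sketched with a stand-in module; the small `nn.Sequential` below is hypothetical and only illustrates the `state_dict` round trip with `weights_only=True`, which is safe here because the export holds plain weights:

```python
import os
import tempfile

import torch
from torch import nn


# Hypothetical stand-in for the distilled RTDETR backbone.
def make_model() -> nn.Module:
    return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))


model = make_model()

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "exported_last.pt")
    # Save only the state_dict, mirroring what the exported file contains.
    torch.save(model.state_dict(), path)

    # Reload into a freshly constructed model, as in the log's example.
    restored = make_model()
    restored.load_state_dict(torch.load(path, weights_only=True))

# Every tensor survives the round trip unchanged.
assert all(
    torch.equal(a, b)
    for a, b in zip(model.state_dict().values(), restored.state_dict().values())
)
```

For the real run, `path` would be the exported file under `out/distill_vit_lightly/exported_models/` and `make_model()` would construct the actual RTDETR wrapper used during distillation.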