2025-02-18 11:33:44.321671: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
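The notice above suggests its own remedy: oneDNN's fused kernels can reorder floating-point accumulations, so when bit-exact reproducibility matters more than speed, the variable must be set before TensorFlow is imported. A minimal sketch (the variable name comes straight from the log line; setting it from Python only works ahead of `import tensorflow`):

```python
import os

# Disable oneDNN custom ops for reproducible float rounding.
# This must run before `import tensorflow`; once TF is imported,
# changing the variable has no effect.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"

print(os.environ["TF_ENABLE_ONEDNN_OPTS"])
```

Exporting `TF_ENABLE_ONEDNN_OPTS=0` in the shell before launching the script achieves the same thing.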
2025-02-18 11:33:44.325020: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2025-02-18 11:33:44.328630: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2025-02-18 11:33:44.340517: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1739846024.361216 3405905 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1739846024.367365 3405905 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-02-18 11:33:44.388285: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
/home/chaeyun/.conda/envs/risall/lib/python3.9/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
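This FutureWarning means the model code still imports from `timm.models.layers`, which newer timm releases alias to `timm.layers`. One hedged way to silence it without pinning a timm version is a fallback import; `import_compat` is a hypothetical helper, and the commented `DropPath` line is only an illustration of how the model code might use it:

```python
import importlib

def import_compat(candidates, name):
    """Return attribute `name` from the first module in `candidates` that
    imports successfully and exposes it."""
    for mod in candidates:
        try:
            return getattr(importlib.import_module(mod), name)
        except (ImportError, AttributeError):
            continue
    raise ImportError(f"{name!r} not found in any of {candidates}")

# In the model code this would become, e.g.:
#   DropPath = import_compat(("timm.layers", "timm.models.layers"), "DropPath")
```

Trying the new path first keeps the code working on both old and new timm without triggering the deprecated-module import.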
/home/chaeyun/.conda/envs/risall/lib/python3.9/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3526.)
  return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
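The `torch.meshgrid` warning is resolved by passing `indexing="ij"`, which matches torch's historical default (the warning itself says the argument will become required). NumPy's `meshgrid` accepts the same argument, so it is used below purely to illustrate the two conventions without needing torch installed; the torch fix stated in the comment is an inference from the warning text:

```python
import numpy as np

x = np.arange(3)
y = np.arange(2)

# "ij" (matrix) indexing: output shape (len(x), len(y)). This is what
# torch.meshgrid(x, y) has always produced, so the warning-free call that
# preserves behaviour is torch.meshgrid(x, y, indexing="ij").
gx_ij, gy_ij = np.meshgrid(x, y, indexing="ij")

# "xy" (Cartesian) indexing: output shape (len(y), len(x)), NumPy's default.
gx_xy, gy_xy = np.meshgrid(x, y, indexing="xy")

print(gx_ij.shape, gx_xy.shape)  # (3, 2) (2, 3)
```

The two conventions are transposes of each other, so picking the wrong one silently swaps the spatial axes of any grid built from the result.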
Image size: 480
loading dataset refcocog into memory...
loading dataset split umd
creating index...
index created.
DONE (t=6.06s)
dmmi_swin_hardpos_only
Window size 12!
Randomly initialize Multi-modal Swin Transformer weights.
Test: [ 0/5023] eta: 2:14:36 time: 1.6079 data: 0.7791 max mem: 1554
Test: [ 100/5023] eta: 0:21:54 time: 0.2536 data: 0.0024 max mem: 1554
Test: [ 200/5023] eta: 0:20:44 time: 0.2491 data: 0.0025 max mem: 1554
Test: [ 300/5023] eta: 0:20:20 time: 0.2592 data: 0.0024 max mem: 1554
Test: [ 400/5023] eta: 0:19:46 time: 0.2509 data: 0.0024 max mem: 1554
Test: [ 500/5023] eta: 0:19:17 time: 0.2535 data: 0.0024 max mem: 1554
Test: [ 600/5023] eta: 0:18:52 time: 0.2560 data: 0.0024 max mem: 1554
Test: [ 700/5023] eta: 0:18:25 time: 0.2549 data: 0.0024 max mem: 1554
Test: [ 800/5023] eta: 0:18:00 time: 0.2561 data: 0.0024 max mem: 1554
Test: [ 900/5023] eta: 0:17:34 time: 0.2553 data: 0.0024 max mem: 1554
Test: [1000/5023] eta: 0:17:07 time: 0.2516 data: 0.0024 max mem: 1554
Test: [1100/5023] eta: 0:16:40 time: 0.2521 data: 0.0024 max mem: 1554
Test: [1200/5023] eta: 0:16:16 time: 0.2606 data: 0.0024 max mem: 1554
Test: [1300/5023] eta: 0:15:51 time: 0.2554 data: 0.0024 max mem: 1554
Test: [1400/5023] eta: 0:15:24 time: 0.2494 data: 0.0024 max mem: 1554
Test: [1500/5023] eta: 0:14:59 time: 0.2569 data: 0.0024 max mem: 1554
Test: [1600/5023] eta: 0:14:32 time: 0.2496 data: 0.0024 max mem: 1554
Test: [1700/5023] eta: 0:14:07 time: 0.2564 data: 0.0024 max mem: 1554
Test: [1800/5023] eta: 0:13:42 time: 0.2605 data: 0.0024 max mem: 1554
Test: [1900/5023] eta: 0:13:16 time: 0.2523 data: 0.0024 max mem: 1554
Test: [2000/5023] eta: 0:12:50 time: 0.2533 data: 0.0024 max mem: 1554
Test: [2100/5023] eta: 0:12:25 time: 0.2524 data: 0.0024 max mem: 1554
Test: [2200/5023] eta: 0:11:59 time: 0.2509 data: 0.0024 max mem: 1554
Test: [2300/5023] eta: 0:11:32 time: 0.2487 data: 0.0024 max mem: 1554
Test: [2400/5023] eta: 0:11:07 time: 0.2579 data: 0.0024 max mem: 1554
Test: [2500/5023] eta: 0:10:42 time: 0.2539 data: 0.0024 max mem: 1554
Test: [2600/5023] eta: 0:10:17 time: 0.2583 data: 0.0024 max mem: 1554
Test: [2700/5023] eta: 0:09:51 time: 0.2572 data: 0.0024 max mem: 1554
Test: [2800/5023] eta: 0:09:26 time: 0.2539 data: 0.0024 max mem: 1554
Test: [2900/5023] eta: 0:09:01 time: 0.2593 data: 0.0024 max mem: 1554
Test: [3000/5023] eta: 0:08:35 time: 0.2496 data: 0.0024 max mem: 1554
Test: [3100/5023] eta: 0:08:10 time: 0.2569 data: 0.0024 max mem: 1554
Test: [3200/5023] eta: 0:07:44 time: 0.2493 data: 0.0024 max mem: 1554
Test: [3300/5023] eta: 0:07:18 time: 0.2575 data: 0.0024 max mem: 1554
Test: [3400/5023] eta: 0:06:53 time: 0.2494 data: 0.0024 max mem: 1554
Test: [3500/5023] eta: 0:06:27 time: 0.2490 data: 0.0025 max mem: 1554
Test: [3600/5023] eta: 0:06:01 time: 0.2496 data: 0.0024 max mem: 1554
Test: [3700/5023] eta: 0:05:36 time: 0.2615 data: 0.0024 max mem: 1554
Test: [3800/5023] eta: 0:05:11 time: 0.2553 data: 0.0024 max mem: 1554
Test: [3900/5023] eta: 0:04:45 time: 0.2559 data: 0.0024 max mem: 1554
Test: [4000/5023] eta: 0:04:20 time: 0.2486 data: 0.0024 max mem: 1554
Test: [4100/5023] eta: 0:03:54 time: 0.2360 data: 0.0024 max mem: 1554
Test: [4200/5023] eta: 0:03:28 time: 0.2500 data: 0.0025 max mem: 1554
Test: [4300/5023] eta: 0:03:03 time: 0.2532 data: 0.0024 max mem: 1554
Test: [4400/5023] eta: 0:02:38 time: 0.2476 data: 0.0024 max mem: 1554
Test: [4500/5023] eta: 0:02:12 time: 0.2490 data: 0.0024 max mem: 1554
Test: [4600/5023] eta: 0:01:47 time: 0.2511 data: 0.0024 max mem: 1554
Test: [4700/5023] eta: 0:01:21 time: 0.2558 data: 0.0024 max mem: 1554
Test: [4800/5023] eta: 0:00:56 time: 0.2530 data: 0.0024 max mem: 1554
Test: [4900/5023] eta: 0:00:31 time: 0.2519 data: 0.0024 max mem: 1554
Test: [5000/5023] eta: 0:00:05 time: 0.2462 data: 0.0025 max mem: 1554
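Each progress line reports a smoothed per-iteration wall time (`time`), the dataloader wait (`data`), and an `eta` that is essentially the remaining iteration count times the average iteration time. That reading of the logger is an assumption based on the numbers above (a MetricLogger-style printer would also use a sliding window, which is why the logged ETA at iteration 100 still reflects the slow first batch); `format_eta` is a hypothetical helper:

```python
import datetime

def format_eta(iters_done, iters_total, avg_iter_time):
    """Format remaining time as H:MM:SS, like the `eta:` field above."""
    remaining_s = (iters_total - iters_done) * avg_iter_time
    return str(datetime.timedelta(seconds=int(remaining_s)))

# At iteration 100 of 5023 with ~0.2536 s/iteration:
print(format_eta(100, 5023, 0.2536))
```

With a steady ~0.25 s/iteration this predicts roughly the 21-minute total the run actually took.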
/home/chaeyun/.conda/envs/risall/lib/python3.9/site-packages/numpy/core/fromnumeric.py:3504: RuntimeWarning: Mean of empty slice.
  return _methods._mean(a, axis=axis, dtype=dtype,
/home/chaeyun/.conda/envs/risall/lib/python3.9/site-packages/numpy/core/_methods.py:129: RuntimeWarning: invalid value encountered in scalar divide
  ret = ret.dtype.type(ret / rcount)
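These two RuntimeWarnings are benign but worth understanding: `np.mean` on an empty array returns `nan` (an empty sum divided by a zero count) instead of raising, which is what happens when some metric bucket receives no samples. A small sketch of the behaviour and an explicit guard (the choice of `0.0` for an empty bucket is an assumption, not this script's policy):

```python
import warnings
import numpy as np

scores = np.array([])  # e.g. a metric bucket that received no samples

with warnings.catch_warnings():
    warnings.simplefilter("ignore", RuntimeWarning)
    unguarded = np.mean(scores)  # nan, plus the two warnings seen above

# Explicit guard: decide what an empty bucket should report (0.0 here).
guarded = float(np.mean(scores)) if scores.size else 0.0

print(unguarded, guarded)
```

Guarding explicitly also documents the intended semantics, instead of letting a `nan` propagate into a summary table.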
Test: Total time: 0:21:13
Final results:
precision@0.5 = 72.59
precision@0.6 = 68.48
precision@0.7 = 62.26
precision@0.8 = 51.65
precision@0.9 = 27.24
overall IoU = 61.36
mean IoU = 64.22
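For reference, these metrics follow the usual referring-segmentation definitions: `precision@t` is the fraction of test expressions whose predicted mask reaches IoU >= t; `overall IoU` divides total intersection by total union over the whole split (so large objects weigh more); `mean IoU` averages per-sample IoUs, which is why the two IoU numbers differ. A sketch under those standard definitions, not this repository's exact code:

```python
import numpy as np

def summarize(inters, unions, thresholds=(0.5, 0.6, 0.7, 0.8, 0.9)):
    """Per-sample intersection/union pixel counts -> the metrics printed above."""
    inters = np.asarray(inters, dtype=float)
    unions = np.asarray(unions, dtype=float)
    ious = inters / unions
    # precision@t: share of samples clearing each IoU threshold, in percent.
    results = {f"precision@{t}": 100.0 * (ious >= t).mean() for t in thresholds}
    # overall IoU: pool pixels across the dataset, then divide once.
    results["overall IoU"] = 100.0 * inters.sum() / unions.sum()
    # mean IoU: average of per-sample IoUs, each sample weighted equally.
    results["mean IoU"] = 100.0 * ious.mean()
    return results

# Toy usage: two samples with IoUs 0.8 and 0.4.
print(summarize([80, 40], [100, 100]))
```

In the toy case both IoU summaries coincide at 60 because the unions are equal; on real data with varied object sizes, overall IoU and mean IoU diverge, as in the 61.36 vs 64.22 above.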