Runtime error: CUDA out of memory

Exit code: 1. Reason:

  , in to
    module.to(device, dtype)
    ~~~~~~~~~^^^^^^^^^^^^^^^
  File "/root/.pyenv/versions/3.13.12/lib/python3.13/site-packages/diffusers/models/modeling_utils.py", line 1451, in to
    return super().to(*args, **kwargs)
           ~~~~~~~~~~^^^^^^^^^^^^^^^^^
  File "/root/.pyenv/versions/3.13.12/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1381, in to
    return self._apply(convert)
           ~~~~~~~~~~~^^^^^^^^^
  File "/root/.pyenv/versions/3.13.12/lib/python3.13/site-packages/torch/nn/modules/module.py", line 933, in _apply
    module._apply(fn)
    ~~~~~~~~~~~~~^^^^
  File "/root/.pyenv/versions/3.13.12/lib/python3.13/site-packages/torch/nn/modules/module.py", line 933, in _apply
    module._apply(fn)
    ~~~~~~~~~~~~~^^^^
  File "/root/.pyenv/versions/3.13.12/lib/python3.13/site-packages/torch/nn/modules/module.py", line 933, in _apply
    module._apply(fn)
    ~~~~~~~~~~~~~^^^^
  [Previous line repeated 1 more time]
  File "/root/.pyenv/versions/3.13.12/lib/python3.13/site-packages/torch/nn/modules/module.py", line 964, in _apply
    param_applied = fn(param)
  File "/root/.pyenv/versions/3.13.12/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1367, in convert
    return t.to(
    ~~~~^
        device,
        ^^^^^^^
        dtype if t.is_floating_point() or t.is_complex() else None,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        non_blocking,
        ^^^^^^^^^^^^^
    )
    ^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 108.00 MiB. GPU 0 has a total capacity of 44.39 GiB of which 3.38 MiB is free. Including non-PyTorch memory, this process has 44.38 GiB memory in use. Of the allocated memory 43.77 GiB is allocated by PyTorch, and 196.63 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
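The error message itself suggests one mitigation: setting `PYTORCH_ALLOC_CONF=expandable_segments:True` to reduce allocator fragmentation. This variable is only read when torch initializes, so it must be set before `import torch` runs anywhere in the process. A minimal sketch of that setup (stdlib only; it does not require a GPU to run):

```python
import os

# The CUDA caching allocator reads this variable once, at torch import time,
# so set it before any `import torch` executes in the process.
# expandable_segments lets the allocator grow existing segments instead of
# reserving fixed-size blocks, which reduces "reserved but unallocated" waste.
os.environ["PYTORCH_ALLOC_CONF"] = "expandable_segments:True"

print(os.environ["PYTORCH_ALLOC_CONF"])
```

Note, however, that in this traceback only 196.63 MiB is reserved-but-unallocated while 43.77 GiB is genuinely allocated by PyTorch, so fragmentation is unlikely to be the root cause here: the working set simply does not fit alongside whatever else the process holds. More likely fixes are reducing the resident footprint, for example loading the diffusers pipeline in half precision (`torch_dtype=torch.float16`) or calling `pipe.enable_model_cpu_offload()` instead of `pipe.to("cuda")`, both standard diffusers memory-saving options.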
