RuntimeError: CUDA out of memory.
Hello, I ran your new code, but when it reached model.to(device), I got this error:
RuntimeError: CUDA out of memory. Tried to allocate 100.00 MiB (GPU 0; 39.41 GiB total capacity; 38.43 GiB already allocated; 51.12 MiB free; 38.43 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Our GPU is an A100 with 40 GB of memory, but that is still not enough. When I ran it before, I never hit an out-of-memory error. I also checked the GPU, and there are no other zombie processes running on it. What is your configuration? How can I get the correct result?
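For what it's worth, the error message itself suggests setting max_split_size_mb to reduce fragmentation when reserved memory far exceeds allocated memory. A minimal sketch of applying that suggestion (the 128 MB value is just an illustrative choice, not from the error message):

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be set before the first CUDA allocation,
# so either export it when launching the script:
#   PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python run.py
# or set it from Python before torch touches the GPU:
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# After this, import torch and run the model as usual; the caching
# allocator will avoid splitting blocks larger than 128 MB, which can
# reduce fragmentation-driven OOMs at some cost in reuse efficiency.
```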
Besides, for the overall inference code at https://sites.google.com/view/iaudittool/home, I changed the dtype in audit.sh from float16 to float32, and the result is still messy.
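Note that switching from float16 to float32 roughly doubles the memory needed for the model weights alone, which may matter on a 40 GB card. A rough back-of-the-envelope estimate (the 7B parameter count below is purely illustrative; the actual model size is not stated here):

```python
def model_memory_gib(num_params: int, bytes_per_param: int) -> float:
    """Lower bound on weight memory in GiB; ignores activations,
    optimizer state, and any KV cache."""
    return num_params * bytes_per_param / 1024**3

# Hypothetical 7B-parameter model:
#   float16 (2 bytes/param) -> ~13.0 GiB of weights
#   float32 (4 bytes/param) -> ~26.1 GiB of weights
fp16 = model_memory_gib(7_000_000_000, 2)
fp32 = model_memory_gib(7_000_000_000, 4)
```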
I am quite confused. I hope you can help with these problems, thanks.
Hi,
Thank you for your interest. Unfortunately, hardware issues are beyond the scope of my assistance at the moment. As for the inconsistent results you mentioned, I have not encountered this issue on my side, but I may investigate it in the future using different hardware configurations.