Memory leak

#3
by sourcesur - opened

@anton-l
Encountered a memory leak when running inference with the model on the same audio sample repeatedly. Memory usage keeps increasing and eventually throws a CUDA out-of-memory error when processing multiple audio files.
I use transformers version 4.29.0, since later versions have a problem loading the pretrained weights correctly (the problem is described in this discussion: https://huggingface.co/microsoft/wavlm-base-plus-sv/discussions/2)
CUDA: 12.2
torch: 2.2.1
Has someone successfully used the model?

Wrapping inference in torch.no_grad() solved the issue.
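For anyone hitting the same problem, here is a minimal sketch of the fix (the model here is a stand-in, not the actual WavLM speaker-verification pipeline): without torch.no_grad(), every forward pass builds and retains an autograd graph, so memory grows across repeated calls; inside the context manager, no graph is kept.

```python
import torch

# Stand-in module; in the real case this would be the pretrained
# WavLM speaker-verification model.
model = torch.nn.Linear(16, 4)
model.eval()

x = torch.randn(8, 16)

# Disabling gradient tracking during inference prevents autograd
# from accumulating computation graphs across repeated calls.
with torch.no_grad():
    emb = model(x)

# The output carries no autograd graph, so nothing extra is retained.
print(emb.requires_grad)  # False
```

The same effect can be achieved by decorating an inference function with @torch.no_grad(), or (on recent PyTorch versions) using torch.inference_mode().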

sourcesur changed discussion status to closed
