Instructions to use microsoft/wavlm-base-plus-sv with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use microsoft/wavlm-base-plus-sv with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForAudioXVector

processor = AutoProcessor.from_pretrained("microsoft/wavlm-base-plus-sv")
model = AutoModelForAudioXVector.from_pretrained("microsoft/wavlm-base-plus-sv")
```
- Notebooks
- Google Colab
- Kaggle
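The loading snippet above can be extended into a minimal speaker-verification sketch: extract an x-vector embedding for each clip and compare the two with cosine similarity. The random-noise clips are placeholders for real 16 kHz mono audio, and the decision threshold is something you would tune for your data, so treat this as an illustration of the API shape rather than a ready-made verifier.

```python
import torch
from transformers import AutoProcessor, AutoModelForAudioXVector

processor = AutoProcessor.from_pretrained("microsoft/wavlm-base-plus-sv")
model = AutoModelForAudioXVector.from_pretrained("microsoft/wavlm-base-plus-sv")
model.eval()

# Two placeholder 1-second clips of random noise; replace with real
# 16 kHz mono float arrays loaded from your audio files.
audio1 = torch.randn(16000).numpy()
audio2 = torch.randn(16000).numpy()

inputs = processor([audio1, audio2], sampling_rate=16000,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    embeddings = model(**inputs).embeddings

# L2-normalize the x-vectors, then score the pair with cosine similarity;
# a higher score means the clips are more likely from the same speaker.
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
similarity = torch.nn.functional.cosine_similarity(
    embeddings[0], embeddings[1], dim=-1)
print(float(similarity))
```

In practice you would compare `similarity` against a threshold calibrated on a held-out set of same-speaker and different-speaker pairs.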
Memory leak
#3
by sourcesur - opened
@anton-l
Encountered a memory leak when running inference with the model repeatedly on the same audio sample. Memory usage keeps increasing, and a CUDA out-of-memory error is thrown when running on multiple audios.
I use transformers version 4.29.0, since later versions have a problem loading the pretrained weights correctly (the problem is described in this discussion: https://huggingface.co/microsoft/wavlm-base-plus-sv/discussions/2)
CUDA: 12.2
torch: 2.2.1
Has anyone successfully used this model?
Wrapping inference in torch.no_grad() solved the issue
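The fix works because, without `torch.no_grad()`, every forward pass builds an autograd graph whose activations stay alive as long as the outputs are referenced, so looping over many audios accumulates memory until CUDA runs out. A minimal sketch of the pattern, using a small `torch.nn.Linear` as a stand-in for the x-vector model (any `nn.Module` shows the same behavior):

```python
import torch

# Stand-in for the x-vector model; assumes a batch of 1-second 16 kHz audio.
model = torch.nn.Linear(16000, 512).eval()
audio_batch = torch.randn(1, 16000)

# Plain forward pass: the output carries an autograd graph, and the
# intermediate activations it references are kept in memory.
out_tracked = model(audio_batch)
assert out_tracked.requires_grad

# Inference wrapped in no_grad: no graph is recorded, so per-iteration
# memory is released and repeated calls do not accumulate.
with torch.no_grad():
    out = model(audio_batch)
assert not out.requires_grad
print(tuple(out.shape))
```

`torch.inference_mode()` is a slightly stricter, often faster alternative to `torch.no_grad()` for pure inference loops.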
sourcesur changed discussion status to closed