Instructions to use utter-project/mHuBERT-147 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
How to use utter-project/mHuBERT-147 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="utter-project/mHuBERT-147")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModel

processor = AutoProcessor.from_pretrained("utter-project/mHuBERT-147")
model = AutoModel.from_pretrained("utter-project/mHuBERT-147")
```
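When extracting features locally, it helps to know the output frame rate. mHuBERT-147 follows the HuBERT-base architecture, whose convolutional front end downsamples 16 kHz audio by 320x (one frame per 20 ms). A minimal sketch of the frame-count arithmetic — the kernel/stride values below are the standard wav2vec 2.0 / HuBERT-base encoder configuration, assumed here rather than read from this checkpoint:

```python
# Kernel sizes and strides of the standard HuBERT-base CNN feature
# encoder (total downsampling factor: 5*2*2*2*2*2*2 = 320).
KERNELS = [10, 3, 3, 3, 3, 2, 2]
STRIDES = [5, 2, 2, 2, 2, 2, 2]

def num_frames(num_samples: int) -> int:
    """Number of feature frames the encoder emits for a raw waveform."""
    n = num_samples
    for k, s in zip(KERNELS, STRIDES):
        n = (n - k) // s + 1  # standard conv output-length formula
    return n

print(num_frames(16000))  # 1 s of 16 kHz audio -> 49 frames (~50 Hz)
```

So a 10-second clip yields roughly 500 frame-level feature vectors, which is worth keeping in mind when batching long utterances.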
mHuBERT vs. ContentVec
Hi. Is there any difference between mHuBERT and ContentVec in terms of quality?
Which one is more recent? I think ContentVec is not multilingual, or is it?
I plan to improve the pronunciation of voice conversions through RVC, especially in multiple languages, by using mHuBERT.
RVC currently uses ContentVec.
What do you think? I think I will have to train a new pretrained base model using this mHuBERT :)
Hi!
Thanks for the interest in the model. I am not familiar with ContentVec, but it appears to be an English-only model from 2022.
mHuBERT-147 is based on HuBERT and was trained on 147 languages. It was released last August, and it's a very competitive multilingual SSL block (see the ML-SUPERB results).
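For an RVC-style pipeline, the usual pattern is to take frame-level features from an intermediate transformer layer rather than the final output. A hedged sketch of that layer-selection step: the layer index (9) and the base dimensions (12 layers, 768-d) are assumptions about how such a pipeline might be set up, and the stand-in tensors mimic what `model(input_values, output_hidden_states=True).hidden_states` would return:

```python
import torch

# In a real run, hidden_states would come from:
#   out = model(input_values, output_hidden_states=True)
#   hidden_states = out.hidden_states  # tuple: embeddings + one per layer
# Here we use stand-in tensors with HuBERT-base shapes:
# 13 entries (CNN output + 12 transformer layers), batch 1, 49 frames, 768 dims.
hidden_states = tuple(torch.randn(1, 49, 768) for _ in range(13))

LAYER = 9  # assumed intermediate layer; tune per downstream task
units = hidden_states[LAYER].squeeze(0)  # (frames, 768) frame-level features
print(tuple(units.shape))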
If you train something based on mHuBERT-147, don't hesitate to let me know. :)
Hi Blakus, I am also looking at doing the same thing with RVC and mHubert. I'm curious, did you ended up trying this and what your results were like?