Instructions to use maya-research/maya1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use maya-research/maya1 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-to-speech", model="maya-research/maya1")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("maya-research/maya1")
model = AutoModelForCausalLM.from_pretrained("maya-research/maya1")
```

- Notebooks
- Google Colab
- Kaggle
Audio length limit?
#15
by weightsnweights - opened
Is there an audio length limit? I get only 18 s of audio (when running on the HF Space).
You will have to increase max_tokens in the advanced settings.
Tried maxing out max_tokens but I still get only 20 s. And it picks a seemingly random set of sentences from the middle of the text that I input.
That is a limitation of Transformers-based inference: the model's maximum context length caps how much audio can be generated in one pass. We have, however, created a repository that uses FastAPI and a sliding-window approach for longer, more stable generations, with vLLM as the inference engine: https://github.com/MayaResearch/maya1-fastapi
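The sliding-window idea mentioned above can be sketched in plain Python. This is a hypothetical illustration, not the maya1-fastapi implementation: the `window`/`overlap` parameters and the helper names are assumptions. The point is that long input text is split into overlapping sentence windows, so each generation call stays within the model's context limit while carrying a little trailing context for continuity; a real pipeline would run the model on each window and stitch the resulting audio.

```python
import re

def split_sentences(text):
    """Naive sentence splitter on '.', '!', '?' boundaries
    (a real system would use a proper sentence tokenizer)."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

def sliding_windows(sentences, window=3, overlap=1):
    """Return overlapping windows of sentences. Each window is sized
    to fit the model's context; `overlap` sentences are repeated at
    each boundary so generations join up smoothly."""
    step = window - overlap
    windows = []
    i = 0
    while i < len(sentences):
        windows.append(sentences[i:i + window])
        if i + window >= len(sentences):
            break
        i += step
    return windows

# Hypothetical usage: generate audio per window and concatenate.
# `generate_audio` is a stand-in for the actual model call.
# audio = b"".join(generate_audio(" ".join(w)) for w in sliding_windows(sents))
```

Each model call then only ever sees `window` sentences of text, regardless of total input length, which is what removes the ~20 s cap seen with single-shot Transformers inference.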