Instructions for using genmo/mochi-1-preview with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use genmo/mochi-1-preview with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "genmo/mochi-1-preview", torch_dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
# Mochi is a text-to-video model: the pipeline returns frames, not images
frames = pipe(prompt).frames[0]
export_to_video(frames, "mochi.mp4", fps=30)
```
- Genmo
How to use genmo/mochi-1-preview with Genmo:
```python
# No code snippets available yet for this library.
# To use this model, check the repository files and the library's documentation.
# Want to help? PRs adding snippets are welcome at:
# https://github.com/huggingface/huggingface.js
```
- Inference
- Notebooks
- Google Colab
- Kaggle
Fine Tuning
#9 · opened by pingkeest
Any documentation on how we can fine-tune the model?
Do you have a supercomputer in your backyard?
Any idea if the model will be open-source?
The encoder is now open-sourced, so it should be possible now.
Is 8x A100 80GB big enough for fine-tuning? Is there any fine-tuning code available, or should it be easy to figure out?
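The hardware question above can be sanity-checked with rough arithmetic. The sketch below assumes the commonly cited ~10B parameter count for Mochi 1 and standard Adam byte costs (bf16 weights and gradients, fp32 optimizer moments); these are assumptions for illustration, not measurements.

```python
# Back-of-envelope memory estimate for full fine-tuning.
# Assumes ~10B parameters and a standard Adam setup (assumption, not measured).

GiB = 1024 ** 3

def training_state_gib(n_params: int,
                       weight_bytes: int = 2,   # bf16 weights
                       grad_bytes: int = 2,     # bf16 gradients
                       optim_bytes: int = 8):   # fp32 Adam m and v
    """GiB needed for weights + gradients + optimizer states."""
    return n_params * (weight_bytes + grad_bytes + optim_bytes) / GiB

n_params = 10_000_000_000  # ~10B (assumption)
state = training_state_gib(n_params)
print(f"~{state:.0f} GiB of training state")  # ~112 GiB

# 8x A100 80GB pools 640 GB of HBM, so the training state alone fits
# with headroom; activation memory for long video sequences is the
# real constraint and depends on frame count and resolution.
```

So the persistent training state fits comfortably on that cluster; whether a given batch fits depends on activation memory, which techniques like gradient checkpointing can reduce.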
I'm working on an AI model for generating text-to-image, audio-to-text, and text-to-text output. Which models would work best for this? Do I need a different API for each output type, or will a single API work for everything? Where can I purchase API access for the best output?
hahahahaha