Instructions for using LetsThink/MfM-Pipeline-8B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use LetsThink/MfM-Pipeline-8B with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "LetsThink/MfM-Pipeline-8B",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
Add comprehensive model card for Many-for-Many unified generation model
#1
Opened by nielsr (HF Staff)
This PR adds a comprehensive model card for the Many-for-Many model.
It links the model to its paper: Many-for-Many: Unify the Training of Multiple Video and Image Generation and Manipulation Tasks.
It also adds essential metadata, including:
- `pipeline_tag: any-to-any`, reflecting its capability across various image and video generation and manipulation tasks.
- `library_name: diffusers`, as the model is built on the Diffusers framework.
- `license: apache-2.0`.
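On the Hugging Face Hub, this metadata lives in the YAML front matter at the top of the model card's README. A minimal sketch of what the PR's metadata would look like (field names and values taken from the list above; the exact ordering is not specified in the PR):

```yaml
---
pipeline_tag: any-to-any
library_name: diffusers
license: apache-2.0
---
```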
Additionally, the PR provides links to the project page and the GitHub repository, along with a basic Python usage example to help users get started.