Instructions for using LetsThink/MfM-Pipeline-2B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use LetsThink/MfM-Pipeline-2B with Diffusers:
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "LetsThink/MfM-Pipeline-2B",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
- Notebooks
- Google Colab
- Kaggle
Add comprehensive model card for Many-for-Many
#1
by nielsr (HF Staff)
This PR adds a comprehensive model card for the Many-for-Many model.
It includes:
- Relevant metadata: `license` (Apache 2.0), `pipeline_tag` (any-to-any), and `library_name` (diffusers), along with descriptive tags for its capabilities.
- Links to the official paper on Hugging Face, the project page, and the GitHub repository.
- A concise description of the model and its key features.
- Sample inference code directly from the project's GitHub to facilitate quick start.
- Visual elements (logo, results image, demo video, architecture diagram) for better understanding.
- The official BibTeX citation.
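The metadata fields listed above are declared in the YAML front matter at the top of the model card's README. A minimal sketch of what such a header could look like, assuming the values named in this PR description (the entries under `tags` are illustrative, not copied from the actual PR):

```yaml
# Model card front matter (sketch; tag list is illustrative)
license: apache-2.0
pipeline_tag: any-to-any
library_name: diffusers
tags:
  - diffusion
  - text-to-image
```

The `pipeline_tag` and `library_name` fields are what drive the Hub's task filters and the auto-generated "Use this model" code snippet shown earlier.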
This update significantly improves discoverability and usability for researchers and users on the Hugging Face Hub.