Instructions for using behka57/mv-diffusion with libraries, inference providers, notebooks, and local apps.
- Libraries
  - Diffusers

How to use behka57/mv-diffusion with Diffusers:

```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "behka57/mv-diffusion",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
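The comment in the snippet above suggests switching to `"mps"` on Apple devices. A minimal device-selection sketch (a hypothetical helper, not part of the model card; the availability flags would come from `torch` in real use) could look like:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the preferred torch device string.

    Preference order: CUDA GPU, then Apple "mps", then CPU fallback.
    """
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# In real code the flags would come from torch, e.g.:
# pick_device(torch.cuda.is_available(), torch.backends.mps.is_available())
```

The returned string can then be passed as `device_map` when loading the pipeline.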
Update README.md

README.md changed:

```diff
@@ -9,9 +9,9 @@ This is a duplicate of [ashawkey/imagedream-ipmv-diffusers](https://huggingface.
 It is hosted here for the purpose of persistence and reproducibility for the ML for 3D course.

-Original MVDream paper: [
+Original MVDream paper: [MVDream: Multi-view Diffusion for 3D Generation](https://huggingface.co/papers/2308.16512)

-Original ImageDream paper: [
+Original ImageDream paper: [ImageDream: Image-Prompt Multi-view Diffusion for 3D Generation](https://huggingface.co/papers/2312.02201)

 ### Usage
```