Instructions for using DhruvDecoder/model_3d_diffuser with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use DhruvDecoder/model_3d_diffuser with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "DhruvDecoder/model_3d_diffuser",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
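The Diffusers snippet above hard-codes `device_map="cuda"` and notes switching to `"mps"` on Apple devices. A small helper can pick the backend at runtime instead; this is a sketch assuming only that PyTorch is installed, not part of the model card itself:

```python
import torch


def pick_device() -> str:
    """Return the best available torch device string for the pipeline."""
    if torch.cuda.is_available():
        return "cuda"  # NVIDIA GPU
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"  # Apple Silicon
    return "cpu"  # fallback


print(pick_device())
```

You would then pass `device_map=pick_device()` to `DiffusionPipeline.from_pretrained` instead of the hard-coded `"cuda"` string.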
This is a duplicate of [ashawkey/imagedream-ipmv-diffusers](https://huggingface.co/ashawkey/imagedream-ipmv-diffusers).
It is hosted here for the purpose of persistence and reproducibility for the ML for 3D course.

Original MVDream paper: [MVDream: Multi-view Diffusion for 3D Generation](https://huggingface.co/papers/2308.16512)

Original ImageDream paper: [ImageDream: Image-Prompt Multi-view Diffusion for 3D Generation](https://huggingface.co/papers/2312.02201)
### Usage
This project can be used from other projects as follows.