How to use `DhruvDecoder/model_3d_diffuser` with Diffusers:

```bash
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "DhruvDecoder/model_3d_diffuser",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
---
license: openrail
pipeline_tag: image-to-3d
---

This is a duplicate of [ashawkey/imagedream-ipmv-diffusers](https://huggingface.co/ashawkey/imagedream-ipmv-diffusers).
It is hosted here for persistence and reproducibility in the ML for 3D course.
Original model card below.

---

# MVDream-diffusers

A **unified** diffusers implementation of [MVDream](https://github.com/bytedance/MVDream) and [ImageDream](https://github.com/bytedance/ImageDream).

We provide converted `fp16` weights on Hugging Face:

- [MVDream](https://huggingface.co/ashawkey/mvdream-sd2.1-diffusers)
- [ImageDream](https://huggingface.co/ashawkey/imagedream-ipmv-diffusers)

### Install

```bash
# dependencies
pip install -r requirements.txt

# xformers is required! please refer to https://github.com/facebookresearch/xformers
pip install ninja
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers
```
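Since the scripts hard-require xformers, a quick import check can save a confusing stack trace later. This is a minimal sketch, not part of this repo; the helper name is illustrative:

```python
import importlib.util

def has_xformers() -> bool:
    """Return True if the xformers package is importable in this environment."""
    return importlib.util.find_spec("xformers") is not None

if __name__ == "__main__":
    if not has_xformers():
        print("xformers is missing -- install it before running the scripts below")
```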
### Usage

```bash
# text-to-multi-view
python run_mvdream.py "a cute owl"

# image-to-multi-view
python run_imagedream.py data/anya_rgba.png
```
### Convert weights

MVDream:

```bash
# download the original checkpoint (only the SD 2.1 version is supported)
mkdir models
cd models
wget https://huggingface.co/MVDream/MVDream/resolve/main/sd-v2.1-base-4view.pt
wget https://raw.githubusercontent.com/bytedance/MVDream/main/mvdream/configs/sd-v2-base.yaml
cd ..

# convert to diffusers format (fp16, safetensors), then run a quick test
python convert_mvdream_to_diffusers.py \
  --checkpoint_path models/sd-v2.1-base-4view.pt \
  --dump_path ./weights_mvdream \
  --original_config_file models/sd-v2-base.yaml \
  --half --to_safetensors --test
```
ImageDream:

```bash
# download the original checkpoint (only the pixel-controller version is supported)
cd models
wget https://huggingface.co/Peng-Wang/ImageDream/resolve/main/sd-v2.1-base-4view-ipmv.pt
wget https://raw.githubusercontent.com/bytedance/ImageDream/main/extern/ImageDream/imagedream/configs/sd_v2_base_ipmv.yaml
cd ..

# convert to diffusers format (fp16, safetensors), then run a quick test
python convert_mvdream_to_diffusers.py \
  --checkpoint_path models/sd-v2.1-base-4view-ipmv.pt \
  --dump_path ./weights_imagedream \
  --original_config_file models/sd_v2_base_ipmv.yaml \
  --half --to_safetensors --test
```
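The `--dump_path` directory follows the standard diffusers pipeline layout, whose `model_index.json` names the pipeline class and its components. A hedged sanity-check sketch (the helper is hypothetical, not part of the conversion script):

```python
import json
import pathlib

def check_diffusers_dump(dump_path: str) -> bool:
    """A converted diffusers dump should contain a model_index.json
    with a top-level _class_name entry."""
    index_file = pathlib.Path(dump_path) / "model_index.json"
    if not index_file.is_file():
        return False
    index = json.loads(index_file.read_text())
    return "_class_name" in index
```

After conversion, something like `check_diffusers_dump("./weights_imagedream")` should return `True`.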
### Acknowledgement

- The original papers:

```bibtex
@article{shi2023MVDream,
  author  = {Shi, Yichun and Wang, Peng and Ye, Jianglong and Mai, Long and Li, Kejie and Yang, Xiao},
  title   = {MVDream: Multi-view Diffusion for 3D Generation},
  journal = {arXiv:2308.16512},
  year    = {2023},
}

@article{wang2023imagedream,
  title   = {ImageDream: Image-Prompt Multi-view Diffusion for 3D Generation},
  author  = {Wang, Peng and Shi, Yichun},
  journal = {arXiv preprint arXiv:2312.02201},
  year    = {2023},
}
```

- This codebase is modified from [mvdream-hf](https://github.com/KokeCacao/mvdream-hf).