---
title: PolyGenixAI
emoji: 🌍
colorFrom: purple
colorTo: red
sdk: gradio
sdk_version: 4.44.0
app_file: gradio_app.py
pinned: false
short_description: Text-to-3D and Image-to-3D Generation
models:
- tencent/Hunyuan3D-2
---

[Read in Chinese](README_zh_cn.md) | [Read in Japanese](README_ja_jp.md)

> Join our **[Wechat](#find-us)** and **[Discord](#find-us)** groups to discuss and find help from us.

β€œ Living out everyone’s imagination on creating and manipulating 3D assets.”

## 🔥 News

- Jan 21, 2025: 💬 Enjoy exciting 3D generation on our website [Hunyuan3D Studio](https://3d.hunyuan.tencent.com)!
- Jan 21, 2025: 💬 Released the inference code and pretrained models of [Hunyuan3D 2.0](https://huggingface.co/tencent/Hunyuan3D-2).
- Jan 21, 2025: 💬 Released Hunyuan3D 2.0. Please give it a try via the [Hugging Face space](https://huggingface.co/spaces/tencent/Hunyuan3D-2) or our [official site](https://3d.hunyuan.tencent.com)!

## **Abstract**

**PolyGenixAI: Fast and High-Quality 3D Asset Generation**

We present PolyGenixAI, an advanced system for rapidly generating high-resolution textured 3D assets. The system comprises two core components: a high-speed shape generation model, PolyGenixAI-DiT, and a robust texture synthesis model, PolyGenixAI-Paint. PolyGenixAI-DiT, a scalable flow-based diffusion transformer, delivers precise geometry aligned with input images in seconds, enabling efficient creation of 3D models for diverse applications. PolyGenixAI-Paint leverages strong geometric and diffusion priors to produce vibrant, high-resolution texture maps for both generated and user-provided meshes. Additionally, PolyGenixAI Studio offers a user-friendly platform that simplifies 3D asset creation and manipulation, empowering both professionals and enthusiasts to quickly generate, edit, and animate 3D models. PolyGenixAI outperforms state-of-the-art models in geometry detail, condition alignment, and texture quality. Optimized for speed, it generates models quickly without compromising quality, making it well suited for real-time and production workflows.
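The shape model above is described as a flow-based diffusion transformer. To illustrate the general sampling idea behind flow matching (not the actual model, whose velocity field is a learned transformer), here is a minimal NumPy sketch: Euler integration of a velocity field from t=0 (noise) to t=1 (data). The straight-line target flow below is a toy stand-in for the learned network.

```python
import numpy as np

def sample_flow(velocity_fn, x0, steps=50):
    """Integrate dx/dt = v(x, t) from t=0 to t=1 with the Euler
    method -- the generic sampling loop of flow-matching models."""
    x, dt = x0.astype(np.float64), 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity_fn(x, t)
    return x

# Toy "learned" velocity field: a straight-line (optimal-transport)
# flow toward one fixed target point. A real model predicts this
# velocity with a neural network conditioned on the input image.
target = np.array([1.0, -2.0, 0.5])
v = lambda x, t: (target - x) / max(1.0 - t, 1e-3)

x1 = sample_flow(v, np.zeros(3))  # ends at the target point
```

With the straight-line field, each Euler step removes an exact fraction of the remaining gap, so the sample lands on the target; with a learned field the same loop produces a sample from the data distribution.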

## ☯️ **Hunyuan3D 2.0**

### Architecture

Hunyuan3D 2.0 features a two-stage generation pipeline: it first creates a bare mesh, then synthesizes a texture map for that mesh. This strategy effectively decouples the difficulties of shape and texture generation, and also provides the flexibility to texture either generated or handcrafted meshes.

### Performance

We have evaluated Hunyuan3D 2.0 against other open-source as well as closed-source 3D generation methods. The numerical results indicate that Hunyuan3D 2.0 surpasses all baselines in the quality of generated textured 3D assets and in condition-following ability.

| Model                   | CMMD(⬇)   | FID_CLIP(⬇) | FID(⬇)      | CLIP-score(⬆) |
|-------------------------|-----------|-------------|-------------|---------------|
| Top Open-source Model1  | 3.591     | 54.639      | 289.287     | 0.787         |
| Top Close-source Model1 | 3.600     | 55.866      | 305.922     | 0.779         |
| Top Close-source Model2 | 3.368     | 49.744      | 294.628     | 0.806         |
| Top Close-source Model3 | 3.218     | 51.574      | 295.691     | 0.799         |
| Hunyuan3D 2.0           | **3.193** | **49.165**  | **282.429** | **0.809**     |

Generation results of Hunyuan3D 2.0:

### Pretrained Models

| Model                | Date       | Huggingface                                            |
|----------------------|------------|--------------------------------------------------------|
| Hunyuan3D-DiT-v2-0   | 2025-01-21 | [Download](https://huggingface.co/tencent/Hunyuan3D-2) |
| Hunyuan3D-Paint-v2-0 | 2025-01-21 | [Download](https://huggingface.co/tencent/Hunyuan3D-2) |

## 🤗 Get Started with Hunyuan3D 2.0

You may follow the steps below to use Hunyuan3D 2.0 via code or the Gradio app.

### Install Requirements

Please install PyTorch via the [official](https://pytorch.org/) site, then install the remaining requirements via:

```bash
pip install -r requirements.txt

# for texture generation
cd hy3dgen/texgen/custom_rasterizer
python3 setup.py install
cd ../../..
cd hy3dgen/texgen/differentiable_renderer
bash compile_mesh_painter.sh
```

### API Usage

We designed a diffusers-like API for our shape generation model, Hunyuan3D-DiT, and our texture synthesis model, Hunyuan3D-Paint. You can access **Hunyuan3D-DiT** via:

```python
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline

pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(image='assets/demo.png')[0]
```

The output mesh is a [trimesh object](https://trimesh.org/trimesh.html), which you can save to a glb/obj (or other format) file. For **Hunyuan3D-Paint**, do the following:

```python
from hy3dgen.texgen import Hunyuan3DPaintPipeline
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline

# let's generate a mesh first
pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(image='assets/demo.png')[0]

# then paint a texture onto it
pipeline = Hunyuan3DPaintPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(mesh, image='assets/demo.png')
```

Please visit [minimal_demo.py](minimal_demo.py) for more advanced usage, such as **text to 3D** and **texture generation for handcrafted meshes**.
### Gradio App

You can also host a [Gradio](https://www.gradio.app/) app on your own computer via:

```bash
python3 gradio_app.py
```

If you don't want to host it yourself, don't forget to visit [Hunyuan3D](https://3d.hunyuan.tencent.com) for quick use.

## 📑 Open-Source Plan

- [x] Inference Code
- [x] Model Checkpoints
- [x] Technical Report
- [ ] ComfyUI
- [ ] TensorRT Version

## 🔗 BibTeX

If you find this repository helpful, please cite our reports:

```bibtex
@misc{hunyuan3d22025tencent,
    title={Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation},
    author={Tencent Hunyuan3D Team},
    year={2025},
}

@misc{yang2024tencent,
    title={Tencent Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation},
    author={Tencent Hunyuan3D Team},
    year={2024},
    eprint={2411.02293},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```

## Acknowledgements

We would like to thank the contributors to the [DINOv2](https://github.com/facebookresearch/dinov2), [Stable Diffusion](https://github.com/Stability-AI/stablediffusion), [FLUX](https://github.com/black-forest-labs/flux), [diffusers](https://github.com/huggingface/diffusers), [HuggingFace](https://huggingface.co), [CraftsMan3D](https://github.com/wyysf-98/CraftsMan3D), and [Michelangelo](https://github.com/NeuralCarver/Michelangelo/tree/main) repositories for their open research and exploration.

## Find Us

Join our Wechat group, or follow us on Xiaohongshu, X, and Discord.