---
title: PolyGenixAI
emoji: 🌍
colorFrom: purple
colorTo: red
sdk: gradio
sdk_version: 4.44.0
app_file: gradio_app.py
pinned: false
short_description: Text-to-3D and Image-to-3D Generation
models:
  - tencent/Hunyuan3D-2
---

Read in Chinese | Read in Japanese

Join our WeChat and Discord groups to discuss and find help from us.

"Living out everyone's imagination on creating and manipulating 3D assets."

🔥 News

Abstract

PolyGenixAI: Fast and High-Quality 3D Asset Generation

We present PolyGenixAI, an advanced system for rapidly generating high-resolution textured 3D assets. The system comprises two core components: a high-speed shape generation model, PolyGenixAI-DiT, and a robust texture synthesis model, PolyGenixAI-Paint.

PolyGenixAI-DiT, a scalable flow-based diffusion transformer, delivers precise geometry aligned with input images in seconds, enabling efficient creation of 3D models for diverse applications. PolyGenixAI-Paint leverages strong geometric and diffusion priors to produce vibrant, high-resolution texture maps for both generated and user-provided meshes.

Additionally, PolyGenixAI Studio offers a user-friendly platform that simplifies 3D asset creation and manipulation, empowering both professionals and enthusiasts to quickly generate, edit, and animate 3D models. PolyGenixAI outperforms state-of-the-art models, delivering superior geometry detail, condition alignment, and texture quality. Optimized for speed, it generates models quickly without compromising quality, making it well suited to real-time and production workflows.

☯️ Hunyuan3D 2.0

Architecture

Hunyuan3D 2.0 features a two-stage generation pipeline, starting with the creation of a bare mesh, followed by the synthesis of a texture map for that mesh. This strategy is effective for decoupling the difficulties of shape and texture generation and also provides flexibility for texturing either generated or handcrafted meshes.
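The decoupling described above can be sketched in a few lines of Python. This is a hypothetical outline with placeholder function names, not the actual Hunyuan3D API; it only illustrates why the two stages compose cleanly and why stage two also accepts handcrafted meshes:

```python
# Sketch of the two-stage pipeline (hypothetical names, not the real API).

def generate_bare_mesh(image_path: str) -> dict:
    # Stage 1: a shape model (Hunyuan3D-DiT in the real system) produces
    # untextured geometry aligned with the input image.
    return {"vertices": [], "faces": [], "source": image_path}

def synthesize_texture(mesh: dict, image_path: str) -> dict:
    # Stage 2: a texture model (Hunyuan3D-Paint in the real system) paints
    # the mesh. Because the stages are decoupled, `mesh` may equally be a
    # handcrafted mesh that never passed through stage 1.
    mesh["texture"] = f"texture conditioned on {image_path}"
    return mesh

mesh = generate_bare_mesh("assets/demo.png")
textured = synthesize_texture(mesh, "assets/demo.png")
```

The key design point is that `synthesize_texture` takes any mesh as input, which is exactly the flexibility the paragraph above refers to.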

Performance

We have evaluated Hunyuan3D 2.0 against other open-source and closed-source 3D generation methods. The numerical results indicate that Hunyuan3D 2.0 surpasses all baselines in both the quality of generated textured 3D assets and condition-following ability.

| Model | CMMD (⬇) | FID_CLIP (⬇) | FID (⬇) | CLIP-score (⬆) |
|---|---|---|---|---|
| Top Open-source Model1 | 3.591 | 54.639 | 289.287 | 0.787 |
| Top Closed-source Model1 | 3.600 | 55.866 | 305.922 | 0.779 |
| Top Closed-source Model2 | 3.368 | 49.744 | 294.628 | 0.806 |
| Top Closed-source Model3 | 3.218 | 51.574 | 295.691 | 0.799 |
| Hunyuan3D 2.0 | **3.193** | **49.165** | **282.429** | **0.809** |
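To make the metric directions concrete (CMMD, FID_CLIP, and FID are error-like scores where lower is better; CLIP-score rewards alignment, so higher is better), the table can be checked programmatically with the numbers copied from above:

```python
# Benchmark numbers from the table above. Lower is better for CMMD,
# FID_CLIP, and FID; higher is better for CLIP-score.
results = {
    "Top Open-source Model1":   {"CMMD": 3.591, "FID_CLIP": 54.639, "FID": 289.287, "CLIP": 0.787},
    "Top Closed-source Model1": {"CMMD": 3.600, "FID_CLIP": 55.866, "FID": 305.922, "CLIP": 0.779},
    "Top Closed-source Model2": {"CMMD": 3.368, "FID_CLIP": 49.744, "FID": 294.628, "CLIP": 0.806},
    "Top Closed-source Model3": {"CMMD": 3.218, "FID_CLIP": 51.574, "FID": 295.691, "CLIP": 0.799},
    "Hunyuan3D 2.0":            {"CMMD": 3.193, "FID_CLIP": 49.165, "FID": 282.429, "CLIP": 0.809},
}

# Best model per metric, respecting each metric's direction.
best = {
    "CMMD":     min(results, key=lambda m: results[m]["CMMD"]),
    "FID_CLIP": min(results, key=lambda m: results[m]["FID_CLIP"]),
    "FID":      min(results, key=lambda m: results[m]["FID"]),
    "CLIP":     max(results, key=lambda m: results[m]["CLIP"]),
}
# Hunyuan3D 2.0 leads on every metric in this table.
```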

Generation results of Hunyuan3D 2.0:

Pretrained Models

| Model | Date | Huggingface |
|---|---|---|
| Hunyuan3D-DiT-v2-0 | 2025-01-21 | Download |
| Hunyuan3D-Paint-v2-0 | 2025-01-21 | Download |

🤗 Get Started with Hunyuan3D 2.0

Follow the steps below to use Hunyuan3D 2.0 via code or the Gradio app.

Install Requirements

Please install PyTorch via the official site, then install the remaining requirements via:

```bash
pip install -r requirements.txt

# for texture generation
cd hy3dgen/texgen/custom_rasterizer
python3 setup.py install
cd ../../..

cd hy3dgen/texgen/differentiable_renderer
bash compile_mesh_painter.sh
```

API Usage

We designed a diffusers-like API for our shape generation model, Hunyuan3D-DiT, and our texture synthesis model, Hunyuan3D-Paint.

You can access Hunyuan3D-DiT via:

```python
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline

pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(image='assets/demo.png')[0]
```

The output mesh is a trimesh object, which you can save as a GLB/OBJ (or other format) file.

For Hunyuan3D-Paint, do the following:

```python
from hy3dgen.texgen import Hunyuan3DPaintPipeline
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline

# let's generate a bare mesh first
pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(image='assets/demo.png')[0]

# then synthesize a texture for it
pipeline = Hunyuan3DPaintPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(mesh, image='assets/demo.png')
```

Please see minimal_demo.py for more advanced usage, such as text-to-3D and texture generation for handcrafted meshes.

Gradio App

You can also host a Gradio app on your own computer via:

```bash
python3 gradio_app.py
```

If you don't want to host it yourself, visit Hunyuan3D for quick use.

📑 Open-Source Plan

  • Inference Code
  • Model Checkpoints
  • Technical Report
  • ComfyUI
  • TensorRT Version

🔗 BibTeX

If you find this repository helpful, please cite our reports:

```bibtex
@misc{hunyuan3d22025tencent,
    title={Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation},
    author={Tencent Hunyuan3D Team},
    year={2025},
}

@misc{yang2024tencent,
    title={Tencent Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation},
    year={2024},
    author={Tencent Hunyuan3D Team},
    eprint={2411.02293},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```

Acknowledgements

We would like to thank the contributors to the DINOv2, Stable Diffusion, FLUX, diffusers, HuggingFace, CraftsMan3D, and Michelangelo repositories for their open research and exploration.

Find Us

WeChat Group | Xiaohongshu | X | Discord

Star History

Star History Chart