This is the official pretrained model of *LDM: Large Tensorial SDF Model for Textured Mesh Generation*.
Previous efforts have managed to generate production-ready 3D assets from text or images, but they rely primarily on NeRF or 3D Gaussian representations, which struggle to produce the smooth, high-quality geometry required by modern rendering pipelines. We propose LDM, a novel feed-forward framework that generates high-fidelity, illumination-decoupled textured meshes from a single image or a text prompt. We first use a multi-view diffusion model to generate sparse multi-view inputs from the image or prompt; a transformer-based model is then trained to predict a tensorial SDF field from these sparse multi-view images. Finally, we employ a gradient-based mesh optimization layer to refine the model, enabling it to produce an SDF field from which high-quality textured meshes can be extracted. Extensive experiments demonstrate that our method generates diverse, high-quality 3D mesh assets with corresponding decomposed RGB textures within seconds. The project code is available at [https://github.com/rgxie/LDM](https://github.com/rgxie/LDM).
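For readers unfamiliar with the representation, a minimal sketch of what an SDF field is may help: the model predicts signed distances on a volume, and the mesh surface is the field's zero level set (extracted in practice by a differentiable marching-style layer, not shown here). This toy example uses an analytic sphere SDF, not LDM's predicted tensorial field; all names here are illustrative, and only NumPy is assumed.

```python
import numpy as np

def sphere_sdf(points, center=(0.0, 0.0, 0.0), radius=0.5):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(points - np.asarray(center), axis=-1) - radius

# Sample the field on a 32^3 grid over [-1, 1]^3, analogous to querying
# a predicted SDF field on a regular lattice before mesh extraction.
axis = np.linspace(-1.0, 1.0, 32)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
sdf = sphere_sdf(grid.reshape(-1, 3)).reshape(32, 32, 32)

# Cells where the sign flips between neighbors straddle the surface;
# these are the cells a marching-cubes-style extractor would triangulate.
sign_flips = int(np.abs(np.diff(np.sign(sdf), axis=0)).sum())
print(sdf.min() < 0 < sdf.max(), sign_flips > 0)
```

The zero level set is what makes SDFs attractive for mesh generation: unlike NeRF density or Gaussian splats, it defines a watertight surface that standard extraction algorithms can turn into smooth geometry.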
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/65a3ce8d680cb2eb94621c17/Hd17PU3mSKiX_C8F51mtk.mp4"></video>