# Model Card for PartPacker

## Description

PartPacker is a three-dimensional (3D) generation model that is able to generate part-level 3D objects from single-view images.
We introduce a dual volume packing strategy that organizes all parts into two complementary volumes, allowing for the creation of complete and interleaved parts that assemble into the final object.

This model is ready for non-commercial use.

## License/Terms of Use

[NVIDIA Non-Commercial License](https://huggingface.co/nvidia/PartPacker/blob/main/LICENSE)
## Deployment Geography

Global
## Use Case

PartPacker takes a single input image and generates a 3D shape with an arbitrary number of complete parts. Each part can be separated and edited independently to facilitate downstream tasks such as editing and animation.

It is intended to be used by researchers and academics to develop new 3D generation methods.
## Release Date

* GitHub: 06/04/2025 via [https://github.com/NVlabs/PartPacker](https://github.com/NVlabs/PartPacker)
* Hugging Face: 06/04/2025 via [https://huggingface.co/NVlabs/PartPacker](https://huggingface.co/NVlabs/PartPacker)
## Reference(s)

[Code](https://github.com/NVlabs/PartPacker)

[Paper](https://arxiv.org/abs/TODO)
## Model Architecture

**Architecture Type:** Transformer

**Network Architecture:** Diffusion Transformer (DiT)
## Input

**Input Type(s):** Image

**Input Format(s):** Red, Green, Blue (RGB)

**Input Parameters:** Two-dimensional (2D) image

**Other Properties Related to Input:** Resolution will be resized to $518 \times 518$.
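The resize step above can be sketched as follows. This is a minimal preprocessing example assuming Pillow and NumPy; the model card does not specify the normalization or channel layout, so those details (the `[0, 1]` scaling and channel-first transpose) are assumptions for illustration.

```python
from PIL import Image
import numpy as np

def preprocess(image_path: str, size: int = 518) -> np.ndarray:
    """Load an image, convert to RGB, and resize to the model's 518x518 input."""
    img = Image.open(image_path).convert("RGB")
    img = img.resize((size, size), Image.LANCZOS)
    # Assumed normalization: scale to [0, 1] and move channels first -> (3, 518, 518).
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return arr.transpose(2, 0, 1)
```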
## Output

**Output Type(s):** Triangle Mesh

**Output Format:** GL Transmission Format Binary (GLB)

**Output Parameters:** Three-dimensional (3D) triangle mesh

**Other Properties Related to Output:** Extracted at a resolution of up to $512^3$; without texture.
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
## Software Integration

### Runtime Engine(s)

* PyTorch

### Supported Hardware Microarchitecture Compatibility

* NVIDIA Ampere
* NVIDIA Hopper

### Preferred Operating System(s)

* Linux
## Model Version(s)

v1.0
## Training, Testing, and Evaluation Datasets

We perform training, testing, and evaluation on the Objaverse-XL dataset.
For the VAE model, we use the first 253K meshes for training and the remaining 1K meshes for validation.
For the Flow model, we use all 254K meshes for training.

### Objaverse-XL

**Link**: https://objaverse.allenai.org/

**Data Collection Method**: Hybrid: Automatic, Synthetic

**Labeling Method by dataset**: N/A (no labels)

**Properties:** We use about 254K meshes, a subset of Objaverse-XL filtered by the number of parts.
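The split described above (253K VAE training meshes, 1K VAE validation meshes, all 254K for the Flow model) can be sketched as a simple index split; the identifier format below is hypothetical.

```python
# Hypothetical list of filtered Objaverse-XL mesh identifiers.
mesh_ids = [f"mesh_{i:06d}" for i in range(254_000)]

vae_train = mesh_ids[:253_000]   # first 253K meshes: VAE training
vae_val = mesh_ids[253_000:]     # remaining 1K meshes: VAE validation
flow_train = mesh_ids            # all 254K meshes: Flow model training
```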
## Inference

**Acceleration Engine**: PyTorch

**Test Hardware**: NVIDIA A100 (1 GPU configuration)
## Ethical Considerations