---
language:
- en
license: other
license_name: nvidia-open-model-license
license_link: >-
https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license
tags:
- nvidia
- asset-harvester
- image-to-3d
- 3d-generation
- gaussian-splatting
- physical-ai
pipeline_tag: image-to-3d
---
# Asset Harvester | System Model Card
**Paper** | **Project Page** | [**Code**](https://github.com/NVIDIA/asset-harvester) | [**Model**](https://huggingface.co/nvidia/asset-harvester) | [**Data**](https://huggingface.co/datasets/nvidia/PhysicalAI-Autonomous-Vehicles-NCore)
## **Description:**
**Asset Harvester** is an image-to-3D model and end-to-end system that converts sparse, in-the-wild object observations from real driving logs into complete, simulation-ready assets. The model generates 3D assets from a single image or multiple images of vehicles, VRUs or other road objects extracted from autonomous driving sessions. To run Asset Harvester, please check our [**codebase**](https://github.com/NVIDIA/asset-harvester).
<p align="center">
<img src="docs/pipeline.gif" alt="Asset Harvester teaser" width="100%" style="border: none;">
</p>
**Asset Harvester** turns real-world driving logs into complete, simulation-ready 3D assets — from just one or a few in-the-wild object views. It handles vehicles, pedestrians, riders, and other road objects, even under heavy occlusion, noisy calibration, and extreme viewpoint bias. A multiview diffusion model generates consistent novel viewpoints, and a feed-forward Gaussian reconstructor lifts them to full 3D in seconds. The result: high-fidelity 3D Gaussian splat assets ready for insertion into simulation environments. The pipeline plugs directly into NVIDIA NCore and NuRec for scalable data ingestion and closed-loop simulation.
The model checkpoints in this repo are used in the end-to-end system in the following pipeline order: the [AV Object Mask2Former](model_cards/AV_Object_Mask2former.md) instance segmentation model is used for image processing when parsing input views from NCore data sessions.
The input images are encoded by [C-RADIO](https://huggingface.co/nvidia/C-RADIO),
and the multiview diffusion model, [SparseViewDiT](model_cards/MultiviewDiffusion.md), then generates 16 multiview images of the input objects.
When camera parameters are not provided, a camera pose estimation submodule of the multiview diffusion model predicts them from the input images.
Finally, [Object TokenGS](model_cards/Object_TokenGS.md) lifts the generated views to a 3D asset.
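The staged flow described above can be sketched as pseudocode. All function names and data shapes here are illustrative stand-ins, not the actual codebase API:

```python
# Hypothetical sketch of the four-stage Asset Harvester pipeline.
# Function names are illustrative; see the codebase for the real API.

def segment_object(image):
    """Stage 1: instance segmentation (Mask2Former) -> masked object view."""
    return {"image": image, "mask": "object_mask"}

def encode_views(masked_views):
    """Stage 2: image encoding (C-RADIO) -> per-view features."""
    return [{"features": v} for v in masked_views]

def diffuse_multiview(features, cameras=None):
    """Stage 3: multiview diffusion (SparseViewDiT) -> 16 consistent views.
    If cameras is None, the pose-estimation submodule predicts them."""
    if cameras is None:
        cameras = ["estimated_pose"] * len(features)
    return [f"view_{i}" for i in range(16)], cameras

def lift_to_3dgs(views):
    """Stage 4: feed-forward Gaussian reconstruction (Object TokenGS)."""
    return {"format": "ply", "n_views_used": len(views)}

def harvest_asset(input_images, cameras=None):
    masked = [segment_object(im) for im in input_images]
    feats = encode_views(masked)
    views, cameras = diffuse_multiview(feats, cameras)
    return lift_to_3dgs(views)

# Up to 4 input views of one object; camera parameters optional.
asset = harvest_asset(["crop_0.jpg", "crop_1.jpg"])
```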
This system is ready for commercial and non-commercial use.
<details>
<summary><big><big><strong>🚗 Example Results 🚗</strong></big></big></summary>
Each row contains the input image, object mask, and a rendering of the harvested 3DGS asset.
#### 1. Vehicles / Trucks / Trailers
<table>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/bus_01.jpg" width="860"></td>
</tr>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/trailer_01.jpg" width="860"></td>
</tr>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/tractor_01.jpg" width="860"></td>
</tr>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/truck_01.jpg" width="860"></td>
</tr>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/sedan_01.jpg" width="860"></td>
</tr>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/suv_01.jpg" width="860"></td>
</tr>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/suv_02.jpg" width="860"></td>
</tr>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/sedan_02.jpg" width="860"></td>
</tr>
</table>
#### 2. VRUs
<table>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/pedestrian_01.jpg" width="860"></td>
</tr>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/pedestrian_03.jpg" width="860"></td>
</tr>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/pedestrian_04.jpg" width="860"></td>
</tr>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/pedestrian_05.jpg" width="860"></td>
</tr>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/pedestrian_06.jpg" width="860"></td>
</tr>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/cyclist_02.jpg" width="860"></td>
</tr>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/stroller_01.jpg" width="860"></td>
</tr>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/stroller_02.jpg" width="860"></td>
</tr>
</table>
#### 3. Other
<table>
<tr>
<td align="center"><img src="docs/in_the_wild_examples/bin_01.jpg" width="860"></td>
</tr>
</table>
</details>
### **License/Terms of Use**:
### Governing Terms: Use of this model system is governed by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/).
**Deployment Geography:** Global
### **Release Management:**
This system is exposed as a collection of models on [HuggingFace](https://huggingface.co/nvidia/asset-harvester) and inference scripts on [GitHub](https://github.com/NVIDIA/asset-harvester).
## **Automation Level:**
Partial Automation
## **Use Case:**
Physical AI developers who are looking to create 3D assets of vehicles or VRUs for either closed-loop simulation or Synthetic Data Generation (SDG).
## **Known Technical Limitations:**
The system is not guaranteed to perform well with occluded objects or objects that are outside of the common distribution. For example, a heavily occluded vehicle can generate a poor or hallucinated 3D asset.
## Known Risk(s):
AV and robotics developers should be aware that this model cannot guarantee a 100% success rate. In cases of unsuccessful generation, the output may not possess an accurate real-world representation of the asset and should not be relied upon in safety-critical simulations.
## **Reference(s):** _(coming soon)_
[Asset Harvester: Turning Autonomous Driving Logs into 3D Assets for Simulation]()
## **System Architecture**
System architecture details are described in the white paper referenced above.
## **System Input:**
**Input Type(s):** 1 to 4 images
**Input Format:** Red, Green, Blue (RGB)
**Input Parameters:** Two-Dimensional (2D)
**Other Properties Related to Input:**
We currently accept up to 4 input images per object, each at a resolution of 512x512. The input images are extracted from NVIDIA’s NCore data along with other metadata needed for downstream processing:
* Camera orientation of each image
* Camera distance of each image
* Camera field of view of each image
* Bounding box dimensions of each object
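A minimal sketch of an input record enforcing these constraints. The field names are illustrative assumptions; the actual NCore schema may differ:

```python
# Hypothetical input record for one object; field names are illustrative.
from dataclasses import dataclass
from typing import List, Optional

MAX_VIEWS = 4
RESOLUTION = (512, 512)  # expected size of each RGB crop

@dataclass
class ObjectViews:
    images: List[str]                       # paths to 512x512 RGB crops
    camera_orientations: List[list]         # camera orientation per image
    camera_distances: List[float]           # camera distance per image
    camera_fovs: List[float]                # field of view per image
    bbox_dimensions: Optional[list] = None  # object bounding-box dimensions

    def __post_init__(self):
        # Enforce the 1-4 view limit and per-image metadata alignment.
        if not 1 <= len(self.images) <= MAX_VIEWS:
            raise ValueError(f"expected 1-{MAX_VIEWS} views, got {len(self.images)}")
        for name in ("camera_orientations", "camera_distances", "camera_fovs"):
            if len(getattr(self, name)) != len(self.images):
                raise ValueError(f"{name} must have one entry per image")

record = ObjectViews(
    images=["suv_0.png", "suv_1.png"],
    camera_orientations=[[0, 0, 0, 1], [0, 0.7, 0, 0.7]],
    camera_distances=[8.5, 9.1],
    camera_fovs=[60.0, 60.0],
    bbox_dimensions=[4.6, 1.9, 1.7],
)
```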
## **System Output:**
**Output Type(s):** 3D Gaussian asset corresponding to the object in the input images
**Output Format:** Polygon File Format (PLY)
**Output Parameters:** Three-Dimensional (3D)
**Other Properties Related to Output:**
A PLY file (3D Gaussian Splatting, 3DGS) contains 3D object data with the following specific components:
* **Header**: Defines the file structure, including format (ASCII or binary), Gaussian elements, their properties (e.g., position, appearance coefficients, opacity, scale, rotation), and data types (e.g., float, int).
* **Gaussian Data**: Stores the parameters of each 3D Gaussian as vertex elements: center position (`x`, `y`, `z`), spherical harmonics DC coefficients (`f_dc_0`, `f_dc_1`, `f_dc_2`), `opacity`, anisotropic scale (`scale_0`, `scale_1`, `scale_2`), and rotation quaternion (`rot_0`, `rot_1`, `rot_2`, `rot_3`).
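To illustrate this layout, the sketch below writes and re-parses a minimal 3DGS PLY with exactly the properties listed above, using the ASCII variant for readability (production assets are typically binary, and the helper names here are hypothetical):

```python
# Minimal sketch of the 3DGS PLY layout described above (ASCII variant).
import io

# Per-Gaussian vertex properties in the order they appear in the header.
GAUSSIAN_PROPS = (
    ["x", "y", "z"]                              # center position
    + ["f_dc_0", "f_dc_1", "f_dc_2"]             # spherical harmonics DC
    + ["opacity"]
    + ["scale_0", "scale_1", "scale_2"]          # anisotropic scale
    + ["rot_0", "rot_1", "rot_2", "rot_3"]       # rotation quaternion
)

def write_gaussians(gaussians):
    """Serialize a list of 14-float tuples as an ASCII 3DGS PLY string."""
    buf = io.StringIO()
    buf.write("ply\nformat ascii 1.0\n")
    buf.write(f"element vertex {len(gaussians)}\n")
    for prop in GAUSSIAN_PROPS:
        buf.write(f"property float {prop}\n")
    buf.write("end_header\n")
    for g in gaussians:
        buf.write(" ".join(f"{v:.6f}" for v in g) + "\n")
    return buf.getvalue()

def read_header(ply_text):
    """Parse property names and the Gaussian count from a PLY header."""
    props, count = [], 0
    for line in ply_text.splitlines():
        if line.startswith("element vertex"):
            count = int(line.split()[-1])
        elif line.startswith("property"):
            props.append(line.split()[-1])
        elif line == "end_header":
            break
    return props, count

ply = write_gaussians([tuple(0.0 for _ in range(14))])
props, n = read_header(ply)
```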
## **Hardware Compatibility:**
**Supported Hardware Microarchitecture Compatibility:**
* NVIDIA Ampere
* NVIDIA Blackwell
* NVIDIA Hopper
* NVIDIA Lovelace
**Preferred/Supported Operating Systems:** Linux
**Hardware Specific Requirements:**
The system can run on a single NVIDIA GPU with CUDA Compute Capability greater than or equal to 8.0. The following is required:
* GPU performance \>= 300 TFLOPS
* GPU memory size \>= 30 GB
* GPU memory bandwidth \>= 768 GB/s
* System RAM \>= 32 GB
* System disk storage \>= 100 GB
* CPU \>= 16 threads x 3 GHz
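The compute-capability requirement can be verified with a small sketch. The guarded `torch.cuda.get_device_capability()` call is standard PyTorch API; the snippet degrades gracefully when PyTorch or a GPU is absent:

```python
# Sketch: verify CUDA Compute Capability >= 8.0 as required above.

MIN_CAPABILITY = (8, 0)  # Ampere and newer

def meets_capability(major, minor, minimum=MIN_CAPABILITY):
    """True if the device's (major, minor) capability meets the minimum."""
    return (major, minor) >= minimum

# Optional runtime check on an actual device (standard PyTorch API),
# guarded so the sketch also runs without PyTorch or a GPU.
try:
    import torch
    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability()
        print("GPU supported:", meets_capability(major, minor))
except ImportError:
    pass
```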
## **System Version:**
Asset\_Harvester\_GA
## **Inference:**
**Engine:** PyTorch
**Test Hardware:** A100, H100
## **Ethical Considerations:**
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
## Model Card++
**Bias**
| Field | Response |
| :---- | :---- |
| Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None |
| Measures taken to mitigate against unwanted bias: | None |
**Explainability**
| Field | Response |
| :---- | :---- |
| Intended Domain | Autonomous Driving Simulation |
| Model Type: | Image-to-3D Asset |
| Intended Users: | Autonomous Vehicles developers enhancing and improving Neural Reconstruction pipelines. |
| Output | 3D Asset |
| Describe how the model works | The system takes one or a few images as input and outputs a corresponding 3D asset. |
| Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of | None |
| Technical Limitations | The system is not guaranteed to perform well with occluded objects or objects that are outside of the common distribution. For example, a heavily occluded vehicle image can generate a poor or hallucinated 3D asset. |
| Verified to have met prescribed NVIDIA quality standards | Yes |
| Performance Metrics | PSNR (Peak Signal-to-Noise Ratio) |
| Potential Known Risks | AV and robotics developers should be aware that this model cannot guarantee a 100% success rate. In cases of unsuccessful generation, the output may not possess an accurate real-world representation of the asset and should not be relied upon in safety-critical simulations. |
| Licensing | Use of this model system is governed by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). |
**Privacy**
| Field | Response |
| :---- | :---- |
| Generatable or reverse engineerable personal data? | No |
| Personal data used to create this model? | Yes |
| Was consent obtained for any personal data used? | Yes |
| How often is the dataset reviewed? | Before release |
| Is a mechanism in place to honor data subject right of access or deletion of personal data? | Yes |
| If personal data was collected for the development of the model, was it collected directly by NVIDIA? | No |
| If personal data was collected for the development of the model by NVIDIA, do you maintain or have access to disclosures made to data subjects? | Not Applicable |
| If personal data was collected for the development of this AI model, was it minimized to only what was required? | Yes |
| Was data from user interactions with the AI model (e.g. user input and prompts) used to train the model? | No |
| Is there provenance for all datasets used in training? | Yes |
| Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
| Is data compliant with data subject requests for data correction or removal, if such a request was made? | Yes |
| Applicable Privacy Policy | [https://www.nvidia.com/en-us/about-nvidia/privacy-policy/](https://www.nvidia.com/en-us/about-nvidia/privacy-policy/) |
**Safety & Security**
| Field | Response |
| :---- | :---- |
| Model Application(s): | 3D Asset Generation |
| Describe the life critical impact (if present). | N/A \- The system should not be deployed in a vehicle to perform life-critical tasks. |
| Use Case Restrictions: | Use of this model system is governed by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/) |
| Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Access restrictions on datasets are enforced during training. |