Update README.md

README.md

@@ -43,7 +43,7 @@ This model was developed based on Cosmos-Predict1 Single2MultiView foundation mo
 **Input Type(s):** Image <br>
 **Input Format:** Red, Green, Blue (RGB) <br>
 **Input Parameters:** Two-Dimensional (2D) <br>
-**Other Properties Related to Input:** Multi-view RGB images from 3 f-theta cameras (front wide 120° FOV, rear left 70° FOV, rear right 70° FOV). Input resolution of [720, 1280] cropped to [704, 1280]. Synchronized camera views required. <br>
+**Other Properties Related to Input:** Multi-view RGB images from 3 f-theta cameras (front wide 120° FOV, rear left 70° FOV, rear right 70° FOV). Input resolution of [720, 1280] cropped to [704, 1280]. Synchronized camera views required. Note that this model only works with the NVIDIA camera-lidar rig configuration. Users will need to post-train their own LidarGen model on different data by following the provided [recipe](https://github.com/nv-tlabs/Cosmos-Drive-Dreams/blob/main/cosmos-transfer-lidargen/examples/lidar_diffusion.md). <br>

 ## Output: <br>
 **Output Type(s):** Lidar <br>
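The input spec above says each camera frame arrives at [720, 1280] and is cropped to [704, 1280]. A minimal sketch of that step, assuming a symmetric height crop over three synchronized views (the helper name and the center-crop choice are illustrative assumptions, not the model's documented preprocessing):

```python
import numpy as np


def center_crop_height(frame: np.ndarray, target_h: int = 704) -> np.ndarray:
    """Crop an H x W x C RGB frame to target_h rows, keeping the full width.

    Hypothetical helper illustrating the [720, 1280] -> [704, 1280] crop
    from the input spec; the actual pipeline may crop differently.
    """
    h = frame.shape[0]
    if h < target_h:
        raise ValueError(f"frame height {h} is smaller than target {target_h}")
    top = (h - target_h) // 2  # drop 8 rows from top and bottom for 720 -> 704
    return frame[top:top + target_h]


# One frame per camera view: front wide, rear left, rear right.
views = [np.zeros((720, 1280, 3), dtype=np.uint8) for _ in range(3)]
cropped = [center_crop_height(v) for v in views]  # each is (704, 1280, 3)
```

704 is divisible by common latent downsampling factors (e.g. 8 and 16), which is a typical reason diffusion pipelines trim 720-row inputs; the spec does not state this rationale explicitly.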