Training set contains 1,333 objects with 6,477 segments and 28.7M voxels. (i) Data modalities: 3D meshes (triangle meshes in USD format with part-level segmentation), multi-view RGB images (512×512 rendered views), text (English material names), and tabular material properties (E, ν, ρ triplets). (ii) Nature of content: non-personal, proprietary 3D assets (NVIDIA asset libraries) combined with public-domain material science data; no copyright-protected creative content; machine-generated annotations (VLM) constrained by human-measured physical properties. (iii) Linguistic characteristics: English material names and semantic object labels. No sensors were used for data collection: 3D assets are human-modeled, material properties come from laboratory measurements (ASTM standard testing), and images are path-traced renders. On average, each object has 4.86 segments (±11.97 std dev) and 21,537 voxels (±23,431 std dev). Material property ranges: E [1.0×10^5, 2.8×10^11 Pa], ν [0.16, 0.49], ρ [50, 19,300 kg/m³].
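The (E, ν, ρ) triplets can be sanity-checked against the documented ranges above. A minimal sketch — the validation helper is illustrative, not part of the released tooling:

```python
# Sanity-check a material property triplet against the documented ranges.
# Units follow the dataset description: E in Pa, nu dimensionless, rho in kg/m^3.
# The helper functions here are illustrative, not part of the dataset tooling.

E_RANGE = (1.0e5, 2.8e11)     # Young's modulus, Pa
NU_RANGE = (0.16, 0.49)       # Poisson's ratio
RHO_RANGE = (50.0, 19_300.0)  # density, kg/m^3

def in_range(value: float, bounds: tuple) -> bool:
    lo, hi = bounds
    return lo <= value <= hi

def validate_material(E: float, nu: float, rho: float) -> bool:
    """Return True if a material triplet lies inside the documented ranges."""
    return (in_range(E, E_RANGE)
            and in_range(nu, NU_RANGE)
            and in_range(rho, RHO_RANGE))

# Example: structural-steel-like values fall inside every range.
print(validate_material(E=2.0e11, nu=0.30, rho=7850.0))  # True
```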

## Testing Dataset:

- GVM test split
- ABO-500 - [Amazon Berkeley Objects](https://amazon-berkeley-objects.s3.amazonaws.com/index.html)

**Data Collection Method by dataset:**

* Hybrid: Human, Automated (VLM-assisted)

Test set contains 166 objects with 1,060 segments and 4.9M voxels (13.1% of total dataset). Same modalities, content nature, and linguistic characteristics as the training data. On average, each object has 6.39 segments (±11.33 std dev) and 29,571 voxels (±25,987 std dev). Held-out objects ensure no overlap with training data.

## Evaluation Dataset:

- Primary: GVM test split (166 objects, 4.9M voxel annotations)
- Secondary: [ABO-500](https://amazon-berkeley-objects.s3.amazonaws.com/index.html) mass estimation benchmark - 500 objects with ground truth mass labels
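For a mass estimation benchmark like ABO-500, per-voxel density predictions can be rolled up into a single object mass. A minimal sketch, assuming a uniform voxel grid with a known edge length — this aggregation is an assumption for illustration, not the benchmark's exact protocol:

```python
# Aggregate per-voxel density predictions (kg/m^3) into one object mass (kg),
# assuming a uniform voxel grid with edge length given in meters.
# Illustrative only; the benchmark's exact evaluation protocol may differ.

def object_mass(densities: list[float], voxel_edge_m: float) -> float:
    """Mass = sum over occupied voxels of (density * voxel volume)."""
    voxel_volume = voxel_edge_m ** 3  # m^3 per voxel
    return sum(d * voxel_volume for d in densities)

# Example: 1,000 voxels of water-like density (1000 kg/m^3) at 1 cm
# resolution fill 0.001 m^3, giving a 1 kg object.
print(object_mass([1000.0] * 1000, voxel_edge_m=0.01))  # 1.0 (up to float rounding)
```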