anonymous-authors-2025 committed
Commit db10e3c · 1 Parent(s): 599a945

update README

Files changed (1): README.md +4 -7
README.md CHANGED
@@ -72,12 +72,11 @@ The dataset was procedurally generated using **NVIDIA Isaac Sim** and **Omnivers
 
 ### Dataset Summary
 
-Visual Perspective Taking is a challenging task that traditionally requires large amounts of precisely labelled real-world data. This dataset serves as a proof-of-concept to explore the viability of using high-fidelity synthetic data as a scalable and cost-effective alternative for metric spatial grounding.
+This dataset serves to explore the viability of using high-fidelity synthetic data as a scalable and cost-effective alternative for metric spatial grounding in the context of Visual Perspective Taking.
 
 The data consists of renders of a target object (**mug**) placed on a tabletop in a shared workspace scene containing a humanoid agent (**x-bot**). For each rendered image, the dataset contains separate entries for each entity, providing its semantic class and exact 6-DOF pose relative to the camera.
 
 * **Total Examples:** 20,000 (derived from 10,000 unique scenes)
-* **Generator:** NVIDIA Omniverse Replicator
 * **Objects:** `mug`, `xbot_humanoid`
 
 ---
@@ -86,9 +85,9 @@ The data consists of renders of a target object (**mug**) placed on a tabletop i
 
 The dataset contains the following fields for each instance:
 
-* **`image`**: A `PIL.Image.Image` object containing the rendered RGB image ($512 \times 512$ pixels).
+* **`image`**: A `PIL.Image.Image` object containing the rendered RGB image ($512 × 512$ pixels).
 * **`semantic_class`**: A `string` indicating the class of the entity for which the pose is provided (e.g., "mug" or "humanoid").
-* **`transform`**: A `string` representing the full $4\times4$ transformation matrix that maps points from the camera's coordinate frame to the object's local coordinate frame.
+* **`transform`**: A `string` representing the full $4 × 4$ transformation matrix that maps points from the camera's coordinate frame to the object's local coordinate frame.
 * **`Tx`, `Ty`, `Tz`**: The translation components (`float`) of the object's pose in metres, extracted from the transformation matrix.
 * **`rot_x`, `rot_y`, `rot_z`, `rot_w`**: The unit quaternion components (`float`) representing the rotation of the object relative to the camera.
 
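The scalar pose fields in this list mirror the `transform` string: `Tx`/`Ty`/`Tz` and the quaternion together encode the same camera-to-object pose. A minimal sketch of rebuilding the 4×4 matrix from those fields (assuming the quaternion is scalar-last `(x, y, z, w)`, as the field names suggest, and using `scipy` purely for illustration):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_to_matrix(example):
    """Rebuild the 4x4 camera-to-object transform from the scalar pose fields.

    Assumes the quaternion is scalar-last (x, y, z, w), which is the order
    scipy's Rotation.from_quat expects.
    """
    T = np.eye(4)
    quat = [example["rot_x"], example["rot_y"], example["rot_z"], example["rot_w"]]
    T[:3, :3] = Rotation.from_quat(quat).as_matrix()          # rotation block
    T[:3, 3] = [example["Tx"], example["Ty"], example["Tz"]]  # translation (metres)
    return T
```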
@@ -110,13 +109,11 @@ The data is split into training, validation, and test sets. Critically, the spli
 
 You can load and use the dataset with the `datasets` library.
 
-**Note for Reviewers:** Please replace `[ANONYMOUS_USER]/[REPO_NAME]` below with the repository ID where this dataset is currently hosted.
-
 ```python
 from datasets import load_dataset
 
 # Load the dataset from the Hugging Face Hub
-dataset = load_dataset("[ANONYMOUS_USER]/[REPO_NAME]")
+dataset = load_dataset("anonymous-authors-2025/AnonymousDataset")
 
 # Access an example from the training set
 example = dataset['train'][42]
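
Once loaded, each entry exposes the fields documented in the schema section. A quick, illustrative sketch of inspecting one (field names assumed from the schema above; continues from the loading snippet in the diff):

```python
# Continuing from the loading snippet above (illustrative only).
example = dataset["train"][42]

print(example["semantic_class"])                    # "mug" or "humanoid"
print(example["Tx"], example["Ty"], example["Tz"])  # translation in metres
print(example["rot_x"], example["rot_y"],
      example["rot_z"], example["rot_w"])           # unit quaternion (x, y, z, w)
print(example["image"].size)                        # PIL.Image.Image, 512 x 512
```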
 