---
license: mit
size_categories:
  - 10K<n<100K
task_categories:
  - image-to-3d
  - robotics
  - image-feature-extraction
  - depth-estimation
  - image-to-text
  - other
modalities:
  - image
  - tabular
  - text
dataset_info:
  features:
    - name: image
      dtype: image
    - name: semantic_class
      dtype: string
    - name: transform
      dtype: string
    - name: Tx
      dtype: float32
    - name: Ty
      dtype: float32
    - name: Tz
      dtype: float32
    - name: rot_x
      dtype: float32
    - name: rot_y
      dtype: float32
    - name: rot_z
      dtype: float32
    - name: rot_w
      dtype: float32
  splits:
    - name: train
      num_bytes: 4106560699
      num_examples: 16000
    - name: validation
      num_bytes: 510934045
      num_examples: 2000
    - name: test
      num_bytes: 513367561
      num_examples: 2000
  download_size: 2568917592
  dataset_size: 5130862305
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
---

# A Synthetic Dataset for Visual Perspective Taking

This dataset ("AnonymousDataset") accompanies the submission "Towards Social Foundation Models: A Framework and Synthetic Dataset for Grounding Visual Perspective Taking in Robots".

It is a large-scale synthetic resource designed for training socio-cognitive foundation models for robotics, specifically for the task of Visual Perspective Taking (VPT). The core objective is to enable a robot to infer an object's precise 6-DOF pose (position and orientation) relative to another agent's viewpoint, given a single RGB image.

The dataset was procedurally generated using NVIDIA Isaac Sim and Omniverse Replicator, providing high-fidelity RGB images paired with perfect ground-truth pose annotations.


## Dataset Details

### Dataset Summary

This dataset is designed to explore the viability of high-fidelity synthetic data as a scalable and cost-effective alternative to real-world data collection for metric spatial grounding in Visual Perspective Taking.

The data consists of renders of a target object (mug) placed on a tabletop in a shared workspace scene containing a humanoid agent (x-bot). For each rendered image, the dataset contains separate entries for each entity, providing its semantic class and exact 6-DOF pose relative to the camera.

- Total Examples: 20,000 (derived from 10,000 unique scenes)
- Objects: mug, xbot_humanoid

### Data Fields

The dataset contains the following fields for each instance:

- `image`: A `PIL.Image.Image` object containing the rendered RGB image ($512 \times 512$ pixels).
- `semantic_class`: A string indicating the class of the entity for which the pose is provided (e.g., "mug" or "humanoid").
- `transform`: A string representing the full $4 \times 4$ homogeneous transformation matrix that maps points from the camera's coordinate frame to the object's local coordinate frame.
- `Tx`, `Ty`, `Tz`: The translation components (`float32`) of the object's pose in metres, extracted from the transformation matrix.
- `rot_x`, `rot_y`, `rot_z`, `rot_w`: The unit quaternion components (`float32`) representing the rotation of the object relative to the camera.
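
Because the pose is stored redundantly, the scalar fields can be cross-checked against the matrix. The sketch below assembles a $4 \times 4$ homogeneous matrix from the quaternion and translation fields with SciPy, whose `from_quat` expects the same `[x, y, z, w]` ordering as the field names. The parser for the `transform` string assumes 16 row-major floats; that is an assumption about the serialization, not a documented format.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_to_matrix(example):
    """Assemble the object's pose relative to the camera as a 4x4 matrix."""
    # SciPy's from_quat takes [x, y, z, w], matching rot_x..rot_w.
    R = Rotation.from_quat(
        [example["rot_x"], example["rot_y"], example["rot_z"], example["rot_w"]]
    ).as_matrix()
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = [example["Tx"], example["Ty"], example["Tz"]]
    return T

def parse_transform_string(s):
    """ASSUMPTION: `transform` serializes 16 floats in row-major order.

    Brackets and commas are stripped so both "[a, b, ...]" and plain
    whitespace-separated layouts parse; adjust if the real format differs.
    """
    for ch in "[],":
        s = s.replace(ch, " ")
    return np.array([float(v) for v in s.split()], dtype=np.float64).reshape(4, 4)
```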

### Data Splits

The data is split into training, validation, and test sets. Critically, the splits were created based on unique images (scenes) rather than instances. This ensures that no image seen during training appears in the validation or test sets, preventing data leakage.

| Split      | Number of Examples |
|------------|--------------------|
| train      | 16,000             |
| validation | 2,000              |
| test       | 2,000              |
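
The released splits already bake this grouping in, so nothing needs to be recomputed. Purely as an illustration of the strategy, a scene-level split can be reproduced with a grouped splitter, assuming a hypothetical `scene_ids` array (one id per example, shared by the two entries of the same rendered image) that the dataset itself does not expose.

```python
from sklearn.model_selection import GroupShuffleSplit

def scene_level_split(scene_ids, test_fraction=0.1, seed=0):
    """Split example indices so that both entries of a scene stay together."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_fraction, random_state=seed)
    # Grouping on the (hypothetical) scene id is what prevents an image
    # seen in training from reappearing in the held-out set.
    train_idx, test_idx = next(splitter.split(X=scene_ids, groups=scene_ids))
    return train_idx, test_idx
```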

## How to Use

You can load and use the dataset with the `datasets` library.

```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub
dataset = load_dataset("anonymous-authors-2025/AnonymousDataset")

# Access an example from the training set
example = dataset['train'][42]

image = example['image']
semantic_class = example['semantic_class']
translation_vector = [example['Tx'], example['Ty'], example['Tz']]
rotation_quaternion = [example['rot_x'], example['rot_y'], example['rot_z'], example['rot_w']]

print(f"Object Class: {semantic_class}")
print(f"Translation (m): {translation_vector}")
print(f"Rotation (quaternion, xyzw): {rotation_quaternion}")

# To display the image, call image.show() (opens an external viewer),
# or simply evaluate `image` in a Jupyter notebook cell.
```
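
Since each image contributes one entry per entity, the two camera-frame poses can be chained into the object-relative-to-agent pose that VPT is after. The sketch below reuses the `pose_to_matrix` helper from the Data Fields section and takes the humanoid's body frame as a proxy for its viewpoint; the assumption that the mug and humanoid entries of a scene occupy adjacent rows is purely illustrative and should be verified against the actual data.

```python
import numpy as np

def object_pose_in_agent_frame(T_cam_obj, T_cam_agent):
    """Express the object's pose in the agent's frame.

    Both arguments are 4x4 poses relative to the camera:
    T_agent_obj = inv(T_cam_agent) @ T_cam_obj.
    """
    return np.linalg.inv(T_cam_agent) @ T_cam_obj

# ASSUMPTION: the mug and humanoid entries for one rendered image sit on
# adjacent rows -- the card does not document the row ordering.
mug, xbot = dataset['train'][0], dataset['train'][1]
T_xbot_mug = object_pose_in_agent_frame(pose_to_matrix(mug), pose_to_matrix(xbot))
print(T_xbot_mug[:3, 3])  # mug position (metres) from the humanoid's frame
```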