---
dataset_info:
  features:
  - name: tokenized_data
    struct:
    - name: attention_mask
      list:
        list: int64
    - name: image_grid_thw
      list:
        list: int64
    - name: input_ids
      list:
        list: int64
    - name: pixel_values
      list:
        list: float32
  - name: ego_history_xyz
    list:
      list:
        list:
          list: float32
  - name: ego_history_rot
    list:
      list:
        list:
          list:
            list: float32
  splits:
  - name: train
    num_bytes: 2409638544
    num_examples: 3
  download_size: 191532609
  dataset_size: 2409638544
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- video-text-to-text
- robotics
- object-detection
language:
- en
tags:
- agent
size_categories:
- 1K<n<10K
---
This dataset contains 2,500 samples for Alpamayo R1. Shard layout:

```
train/shard_000xx.tar
val/shard_000yy.tar
```

- train: 80% (2,000 samples)
- val: 20% (500 samples)

Each `.tar` file contains 25 samples stored as `.npy` files. Each `.npy` file holds a dict with the following keys:

```python
{
    "uuid": uuid,
    "video_quality": "320x576",
    "tokenized_data": inputs,
    "ego_history_xyz": data["ego_history_xyz"],
    "ego_history_rot": data["ego_history_rot"],
}
```

The videos are already compressed to 320x576, the resolution Alpamayo compresses original videos to (this may result in slight deviations in accuracy).
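A minimal loader sketch for the shards described above. It assumes each `.npy` member stores a pickled Python dict (hence `allow_pickle=True`); the shard path in the usage comment is hypothetical:

```python
import io
import tarfile

import numpy as np


def load_samples(shard_path):
    """Yield sample dicts from one shard (keys as listed in this card)."""
    with tarfile.open(shard_path, "r") as tar:
        for member in tar.getmembers():
            if not member.name.endswith(".npy"):
                continue
            buf = io.BytesIO(tar.extractfile(member).read())
            # Assumption: each .npy stores a pickled dict, so allow_pickle
            # is required and .item() unwraps the 0-d object array.
            yield np.load(buf, allow_pickle=True).item()


# Example usage (hypothetical shard name):
# for sample in load_samples("train/shard_00000.tar"):
#     inputs = sample["tokenized_data"]   # attention_mask, input_ids, ...
#     xyz = sample["ego_history_xyz"]
#     rot = sample["ego_history_rot"]
```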