\[[arXiv Paper](https://www.arxiv.org/pdf/2502.13130)\] &nbsp; \[[Project Page](https://microsoft.github.io/Magma/)\] &nbsp; \[[Hugging Face Paper](https://huggingface.co/papers/2502.13130)\] &nbsp; \[[Github Repo](https://github.com/microsoft/Magma)\] &nbsp; \[[Video](https://www.youtube.com/watch?v=SbfzvUU5yM8)\]

</div>

## Introduction

This dataset contains the robotic manipulation data used in Magma pretraining. For a fair comparison, we followed OpenVLA and used the data mix "siglip-224px+mx-oxe-magic-soup".

The dataset is organized by the following source datasets, with each source containing one or more arrow files:

| Folder         | Number of Shards |
|:---------------|-----------------:|
| ego4d          |               15 |
| sthv2          |                6 |
| instruct_video |               14 |

### Features

In addition to the default features, we extracted the visual traces of the next 16 frames for each frame. The dataset contains the following fields:

- `dataset_name`: original source dataset name
- `video_name`: video name
- `task_string`: description of the task
- `start_time`: starting timestamp of the video segment
- `end_time`: ending timestamp of the video segment
- `frame_index`: starting index of the frame in the video segment
- `height`: resized image height for visual trace extraction
- `width`: resized image width for visual trace extraction
- `trace`: robot trajectory trace (serialized numpy array)
- `trace_visibility`: visibility mask for the trace (serialized numpy array)
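The two trace fields are stored as serialized numpy arrays. As a minimal round-trip sketch, the following uses hypothetical arrays with the documented shapes (the values are made up; `pickle` is assumed as the serialization format, matching the Sample Decoding section below):

```py
import pickle

import numpy as np

# Hypothetical arrays with the documented shapes (not drawn from the real dataset):
# trace is 1 x 16 x 256 x 2, visibility is 1 x 16 x 256 x 1.
trace = np.random.rand(1, 16, 256, 2).astype(np.float32)
trace_visibility = np.ones((1, 16, 256, 1), dtype=bool)

# A stored field is simply the pickled bytes of the array.
trace_bytes = pickle.dumps(trace)
restored = pickle.loads(trace_bytes)

assert restored.shape == (1, 16, 256, 2)
```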

## Dataset Loading

### Full Dataset Load

```py
from datasets import load_dataset

dataset = load_dataset("MagmaAI/Magma-Video-ToM", streaming=True, split="train")
```

### Individual Dataset Load

Or load a single source dataset by specifying its folder with `data_dir`:

```py
from datasets import load_dataset

dataset = load_dataset("MagmaAI/Magma-Video-ToM", data_dir="sthv2", streaming=True, split="train")
```

### Sample Decoding

```py
import io
import pickle

from PIL import Image

# Helper function to deserialize binary fields
def deserialize_array(bytes_data):
    return pickle.loads(bytes_data)

# Helper function to convert binary image data to a PIL Image
def bytes_to_image(image_bytes):
    return Image.open(io.BytesIO(image_bytes))

for i, example in enumerate(dataset):
    # decode trace: 1 x 16 x 256 x 2
    trace = deserialize_array(example['trace'])
    # decode trace visibility: 1 x 16 x 256 x 1
    trace_visibility = deserialize_array(example['trace_visibility'])
```
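Once decoded, the visibility mask can be used to keep only the visible trace points in a frame. A short sketch with hypothetical arrays of the documented shapes (not drawn from the real dataset):

```py
import numpy as np

# Hypothetical decoded arrays; in practice these come from deserialize_array above.
trace = np.random.rand(1, 16, 256, 2).astype(np.float32)    # 1 x 16 x 256 x 2
trace_visibility = np.random.rand(1, 16, 256, 1) > 0.5      # 1 x 16 x 256 x 1

# Drop the trailing singleton axis, then boolean-index to keep visible (x, y) points.
mask = trace_visibility[..., 0]   # 1 x 16 x 256
visible_points = trace[mask]      # (num_visible, 2)

assert visible_points.shape[1] == 2
```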

**NOTE**: The temporal length of the traces for video data is 16, as we excluded the starting frame. For all robotics data it is 17, as we did not exclude the starting frame.