MicroAGI committed commit 0273ff7 (verified; parent: 312c323)

Update README.md

Files changed (1): README.md (+92 −108)

README.md CHANGED
@@ -1,141 +1,123 @@
- ---
- license: other
- license_name: microagi-os-l1
- license_link: LICENSE
- task_categories:
- - robotics
- language:
- - en
- tags:
- - dataset
- - egocentric
- - robotics
- - rgbd
- - depth
- - manipulation
- - mcap
- - ros2
- - computer_vision
- pretty_name: MicroAGI00 Egocentric Dataset for Simple Household Manipulation
- size_categories:
- - 1M<n<10M
- ---
- ---
-
 
  # MicroAGI00: MicroAGI Egocentric Dataset (2025)
 
- > License: MicroAGI00 Open Use, No-Resale v1.0 (see `LICENSE`).
- > No resale: You may not sell or paywall this dataset or derivative data. Trained models/outputs may be released under any terms.
 
- ## Overview
- MicroAGI00 is a large-scale egocentric RGB+D dataset of human manipulation in https://behavior.stanford.edu/challenge/index.html tasks.
 
  ## Quick facts
 
- * Modality: synchronized RGB + 16bit depth + IMU + annotations
- * Resolution & rate (RGB): 1920×1080 @ 30 FPS (in MCAP)
- * Depth: 16bit, losslessly compressed inside MCAP
- * Scale: ≈1,000,000 synchronized RGB frames and ≈1,000,000 depth frames (≈1M frame pairs)
- * Container: `.mcap` (all signals + annotations)
- * Previews: For as sample for only some bags `.mp4` per sequence (annotated RGB; visualized native depth)
- * Annotations: Only in %5 of the dataset, hand landmarks and short action text
 
  ## What’s included per sequence
 
  * One large **MCAP** file containing:
 
  * RGB frames (1080p/30 fps)
- * 16bit depth stream (lossless compression)
  * IMU data (as available)
- * For Some Data the Embedded annotations (hands, action text)
-
- **MP4** preview videos:
-
- * Annotated RGB (for quick review)
- * Visualized native depth map (for quick review)
-
- > Note: MP4 previews may be lower quality than MCAP due to compression and post‑processing. Research use should read from MCAP.
-
- ## Annotations
-
- Annotations are generated in-house.
-
- ### Hand annotations: 21 joints per hand, per frame (not all shown below; the full list would be too long) — JSON schema example
-
- ```
- {
-   "frame_number": 9,
-   "timestamp_seconds": 0.3,
-   "resolution": { "width": 1920, "height": 1080 },
-   "hands": [
-     {
-       "hand_index": 0,
-       "landmarks": [
-         { "id": 0, "name": "WRIST", "x": 0.7124036550521851, "y": 0.7347621917724609, "z": -1.444301744868426e-07, "visibility": 0.0 },
-
-       ],
-       "hand": "Left",
-       "confidence": 0.9268525838851929
-     },
-     {
-       "hand_index": 1,
-       "landmarks": [
-         { "id": 0, "name": "WRIST", "x": 0.4461262822151184, "y": 0.35183972120285034, "z": -1.2342320587777067e-07, "visibility": 0.0 },
-
-       ],
-       "hand": "Right",
-       "confidence": 0.908446729183197
-     }
-   ],
-   "frame_idx": 9,
-   "exact_frame_timestamp": 1758122341583104000,
-   "exact_frame_timestamp_sec": 1758122341.583104
- }
- ```
-
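The hand-annotation schema above can be consumed with nothing but the standard library. A minimal sketch — the `record` below is a hypothetical, truncated instance of the schema, and it assumes `x`/`y` are normalized image coordinates (which the example values in [0, 1] suggest):

```python
import json

# Hypothetical single-frame record following the hand-annotation schema
# (truncated to one landmark per hand, as in the README example):
record = json.loads("""
{
  "frame_number": 9,
  "resolution": {"width": 1920, "height": 1080},
  "hands": [
    {"hand_index": 0, "hand": "Left", "confidence": 0.93,
     "landmarks": [{"id": 0, "name": "WRIST", "x": 0.5, "y": 0.25, "z": 0.0, "visibility": 0.0}]},
    {"hand_index": 1, "hand": "Right", "confidence": 0.91,
     "landmarks": [{"id": 0, "name": "WRIST", "x": 0.4461, "y": 0.3518, "z": 0.0, "visibility": 0.0}]}
  ]
}
""")

def wrist_pixels(rec):
    """Map normalized WRIST coordinates to pixel coordinates per hand."""
    w, h = rec["resolution"]["width"], rec["resolution"]["height"]
    out = {}
    for hand in rec["hands"]:
        lm = next(l for l in hand["landmarks"] if l["name"] == "WRIST")
        out[hand["hand"]] = (round(lm["x"] * w), round(lm["y"] * h))
    return out
```

The same pattern extends to all 21 joints; only the landmark filter changes.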
- ### Text (action) annotations (per frame/window) — JSON schema example
-
- ```
- {
-   "schema_version": "v1.0",
-   "action_text": "Right hand, holding a knife, is chopping cooked meat held by the left hand on the red cutting board.",
-   "confidence": 1.0,
-   "source": { "model": "MicroAGI, MAGI01" },
-   "exact_frame_timestamp": 1758122341583104000,
-   "exact_frame_timestamp_sec": 1758122341.583104
- }
- ```
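Both annotation types carry the same `exact_frame_timestamp` key, which suggests a straightforward join. A sketch under that assumption (the records here are hypothetical, minimal versions of the two schemas above):

```python
# Hypothetical minimal records sharing the nanosecond timestamp key:
hand_records = [
    {"exact_frame_timestamp": 1758122341583104000, "hands": ["Left", "Right"]},
    {"exact_frame_timestamp": 1758122341616437000, "hands": ["Right"]},
]
action_records = [
    {"exact_frame_timestamp": 1758122341583104000,
     "action_text": "Right hand, holding a knife, is chopping cooked meat."},
]

def join_by_timestamp(hands, actions):
    """Index actions by timestamp, then attach the matching action_text
    (or None) to each hand record."""
    by_ts = {a["exact_frame_timestamp"]: a["action_text"] for a in actions}
    return [{**h, "action_text": by_ts.get(h["exact_frame_timestamp"])} for h in hands]
```

Since action annotations may cover a window rather than a single frame, a real pipeline might match to the nearest timestamp instead of requiring exact equality.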
 
  ## Data access and structure
 
- * Each top-level sample folder contains: One folder of strong heavy mcap dump, one folder of annotated mcap dump, one folder of mp4 previews
- * All authoritative signals and annotations are inside the MCAP. Use the MP4s for quick visual QA only.
 
  ## Getting started
 
  * Inspect an MCAP: `mcap info your_sequence.mcap`
  * Extract messages: `mcap cat --topics <topic> your_sequence.mcap > out.bin`
- * Python readers: `pip install mcap` (see the MCAP Python docs) or any MCAP-compatible tooling. Typical topics include RGB, depth, IMU, and annotation channels.
  ## Intended uses
 
- * Policy and skill learning (robotics/VLA)
  * Action detection and segmentation
- * Hand/pose estimation and grasp analysis
  * Depth-based reconstruction, SLAM, scene understanding
- * World-model pre-post training
 
  ## Services and custom data
 
  MicroAGI provides on-demand:
 
- * Real‑to‑Sim pipelines
- * MLenhanced 3D point clouds and SLAM reconstructions
- * New data capture via our network of skilled tradespeople and factory workers (often below typical market cost)
- * Enablement for your workforce to wear our device and run through our processing pipeline
 
  Typical lead times: under two weeks (up to four weeks for large jobs).
 
@@ -145,18 +127,20 @@ Email `data@micro-agi.com` with:
 
  * Task description
  * Desired hours or frame counts
  * Proposed price
- We will reply within one business day with lead time and final pricing.
 
  Questions: `info@micro-agi.com`
 
  ## License
 
- This dataset is released under the MicroAGI00 Open Use, NoResale License v1.0 (custom). See [`LICENSE`](./LICENSE). Redistribution must be freeofcharge under the same license. Required credit: "This work uses the MicroAGI00 dataset (MicroAGI, 2025)."
 
  ## Attribution reminder
 
- Public uses of the Dataset or Derivative Data must include the credit line above in a reasonable location for the medium (papers, repos, product docs, dataset pages, demo descriptions). Attribution is appreciated but not required for Trained Models or Outputs.

  # MicroAGI00: MicroAGI Egocentric Dataset (2025)
 
+ > **License:** MicroAGI00 Open Use, No-Resale v1.0 (see `LICENSE`).
+ > **No resale:** You may not sell or paywall this dataset or derivative data. Trained models/outputs may be released under any terms.
 
+ ## What this is
 
+ MicroAGI00 is a large-scale **egocentric RGB+D** dataset of **human household manipulation**, aligned with the task style of the Stanford BEHAVIOR benchmark: [https://behavior.stanford.edu/challenge/index.html](https://behavior.stanford.edu/challenge/index.html)
 
+ It’s designed to be “robotics-ready” at the *signal level*: synchronized streams, clean packaging, strong QC, and consistent structure, so you can spend time modeling rather than cleaning data.
 
  ## Quick facts
 
+ * **Modalities:** synchronized RGB + 16-bit depth + IMU
+ * **Resolution & rate (RGB):** 1920×1080 @ 30 FPS (in MCAP)
+ * **Depth:** 16-bit, losslessly compressed inside MCAP
+ * **Scale:** ≈1,000,000 synchronized RGB frames and ≈1,000,000 depth frames (≈1M frame pairs)
+ * **Container:** `.mcap` (all signals)
+ * **Previews:** for a subset of sequences, `.mp4` previews (annotated overlays / visualized depth for quick review)
+
+ > Note: MP4 previews may be lower quality than MCAP due to compression and post-processing. Research use should read from MCAP.
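For a quick look at the 16-bit depth stream, a linear 16-bit-to-8-bit mapping is the simplest visualization. This is a generic sketch, not the pipeline used for the shipped depth previews; the clamp range defaults are assumptions:

```python
def depth_to_preview(depth_row, d_min=0, d_max=65535):
    """Linearly map 16-bit depth values to 8-bit grayscale for quick
    visualization, clamping anything outside [d_min, d_max]."""
    scale = 255.0 / (d_max - d_min)
    return [min(255, max(0, int((d - d_min) * scale))) for d in depth_row]
```

In practice you would narrow `d_min`/`d_max` to the working range of the scene (e.g., 0.3–5 m in sensor units) so near-field manipulation detail isn't compressed into a few gray levels.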
 
  ## What’s included per sequence
 
  * One large **MCAP** file containing:
 
  * RGB frames (1080p/30 fps)
+ * 16-bit depth stream (lossless compression)
  * IMU data (as available)
+
+ * **MP4 preview videos** (subset of sequences):
+
+ * RGB preview (for quick visual QA)
+ * Visualized depth preview (for quick visual QA)
+
+ ## Labels / annotations
+
+ The base MicroAGI00 release is **primarily raw synchronized signals** (RGB-D-IMU) and **does not ship with full-coverage labels**.
+
+ If you’ve seen demo videos with overlays: those demonstrate **what MicroAGI can produce** as an add-on (see below), not what is universally present in the base dump.
+
+ ## Data quality and QC philosophy
+
+ MicroAGI00 is built around *trustworthy signal integrity*:
+
+ * Tight **RGB↔Depth synchronization** checks
+ * Automated detection and scoring of:
+
+   * frame drops / time discontinuities
+   * motion blur / exposure failures
+   * depth sanity (range/invalid ratios), compression integrity
+   * IMU continuity where available
+
+ * Consistent trimming and packaging, with **sequence-level quality ratings** to support filtering (e.g., “clean only” training vs. “wild” robustness training)
+
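One of the checks listed above, frame-drop / time-discontinuity detection, reduces to scanning consecutive timestamps. A minimal sketch; the nominal 30 FPS and the 1.5-period threshold are assumptions for illustration, not published QC parameters:

```python
from typing import List

def find_frame_gaps(ts_ns: List[int], fps: float = 30.0, slack: float = 1.5) -> List[int]:
    """Return each index i where the gap between consecutive nanosecond
    timestamps exceeds `slack` nominal frame periods, i.e. a likely drop."""
    period_ns = 1_000_000_000 / fps
    return [i for i in range(len(ts_ns) - 1)
            if ts_ns[i + 1] - ts_ns[i] > slack * period_ns]
```

At 30 FPS the nominal period is ~33.3 ms, so a single dropped frame shows up as a ~66.7 ms gap and is flagged.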
+ ## Diversity and covariate-shift robustness
+
+ MicroAGI data is captured across **Europe and Asia**, intentionally spanning environments that create real-world distribution shift:
+
+ * different homes, layouts, lighting regimes, materials
+ * different hands/skins, tool choices, cultural cooking/object priors
+ * varied camera motions and operator styles
+
+ This is meant to be **covariate-shift resilient** data for models that need to generalize.
+
+ ## Optional derived signals (available on request)
+
+ If you want more than raw RGB-D-IMU, MicroAGI can deliver *derived outputs* on top of the same sequences (or on newly captured data), such as:
+
+ * **Ego-motion / trajectories** (VIO-style)
+ * **SLAM reconstructions** (maps, trajectories, keyframes)
+ * **Accurate body pose estimation**
+ * **State-of-the-art 3D hand landmarks** (true 3D, not just 2D reprojections)
+ * Additional QA layers and consistency checks tailored to your training setup
+
+ These are provided as a service deliverable (and can be scoped to subsets / key frames / full coverage), depending on your needs.
  ## Data access and structure
 
+ * Each top-level sample folder typically contains:
+
+   * an MCAP “raw dump” folder
+   * an MCAP “processed/curated” folder (when applicable)
+   * an `mp4/` previews folder (when available)
+
+ All authoritative signals are inside the **MCAP**. Use MP4s for fast browsing only.
 
  ## Getting started
 
  * Inspect an MCAP: `mcap info your_sequence.mcap`
  * Extract messages: `mcap cat --topics <topic> your_sequence.mcap > out.bin`
+ * Python readers: `pip install mcap` (see the MCAP Python docs) or any MCAP-compatible tooling.
+
+ Typical topics include RGB, depth, IMU, and any additional channels you may have requested.
 
  ## Intended uses
 
+ * Policy and skill learning (robotics / VLA)
  * Action detection and segmentation
+ * Hand/pose estimation and grasp analysis (raw or with add-ons)
  * Depth-based reconstruction, SLAM, scene understanding
+ * World-model pre/post training
+ * Robustness testing under real distribution shift
+
+ ## Data rights, consent, and licensing options
+
+ All capture is **legally consented**, with **data rights documentation** attached. Depending on the engagement, rights can be structured as:
+
+ * **non-exclusive** usage rights (typical dataset access), or
+ * **exclusive** rights for specific task scopes / environments / cohorts (custom programs)
 
  ## Services and custom data
 
  MicroAGI provides on-demand:
 
+ * New data capture via our operator network (Europe + Asia)
+ * ML-enhanced derived signals (ego-motion, pose, hands, SLAM)
+ * Real-to-Sim pipelines and robotics-ready packaging
+ * Custom QC gates to match your training/eval stack
 
  Typical lead times: under two weeks (up to four weeks for large jobs).
 
  * Task description
  * Desired hours or frame counts
+ * Target environment constraints (if any)
+ * Rights preference (exclusive / non-exclusive)
  * Proposed price
+
+ We reply within one business day with lead time and final pricing.
 
  Questions: `info@micro-agi.com`
 
  ## License
 
+ This dataset is released under the **MicroAGI00 Open Use, No-Resale License v1.0** (custom). See [`LICENSE`](./LICENSE). Redistribution must be free-of-charge under the same license.
 
+ Required credit: **"This work uses the MicroAGI00 dataset (MicroAGI, 2025)."**
 
  ## Attribution reminder
 
+ Public uses of the Dataset or Derivative Data must include the credit line above in a reasonable location for the medium (papers, repos, product docs, dataset pages, demo descriptions). Attribution is appreciated but not required for Trained Models or Outputs.