---
license: mit
task_categories:
- robotics
- reinforcement-learning
tags:
- metaworld
- short-metaworld
- robotics
- manipulation
- multi-task
- vision-language
- imitation-learning
- r3m
size_categories:
- 10K<n<100K
language:
- en
pretty_name: Short-MetaWorld-VLA (v2+v3)
---
| |
| # Short-MetaWorld-VLA (v2 + v3) |
|
|
| ## Overview |
|
|
This dataset is a compact MetaWorld collection intended for VLA-style (vision-language-action) training and evaluation.
|
|
The dataset comprises:
- **24 task files** in `r3m_MT10_20` (12 v2 + 12 v3)
- **100 trajectories per task**
- **20 or 50 steps per trajectory**, depending on task and version
- **84,000 total step samples** from the PKL action/state streams
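The step total is consistent with a split in which the 12 v2 tasks use 20-step trajectories and the 12 v3 tasks use 50-step trajectories. This is an assumption (the card only states that step count is task/version dependent), but the arithmetic checks out:

```python
# Assumed breakdown: 12 v2 tasks at 20 steps/trajectory,
# 12 v3 tasks at 50 steps/trajectory, 100 trajectories each.
v2_steps = 12 * 100 * 20  # 24,000
v3_steps = 12 * 100 * 50  # 60,000
print(v2_steps + v3_steps)  # 84000
```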
|
|
| ## Dataset Structure |
|
|
```
short-metaworld-vla/
├── mt50_task_prompts.json
├── short_metaworld_loader.py
├── requirements.txt
├── short-MetaWorld/
│   ├── img_only/
│   │   └── <task>/<trajectory>/<step>.jpg
│   └── r3m-processed/
│       └── r3m_MT10_20/
│           ├── <task>-v2.pkl
│           ├── <task>-v3.pkl
│           └── data.pkl
└── r3m-processed/
    └── r3m_MT10_20/
```
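The `.pkl` task files can be read with Python's standard `pickle` module. This sketch writes and reads back a stand-in file to show the pattern; the `traj_data` layout is a hypothetical placeholder, since the real per-task key structure is defined by the bundled `short_metaworld_loader.py`:

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for one task file, e.g. r3m_MT10_20/<task>-v3.pkl.
traj_data = {"states": [[0.0] * 39], "actions": [[0.0] * 4]}

path = os.path.join(tempfile.mkdtemp(), "example-task-v3.pkl")
with open(path, "wb") as f:
    pickle.dump(traj_data, f)

# Round-trip: load the file back and inspect the vectors.
with open(path, "rb") as f:
    loaded = pickle.load(f)

print(len(loaded["states"][0]), len(loaded["actions"][0]))  # 39 4
```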
| |
| ## Data Format |
| |
| Per step: |
| - `image`: RGB frame (`.jpg`) |
| - `state`: **39D** float vector |
| - `action`: **4D** float vector |
| - `prompt`: task language instruction (from `mt50_task_prompts.json`) |
| - `task_name`: task identifier (e.g. `button-press-topdown-v3`) |
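A minimal shape check for one step sample, using the field names and dimensions listed above. The dict layout and the example prompt string are illustrative; the actual access pattern is defined by `short_metaworld_loader.py`:

```python
# Illustrative step sample following the documented per-step schema.
step = {
    "state": [0.0] * 39,   # 39D float state vector
    "action": [0.0] * 4,   # 4D float action vector
    "prompt": "press the button from the top",  # hypothetical entry from mt50_task_prompts.json
    "task_name": "button-press-topdown-v3",
}

assert len(step["state"]) == 39
assert len(step["action"]) == 4
print("schema ok")
```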
|
|
| ## Tasks |
|
|
| Includes both `-v2` and `-v3` variants such as: |
| - basketball |
| - button-press-topdown |
| - door-open |
| - drawer-open / drawer-close |
| - peg-insert-side |
| - pick-place |
| - push |
| - reach |
| - sweep |
| - window-open / window-close |
- plus v3-only tasks included in this release (e.g. `handle-pull-v3`, `stick-pull-v3`)
|
|
|
|
| ## 🔬 Research Applications |
|
|
| This dataset is designed for: |
|
|
| - **Multi-task Reinforcement Learning**: Train policies across multiple manipulation tasks |
| - **Imitation Learning**: Learn from demonstration trajectories |
| - **Vision-Language Robotics**: Connect visual observations with natural language instructions |
| - **Meta-Learning**: Adapt quickly to new manipulation tasks |
| - **Robot Policy Training**: End-to-end visuomotor control |
|
|
|
|
| ## ⚖️ License |
|
|
MIT License. See the `LICENSE` file for details.
|
|