---
license: mit
task_categories:
- robotics
- reinforcement-learning
tags:
- metaworld
- short-metaworld
- robotics
- manipulation
- multi-task
- vision-language
- imitation-learning
- r3m
size_categories:
- 10K<n<100K
language:
- en
pretty_name: Short-MetaWorld-VLA (v2+v3)
---
# Short-MetaWorld-VLA (v2 + v3)
## Overview
This dataset is a short MetaWorld collection intended for VLA-style (vision-language-action) training and evaluation.
The current release contains:
- **24 task files** in `r3m_MT10_20` (`12 v2 + 12 v3`)
- **100 trajectories per task**
- **20 or 50 steps per trajectory** (task/version dependent)
- **84,000 total step samples** from PKL action/state streams
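The 84,000 total is consistent with one plausible breakdown, sketched below. The per-version step counts (20 steps for v2 tasks, 50 for v3) are an assumption for illustration, not something the file listing confirms:

```python
# Hypothetical breakdown (assumption): 12 v2 tasks at 20 steps/trajectory,
# 12 v3 tasks at 50 steps/trajectory, 100 trajectories per task.
v2_steps = 12 * 100 * 20   # 24,000 step samples
v3_steps = 12 * 100 * 50   # 60,000 step samples
total_steps = v2_steps + v3_steps
print(total_steps)  # 84000
```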
## Dataset Structure
```
short-metaworld-vla/
├── mt50_task_prompts.json
├── short_metaworld_loader.py
├── requirements.txt
├── short-MetaWorld/
│   ├── img_only/
│   │   └── <task>/<trajectory>/<step>.jpg
│   └── r3m-processed/
│       └── r3m_MT10_20/
│           ├── <task>-v2.pkl
│           ├── <task>-v3.pkl
│           └── data.pkl
└── r3m-processed/
    └── r3m_MT10_20/
```
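Given the layout above, the per-task PKL files can be enumerated by their version suffix. The snippet below builds a tiny stand-in directory (task names are illustrative only) and globs it the way one might glob the real `r3m_MT10_20` folder:

```python
import tempfile
from pathlib import Path

# Build a tiny stand-in for the r3m_MT10_20 layout shown above
# (these task names are illustrative, not the full task list).
root = Path(tempfile.mkdtemp()) / "short-MetaWorld" / "r3m-processed" / "r3m_MT10_20"
root.mkdir(parents=True)
for name in ["reach-v2.pkl", "reach-v3.pkl", "push-v2.pkl", "push-v3.pkl"]:
    (root / name).touch()

# Enumerate per-task PKL files, grouped by MetaWorld version suffix
v2_tasks = sorted(p.stem for p in root.glob("*-v2.pkl"))
v3_tasks = sorted(p.stem for p in root.glob("*-v3.pkl"))
print(v2_tasks)  # ['push-v2', 'reach-v2']
print(v3_tasks)  # ['push-v3', 'reach-v3']
```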
## Data Format
Each step sample provides:
- `image`: RGB frame (`.jpg`)
- `state`: **39D** float vector
- `action`: **4D** float vector
- `prompt`: task language instruction (from `mt50_task_prompts.json`)
- `task_name`: task identifier (e.g. `button-press-topdown-v3`)
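As a minimal sketch, a single step sample could be represented as a dict with the fields above. The key names, the image path, and the prompt text here are assumptions for illustration; the actual on-disk PKL layout may differ, so defer to `short_metaworld_loader.py` for real loading:

```python
import pickle
import numpy as np

# Hypothetical per-step record mirroring the documented fields; the real
# PKL layout may differ (see short_metaworld_loader.py for actual loading).
step = {
    "image": "short-MetaWorld/img_only/reach-v3/000/0.jpg",  # path to an RGB frame (illustrative)
    "state": np.zeros(39, dtype=np.float32),   # 39D float state vector
    "action": np.zeros(4, dtype=np.float32),   # 4D float action vector
    "prompt": "reach the target position",     # illustrative text, not from mt50_task_prompts.json
    "task_name": "reach-v3",
}

# Round-trip through pickle, since the processed streams are stored as .pkl
restored = pickle.loads(pickle.dumps(step))
print(restored["state"].shape, restored["action"].shape)  # (39,) (4,)
```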
## Tasks
Includes both `-v2` and `-v3` variants such as:
- basketball
- button-press-topdown
- door-open
- drawer-open / drawer-close
- peg-insert-side
- pick-place
- push
- reach
- sweep
- window-open / window-close
- plus v3-only tasks included in this release (e.g. `handle-pull-v3`, `stick-pull-v3`)
## 🔬 Research Applications
This dataset is designed for:
- **Multi-task Reinforcement Learning**: Train policies across multiple manipulation tasks
- **Imitation Learning**: Learn from demonstration trajectories
- **Vision-Language Robotics**: Connect visual observations with natural language instructions
- **Meta-Learning**: Adapt quickly to new manipulation tasks
- **Robot Policy Training**: End-to-end visuomotor control
## ⚖️ License
MIT License. See the `LICENSE` file for details.