Improve `minecraft-motion-action-dataset` card: Add paper, code, project page, tasks, tags, and usage

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +122 -1
README.md CHANGED
@@ -1,5 +1,16 @@
 ---
 license: mit
 dataset_info:
   features:
   - name: id
@@ -42,4 +53,114 @@ configs:
     path: data/valid-*
 ---

- <!-- **minecraft-motion-action-dataset** is part of the OpenHA suite, introduced in our paper [OpenHA: A Series of Open-Source Hierarchical Agentic Models in Minecraft](https://arxiv.org/pdf/2509.13347). -->
 ---
 license: mit
+ task_categories:
+ - robotics
+ - image-text-to-text
+ language:
+ - en
+ tags:
+ - minecraft
+ - agent
+ - reinforcement-learning
+ - multimodal
+ - vision-language-model
 dataset_info:
   features:
   - name: id

     path: data/valid-*
 ---

+ # Minecraft Motion Action Dataset
+
+ The `minecraft-motion-action-dataset` is a core component of the [OpenHA suite](https://github.com/CraftJarvis/OpenHA), a series of open-source hierarchical agentic models for the Minecraft environment. This specific dataset focuses on **Motion Action** data.
+
+ It is used in the research presented in the paper:
+ **[Training One Model to Master Cross-Level Agentic Actions via Reinforcement Learning](https://huggingface.co/papers/2512.09706)**
+
+ The paper introduces CrossAgent, a unified agentic model that masters heterogeneous action spaces and autonomously selects the most effective interface at each step of a trajectory. This dataset supports training such agents to learn adaptive action switching, balancing high-level efficiency with low-level precision, without human-specified rules.
+
+ - **Project Page**: [CraftJarvis Homepage](https://craftjarvis.github.io/)
+ - **Code**: [CraftJarvis/OpenHA GitHub Repository](https://github.com/CraftJarvis/OpenHA)
+
+ ## Sample Usage
+
+ To use this dataset within the OpenHA framework, follow the installation and inference steps below, taken from the associated GitHub repository.
+
+ ### Installation
+ Clone the OpenHA repository and install its dependencies:
+
+ ```sh
+ git clone --recurse-submodules https://github.com/CraftJarvis/OpenHA.git
+ conda create -n openha python=3.10
+ conda activate openha
+ pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124  # check your CUDA version
+ cd OpenHA
+ conda install --channel=conda-forge openjdk=8 -y
+ pip install -e .
+ ```
+
+ > ⚠️ Note: The script installs **minestudio** automatically. If you have not used MineStudio before, please check [the tutorial](https://craftjarvis.github.io/MineStudio/overview/getting-started.html).
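The dataset itself can also be pulled directly with the Hugging Face `datasets` library. A minimal sketch; the repo id matches this card, but the split names are read off the card's `data/valid-*` YAML config and are assumptions:

```python
# Illustrative loader for this dataset via the Hugging Face `datasets` library.
# REPO_ID matches this card; the split names ("train", "valid") are assumptions
# based on the `data/valid-*` paths in the card's YAML config.
REPO_ID = "CraftJarvis/minecraft-motion-action-dataset"

def load_motion_dataset(split: str = "train", streaming: bool = True):
    """Load one split of the motion-action data (requires network access)."""
    from datasets import load_dataset  # lazy import: only needed when actually loading
    # streaming=True iterates the parquet shards without downloading them all up front
    return load_dataset(REPO_ID, split=split, streaming=streaming)
```

For example, `ds = load_motion_dataset("valid")` yields records you can iterate over; each record includes the `id` field declared in the schema above.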
+
+ ### Inference
+ OpenHA supports multiple ways to serve and load models. We recommend **vLLM** for efficient multi-GPU / multi-process rollout. Example:
+
+ ```sh
+ CUDA_VISIBLE_DEVICES=0,1,2,3 vllm serve CraftJarvis/minecraft-openha-qwen2vl-7b-2509 \
+     --served-model-name minecraft-openha-qwen2vl-7b-2509 \
+     --port 11000 \
+     --limit-mm-per-prompt image=25 \
+     --trust-remote-code --gpu-memory-utilization 0.90 \
+     --pipeline-parallel-size 1 \
+     --tensor-parallel-size 4 \
+     --max-num-seqs 16 \
+     --max-logprobs 20 \
+     --max-model-len 32768
+ ```
+
+ Once the model is loaded, run a rollout:
+
+ ```sh
+ python examples/rollout_openha.py --output_mode text_action \
+     --vlm_client_mode online \
+     --system_message_tag text_action \
+     --model_ips localhost --model_ports 11000 \
+     --model_id minecraft-openha-qwen2vl-7b-2509 \
+     --record_path "/DATA/limuyao/evaluate" \
+     --max_steps_num 200 \
+     --num_rollouts 8
+ ```
+
+ OpenHA also supports loading models via HuggingFace Transformers (`hf`) or offline `vllm`; just change the `--vlm_client_mode` argument accordingly.
+
+ ## Interaction Details
+
+ You can control the **output format** of OpenHA via the `system_message_tag` argument of `rollout_openha.py`.
+
+ | `system_message_tag` | Output Example | System Prompt |
+ |---|---|---|
+ | `text_action` | `Action: move(dx='4.0', dy='-1.0') and keyDown(keys=(keyboard.left.control, keyboard.w))` | [text_action.txt](https://github.com/CraftJarvis/OpenHA/blob/master/openagents/assets/system_prompt/text_action.txt) |
+ | `grounding_action` | `Grounding: move_camera <\|object_ref_start\|>empty slot<\|object_ref_end\|><\|point_start\|>(881,558)<\|point_end\|>` | [grounding.txt](https://github.com/CraftJarvis/OpenHA/blob/master/openagents/assets/system_prompt/grounding.txt) |
+ | `motion_action` | `Motion: cursor move left and down` | [motion.txt](https://github.com/CraftJarvis/OpenHA/blob/master/openagents/assets/system_prompt/motion.txt) |
+ | `grounding_coa` | `Grounding: ... (615,505)...`<br>`, Action: move(19, 0) and press()` | [grounding_coa.txt](https://github.com/CraftJarvis/OpenHA/blob/master/openagents/assets/system_prompt/grounding_coa.txt) |
+ | `motion_coa` | `Motion: cursor move right and up`<br>`, Action: move(17, 0) and press()` | [motion_coa.txt](https://github.com/CraftJarvis/OpenHA/blob/master/openagents/assets/system_prompt/motion_coa.txt) |
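The `Action:` strings shown in these examples share a regular `name(args)` call shape. A small illustrative parser against that format; this is a sketch, not OpenHA's own action decoder:

```python
import re

# Matches calls like move(dx='4.0', dy='-1.0') or keyDown(keys=(...)),
# allowing one level of nested parentheses inside the argument list.
# Sketch against the example format in the table above, not OpenHA's decoder.
CALL_RE = re.compile(r"(\w+)\(([^()]*(?:\([^()]*\))?[^()]*)\)")

def parse_text_action(line: str) -> list[tuple[str, str]]:
    """Return (function_name, raw_argument_string) pairs from an action line."""
    body = line.split(":", 1)[1] if ":" in line else line  # drop the "Action:" prefix
    return [(name, args.strip()) for name, args in CALL_RE.findall(body)]
```

For instance, parsing the `text_action` example above yields `("move", "dx='4.0', dy='-1.0'")` followed by the `keyDown` call with its nested key tuple.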
+
+ The corresponding `output_mode` values are:
+
+ ```python
+ MODE_SYSTEM_PROMPT_MAP = {
+     "greedy": {"motion_coa", "grounding_coa"},
+     "text_action": {"text_action"},
+     "grounding": {"grounding_action"},
+     "motion": {"motion_action"},
+ }
+ ```
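Given a `system_message_tag`, the matching `output_mode` can be recovered by inverting this map. A small sketch; the helper name is ours, not part of OpenHA:

```python
# Map from `output_mode` to the `system_message_tag` values it accepts,
# as listed in the card above.
MODE_SYSTEM_PROMPT_MAP = {
    "greedy": {"motion_coa", "grounding_coa"},
    "text_action": {"text_action"},
    "grounding": {"grounding_action"},
    "motion": {"motion_action"},
}

def output_mode_for(tag: str) -> str:
    """Return the `output_mode` whose tag set contains `tag` (illustrative helper)."""
    for mode, tags in MODE_SYSTEM_PROMPT_MAP.items():
        if tag in tags:
            return mode
    raise KeyError(f"unknown system_message_tag: {tag!r}")
```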
+
+ ## Datasets on 🤗 Hugging Face
+
+ This dataset is one of several action space datasets released for the OpenHA project:
+
+ | Action Space | Size | HuggingFace URL |
+ |---|---|---|
+ | Motion Action | 1B Tokens | https://huggingface.co/datasets/CraftJarvis/minecraft-motion-action-dataset |
+ | Grounding Action | 0.5B Tokens | https://huggingface.co/datasets/CraftJarvis/minecraft-grounding-action-dataset |
+ | Text Action | 2B Tokens | https://huggingface.co/datasets/CraftJarvis/minecraft-text-action-dataset |
+ | Motion CoA | 0.5B Tokens | https://huggingface.co/datasets/CraftJarvis/minecraft-motion-coa-dataset |
+ | Grounding CoA | 0.2B Tokens | https://huggingface.co/datasets/CraftJarvis/minecraft-grounding-coa-dataset |
+
+ ## Citation
+ If you find **OpenHA** useful, please give us a ⭐ on GitHub or cite us:
+
+ ```bibtex
+ @article{wang2025openha,
+     title   = {OpenHA: A Series of Open-Source Hierarchical Agentic Models in Minecraft},
+     author  = {Zihao Wang and Muyao Li and Kaichen He and Xiangyu Wang and Zhancun Mu and Anji Liu and Yitao Liang},
+     journal = {arXiv preprint arXiv:2509.13347},
+     year    = {2025},
+     url     = {https://arxiv.org/abs/2509.13347},
+ }
+ ```