---
license: apache-2.0
size_categories:
- 100K<n<1M
---

# MultiWorld Dataset

## Dataset Summary

**MultiWorld** is a large-scale multi-agent multi-view video dataset collected for training video world models. It contains two complementary sources of data:

1. **It Takes Two Gameplay Dataset**: 100+ hours of real human gameplay from the cooperative action-adventure game *It Takes Two*, featuring dual-agent synchronized actions with distinct first-person viewpoints.
2. **RoboFactory Manipulation Dataset**: Multi-robot manipulation trajectories spanning 4 tasks with 2-4 agents and variable camera viewpoints, including both success and failure episodes.

This dataset is the official release accompanying the paper *"MultiWorld: Scalable Multi-Agent Multi-View Video World Models"*.

- **Homepage:** https://multi-world.github.io
- **Repository:** https://github.com/CIntellifusion/MultiWorld
- **Paper:** [arXiv:XXXX.XXXXX](https://arxiv.org/abs/XXXX.XXXXX)

---

## Dataset Details

### It Takes Two Gameplay

| Property | Value |
|----------|-------|
| **Total Duration** | 100+ hours |
| **Frame Rate** | 60 FPS |
| **Resolution** | 480 × 960 |
| **Agents** | 2 players |
| **Viewpoints** | 2 distinct first-person views per episode |
| **Actions** | Synchronized keyboard and mouse actions per agent |
| **Modality** | RGB video + discrete/continuous action vectors |

The gameplay videos are captured from real human players cooperating in the game. Each frame is accompanied by per-agent action labels capturing keyboard presses and mouse movements.
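To make the per-agent action format concrete, here is a minimal sketch of pairing the two players' action streams frame by frame. The record fields (`keys`, `mouse_dx`, `mouse_dy`) are hypothetical placeholders, not the dataset's actual schema; consult the repository for the real layout.

```python
from dataclasses import dataclass

FPS = 60  # gameplay is recorded at 60 frames per second


@dataclass
class AgentAction:
    """Hypothetical per-frame action record for one player."""
    keys: list          # keyboard keys held this frame, e.g. ["w", "space"]
    mouse_dx: float     # mouse movement since the previous frame
    mouse_dy: float


def align_actions(agent0: list, agent1: list) -> list:
    """Pair the two agents' action streams frame by frame.

    Both streams are sampled on the same 60 FPS clock, so once their
    lengths match, a simple zip gives synchronized dual-agent actions.
    """
    assert len(agent0) == len(agent1), "streams must cover the same frames"
    return list(zip(agent0, agent1))


# Two seconds of synthetic dual-agent actions (120 frames at 60 FPS).
a0 = [AgentAction(["w"], 0.5, 0.0) for _ in range(2 * FPS)]
a1 = [AgentAction(["space"], -0.3, 0.1) for _ in range(2 * FPS)]
paired = align_actions(a0, a1)
```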

### RoboFactory Manipulation

| Property | Value |
|----------|-------|
| **Tasks** | 4 multi-robot manipulation tasks |
| **Agents** | 2–4 robots per task |
| **Viewpoints** | Variable camera configurations per task |
| **Resolution** | 256 × 320 |
| **Success Episodes** | 1,000 per task |
| **Failure Episodes** | 2,000 per task |
| **Modality** | RGB video + robot proprioception + actions |

Tasks include collaborative stacking, pushing, and pick-and-place scenarios. Both successful and failed trajectories are included to support learning robust world models and failure prediction.
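The counts above imply 4,000 success and 8,000 failure episodes in total. A small sketch of splitting episodes by outcome, using a hypothetical metadata layout (the real release may store this differently):

```python
from collections import Counter

# Hypothetical episode metadata; the actual release may use another schema.
episodes = []
for task_id in range(4):  # the dataset covers 4 manipulation tasks
    episodes += [{"task": task_id, "success": True} for _ in range(1000)]
    episodes += [{"task": task_id, "success": False} for _ in range(2000)]

successes = [e for e in episodes if e["success"]]
failures = [e for e in episodes if not e["success"]]
per_task = Counter(e["task"] for e in episodes)  # 3,000 episodes per task
```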


---

## Intended Uses

The dataset is intended for research in:
- Video world models
- Multi-agent video generation
- Multi-view consistent video generation

---

## Contact

For questions about the dataset, please open an issue on the [GitHub repository](https://github.com/CIntellifusion/MultiWorld) or contact the authors.