<div align="center">
    <h1>Context as Memory: Scene-Consistent Interactive Long Video Generation with Memory Retrieval</h1>
  <h1>SIGGRAPH Asia 2025</h1>
    <p>
        <a href="https://context-as-memory.github.io/">[Project page]</a>
        <a href="https://arxiv.org/pdf/2506.03141">[ArXiv]</a>
        <a href="https://huggingface.co/datasets/KwaiVGI/Context-as-Memory-Dataset">[Dataset]</a>
    </p>
</div>

# File Structure
To prepare the dataset for use, merge the parts into a single zip file using the following command:
```bash
cat Context-as-Memory-Dataset_* > Context-as-Memory-Dataset.zip
```
After extracting `Context-as-Memory-Dataset.zip`, the dataset will be organized as follows:
```
Context-as-Memory-Dataset
├── frames
│   ├── AncientTempleEnv_0
│   │   ├── 0000.png
│   │   ├── 0001.png
│   │   ├── 0002.png
│   │   └── ...
│   ├── AncientTempleEnv_1
│   │   ├── 0000.png
│   │   ├── 0001.png
│   │   ├── 0002.png
│   │   └── ...
│   └── ...
│  
├── jsons
│   ├── AncientTempleEnv_0.json
│   ├── AncientTempleEnv_1.json
│   └── ...

├── overlap_labels
│   ├── AncientTempleEnv_0
│   │   ├── 0.json
│   │   ├── 1.json
│   │   ├── 2.json
│   │   └── ...
│   ├── AncientTempleEnv_1
│   │   ├── 0.json
│   │   ├── 1.json
│   │   ├── 2.json
│   │   └── ...
│   └── ...
│  
└── captions.txt
```

# Explanation of Dataset Parts

- **`frames/`**: 100 subdirectories, each containing 7,601 video frame images.
- **`jsons/`**: 100 JSON files, each storing the camera pose (position + rotation) of every frame in the corresponding long video.
- **`overlap_labels/`**: 100 subdirectories, each containing 7,601 JSON files; each file records the indices of the frames that overlap with that frame.
- **`captions.txt`**: Captions annotated for segments of the long videos, each spanning a given start frame to an end frame.
- We also provide a utility script, `tools.py`, which converts (x, y, z, yaw, pitch) poses into RT matrices and can align the RTs of all frames to the coordinate system of a chosen reference frame.
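The two operations above can be sketched as follows. This is a minimal illustration, not the dataset's authoritative code: the axis conventions, rotation order, and angle units are assumptions here (yaw about the world up-axis, pitch about the camera's right-axis, angles in degrees), and the function names `pose_to_RT` / `align_to_reference` are hypothetical — refer to the provided `tools.py` for the exact conventions used by the dataset.

```python
import numpy as np

def pose_to_RT(x, y, z, yaw, pitch, degrees=True):
    """Build a 4x4 camera-to-world matrix from an (x, y, z, yaw, pitch) pose.

    Assumed convention (may differ from tools.py): yaw rotates about the
    world up-axis (y), then pitch rotates about the camera's right-axis (x).
    """
    if degrees:
        yaw, pitch = np.radians(yaw), np.radians(pitch)
    Ry = np.array([[ np.cos(yaw), 0, np.sin(yaw)],
                   [ 0,           1, 0          ],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0,             0            ],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch),  np.cos(pitch)]])
    RT = np.eye(4)
    RT[:3, :3] = Ry @ Rx          # combined rotation R
    RT[:3, 3] = [x, y, z]         # translation t
    return RT

def align_to_reference(RT_ref, RT_frame):
    """Express RT_frame in the coordinate system of the reference frame,
    so the reference frame's own pose becomes the identity."""
    return np.linalg.inv(RT_ref) @ RT_frame
```

Aligning every frame to a chosen reference frame (`align_to_reference(RT_ref, RT_i)` for each frame `i`) yields relative poses that are invariant to the arbitrary world origin of each environment.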