# Doom 2-Player PvP Latents (DC-AE)
Pre-encoded latent representations of 2-player PvP Doom gameplay, ready for training video world models. Encoded using DC-AE-Lite f32c32 (32x spatial compression, 32 latent channels).
## Dataset Details
| Property | Value |
|---|---|
| Episodes | 2,606 |
| Total duration | 167.1 hours (both perspectives) |
| Total frames | 21,060,484 |
| Latent shape per frame | (32, 15, 20) float16 |
| Original resolution | 480x640 |
| Frame rate | 35 fps |
| Size | 756 GB |
| Format | WebDataset tar shards (~4 GB each) |
| VAE | mit-han-lab/dc-ae-lite-f32c32-sana-1.1-diffusers |
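As a quick consistency check, the duration row follows directly from the frame count and frame rate in the table:

```python
total_frames = 21_060_484
fps = 35

hours = total_frames / fps / 3600  # seconds → hours
print(f"{hours:.1f} hours")        # → 167.1 hours
```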
Actions:

```python
[
    "MF",   # MOVE_FORWARD
    "MB",   # MOVE_BACKWARD
    "MR",   # MOVE_RIGHT
    "ML",   # MOVE_LEFT
    "W1",   # SELECT_WEAPON1
    "W2",   # SELECT_WEAPON2
    "W3",   # SELECT_WEAPON3
    "W4",   # SELECT_WEAPON4
    "W5",   # SELECT_WEAPON5
    "W6",   # SELECT_WEAPON6
    "W7",   # SELECT_WEAPON7
    "ATK",  # ATTACK
    "SPD",  # SPEED
    "TURN", # TURN_LEFT_RIGHT_DELTA
]
```
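The action arrays are stored as `(N, 14)` float32. A minimal sketch for inspecting one action vector, assuming the first 13 entries are 0/1 button states and the final TURN entry is a continuous left/right delta (an inference from the action names, not a documented convention):

```python
import numpy as np

ACTIONS = ["MF", "MB", "MR", "ML", "W1", "W2", "W3", "W4",
           "W5", "W6", "W7", "ATK", "SPD", "TURN"]

def describe_action(vec):
    """Name the active buttons in a 14-dim action vector.

    Assumes the first 13 entries are 0/1 button states and the last
    (TURN) is a continuous turn delta -- a guess from the action names.
    """
    pressed = [name for name, v in zip(ACTIONS[:-1], vec[:-1]) if v > 0.5]
    return pressed, float(vec[-1])

vec = np.zeros(14, dtype=np.float32)
vec[0] = 1.0    # MF
vec[11] = 1.0   # ATK
vec[13] = -2.5  # TURN delta
print(describe_action(vec))  # → (['MF', 'ATK'], -2.5)
```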
## VAE Details
DC-AE-Lite with f32c32 configuration:
- Spatial compression: 32x (480x640 pixels → 15x20 latent spatial dims)
- Latent channels: 32
- Precision: float16
- Compression: 3 × 480 × 640 RGB → 32 × 15 × 20 latent (96× fewer elements; ≈48× in bytes for uint8 pixels vs. float16 latents)
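Counting raw tensor elements, the per-frame reduction works out as:

```python
pixel_elems = 3 * 480 * 640        # 921,600 values per RGB frame
latent_elems = 32 * 15 * 20        # 9,600 values per latent frame
print(pixel_elems / latent_elems)  # → 96.0
```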
## Data Structure
Each episode is stored as a group of files inside WebDataset tar shards:
```
{episode_key}.latents_p1.npy   # Player 1 latents (N, 32, 15, 20) float16
{episode_key}.latents_p2.npy   # Player 2 latents (N, 32, 15, 20) float16
{episode_key}.actions_p1.npy   # Player 1 actions (N, 14) float32
{episode_key}.actions_p2.npy   # Player 2 actions (N, 14) float32
{episode_key}.rewards_p1.npy   # Player 1 rewards (N,) float32
{episode_key}.rewards_p2.npy   # Player 2 rewards (N,) float32
{episode_key}.meta.json        # Episode metadata
```
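For a quick look at shard contents without the project's loaders, this layout can be parsed with just the standard library and NumPy. A minimal sketch (the official loaders in `doom_arena.latent_loader` handle streaming and batching for you):

```python
import io
import json
import tarfile

import numpy as np

def load_shard(path):
    """Group the .npy/.json members of a WebDataset tar shard by episode key."""
    episodes = {}
    with tarfile.open(path) as tar:
        for member in tar:
            if not member.isfile():
                continue
            # Member names look like "{episode_key}.latents_p1.npy"
            key, _, field = member.name.partition(".")
            data = tar.extractfile(member).read()
            if field.endswith(".npy"):
                episodes.setdefault(key, {})[field[:-4]] = np.load(io.BytesIO(data))
            elif field.endswith(".json"):
                episodes.setdefault(key, {})[field[:-5]] = json.loads(data)
    return episodes
```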
## Usage
### Training Loader (streaming, all clips)

```python
from doom_arena.latent_loader import LatentTrainLoader

loader = LatentTrainLoader(
    "path/to/latent/shards",
    clip_len=16,      # frames per clip
    batch_size=64,
    num_workers=4,
)

for batch in loader:
    latents_p1 = batch["latents_p1"]  # (B, T, 32, 15, 20) float16
    latents_p2 = batch["latents_p2"]  # (B, T, 32, 15, 20) float16
    actions_p1 = batch["actions_p1"]  # (B, T, 14)
    actions_p2 = batch["actions_p2"]  # (B, T, 14)
    rewards_p1 = batch["rewards_p1"]  # (B, T)
    rewards_p2 = batch["rewards_p2"]  # (B, T)
```
### Random-access Dataset

```python
from doom_arena.latent_loader import LatentDataset

ds = LatentDataset("path/to/latent/shards")
ds.summary()

ep = ds[42]
print(ep)                   # LatentEpisode(dwango5_5min PvP, 10467 frames)
print(ep.latents_p1.shape)  # torch.Size([10467, 32, 15, 20])

# Index into frames
clip = ep[100:116]  # dict with 16-frame slices of all arrays
```
### Encoding frames to latents

```python
import torch
from diffusers.models.autoencoders.autoencoder_dc import AutoencoderDC

# Load VAE
vae = AutoencoderDC.from_pretrained(
    "mit-han-lab/dc-ae-lite-f32c32-sana-1.1-diffusers",
    torch_dtype=torch.float16,
).cuda().eval()

# Encode: (B, 3, 480, 640) float16 RGB [-1, 1] → (B, 32, 15, 20) float16
frames = torch.randn(4, 3, 480, 640, dtype=torch.float16, device="cuda")
with torch.no_grad():
    latents = vae.encode(frames).latent  # (4, 32, 15, 20)
```
### Decoding latents to frames

```python
# Decode: (B, 32, 15, 20) float16 → (B, 3, 480, 640) float16 RGB [-1, 1]
with torch.no_grad():
    reconstructed = vae.decode(latents).sample  # (4, 3, 480, 640)

# Convert to uint8 for display
pixels = ((reconstructed.clamp(-1, 1) + 1) / 2 * 255).byte()
```
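To actually view a decoded frame, the `(C, H, W)` uint8 tensor needs to become `(H, W, C)` for image libraries. A sketch using Pillow, with a random stand-in tensor so it runs without the VAE or a GPU:

```python
import torch
from PIL import Image

# `pixels` stands in for the uint8 output of the decoding snippet above;
# random data is used here so the example runs standalone.
pixels = torch.randint(0, 256, (4, 3, 480, 640), dtype=torch.uint8)

frame = pixels[0].permute(1, 2, 0).numpy()  # (480, 640, 3), channels last
Image.fromarray(frame).save("frame_0.png")
```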
## Performance
Throughput measured with `clip_len=70`, `batch_size=64`, `num_workers=4`:

| Storage | Frames/s | Batches/s | Seconds/batch |
|---|---|---|---|
| NFS | 3,356 | 0.75 | 1.33 |
| Local NVMe | 22,371 | 4.99 | 0.20 |
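The two throughput columns are mutually consistent, since frames/s ≈ batches/s × batch_size × clip_len:

```python
clip_len, batch_size = 70, 64
frames_per_batch = clip_len * batch_size  # 4,480 frames per batch

for storage, batches_per_s in [("NFS", 0.75), ("Local NVMe", 4.99)]:
    print(storage, batches_per_s * frames_per_batch)  # ≈ frames/s column
```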
Recommendation: Copy shards to local NVMe before training:
```bash
rsync -av path/to/shards/ /tmp/pvp_latents/
```
## Source
- Encoded from chrisxx/doom-2players-mp4
- VAE: mit-han-lab/dc-ae-lite-f32c32-sana-1.1-diffusers
- Project: doom-arena
## Citation

If you use this dataset, please cite the DC-AE paper:

```bibtex
@article{chen2024dcae,
  title={Deep Compression Autoencoder for Efficient High-Resolution Diffusion Models},
  author={Chen, Junyu and Cai, Han and Chen, Junsong and Xie, Enze and Yang, Shang and Tang, Haotian and Li, Muyang and Lu, Yao and Han, Song},
  journal={arXiv preprint arXiv:2410.10733},
  year={2024}
}
```