---
pretty_name: Doom Frame Dataset
tags:
- doom
- vizdoom
- reinforcement-learning
- imitation-learning
- webdataset
configs:
- config_name: preview
  data_files:
  - split: train
    path: data/train-000000.tar
- config_name: full
  data_files:
  - split: train
    path: data/train-*.tar
---
# DoomFrameDataset
DoomFrameDataset is a ViZDoom frame-action dataset generated from policy rollouts. It is packaged as WebDataset tar shards for streaming training, imitation learning, behavior cloning, and offline reinforcement-learning experiments.
The dataset contains RGB game frames paired with the action selected by the rollout policy and per-step metadata such as reward, episode id, step id, terminal flag, and value estimate.
## Dataset Size
| Config | Files | Samples | Intended use |
| --- | ---: | ---: | --- |
| `preview` | 1 shard | ~79k | Hugging Face preview and quick sanity checks |
| `full` | 31 shards | 2,398,745 | Training and full streaming reads |
The packaged dataset is about 68 GB.
## Files
```text
data/
  train-000000.tar
  train-000001.tar
  ...
  train-000030.tar
action_map.json
README.md
```
Each tar shard contains paired files with the same numeric key:
```text
000000000000.png
000000000000.json
000000000001.png
000000000001.json
...
```
The PNG is the game frame. The JSON is the metadata for that frame.
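The key-pairing convention can be reproduced locally. As a sketch, the snippet below writes a miniature two-sample shard in the same layout with the standard library and reads the pairs back grouped by key (the file contents are placeholders, not real dataset frames):

```python
import io
import json
import tarfile

# Write a miniature shard using the WebDataset convention:
# one .png and one .json per zero-padded numeric key.
with tarfile.open("mini-shard.tar", "w") as tar:
    for key in ("000000000000", "000000000001"):
        for ext, payload in (
            (".png", b"\x89PNG placeholder bytes"),
            (".json", json.dumps({"webdataset_key": key}).encode()),
        ):
            info = tarfile.TarInfo(name=key + ext)
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))

# Read it back and group members by their shared key.
pairs = {}
with tarfile.open("mini-shard.tar") as tar:
    for member in tar.getmembers():
        key, ext = member.name.rsplit(".", 1)
        pairs.setdefault(key, {})[ext] = tar.extractfile(member).read()

print(sorted(pairs))                  # ['000000000000', '000000000001']
print(sorted(pairs["000000000000"]))  # ['json', 'png']
```

WebDataset readers rely on exactly this grouping: consecutive members that share a key become one sample.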
## Sample Metadata
```json
{
  "action_id": 1,
  "action_name": "TURN_RIGHT",
  "action_vector": [0.0, 0.0, 0.0, 0.0, 1.0, 0.0],
  "curriculum_level": 1,
  "done": false,
  "episode": 1,
  "frame_path": "frames/episode_001/step_000000.png",
  "global_step": 0,
  "reward": 0.0,
  "source_frame_path": "frames/episode_001/step_000000.png",
  "step": 0,
  "value": 1.7968196868896484,
  "webdataset_key": "000000000000"
}
```
See `action_map.json` for the full action id, action name, and action vector mapping.
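Note that `action_vector` appears to be a button activation vector rather than a one-hot over action ids: in the sample above, `action_id` is 1 while the hot index is 4. A small cross-check sketch, where the mapping dict is a hypothetical stand-in for the real `action_map.json` contents:

```python
# Hypothetical excerpt standing in for action_map.json; the real file
# maps every action id to its name and button vector.
action_map = {
    "1": {"name": "TURN_RIGHT", "vector": [0.0, 0.0, 0.0, 0.0, 1.0, 0.0]},
}

# A metadata record as it appears in the shards.
record = {
    "action_id": 1,
    "action_name": "TURN_RIGHT",
    "action_vector": [0.0, 0.0, 0.0, 0.0, 1.0, 0.0],
}

# The record's name and vector should agree with the map entry.
entry = action_map[str(record["action_id"])]
assert entry["name"] == record["action_name"]
assert entry["vector"] == record["action_vector"]

# The hot button index is not the action id.
hot_index = record["action_vector"].index(max(record["action_vector"]))
print(hot_index)  # 4
```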
## Load The Preview Config
Use `preview` when you only want to verify the dataset or inspect examples in the Hugging Face Dataset Viewer.
```python
from datasets import load_dataset
ds = load_dataset(
    "brahmandam/DoomFrameDataset",
    "preview",
    split="train",
    streaming=True,
)
sample = next(iter(ds))
print(sample.keys())
print(sample["json"])
image = sample["png"]
```
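With `streaming=True`, the `png` field typically decodes to a PIL image, which most training stacks will want as an array. A minimal conversion sketch, using a synthetic frame of an illustrative size so it runs without downloading anything:

```python
import numpy as np
from PIL import Image

# Stand-in for sample["png"]: a synthetic 320x240 RGB frame.
frame = Image.new("RGB", (320, 240), color=(107, 107, 107))

# HWC uint8 -> CHW float32 in [0, 1], the usual layout for conv nets.
array = np.asarray(frame, dtype=np.float32) / 255.0
chw = array.transpose(2, 0, 1)
print(chw.shape)  # (3, 240, 320)
```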
## Stream The Full Dataset
Use `full` for training.
```python
from datasets import load_dataset
ds = load_dataset(
    "brahmandam/DoomFrameDataset",
    "full",
    split="train",
    streaming=True,
)
for sample in ds:
    image = sample["png"]
    metadata = sample["json"]
    action_id = metadata["action_id"]
    break
```
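Because samples arrive in rollout order, episode boundaries can be recovered from the `done` flag in the metadata stream. A pure-Python sketch of the grouping logic, run on synthetic records rather than the live stream:

```python
# Synthetic metadata records in rollout order; "done" marks episode ends.
records = [
    {"step": 0, "done": False},
    {"step": 1, "done": True},
    {"step": 0, "done": False},
    {"step": 1, "done": False},
    {"step": 2, "done": True},
]

episodes = []
current = []
for meta in records:
    current.append(meta)
    if meta["done"]:          # terminal step closes the episode
        episodes.append(current)
        current = []
if current:                   # keep a trailing partial episode, if any
    episodes.append(current)

print([len(ep) for ep in episodes])  # [2, 3]
```

The same loop applied to the streamed metadata would segment the full dataset into trajectories for offline RL.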
You can also read the shards directly with WebDataset:
```python
import webdataset as wds
urls = "https://huggingface.co/datasets/brahmandam/DoomFrameDataset/resolve/main/data/train-{000000..000030}.tar"
dataset = (
    wds.WebDataset(urls)
    .decode("pil")
    .to_tuple("png", "json")
)
image, metadata = next(iter(dataset))
```
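The `{000000..000030}` pattern is brace expansion, which WebDataset handles internally. If you need the explicit shard list, for example to split shards across workers, you can expand it yourself; a stdlib-only sketch:

```python
# Expand train-{000000..000030}.tar into the 31 individual shard URLs.
base = ("https://huggingface.co/datasets/brahmandam/DoomFrameDataset"
        "/resolve/main/data/train-{:06d}.tar")
urls = [base.format(i) for i in range(31)]

print(len(urls))                   # 31
print(urls[0].rsplit("/", 1)[1])   # train-000000.tar
print(urls[-1].rsplit("/", 1)[1])  # train-000030.tar
```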
## Notes
The `preview` config intentionally points to a single shard so the Hub can inspect a small part of the dataset without processing the full 68 GB. For training, use the `full` config.
This dataset was generated from automated ViZDoom policy rollouts. It should be treated as gameplay observation/action data, not human demonstrations.