
# DRAMA Benchmarks — Eval Results

Companion dataset to the training-checkpoint repo ssubhnil/cwm. This dataset stores the OOD-evaluation outputs (CSV + raw per-episode returns + figures) for the NeurIPS main-paper benchmark comparing four context-aware world model methods:

| Method | Framework | Source repo |
| --- | --- | --- |
| TrajD (ours) | PyTorch + Mamba2 | SSubhnil/CausalWorldModel |
| DRAMA (ours, no steering) | PyTorch + Mamba2 | same repo, `Steer: False` |
| DALI-S | JAX + DreamerV3 | SSubhnil/DALI |
| cRSSM-S | JAX + DreamerV3 | dreaming_of_many_worlds, benchmark branch |

The protocol, split tables, and the runners that produced these numbers live in the CausalWorldModel repo under `benchmark/` and `docs/`.

## Repo layout

```
DRAMA-benchmarks-eval/
├── README.md                # this file
├── main_results.csv         # single source of truth — one row per (row_id, condition_id)
├── manifest.yaml            # snapshot of benchmark/eval_manifest.yaml at last push
├── eval_conditions.yaml     # snapshot of benchmark/eval_conditions.yaml at last push
├── raw/                     # per-(row_id, condition_id) raw returns JSON
│   └── <row_id>__<condition_id>.json   # keys: raw_returns, git_sha, row metadata
└── figures/
    └── paper_neurips/
        ├── atari_alien_ood.pdf         # grouped bar chart (methods × conditions)
        ├── atari_alien_ood.tex         # booktabs results table (LaTeX \input-able)
        ├── atari_alien_ood.csv         # wide-format aggregated (mean ± std per cell)
        └── atari_alien_conditions.tex  # protocol explainer (mode/difficulty splits)
```

## CSV schema (main_results.csv)

One row per (row_id, condition_id) pair. Columns (see `benchmark/eval_shared.CSV_COLUMNS` in CausalWorldModel):

| Column | Meaning |
| --- | --- |
| `row_id` | Unique per-checkpoint key (e.g. `trajd_alien_K8_s1`, `dali_alien_s1`). |
| `condition_id` | Eval condition within the row's family (e.g. `mode_ood`). |
| `method` | One of `trajd`, `drama`, `dali`, `crssm`. |
| `domain` | One of `atari`, `dmc`, `procgen`. |
| `env` | e.g. `ALE/Alien-v5`. |
| `experiment` | Training preset (e.g. `alien_K_ablation_base`, `bench_dr_alien`). |
| `seed` | Training seed. |
| `axis` | `mode_diff` (Atari); `physics` / `reward` / `timing` (DMC); `levels` (Procgen). |
| `n_episodes` | Typically 100. |
| `mean_return` / `std_return` | Aggregated over `n_episodes`. |
| `reward_mse_*` | Populated only on the reward axis (DMC mixed). |
| `raw_returns_path` | Relative path to the per-episode JSON in `raw/`. |
| `training_wandb_run_id` | WandB run that produced the evaluated checkpoint. |
| `eval_wandb_run_id` | WandB run that logged this eval (deterministic `md5(row_id)[:12]`). |
| `eval_timestamp_utc` / `git_sha` | Reproducibility metadata. |
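Because `eval_wandb_run_id` is deterministic, it can be derived from the `row_id` alone. A minimal sketch of the `md5(row_id)[:12]` rule stated above (assuming UTF-8 encoding of the id string; the `eval_run_id` helper name is hypothetical):

```python
import hashlib


def eval_run_id(row_id: str) -> str:
    # First 12 hex characters of md5(row_id), per the
    # eval_wandb_run_id column description above.
    return hashlib.md5(row_id.encode("utf-8")).hexdigest()[:12]
```

This lets you locate the WandB eval run for any row without consulting the CSV.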

## How to pull the data (other cluster → local)

```shell
pip install huggingface_hub
huggingface-cli login                # one-time

# Whole dataset
huggingface-cli download ssubhnil/DRAMA-benchmarks-eval \
    --repo-type dataset \
    --local-dir ./eval_pull
```

Or in Python:

```python
from huggingface_hub import snapshot_download

snapshot_download(repo_id="ssubhnil/DRAMA-benchmarks-eval", repo_type="dataset",
                  local_dir="./eval_pull")

import pandas as pd

df = pd.read_csv("./eval_pull/main_results.csv")
print(df.groupby(["method", "env", "condition_id"])["mean_return"].agg(["mean", "std", "count"]))
```

## How to regenerate the figures (locally)

After downloading the dataset, from a CausalWorldModel checkout:

```shell
python scripts/atari_alien_ood_report.py \
    --csv ./eval_pull/main_results.csv \
    --out-dir ./figures_rebuild \
    --prefix atari_alien_ood
```

The aggregator filters to `env == ALE/Alien-v5` and groups by method (trajd K=5/K=8, drama, dali, crssm). It never modifies the dataset; all outputs are written under `--out-dir`.
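The filter-and-group step can be approximated from the CSV alone. A minimal stdlib sketch of the aggregation described above (the real report script in CausalWorldModel may differ in details, e.g. the K=5 vs K=8 split for trajd or the std convention; the `aggregate` helper name is hypothetical):

```python
import csv
import statistics
from collections import defaultdict


def aggregate(csv_path, env="ALE/Alien-v5"):
    """Group main_results.csv rows for one env by (method, condition_id)
    and report mean and std of mean_return across seeds — the quantities
    behind the grouped bar chart."""
    groups = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["env"] == env:
                groups[(row["method"], row["condition_id"])].append(
                    float(row["mean_return"]))
    return {
        key: (statistics.mean(vals),
              statistics.stdev(vals) if len(vals) > 1 else 0.0)
        for key, vals in groups.items()
    }
```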

## How to add DALI and cRSSM rows (JAX server workflow)

The shared contract is in the CausalWorldModel repo at `benchmark/eval_shared.py`. Stubs with the framework-agnostic plumbing already in place (manifest parsing, condition filtering, fcntl-locked CSV append, raw-JSON writing, WandB attach) live at:

- `benchmark/run_eval_benchmark_dali.py`
- `benchmark/run_eval_benchmark_crssm.py`

Each has three TODO-JAX hooks (`_build_env`, `_load_agent`, `_rollout_episodes`). Fill those in on the JAX server, add DALI / cRSSM rows to `benchmark/eval_manifest.yaml` (row templates are in each stub's module docstring), then:

```shell
# 1. Pull current CSV so the fcntl-locked append merges cleanly
huggingface-cli download ssubhnil/DRAMA-benchmarks-eval --repo-type dataset \
    --local-dir ./eval_pull main_results.csv

cp eval_pull/main_results.csv results/eval/main_results.csv

# 2. Run eval (appends new rows + writes raw/*.json)
python benchmark/run_eval_benchmark_dali.py \
    --manifest benchmark/eval_manifest.yaml \
    --row-id dali_alien_s1 \
    --results-root results/eval/ --wandb
# (repeat for s2…s5; or launch via benchmark/eval_launcher_lsf.py)

# 3. Push delta back to this dataset
python benchmark/push_eval_to_hf.py \
    --results-root results/eval/ \
    --figures-dir /path/to/figures \
    --manifest benchmark/eval_manifest.yaml
```

After both DALI and cRSSM land, re-run `scripts/atari_alien_ood_report.py` to produce the final five-way figure (TrajD K=5 + TrajD K=8 + DRAMA + DALI + cRSSM).
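Before rebuilding the five-way figure, it is worth confirming that every method has actually landed in the CSV. A small sketch using the method labels from the schema above (the `missing_methods` helper name is hypothetical):

```python
import csv


def missing_methods(csv_path, env="ALE/Alien-v5",
                    expected=("trajd", "drama", "dali", "crssm")):
    # Report which of the four method labels have no rows yet for this env.
    seen = set()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["env"] == env:
                seen.add(row["method"])
    return [m for m in expected if m not in seen]
```

An empty return value means the report script has all four methods (with trajd covering both K=5 and K=8 rows) to draw from.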

## Provenance

- Checkpoints: HuggingFace model repo ssubhnil/cwm.
  - TrajD K=5: `trajd_main/atari_alien/K5_s{1..3}/ckpt` (sweep rhnyvacz).
  - TrajD K=8: `trajd_main/atari_alien/K8_s{1..3}/ckpt` (sweep rhnyvacz).
  - DRAMA: `drama/atari_alien/seed{1..5}` (sweep Drama_latent).
- Training logs / sweep tracker: `docs/ablations.md` in the CausalWorldModel repo.
- Per-run eval log (job IDs, walltime, observations): `docs/eval_results.md`.
- Atari OOD split definitions (mode × difficulty partitioning): `docs/atari_env_splits.md`.

## Last updated

- 2026-05-03 17:13 UTC — git SHA 1ae49db
- Contents at this revision: 3679 CSV rows (3614 raw JSONs).