---
license: apache-2.0
---
# OSCBench Dataset
### Dataset Description
OSCBench is a benchmark dataset designed to evaluate **object state change (OSC)** reasoning in text-to-video (T2V) generation models.
OSCBench organizes prompts into three scenario types:
- **Regular scenarios:** common action–object combinations frequently seen in training data.
- **Novel scenarios:** uncommon but physically plausible action–object pairs that test generalization.
- **Compositional scenarios:** prompts that combine multiple actions or conditions to test compositional reasoning.
### Dataset Statistics
The OSCBench dataset contains **1,120 prompts** organized into three scenario categories:
| Scenario Type | Number of Scenarios | Prompts per Scenario | Total Prompts |
|---------------|---------------------|----------------------|--------------|
| Regular | 108 | 8 | 864 |
| Novel | 20 | 8 | 160 |
| Compositional | 12 | 8 | 96 |
| **Total** | **140** | — | **1,120** |
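As a quick sanity check, the per-category and overall totals in the table follow directly from the scenario counts (each scenario contributes 8 prompts):

```python
# Verify the prompt counts stated in the statistics table.
# Scenario counts and prompts-per-scenario are taken from the table above.
scenarios = {
    "Regular": 108,
    "Novel": 20,
    "Compositional": 12,
}
prompts_per_scenario = 8

# Per-category prompt totals: scenarios x prompts per scenario.
totals = {name: n * prompts_per_scenario for name, n in scenarios.items()}

print(totals)                   # {'Regular': 864, 'Novel': 160, 'Compositional': 96}
print(sum(scenarios.values()))  # 140 scenarios
print(sum(totals.values()))     # 1120 prompts
```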
### Dataset Sources
- **Dataset:** https://huggingface.co/datasets/XianjingHan/OSCBench_Dataset
- **Paper:** https://arxiv.org/abs/2603.11698
- **Project Page:** https://hanxjing.github.io/OSCBench
## Acknowledgements and Citation
If you find this dataset helpful, please consider citing the original work:
```bibtex
@article{han2026oscbench,
title={OSCBench: Benchmarking Object State Change in Text-to-Video Generation},
author={Han, Xianjing and Zhu, Bin and Hu, Shiqi and Li, Franklin Mingzhe and Carrington, Patrick and Zimmermann, Roger and Chen, Jingjing},
journal={arXiv preprint arXiv:2603.11698},
year={2026}
}
```