---
license: apache-2.0
---

# OSCBench Dataset

### Dataset Description

OSCBench is a benchmark dataset designed to evaluate **object state change (OSC)** reasoning in text-to-video (T2V) generation models. OSCBench organizes prompts into three scenario types:

- **Regular scenarios:** common action–object combinations frequently seen in training data.
- **Novel scenarios:** uncommon but physically plausible action–object pairs that test generalization.
- **Compositional scenarios:** prompts that combine multiple actions or conditions to test compositional reasoning.

### Dataset Statistics

The OSCBench dataset contains **1,120 prompts** organized into three scenario categories:

| Scenario Type | Number of Scenarios | Prompts per Scenario | Total Prompts |
|---------------|---------------------|----------------------|---------------|
| Regular | 108 | 8 | 864 |
| Novel | 20 | 8 | 160 |
| Compositional | 12 | 8 | 96 |
| **Total** | **140** | — | **1,120** |

### Dataset Sources

- **Dataset:** https://huggingface.co/datasets/XianjingHan/OSCBench_Dataset
- **Paper:** https://arxiv.org/abs/2603.11698
- **Project Page:** https://hanxjing.github.io/OSCBench

## Acknowledgements and Citation

If you find this dataset helpful, please consider citing the original work:

```bibtex
@article{han2026oscbench,
  title={OSCBench: Benchmarking Object State Change in Text-to-Video Generation},
  author={Han, Xianjing and Zhu, Bin and Hu, Shiqi and Li, Franklin Mingzhe and Carrington, Patrick and Zimmermann, Roger and Chen, Jingjing},
  journal={arXiv preprint arXiv:2603.11698},
  year={2026}
}
```
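The statistics table above can be sanity-checked with a few lines of arithmetic: every scenario contributes exactly 8 prompts, so the per-category and overall totals follow directly from the scenario counts. The sketch below is illustrative only — the dictionary keys are labels taken from the table, not field names in the dataset itself:

```python
# Scenario counts from the OSCBench statistics table.
scenarios = {"Regular": 108, "Novel": 20, "Compositional": 12}
PROMPTS_PER_SCENARIO = 8  # each scenario has a fixed 8 prompts

# Prompts per category = scenario count x prompts per scenario.
per_category = {name: n * PROMPTS_PER_SCENARIO for name, n in scenarios.items()}

total_scenarios = sum(scenarios.values())   # 140
total_prompts = sum(per_category.values())  # 1120

print(per_category)    # {'Regular': 864, 'Novel': 160, 'Compositional': 96}
print(total_scenarios, total_prompts)
```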