Dataset · Modalities: Text · Formats: text · Libraries: Datasets · License: apache-2.0
XianjingHan committed e8192e5 (verified) · 1 parent: 671ac44

Update README.md

Files changed (1): README.md (+11 −6)
@@ -7,7 +7,7 @@ license: apache-2.0
 
 ### Dataset Description
 
-OSCBench is a benchmark dataset designed to evaluate **object state change (OSC)** reasoning in text-to-video (T2V) generation models. It provides structured prompts describing actions applied to objects (e.g., *peeling carrot*, *rolling dough*), where the correct outcome requires generating the appropriate **action-induced object state change**.
+OSCBench is a benchmark dataset designed to evaluate **object state change (OSC)** reasoning in text-to-video (T2V) generation models.
 
 OSCBench organizes prompts into three scenario types:
 
@@ -34,10 +34,15 @@ The OSCBench dataset contains **1,120 prompts** organized into three scenario categories
 - **Project Page:** https://hanxjing.github.io/OSCBench
 
 
-## Dataset Structure
+## Acknowledgements and Citation
 
-The dataset consists of **structured prompt files** describing actions applied to objects.
+If you find this dataset helpful, please consider citing the original work:
 
-Example prompt:
-
-A man is slicing apple in the kitchen.
+```bibtex
+@article{han2026oscbench,
+  title={OSCBench: Benchmarking Object State Change in Text-to-Video Generation},
+  author={Han, Xianjing and Zhu, Bin and Hu, Shiqi and Li, Franklin Mingzhe and Carrington, Patrick and Zimmermann, Roger and Chen, Jingjing},
+  journal={arXiv preprint arXiv:2603.11698},
+  year={2026}
+}
+```
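For context, the README describes OSCBench as a collection of structured text prompts (e.g., "A man is slicing apple in the kitchen."). Below is a minimal, self-contained Python sketch of how such prompt files might be read and deduplicated; the one-prompt-per-line file layout and the helper name `load_prompts` are illustrative assumptions, not part of the dataset's documented API:

```python
import tempfile
from pathlib import Path


def load_prompts(path):
    """Read one prompt per line, skipping blank lines (assumed layout)."""
    text = Path(path).read_text(encoding="utf-8")
    return [line.strip() for line in text.splitlines() if line.strip()]


# Write a tiny sample file mirroring the prompt shown in the README.
with tempfile.NamedTemporaryFile(
    "w", suffix=".txt", delete=False, encoding="utf-8"
) as f:
    f.write("A man is slicing apple in the kitchen.\n")
    f.write("\n")  # blank lines are ignored by the loader
    f.write("A person is peeling a carrot.\n")
    sample_path = f.name

prompts = load_prompts(sample_path)
print(len(prompts))  # 2
print(prompts[0])    # A man is slicing apple in the kitchen.
```

A real evaluation pipeline would feed each prompt to a T2V model and score the generated video for the expected object state change; this sketch only covers the file-reading step.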