Modalities: Image, Text · Formats: parquet · Languages: English
Junc1i committed · Commit 0b130c1 · verified · 1 Parent(s): 1134467

Update README.md

Files changed (1): README.md (+2 -1)

README.md CHANGED
@@ -312,7 +312,8 @@ subsets = ["class2image", "text2image", "text_in_image", "force",
 ds_all = concatenate_datasets([
     load_dataset("CSU-JPG/VP-Bench", name=s, split="train") for s in subsets
 ])
-Evaluation Results
+```
+## Evaluation Results
 We evaluate multiple methods on VP-Bench using three state-of-the-art VLM evaluators (Gemini3, GPT-5.2, Qwen3.5) and human judges. The metric is success ratio (higher is better). Total denotes the average success rate across all eight task categories.
 
 Abbreviations: C2I: class-to-image · T2I: text-to-image · TIE: text-in-image edit · FU: force understanding · TBE: text & bbox edit · TU: trajectory understanding · VME: visual marker edit · DE: doodles edit
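As a minimal sketch of the scoring described above, the per-category success ratio and the macro-averaged Total could be computed as follows. All outcome data and the `success_ratio` helper here are invented for illustration; they are not part of VP-Bench.

```python
# Hypothetical sketch: per-example outcomes (1 = success, 0 = failure)
# for each of the eight VP-Bench task categories. Toy data, not real results.
results = {
    "C2I": [1, 1, 0, 1],
    "T2I": [0, 1, 1, 1],
    "TIE": [1, 0, 0, 1],
    "FU":  [1, 1, 1, 0],
    "TBE": [0, 0, 1, 1],
    "TU":  [1, 1, 0, 0],
    "VME": [1, 0, 1, 1],
    "DE":  [0, 1, 1, 1],
}

def success_ratio(outcomes):
    """Fraction of successful examples in one category (higher is better)."""
    return sum(outcomes) / len(outcomes)

# Per-category success ratios.
per_category = {cat: success_ratio(v) for cat, v in results.items()}

# "Total" as described above: the unweighted mean of the eight
# per-category success ratios (macro-average), not a pooled ratio.
total = sum(per_category.values()) / len(per_category)
```

Note that a macro-average weights each category equally regardless of how many examples it contains, which matches the description of Total as an average across the eight task categories.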