Update README.md

```python
subsets = ["class2image", "text2image", "text_in_image", "force", ...]
ds_all = concatenate_datasets([
    load_dataset("CSU-JPG/VP-Bench", name=s, split="train") for s in subsets
])
```

## Evaluation Results

We evaluate multiple methods on VP-Bench using three state-of-the-art VLM evaluators (Gemini3, GPT-5.2, Qwen3.5) and human judges. The metric is the success ratio (higher is better); Total denotes the average success rate across all eight task categories.

Abbreviations: C2I: class-to-image · T2I: text-to-image · TIE: text-in-image edit · FU: force understanding · TBE: text & bbox edit · TU: trajectory understanding · VME: visual marker edit · DE: doodles edit
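The Total metric described above can be sketched in a few lines. This is a minimal illustration, not the benchmark's actual scoring code: the per-category (successes, attempts) counts below are hypothetical placeholder numbers, and the eight category keys reuse the abbreviations from the list above.

```python
# Hypothetical per-category (successes, attempts) counts — illustrative
# placeholder numbers only, NOT real VP-Bench results.
results = {
    "C2I": (42, 50),  # class-to-image
    "T2I": (38, 50),  # text-to-image
    "TIE": (45, 50),  # text-in-image edit
    "FU":  (30, 50),  # force understanding
    "TBE": (41, 50),  # text & bbox edit
    "TU":  (32, 50),  # trajectory understanding
    "VME": (40, 50),  # visual marker edit
    "DE":  (36, 50),  # doodles edit
}

# Success ratio per category (higher is better).
success_ratio = {k: s / n for k, (s, n) in results.items()}

# "Total" is the unweighted mean of the eight per-category ratios.
total = sum(success_ratio.values()) / len(success_ratio)
print(f"Total: {total:.3f}")
```

Note that Total averages the eight category ratios directly rather than pooling all attempts, so every task category carries equal weight regardless of its size.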