Junc1i committed ba32dc8 (verified · parent: 321cca6): Update README.md

README.md CHANGED
@@ -249,4 +249,278 @@ language:
- en
size_categories:
- 1K<n<10K
---

# VP-Bench

**VP-Bench** is the official evaluation benchmark for [**FlowInOne: Unifying Multimodal Generation as Image-in, Image-out Flow Matching**](https://csu-jpg.github.io/FlowInOne.github.io/). It is a rigorously curated benchmark that assesses **instruction faithfulness**, **spatial precision**, **visual realism**, and **content consistency** across eight distinct visual prompting tasks.

> 📄 **Paper**: FlowInOne: Unifying Multimodal Generation as Image-in, Image-out Flow Matching
> 🌐 **Project Page**: https://csu-jpg.github.io/FlowInOne.github.io/
> 💻 **Code & Evaluation Scripts**: https://github.com/CSU-JPG/FlowInOne

## Evaluation

Our evaluation scripts are available on [GitHub](https://github.com/CSU-JPG/FlowInOne).
## Dataset Subsets

The dataset contains **8 subsets**, each corresponding to a distinct visual instruction task:

| Subset | Abbrev. | Description |
|--------|---------|-------------|
| `class2image` | C2I | Class label rendered in input image → generate corresponding image |
| `text2image` | T2I | Text instruction rendered in input image → generate image |
| `text_in_image` | TIE | Edit text content within an image |
| `force` | FU | Physics-aware force understanding (3 categories) |
| `text_box_control` | TBE | Text and bounding box editing |
| `trajectory` | TU | Trajectory understanding and prediction |
| `vismarker` | VME | Visual marker guided editing (8 categories) |
| `doodles` | DE | Doodle-based editing |
## Dataset Features

- **input_image** (`image`): The input visual prompt image (with rendered instruction).
- **output_image** (`image`): The ground-truth output image.
- **recognized_text** (`string`): The text instruction rendered in the input image (extracted via OCR annotation).
- **subset** (`string`): The subset name.
- **category** (`string`): Sub-category within a subset (empty string if not applicable).
- **image_name** (`string`): The image filename.
- **input_relpath** (`string`): Relative path of the input image within the subset.
- **output_relpath** (`string`): Relative path of the output image within the subset.
- **pair_id** (`string`): Stable SHA1 identifier for each input–output pair.
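The card does not document how `pair_id` is computed. Since it is described as a stable SHA1 identifier, a comparable identifier can be derived locally; the scheme below (hashing the two relative paths) is a hypothetical sketch, not the dataset's actual recipe:

```python
import hashlib

def make_pair_id(input_relpath: str, output_relpath: str) -> str:
    # Hypothetical recipe: SHA1 over the two relative paths joined by a
    # separator byte. The dataset's real pair_id may be computed differently.
    h = hashlib.sha1()
    h.update(input_relpath.encode("utf-8"))
    h.update(b"\x00")
    h.update(output_relpath.encode("utf-8"))
    return h.hexdigest()

pid = make_pair_id("vismarker/input/0001.png", "vismarker/output/0001.png")
print(len(pid))  # a SHA1 hex digest is always 40 characters
```

Any deterministic hash over a pair's identifying fields gives the same stability property: the id never changes across re-downloads, so it can serve as a key for caches or evaluation logs.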
## Loading the Dataset

Each subset is a separate configuration of the same repository. For example, to load `class2image`:

```python
from datasets import load_dataset

ds = load_dataset("CSU-JPG/VP-Bench", "class2image", split="train")
```

The other subsets load the same way; just substitute the configuration name (`text2image`, `text_in_image`, `force`, `text_box_control`, `trajectory`, `vismarker`, or `doodles`).

### Load All Subsets

```python
from datasets import load_dataset, concatenate_datasets

subsets = ["class2image", "text2image", "text_in_image", "force",
           "text_box_control", "trajectory", "vismarker", "doodles"]
ds_all = concatenate_datasets([
    load_dataset("CSU-JPG/VP-Bench", name=s, split="train") for s in subsets
])
```
## Evaluation Results

We evaluate multiple methods on VP-Bench using three state-of-the-art VLM evaluators (Gemini3, GPT-5.2, Qwen3.5) and human judges. The metric is the success ratio (higher is better); **Total** denotes the average success rate across all eight task categories.

Abbreviations: C2I: class-to-image · T2I: text-to-image · TIE: text-in-image edit · FU: force understanding · TBE: text & bbox edit · TU: trajectory understanding · VME: visual marker edit · DE: doodles edit
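Concretely, a task's success ratio is the fraction of its input–output pairs that the evaluator judges successful. A minimal sketch, using made-up per-pair judgments for illustration:

```python
from collections import defaultdict

# Hypothetical per-pair judgments: (subset, judged_successful).
judgments = [
    ("class2image", True), ("class2image", False),
    ("doodles", True), ("doodles", True),
]

stats = defaultdict(lambda: [0, 0])  # subset -> [successes, total]
for subset, ok in judgments:
    stats[subset][0] += int(ok)
    stats[subset][1] += 1

success_ratio = {s: succ / tot for s, (succ, tot) in stats.items()}
print(success_ratio)  # {'class2image': 0.5, 'doodles': 1.0}
```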
### Evaluator: Gemini3

| Method | C2I | T2I | TIE | FU | TBE | TU | VME | DE | Total |
|--------|-----|-----|-----|-----|-----|-----|-----|-----|-------|
| Nano Banana (Google, 2025) | .650 | .980 | .423 | .520 | .614 | .020 | .548 | .721 | .560 |
| Omnigen2 (Wu et al., 2025) | .020 | .020 | .017 | .020 | .000 | .000 | .000 | .000 | .007 |
| Kontext (Labs et al., 2025) | .050 | .020 | .048 | .007 | .000 | .020 | .010 | .000 | .019 |
| Qwen-IE-2509 (Wu et al., 2025) | .230 | .040 | .069 | .000 | .000 | .020 | .023 | .000 | .048 |
| FlowInOne (Ours) | .890 | .700 | .355 | .727 | .302 | .520 | .292 | .535 | .540 |
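Because Total is the unweighted mean over the eight task columns, any row can be sanity-checked; e.g. FlowInOne under the Gemini3 evaluator:

```python
# FlowInOne per-task success ratios under the Gemini3 evaluator (C2I..DE).
scores = [0.890, 0.700, 0.355, 0.727, 0.302, 0.520, 0.292, 0.535]
total = sum(scores) / len(scores)
print(round(total, 3))  # 0.54, matching the reported Total of .540
```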
### Evaluator: GPT-5.2

| Method | C2I | T2I | TIE | FU | TBE | TU | VME | DE | Total |
|--------|-----|-----|-----|-----|-----|-----|-----|-----|-------|
| Nano Banana (Google, 2025) | .680 | .959 | .152 | .127 | .040 | .136 | .302 | – | .302 |
| Omnigen2 (Wu et al., 2025) | .110 | .020 | .000 | .000 | .000 | .000 | .000 | .023 | .019 |
| Kontext (Labs et al., 2025) | .090 | .020 | .028 | .020 | .000 | .080 | .003 | .093 | .042 |
| Qwen-IE-2509 (Wu et al., 2025) | .240 | .120 | .080 | .020 | .022 | .060 | .020 | .047 | .076 |
| FlowInOne (Ours) | .850 | .800 | .079 | .500 | .116 | .240 | .083 | .465 | .392 |
### Evaluator: Qwen3.5

| Method | C2I | T2I | TIE | FU | TBE | TU | VME | DE | Total |
|--------|-----|-----|-----|-----|-----|-----|-----|-----|-------|
| Nano Banana (Google, 2025) | .600 | .959 | .386 | .367 | .257 | .040 | .321 | .744 | .469 |
| Omnigen2 (Wu et al., 2025) | .030 | .020 | .017 | .034 | .000 | .000 | .003 | .047 | .019 |
| Kontext (Labs et al., 2025) | .050 | .020 | .042 | .133 | .000 | .060 | .047 | .093 | .056 |
| Qwen-IE-2509 (Wu et al., 2025) | .270 | .060 | .080 | .087 | .047 | .040 | .033 | .047 | .083 |
| FlowInOne (Ours) | .859 | .720 | .354 | .713 | .272 | .320 | .306 | .481 | .503 |
### Evaluator: Human

| Method | C2I | T2I | TIE | FU | TBE | TU | VME | DE | Total |
|--------|-----|-----|-----|-----|-----|-----|-----|-----|-------|
| Nano Banana (Google, 2025) | .602 | .904 | .271 | .250 | .200 | .050 | .229 | .742 | .406 |
| Omnigen2 (Wu et al., 2025) | .000 | .000 | .000 | .000 | .000 | .000 | .000 | .000 | .000 |
| Kontext (Labs et al., 2025) | .000 | .000 | .043 | .000 | .000 | .000 | .000 | .100 | .018 |
| Qwen-IE-2509 (Wu et al., 2025) | .067 | .000 | .029 | .000 | .000 | .000 | .000 | .000 | .012 |
| FlowInOne (Ours) | .800 | .645 | .242 | .705 | .255 | .280 | .255