Modalities: Image, Text · Formats: parquet · Languages: English
Junc1i committed f60e653 (verified) · Parent: 1651a96

Update README.md

Files changed (1): README.md (+4 −0)
```diff
@@ -317,6 +317,7 @@ ds_all = concatenate_datasets([
 We evaluate multiple methods on VP-Bench using three state-of-the-art VLM evaluators (Gemini3, GPT-5.2, Qwen3.5) and human judges. The metric is success ratio (higher is better). Total denotes the average success rate across all eight task categories.
 
 Abbreviations: C2I: class-to-image · T2I: text-to-image · TIE: text-in-image edit · FU: force understanding · TBE: text & bbox edit · TU: trajectory understanding · VME: visual marker edit · DE: doodles edit
+
 **Evaluator: Gemini3**
 | Method | C2I | T2I | TIE | FU | TBE | TU | VME | DE | **Total** |
 |--------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---------:|
@@ -325,6 +326,7 @@ Abbreviations: C2I: class-to-image · T2I: text-to-image · TIE: text-in-image e
 | Kontext (Labs et al., 2025) | .050 | .020 | .048 | .007 | .000 | .020 | .010 | .000 | .019 |
 | Qwen-IE-2509 (Wu et al., 2025) | .230 | .040 | .069 | .000 | .000 | .020 | .023 | .000 | .048 |
 | **FlowInOne (Ours)** | **.890** | **.700** | **.355** | **.727** | **.302** | **.520** | **.292** | **.535** | **.540** |
+
 **Evaluator: GPT-5.2**
 | Method | C2I | T2I | TIE | FU | TBE | TU | VME | DE | **Total** |
 |--------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---------:|
@@ -333,6 +335,7 @@ Abbreviations: C2I: class-to-image · T2I: text-to-image · TIE: text-in-image e
 | Kontext (Labs et al., 2025) | .090 | .020 | .028 | .020 | .000 | .080 | .003 | .093 | .042 |
 | Qwen-IE-2509 (Wu et al., 2025) | .240 | .120 | .080 | .020 | .022 | .060 | .020 | .047 | .076 |
 | **FlowInOne (Ours)** | **.850** | **.800** | .079 | **.500** | **.116** | **.240** | .083 | **.465** | **.392** |
+
 **Evaluator: Qwen3.5**
 | Method | C2I | T2I | TIE | FU | TBE | TU | VME | DE | **Total** |
 |--------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---------:|
@@ -341,6 +344,7 @@ Abbreviations: C2I: class-to-image · T2I: text-to-image · TIE: text-in-image e
 | Kontext (Labs et al., 2025) | .050 | .020 | .042 | .133 | .000 | .060 | .047 | .093 | .056 |
 | Qwen-IE-2509 (Wu et al., 2025) | .270 | .060 | .080 | .087 | .047 | .040 | .033 | .047 | .083 |
 | **FlowInOne (Ours)** | **.859** | **.720** | **.354** | **.713** | **.272** | **.320** | **.306** | **.481** | **.503** |
+
 **Evaluator: Human**
 | Method | C2I | T2I | TIE | FU | TBE | TU | VME | DE | **Total** |
 |--------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---------:|
```
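Per the caption in the diff, the Total column is the average success rate across the eight task categories, i.e. the unweighted mean of the per-category success ratios. A minimal sketch checking that rule against one row (the FlowInOne values from the Gemini3 table above; the variable names are illustrative, not from the dataset):

```python
# Total = unweighted mean of the eight per-category success ratios.
# Values below are the FlowInOne row from the Gemini3 evaluator table.
scores = {
    "C2I": 0.890, "T2I": 0.700, "TIE": 0.355, "FU": 0.727,
    "TBE": 0.302, "TU": 0.520, "VME": 0.292, "DE": 0.535,
}

total = sum(scores.values()) / len(scores)
print(f"{total:.3f}")  # prints 0.540, matching the table's Total
```

Rounded to three decimals this reproduces the reported .540, consistent with Total being a plain per-category average rather than a per-sample average.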