Update README.md
#41
by shim0114 - opened
README.md
CHANGED
@@ -22,15 +22,11 @@ https://arxiv.org/abs/2511.22989
 
 ## Github Repository
 
+For the usage of this benchmark, please see the Github Repository.
 https://github.com/matsuolab/multibanana/
 
 ## Dataset Summary
-
-However, the existing benchmark datasets often focus on generation with a single or a few reference images, which prevents us from measuring how model performance advances, or from pointing out model weaknesses, under different multi-reference conditions.
-In addition, their task definitions are still vague, typically limited to axes such as "what to edit" or "how many references are given", and therefore fail to capture the intrinsic difficulty of multi-reference settings.
-To address this gap, we introduce MultiBanana, which is carefully designed to assess the edge of model capabilities by widely covering multi-reference-specific problems at scale: (1) varying the number of references (up to 8), (2) domain mismatch among references (e.g., photo vs. anime), (3) scale mismatch between reference and target scenes, (4) references containing rare concepts (e.g., a red banana), and (5) multilingual textual references for rendering.
-Our analysis of a variety of text-to-image models reveals their relative strengths, typical failure modes, and areas for improvement.
-MultiBanana will be released as an open benchmark to push the boundaries and establish a standardized basis for fair comparison in multi-reference image generation.
+MultiBanana-Bench comprises 32 tasks designed to evaluate how well image generation models can faithfully incorporate information from multiple reference images. We report evaluation scores using **Qwen3-VL-8B-Instruct**, a fixed, open-weight judge model. We hope this benchmark, along with its evaluation framework using an open-source VLM as a judge, will serve as a foundation for future research in multi-reference text-to-image generation.
 
 ## Acknowledgement
 This dataset partially incorporates a subset of images from the LAION-5B dataset.