Upload README.md with huggingface_hub

README.md CHANGED

@@ -5,118 +5,22 @@ task_categories:
 pretty_name: RISEBench
 size_categories:
 - n<1K
-configs:
-- config_name: default
-  data_files:
-  - split: test
-    path: data/test-*
-dataset_info:
-  features:
-  - name: subset
-    dtype: string
-  - name: prompt
-    dtype: string
-  - name: source_image
-    dtype: image
-  - name: reference_image
-    dtype: image
-  - name: reference
-    dtype: string
-  - name: reference_txt
-    dtype: string
-  - name: consistency_free
-    dtype: bool
-  - name: reasoning_img
-    dtype: bool
-  - name: reasoning_wo_ins
-    dtype: bool
-  splits:
-  - name: test
-    num_bytes: 283754127
-    num_examples: 360
-  download_size: 283737698
-  dataset_size: 283754127
 ---
 
 # RISEBench
 
-
-## Source
-
-- Official dataset repo: `PhoenixZ/RISEBench`
-- Official code repo: `VisionXLab/RISEBench`
-- Source artifacts used here:
-  - `datav2_total_w_subtask.json`
-  - `data.zip`
-
-## What is included
-
-This repo contains the full official benchmark input images and metadata, reorganized into one `test` split with a stable schema.
-
-Category counts:
-
-- `temporal_reasoning`: 85 rows
-- `causal_reasoning`: 90 rows
-- `spatial_reasoning`: 100 rows
-- `logical_reasoning`: 85 rows
-
-Total rows: `360`
-
-## Normalization choices
-
-- Unified the benchmark into a single `test` split.
-- Stored benchmark routing fields as `suite=risebench` and `subset=<category>`.
-- Preserved the official benchmark id in `sample_id`.
-- Stored source images in `source_image` as Hugging Face `Image` features.
-- Stored optional reference images in `reference_image` when provided by the official data.
-- Preserved the official instruction as both `prompt` and `turn_prompts=[prompt]` for compatibility with the shared image-edit runtime.
-- Preserved official evaluation hints such as `consistency_free`, `reasoning_img`, `reasoning_wo_ins`, and `subtask`.
+Minimized, normalized benchmark dataset for `T2I-Eval`.
 
 ## Schema
 
-
-- `
-- `
-- `
-- `
-- `
-- `
-- `
-- `turn_prompts`: list containing the prompt
-- `num_turns`: always `1`
-- `source_image_relpath`: original relative source path
-- `source_image`: source image
-- `reference_image_relpath`: optional relative reference image path
-- `reference_image`: optional reference image
-- `reference`: official reference description text
-- `reference_txt`: optional official text answer for logical tasks
-- `reference_old`: optional legacy reference description
-- `problem`: optional official problem marker
-- `consistency_free`: whether appearance-consistency scoring is skipped
-- `reasoning_img`: whether the official reasoning judge uses an additional image input
-- `reasoning_img_op`: optional official flag preserved from source
-- `reasoning_wo_ins`: whether the official logical judge omits the instruction
-- `eval_protocol`: normalized evaluation protocol label
-- `source_repo`: upstream dataset repo id
-- `source_metadata_file`: upstream metadata filename
-- `source_archive_member`: original member path inside `data.zip`
-- `reference_archive_member`: original reference-image member path inside `data.zip`
-
-## Usage
+- `subset`: reasoning category used for benchmark routing
+- `prompt`: editing instruction
+- `source_image`: input image
+- `reference_image`: optional reference answer image for some tasks
+- `reference`: optional text reference description
+- `reference_txt`: optional text answer for logical tasks
+- `consistency_free`: whether consistency judging is skipped
+- `reasoning_img`: whether reasoning judging also uses the input image
+- `reasoning_wo_ins`: whether logical reasoning judging omits the instruction
 
-```python
-from datasets import load_dataset
-
-ds = load_dataset("Jialuo21/RISEBench", split="test")
-print(ds[0]["sample_id"], ds[0]["category"], ds[0]["prompt"])
-```
-
-Example access pattern:
-
-```python
-row = ds[0]
-source = row["source_image"]
-reference = row["reference_image"]
-```
+Total rows: `360`
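
The new card drops the old Usage section; as a quick orientation for the schema above, here is a minimal loading sketch. It assumes the `Jialuo21/RISEBench` repo id and `test` split from the previous card's Usage snippet still apply; the field names come from the new Schema list.

```python
from datasets import load_dataset

# Assumed repo id, taken from the previous card's Usage snippet.
ds = load_dataset("Jialuo21/RISEBench", split="test")

row = ds[0]
print(row["subset"], row["prompt"])  # reasoning category and editing instruction

source = row["source_image"]         # decoded to a PIL image by the Image feature
reference = row["reference_image"]   # may be None for tasks without a reference image

# Evaluation hints preserved from the official metadata.
if row["consistency_free"]:
    print("consistency judging is skipped for this sample")
if row["reasoning_img"]:
    print("reasoning judging also sees the input image")
if row["reasoning_wo_ins"]:
    print("logical judging omits the instruction")
```

Per-category slices follow the same pattern, e.g. `ds.filter(lambda r: r["subset"] == "logical_reasoning")`.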