---
license: apache-2.0
tags:
  - code
size_categories:
  - n<1K
dataset_info:
  features:
    - name: id
      dtype: string
    - name: source_prompt
      dtype: string
    - name: target_prompt
      dtype: string
    - name: image
      dtype: image
    - name: image_width
      dtype: int32
    - name: image_height
      dtype: int32
  splits:
    - name: test
      num_bytes: 29728028
      num_examples: 80
  download_size: 29723807
  dataset_size: 29728028
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

# SNR-Edit: Structure-Aware Noise Rectification for Inversion-Free Flow-Based Editing

Lifan Jiang   Boxi Wu   Yuhang Pei   Tianrun Wu  
Yongyuan Chen   Yan Zhao   Shiyu Yu   Deng Cai
State Key Lab of CAD&CG, Zhejiang University
Submitted to ICML 2026

Paper    Project Page    GitHub


## 🚧 Coming Soon

SNR-Bench comprises 80 high-quality image-editing cases. Approximately 50% are sampled from PIE-Bench to ensure continuity with standard benchmarks, and the remaining 50% are collected from the web to introduce richer textures and more complex real-world scenes. The benchmark covers four editing operations: adjust, change, remove, and add. To minimize ambiguity and improve instruction consistency, all editing instructions for the non-PIE-Bench subset are manually written, refined, and verified through human annotation.
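Per the `dataset_info.features` block above, each record carries an `id`, a `source_prompt`, a `target_prompt`, the `image` itself, and its pixel dimensions. A minimal sketch of checking a record against that schema (the field values below are hypothetical, invented purely for illustration):

```python
# Expected field types, mirroring the `dataset_info.features` block above.
# The `image` feature is decoded to a PIL.Image by the `datasets` library,
# so it is only checked for presence here.
EXPECTED_TYPES = {
    "id": str,
    "source_prompt": str,
    "target_prompt": str,
    "image_width": int,
    "image_height": int,
}

def validate_record(record: dict) -> list[str]:
    """Return a list of schema problems; an empty list means the record conforms."""
    problems = []
    for name, typ in EXPECTED_TYPES.items():
        if name not in record:
            problems.append(f"missing field: {name}")
        elif not isinstance(record[name], typ):
            problems.append(f"{name}: expected {typ.__name__}")
    if "image" not in record:
        problems.append("missing field: image")
    return problems

# Hypothetical record for illustration only (not taken from the dataset).
sample = {
    "id": "pie_000",
    "source_prompt": "a red car parked on a street",
    "target_prompt": "a blue car parked on a street",
    "image": object(),  # stands in for the decoded PIL image
    "image_width": 512,
    "image_height": 512,
}

print(validate_record(sample))  # → []
```

Once released, the test split should be loadable with `load_dataset("qingjiu151/SNR-Bench", split="test")` (repo id inferred from this page and subject to change).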

The dataset is currently being prepared and will be released soon.

This repository contains the SNR-Bench dataset used in the paper SNR-Edit: Structure-Aware Noise Rectification for Inversion-Free Flow-Based Editing.

## 📌 Citation

If you find this dataset helpful, please consider citing:

```bibtex
@misc{jiang2026snreditstructureawarenoiserectification,
      title={SNR-Edit: Structure-Aware Noise Rectification for Inversion-Free Flow-Based Editing},
      author={Lifan Jiang and Boxi Wu and Yuhang Pei and Tianrun Wu and Yongyuan Chen and Yan Zhao and Shiyu Yu and Deng Cai},
      year={2026},
      eprint={2601.19180},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2601.19180},
}
```