Aleksandar committed · verified
Commit ab30004 · 1 Parent(s): cbd6ac8

Update README.md

Files changed (1): README.md (+58 −2)
README.md CHANGED
```diff
@@ -32,7 +32,7 @@ dataset_info:
   - name: id
     dtype: int32
   splits:
-  - name: train
+  - name: test
     num_bytes: 4106538055.5
     num_examples: 1277
   download_size: 703956134
@@ -40,6 +40,62 @@ dataset_info:
 configs:
 - config_name: default
   data_files:
-  - split: train
+  - split: test
     path: data/train-*
+task_categories:
+- image-to-image
+language:
+- en
+tags:
+- Exemplar
+- Editing
+- Image2Image
+- Diffusion
+pretty_name: Top-Bench-X
+size_categories:
+- 1K<n<10K
 ---
```

# EditCLIP: Representation Learning for Image Editing

<div align="center">

[📑 Paper](https://arxiv.org/abs/2503.20318)
[💻 Project Page](https://qianwangx.github.io/EditCLIP/)
[🐙 Github](https://github.com/QianWangX/EditCLIP)

</div>

## 📚 Introduction
The **TOP-Bench-X** dataset offers **Query** and **Exemplar** image pairs tailored for exemplar-based image editing. We built it by adapting the TOP-Bench dataset from [InstructBrush](https://royzhao926.github.io/InstructBrush/). Specifically, we use the original training split to generate exemplar images and the test split to supply their corresponding queries. In total, TOP-Bench-X comprises **1,277** samples, including **257** distinct exemplars and **124** unique queries.

<img src="assets/teaser_editclip.png" alt="Teaser figure of EditCLIP" width="100%">

## 💡 Abstract

We introduce EditCLIP, a novel representation-learning approach for image editing. Our method learns a unified representation of edits by jointly encoding an input image and its edited counterpart, effectively capturing their transformation. To evaluate its effectiveness, we employ EditCLIP to solve two tasks: exemplar-based image editing and automated edit evaluation. In exemplar-based image editing, we replace text-based instructions in InstructPix2Pix with EditCLIP embeddings computed from a reference exemplar image pair. Experiments demonstrate that our approach outperforms state-of-the-art methods while being more efficient and versatile. For automated evaluation, EditCLIP assesses image edits by measuring the similarity between the EditCLIP embedding of a given image pair and either a textual editing instruction or the EditCLIP embedding of another reference image pair. Experiments show that EditCLIP aligns more closely with human judgments than existing CLIP-based metrics, providing a reliable measure of edit quality and structural preservation.
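
To make the evaluation idea concrete, here is a minimal, hypothetical sketch, not the authors' implementation: the learned joint EditCLIP encoder is stood in for by a plain element-wise embedding difference, and an edit is scored by cosine similarity between two such edit representations. All function names and the toy vectors are illustrative.

```python
from math import sqrt

def edit_embedding(emb_before, emb_after):
    # Toy stand-in for EditCLIP's joint encoding of an (input, edited) pair:
    # here the "edit" is just the element-wise difference of the two image
    # embeddings; the real model learns a joint representation instead.
    return [a - b for a, b in zip(emb_after, emb_before)]

def cosine(u, v):
    # Standard cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def edit_similarity(query_pair, exemplar_pair):
    # Higher means the query edit resembles the exemplar edit more closely.
    return cosine(edit_embedding(*query_pair), edit_embedding(*exemplar_pair))

# Toy 3-d "image embeddings": applying the same shift to two different images
# should yield an edit similarity near 1.
query = ([0.1, 0.2, 0.3], [0.6, 0.7, 0.8])
exemplar = ([0.4, 0.1, 0.5], [0.9, 0.6, 1.0])
score = edit_similarity(query, exemplar)
```

Comparing an edit representation against a text-instruction embedding, the other evaluation mode described above, would reuse `cosine` unchanged.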

## 🧠 Data explained

Each sample consists of four images (two before/after pairs) plus metadata, specifically:

1. *input_test* – the query image \(I_q\) from the test split (the “before” image you want to edit)
2. *input_gt* – the ground-truth edited version of that query image (the “after” image for the test)
3. *exemplar_input* – the exemplar’s input image \(I_i\) from the training split (the “before” image of the exemplar)
4. *exemplar_edit* – the exemplar’s edited image \(I_e\) from the training split (the “after” image of the exemplar)
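
As an illustration of how these four fields pair up, here is a small hypothetical sketch: real rows hold decoded images when loaded through the 🤗 `datasets` library, so plain strings stand in for them here, and `split_pairs` is an illustrative helper, not part of the dataset.

```python
# Placeholder row mirroring the field names listed above; strings stand in
# for the actual images.
sample = {
    "input_test": "query 'before' image I_q (test split)",
    "input_gt": "ground-truth edited query image",
    "exemplar_input": "exemplar 'before' image I_i (train split)",
    "exemplar_edit": "exemplar 'after' image I_e (train split)",
}

def split_pairs(row):
    # Group a row into the (before, after) pairs used in exemplar-based
    # editing: the exemplar pair specifies the edit to apply, while the
    # query pair is what gets edited and evaluated.
    exemplar_pair = (row["exemplar_input"], row["exemplar_edit"])
    query_pair = (row["input_test"], row["input_gt"])
    return exemplar_pair, query_pair

exemplar_pair, query_pair = split_pairs(sample)
```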

## 🌟 Citation

```bibtex
@article{wang2025editclip,
  title={EditCLIP: Representation Learning for Image Editing},
  author={Wang, Qian and Cvejic, Aleksandar and Eldesokey, Abdelrahman and Wonka, Peter},
  journal={arXiv preprint arXiv:2503.20318},
  year={2025}
}
```

## 💳 License

This dataset is primarily a variation of TOP-Bench; please confirm the license terms with the original authors.