Files changed (1)
  1. README.md +36 -7
README.md CHANGED
@@ -1,7 +1,36 @@
- ---
- license: mit
- language:
- - en
- base_model:
- - stable-diffusion-v1-5/stable-diffusion-v1-5
- ---
+ ---
+ license: mit
+ language:
+ - en
+ base_model:
+ - stable-diffusion-v1-5/stable-diffusion-v1-5
+ datasets:
+ - timbrooks/instructpix2pix-clip-filtered
+ - Aleksandar/Top-Bench-X
+ ---
+ # EditCLIP: Representation Learning for Image Editing
+ [![Paper](https://img.shields.io/badge/arXiv-2503.20318-b31b1b)](https://arxiv.org/abs/2503.20318)
+ [![Project Page](https://img.shields.io/badge/🌐-Project_Page-blue)](https://qianwangx.github.io/EditCLIP/)
+ [![GitHub](https://img.shields.io/badge/GitHub-Repository-black?logo=github)](https://github.com/QianWangX/EditCLIP)
+ [![ICCV 2025](https://img.shields.io/badge/📷-Published_at_ICCV_2025-blue)](https://iccv2025.thecvf.com/)
+
+ ## 💡 Abstract
+
+ We introduce EditCLIP, a novel representation-learning approach for image editing. Our method learns a unified representation of edits by jointly encoding an input image and its edited counterpart, effectively capturing their transformation. To evaluate its effectiveness, we employ EditCLIP to solve two tasks: exemplar-based image editing and automated edit evaluation. In exemplar-based image editing, we replace text-based instructions in InstructPix2Pix with EditCLIP embeddings computed from a reference exemplar image pair. Experiments demonstrate that our approach outperforms state-of-the-art methods while being more efficient and versatile. For automated evaluation, EditCLIP assesses image edits by measuring the similarity between the EditCLIP embedding of a given image pair and either a textual editing instruction or the EditCLIP embedding of another reference image pair. Experiments show that EditCLIP aligns more closely with human judgments than existing CLIP-based metrics, providing a reliable measure of edit quality and structural preservation.
+
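+ As a rough illustration of the automated-evaluation protocol above, the sketch below scores an edit by cosine similarity between a pair embedding and a reference embedding. This is a minimal sketch, not the released implementation: the embeddings are random stand-ins for the outputs of EditCLIP's image-pair and text encoders (see the GitHub repository for the actual model), and the 768-dimensional width is an assumption borrowed from CLIP ViT-L/14.
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def edit_similarity(pair_emb: torch.Tensor, ref_emb: torch.Tensor) -> torch.Tensor:
+     """Cosine similarity between an EditCLIP pair embedding and a reference
+     embedding (a text-instruction embedding or another pair embedding)."""
+     return F.cosine_similarity(pair_emb, ref_emb, dim=-1)
+
+ # Random stand-ins; real embeddings would come from the EditCLIP encoders.
+ # The 768-dim width is an assumption (CLIP ViT-L/14), not a confirmed spec.
+ pair_emb = torch.randn(1, 768)
+ text_emb = torch.randn(1, 768)
+ print(edit_similarity(pair_emb, text_emb).item())  # scalar in [-1, 1]
+ ```
+ The same function covers both evaluation modes: pass a text-instruction embedding as `ref_emb` to score against an instruction, or another pair embedding to score against a reference edit.
+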
+ ## 📊 Benchmark
+ We evaluate EditCLIP using **Top-Bench-X**, a benchmark for image-editing evaluation (a minimal loading sketch follows the list):
+ - **Dataset:** Top-Bench-X
+ - **Link:** https://huggingface.co/datasets/Aleksandar/Top-Bench-X
+
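+ A minimal loading sketch with the `datasets` library; the split names and column layout are not documented here, so inspect the returned object before relying on them.
+ ```python
+ from datasets import load_dataset
+
+ # Downloads Aleksandar/Top-Bench-X from the Hugging Face Hub.
+ # Split names and columns are assumptions to verify; print to inspect.
+ ds = load_dataset("Aleksandar/Top-Bench-X")
+ print(ds)  # shows available splits and features
+ ```
+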
+ ## 🌟 Citation
+ ```bibtex
+ @inproceedings{wang2025editclip,
+   title={EditCLIP: Representation Learning for Image Editing},
+   author={Wang, Qian and Cveji{\'c}, Aleksandar and Eldesokey, Abdelrahman and Wonka, Peter},
+   booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
+   pages={15960--15970},
+   year={2025}
+ }
+ ```