Update README.md #1
by Aleksandar - opened

README.md CHANGED
@@ -1,7 +1,36 @@
----
-license: mit
-language:
-- en
-base_model:
-- stable-diffusion-v1-5/stable-diffusion-v1-5
-
+---
+license: mit
+language:
+- en
+base_model:
+- stable-diffusion-v1-5/stable-diffusion-v1-5
+datasets:
+- timbrooks/instructpix2pix-clip-filtered
+- Aleksandar/Top-Bench-X
+---
+# EditCLIP: Representation Learning for Image Editing
+[arXiv](https://arxiv.org/abs/2503.20318)
+[Project Page](https://qianwangx.github.io/EditCLIP/)
+[Code](https://github.com/QianWangX/EditCLIP)
+[ICCV 2025](https://iccv2025.thecvf.com/)
+
+## 💡 Abstract
+
+We introduce EditCLIP, a novel representation-learning approach for image editing. Our method learns a unified representation of edits by jointly encoding an input image and its edited counterpart, effectively capturing their transformation. To evaluate its effectiveness, we employ EditCLIP to solve two tasks: exemplar-based image editing and automated edit evaluation. In exemplar-based image editing, we replace text-based instructions in InstructPix2Pix with EditCLIP embeddings computed from a reference exemplar image pair. Experiments demonstrate that our approach outperforms state-of-the-art methods while being more efficient and versatile. For automated evaluation, EditCLIP assesses image edits by measuring the similarity between the EditCLIP embedding of a given image pair and either a textual editing instruction or the EditCLIP embedding of another reference image pair. Experiments show that EditCLIP aligns more closely with human judgments than existing CLIP-based metrics, providing a reliable measure of edit quality and structural preservation.
+
+## 📊 Benchmark
+We evaluate EditCLIP using **Top-Bench-X**, a benchmark for image editing evaluation:
+- **Dataset:** Top-Bench-X
+- **Link:** https://huggingface.co/datasets/Aleksandar/Top-Bench-X
+
+
+## 📚 Citation
+```bibtex
+@inproceedings{wang2025editclip,
+  title={EditCLIP: Representation Learning for Image Editing},
+  author={Wang, Qian and Cveji{\'c}, Aleksandar and Eldesokey, Abdelrahman and Wonka, Peter},
+  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
+  pages={15960--15970},
+  year={2025}
+}
+```
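
For readers who want to see what the automated-evaluation use described in the abstract amounts to in code, here is a minimal sketch. The `encode_image_pair` and `encode_text` calls it assumes are hypothetical placeholders for the encoders released at https://github.com/QianWangX/EditCLIP; only the normalized cosine-similarity scoring follows standard CLIP practice.

```python
# Minimal sketch of EditCLIP-style automated edit evaluation (see abstract).
# Assumes a hypothetical `model` exposing two encoders:
#   model.encode_image_pair(src, edited) -> embedding of the edit (image pair)
#   model.encode_text(instruction)       -> embedding of a text instruction
# The real encoders ship with the EditCLIP repository; only the
# cosine-similarity scoring below is standard CLIP practice.
import torch
import torch.nn.functional as F

def edit_vs_text_score(edit_emb: torch.Tensor, text_emb: torch.Tensor) -> float:
    """Score how well an (input, edited) pair matches a textual instruction."""
    return F.cosine_similarity(edit_emb, text_emb, dim=-1).item()

def edit_vs_exemplar_score(edit_emb: torch.Tensor, ref_emb: torch.Tensor) -> float:
    """Score whether a candidate edit applies the same transformation as a
    reference exemplar pair."""
    return F.cosine_similarity(edit_emb, ref_emb, dim=-1).item()

if __name__ == "__main__":
    # Stand-in embeddings; in practice these come from the EditCLIP encoders.
    torch.manual_seed(0)
    edit_emb, text_emb = torch.randn(512), torch.randn(512)
    print(f"edit-vs-text similarity: {edit_vs_text_score(edit_emb, text_emb):.3f}")
```

Per the abstract, the same pair embedding also replaces the text conditioning in InstructPix2Pix for exemplar-based editing; the scoring above covers only the evaluation use.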