peter-sushko committed
Commit 34da673 (verified)
1 Parent(s): 0a9e68d

Update README.md

Files changed (1):
  1. README.md +55 -6
README.md CHANGED

@@ -1,10 +1,61 @@
 ---
-license: mit
+license: cc
+tags:
+- image-to-image
 ---
 
-# InstructPix2Pix checkpoint finetuned with the [RealEdit](https://peter-sushko.github.io/RealEdit/) dataset
-
-## 1 · Environment preparation
+# REALEDIT: Reddit Edits As a Large-scale Empirical Dataset for Image Transformations
+Project page: https://peter-sushko.github.io/RealEdit/
+Data: https://huggingface.co/datasets/peter-sushko/RealEdit
+
+<img src='https://instruct-pix2pix.timothybrooks.com/teaser.jpg'/>
+
+This is the model introduced in the RealEdit paper. There are two ways to run inference: via the Diffusers library or via the original InstructPix2Pix pipeline.
+
+### Option 1: Diffusers library
+
+Install the diffusers and transformers libraries:
+
+```bash
+pip install diffusers accelerate safetensors transformers
+```
+
+Download weights adapted for diffusers:
+
+WEIGHTS
+
+```python
+import requests
+import torch
+from PIL import Image, ImageOps
+from diffusers import StableDiffusionInstructPix2PixPipeline, EulerAncestralDiscreteScheduler
+
+model_id = "timbrooks/instruct-pix2pix"
+pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16, safety_checker=None)
+pipe.to("cuda")
+pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
+
+url = "https://raw.githubusercontent.com/timothybrooks/instruct-pix2pix/main/imgs/example.jpg"
+def download_image(url):
+    image = Image.open(requests.get(url, stream=True).raw)
+    image = ImageOps.exif_transpose(image)
+    return image.convert("RGB")
+image = download_image(url)
+
+prompt = "turn him into cyborg"
+images = pipe(prompt, image=image, num_inference_steps=10, image_guidance_scale=1).images
+images[0].save("output.jpg")
+```
+
+### Option 2: Original InstructPix2Pix pipeline
 
 Clone the repository and set up the directory structure:
 
@@ -23,8 +74,6 @@ cd checkpoints
 
 Return to the repo root and follow the [InstructPix2Pix installation guide](https://github.com/timothybrooks/instruct-pix2pix) to set up the environment.
 
-## 2 · Running the model
-
 ### Edit a single image
 
 ```bash
@@ -55,4 +104,4 @@ If you find this checkpoint helpful, please cite:
 primaryClass={cs.CV},
 url={https://arxiv.org/abs/2502.03629},
 }
-```
+```
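
The image-loading helper in the new README (open, apply EXIF orientation, convert to RGB) can be exercised offline without the diffusers pipeline. This is a minimal sketch; `load_rgb` is a hypothetical stand-in for the README's `download_image` that accepts a file path or file-like object instead of a URL, so it can be tested with an in-memory image:

```python
from io import BytesIO
from PIL import Image, ImageOps

def load_rgb(source) -> Image.Image:
    """Open an image from a path or file-like object,
    honor its EXIF orientation tag, and normalize to RGB."""
    image = Image.open(source)
    image = ImageOps.exif_transpose(image)  # rotate/flip per EXIF, if present
    return image.convert("RGB")

# Round-trip a tiny in-memory RGBA PNG to show the RGB normalization.
buf = BytesIO()
Image.new("RGBA", (4, 4), (255, 0, 0, 128)).save(buf, format="PNG")
buf.seek(0)
img = load_rgb(buf)
print(img.mode, img.size)  # RGB (4, 4)
```

The same normalization matters for the pipeline call: `StableDiffusionInstructPix2PixPipeline` expects an RGB `PIL.Image`, and skipping `exif_transpose` can feed it a sideways photo.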