cocktailpeanut committed
Commit b4e7a1c · 1 Parent(s): 8a7960d
Files changed (2)
  1. README.md +0 -13
  2. readme.md +0 -90
README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Differential Diffusion
- emoji: 😻
- colorFrom: yellow
- colorTo: pink
- sdk: gradio
- sdk_version: 4.19.1
- app_file: app.py
- pinned: false
- license: apache-2.0
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
readme.md DELETED
@@ -1,90 +0,0 @@
1
- # Differential Diffusion: Giving Each Pixel its strength
2
- > Eran Levin, Ohad Fried
3
- > Tel Aviv University, Reichman University
4
- > Diffusion models have revolutionized image generation and editing, producing state-of-the-art results in conditioned and unconditioned image synthesis. While current techniques enable user control over the degree of change in an image edit, the controllability is limited to global changes over an entire edited region. This paper introduces a novel framework that enables customization of the amount of change <i>per pixel</i> or <i>per image region</i>. Our framework can be integrated into any existing diffusion model, enhancing it with this capability. Such granular control on the quantity of change opens up a diverse array of new editing capabilities, such as control of the extent to which individual objects are modified, or the ability to introduce gradual spatial changes. Furthermore, we showcase the framework's effectiveness in soft-inpainting---the completion of portions of an image while subtly adjusting the surrounding areas to ensure seamless integration. Additionally, we introduce a new tool for exploring the effects of different change quantities. Our framework operates solely during inference, requiring no model training or fine-tuning. We demonstrate our method with the current open state-of-the-art models, and validate it via both quantitative and qualitative comparisons, and a user study.
5
-
6
- <a href="https://arxiv.org/abs/2306.00950"><img src="https://img.shields.io/badge/arXiv-2306.00950-b31b1b?style=flat&logo=arxiv&logoColor=red"/></a>
7
- <a href="https://differential-diffusion.github.io/"><img src="https://img.shields.io/static/v1?label=Project&message=Website&color=red" height=20.5></a>
8
- <br/>
9
- <img src="assets/teaser.png" width="800px"/>
10
- ## Table of Contents
11
-
12
- - [Requirements](#requirements)
13
- - [Installation](#installation)
14
- - [Usage](#usage)
15
-
16
-
17
- ## Requirements
18
-
19
- - Python (version 3.9)
20
- - GPU (NVIDIA CUDA compatible)
21
- - [Virtualenv](https://virtualenv.pypa.io/) (optional but recommended)
22
-
23
- ## Installation
24
-
25
- - Create a virtual environment (optional but recommended):
26
-
27
- ```bash
28
- python -m venv venv
29
- ```
30
-
31
- Activate the virtual environment:
32
-
33
- On Windows:
34
-
35
- ```bash
36
- venv\Scripts\activate
37
- ```
38
-
39
- On Unix or MacOS:
40
-
41
- ```bash
42
- source venv/bin/activate
43
- ```
44
-
45
- - Install the required dependencies:
46
-
47
- ```bash
48
- pip install -r requirements.txt
49
- ```
50
-
51
- ## Usage
52
- - Ensure that your virtual environment is activated.
53
- - Make sure that your GPU is properly set up and accessible.
54
- - For Stable Diffusion 2.1:
55
- - Run the script:
56
-
57
- ```bash
58
- python SD2/run.py
59
- ```
60
- - For Stable Diffusion XL:
61
- - Run the script:
62
-
63
- ```bash
64
- python SDXL/run.py
65
- ```
66
- - For Kandinsky 2.2:
67
- - Run the script:
68
-
69
- ```bash
70
- python Kandinsky/run.py
71
- ```
72
-
73
- - For DeepFloyd IF:
74
- - Run the script:
75
-
76
- ```bash
77
- python IF/run.py
78
- ```
79
-
80
- ## Citation
81
- ```bibtex
82
- @misc{levin2023differential,
83
- title={Differential Diffusion: Giving Each Pixel Its Strength},
84
- author={Eran Levin and Ohad Fried},
85
- year={2023},
86
- eprint={2306.00950},
87
- archivePrefix={arXiv},
88
- primaryClass={cs.CV}
89
- }
90
- ```
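For reference, the installation and usage steps from the deleted readme condense into a single shell session. This is a sketch for Unix/macOS, assuming you are in the repository root; the `requirements.txt` and `SDXL/run.py` paths come from the deleted file, and `SDXL/run.py` can be swapped for any of the other listed variants (`SD2/run.py`, `Kandinsky/run.py`, `IF/run.py`).

```shell
# Create and activate an isolated virtual environment (Unix/macOS).
python3 -m venv venv
source venv/bin/activate

# Install the project's dependencies.
pip install -r requirements.txt

# Run one of the model variants, e.g. Stable Diffusion XL.
python SDXL/run.py
```

On Windows, the activation line becomes `venv\Scripts\activate`, as in the deleted readme.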