Sebastian Semeniuc committed
Commit 0b7f035 · 1 Parent(s): bec7879

remove unnecessary files
README.md CHANGED
@@ -1,42 +1,18 @@
----
-license: openrail
-tags:
-- stable-diffusion
-- stable-diffusion-diffusers
-- controlnet
-- endpoints-template
-thumbnail: >-
-  https://huggingface.co/philschmid/ControlNet-endpoint/resolve/main/thumbnail.png
-inference: true
-duplicated_from: philschmid/ControlNet-endpoint
----
-
-
 # Inference Endpoint for [ControlNet](https://huggingface.co/lllyasviel/ControlNet) using [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
 
-
 > ControlNet is a neural network structure to control diffusion models by adding extra conditions.
 > Official repository: https://github.com/lllyasviel/ControlNet
 
 ---
 
-Blog post: [Controlled text to image generation with Inference Endpoints]()
-
-This repository implements a custom `handler` task for `controlled text-to-image` generation on 🤗 Inference Endpoints. The code for the customized pipeline is in the [handler.py](https://huggingface.co/philschmid/ControlNet-endpoint/blob/main/handler.py).
-
-There is also a [notebook](https://huggingface.co/philschmid/ControlNet-endpoint/blob/main/create_handler.ipynb) included, on how to create the `handler.py`
-
-![sample](thumbnail.png)
-
-
 ### expected Request payload
 
 ```json
 {
-    "inputs": "A prompt used for image generation",
-    "negative_prompt": "low res, bad anatomy, worst quality, low quality",
-    "controlnet_type": "depth",
-    "image" : "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAABGdBTUEAALGPC",
+    "inputs": "A prompt used for image generation",
+    "negative_prompt": "low res, bad anatomy, worst quality, low quality",
+    "controlnet_type": "depth",
+    "image": "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAABGdBTUEAALGPC"
 }
 ```
 
@@ -44,10 +20,10 @@ supported `controlnet_type` are: `canny_edge`, `pose`, `depth`, `scribble`, `seg
 
 below is an example on how to run a request using Python and `requests`.
 
-
 ## Use Python to send requests
 
-1. Get image
+1. Get image
+
 ```
 wget https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png
 ```
@@ -105,7 +81,7 @@ prediction.save("result.png")
 ```
 expected output
 
-![sample](result.png)
+![sample](result.png)
 
 
 [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) by Lvmin Zhang and Maneesh Agrawala.
@@ -114,4 +90,5 @@ Using the pretrained models we can provide control images (for example, a depth
 
 The abstract of the paper is the following:
 
-We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal devices. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications.
+We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal devices. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications.
+```
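The README changed above describes the request flow only partially (the diff elides the middle of the file), so here is a minimal sketch of what a client following the documented payload format might look like. The `build_payload` helper name is hypothetical, and the endpoint URL and token in the commented-out `requests` call are placeholders, not values from this repository; only the payload keys (`inputs`, `negative_prompt`, `controlnet_type`, `image`) come from the README.

```python
import base64
import json

def build_payload(image_bytes: bytes,
                  prompt: str,
                  negative_prompt: str = "low res, bad anatomy, worst quality, low quality",
                  controlnet_type: str = "depth") -> str:
    """Build the JSON body expected by the endpoint.

    Keys follow the README's "expected Request payload"; the conditioning
    image is sent as a plain base64 string (no data: URI prefix).
    """
    return json.dumps({
        "inputs": prompt,
        "negative_prompt": negative_prompt,
        "controlnet_type": controlnet_type,
        "image": base64.b64encode(image_bytes).decode("utf-8"),
    })

# Local round-trip check with dummy bytes standing in for an input image
# such as input_image_vermeer.png.
fake_image = b"fake-image-bytes"
payload = build_payload(fake_image, "A prompt used for image generation")
assert base64.b64decode(json.loads(payload)["image"]) == fake_image

# Sending the request (hypothetical endpoint URL and token):
# import requests
# r = requests.post(
#     "https://<your-endpoint>.endpoints.huggingface.cloud",
#     headers={"Authorization": "Bearer <HF_TOKEN>",
#              "Content-Type": "application/json"},
#     data=payload,
# )
# with open("result.png", "wb") as f:
#     f.write(r.content)
```

Base64-encoding the image keeps the whole request valid JSON, at the cost of roughly a 33% size increase over the raw bytes.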
crysis.jpeg DELETED
Binary file (224 kB)
 
huggingface.png DELETED
Binary file (26.9 kB)
 
input_image_vermeer.png DELETED
Binary file (411 kB)
 
result.png DELETED
Binary file (574 kB)
 
result_crysis.png DELETED
Binary file (556 kB)
 
result_huggingface.png DELETED
Binary file (684 kB)
 
thumbnail.png DELETED
Binary file (298 kB)