---
license:
- apache-2.0
- other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
library_name: diffusers
pipeline_tag: text-to-image
datasets:
- SA1B
- opendiffusionai/laion2b-squareish-1024px
base_model:
- jimmycarter/LibreFLUX
---
# LibreFLUX-IP-Adapter-ControlNet
![Example: Control image vs result](examples/control_ip_example.png)

This pipeline combines my [LibreFlux-IP-Adapter](https://huggingface.co/neuralvfx/LibreFlux-IP-Adapter) and [LibreFlux-ControlNet](https://huggingface.co/neuralvfx/LibreFlux-ControlNet) pipelines into one. [LibreFLUX](https://huggingface.co/jimmycarter/LibreFLUX) is used as the underlying transformer model.

# How does this relate to LibreFLUX?
- Base model is [LibreFLUX](https://huggingface.co/jimmycarter/LibreFLUX)
- Trained in the same non-distilled fashion
- Uses attention masking
- Uses CFG during inference

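Because the model is not guidance-distilled, inference runs true classifier-free guidance: the transformer is evaluated once with the prompt and once with the negative prompt, and the two predictions are blended by `guidance_scale`. A minimal sketch of that blend (illustrative names, not the pipeline's internals):

```py
import torch

def cfg_blend(noise_uncond, noise_cond, guidance_scale):
    # Push the conditional prediction away from the unconditional one.
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)

# guidance_scale=1.0 reduces to the plain conditional prediction;
# larger values follow the prompt more strongly.
uncond = torch.zeros(3)
cond = torch.ones(3)
blended = cfg_blend(uncond, cond, 4.0)
```

This doubles the transformer evaluations per step relative to a distilled model, which is part of why inference is slower than stock FLUX.1-dev.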
# Compatibility
```bash
pip install -U diffusers==0.35.2
pip install -U transformers==4.57.1
```

Low VRAM:
```bash
pip install optimum-quanto
```

# Load Pipeline
```py
import torch
from diffusers import DiffusionPipeline
from huggingface_hub import hf_hub_download

model_id = "neuralvfx/LibreFlux-IP-Adapter-ControlNet"

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.bfloat16 if device == "cuda" else torch.float32

pipe = DiffusionPipeline.from_pretrained(
    model_id,
    custom_pipeline=model_id,
    trust_remote_code=True,
    torch_dtype=dtype,
    safety_checker=None,
)

# Optionally download the IP-Adapter weights explicitly
hf_hub_download(
    repo_id="neuralvfx/LibreFlux-IP-Adapter-ControlNet",
    filename="ip_adapter.pt",
    local_dir=".",
    local_dir_use_symlinks=False,
)

pipe.load_ip_adapter("ip_adapter.pt")

pipe.to(device)
```

# Inference
```py
import torch
from PIL import Image
from huggingface_hub import hf_hub_download

# Optionally download the test ControlNet image
hf_hub_download(
    repo_id="neuralvfx/LibreFlux-IP-Adapter-ControlNet",
    filename="examples/libre_flux_control_image.png",
    local_dir=".",
    local_dir_use_symlinks=False,
)

# Load the control image
cond = Image.open("examples/libre_flux_control_image.png").convert("RGB")
cond = cond.resize((1024, 1024))

# Optionally download the test IP-Adapter image
hf_hub_download(
    repo_id="neuralvfx/LibreFlux-IP-Adapter-ControlNet",
    filename="examples/merc.jpeg",
    local_dir=".",
    local_dir_use_symlinks=False,
)

# Load the IP-Adapter image
ip_image = Image.open("examples/merc.jpeg").convert("RGB")
ip_image = ip_image.resize((512, 512))

out = pipe(
    prompt="liquid splashing spelling the words libre flux",
    negative_prompt="blurry",
    control_image=cond,
    num_inference_steps=75,
    guidance_scale=4.0,
    controlnet_conditioning_scale=1.0,
    ip_adapter_image=ip_image,
    ip_adapter_scale=1.0,
    num_images_per_prompt=1,
    generator=torch.Generator().manual_seed(74),
    return_dict=True,
)
out.images[0].save("libre_flux_result.png")
```
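The `generator` argument pins the initial noise, so a fixed seed with fixed inputs reproduces the same image. The seeding mechanics in isolation:

```py
import torch

# Two generators seeded identically draw identical noise,
# which is what makes a seeded pipeline call reproducible.
g1 = torch.Generator().manual_seed(74)
g2 = torch.Generator().manual_seed(74)
a = torch.randn(4, generator=g1)
b = torch.randn(4, generator=g2)
```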

# Load Pipeline (Low VRAM)
```py
import torch
from huggingface_hub import hf_hub_download
from diffusers import DiffusionPipeline
from optimum.quanto import freeze, quantize, qint8

model_id = "neuralvfx/LibreFlux-IP-Adapter-ControlNet"

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.bfloat16 if device == "cuda" else torch.float32

pipe = DiffusionPipeline.from_pretrained(
    model_id,
    custom_pipeline=model_id,
    trust_remote_code=True,
    torch_dtype=dtype,
    safety_checker=None,
)

# Optionally download the IP-Adapter weights explicitly
hf_hub_download(
    repo_id="neuralvfx/LibreFlux-IP-Adapter-ControlNet",
    filename="ip_adapter.pt",
    local_dir=".",
    local_dir_use_symlinks=False,
)

# Load the IP-Adapter first, so it gets quantized below
pipe.load_ip_adapter("ip_adapter.pt")

# Quantize and freeze the transformer and IP-Adapter,
# keeping norms and embedders in full precision
quantize(
    pipe.transformer,
    weights=qint8,
    exclude=[
        "*.norm", "*.norm1", "*.norm2", "*.norm2_context",
        "proj_out", "x_embedder", "norm_out", "context_embedder",
    ],
)

quantize(
    pipe.ip_adapter,
    weights=qint8,
    exclude=[
        "*.norm", "*.norm1", "*.norm2", "*.norm2_context",
        "proj_out", "x_embedder", "norm_out", "context_embedder",
    ],
)
freeze(pipe.transformer)
freeze(pipe.ip_adapter)

# Enable model offloading
pipe.enable_model_cpu_offload()
```
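For a sense of why qint8 helps: each weight drops from two bytes (bf16) or four (fp32) to one, so the quantized linear layers take roughly a quarter of the fp32 footprint. A back-of-the-envelope check in plain PyTorch (toy tensor, not the real transformer, and not optimum-quanto's own storage format, which carries the same one-byte payload per weight):

```py
import torch

# A toy weight matrix at the three precisions involved.
w_fp32 = torch.randn(1024, 1024, dtype=torch.float32)
w_bf16 = w_fp32.to(torch.bfloat16)
w_int8 = torch.quantize_per_tensor(w_fp32, scale=0.05, zero_point=0, dtype=torch.qint8)

def mib(t):
    # Storage size of the tensor's elements in MiB.
    return t.element_size() * t.nelement() / 2**20
```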