WhiteAiZ committed
Commit e1bd029 · verified · 1 Parent(s): 72561cd

update reforge

.gitignore CHANGED
@@ -17,6 +17,7 @@ __pycache__
 /ui-config.json
 /outputs
 /config.json
+/config_linux.json
 /log
 /webui.settings.bat
 /styles.csv
README.md CHANGED
@@ -4,13 +4,25 @@ Stable Diffusion WebUI Forge/reForge is a platform on top of [Stable Diffusion W
4
 
5
  The name "Forge" is inspired from "Minecraft Forge". This project is aimed at becoming SD WebUI's Forge.
6
 
7
- # Important: Branches
8
 
9
- * main: Has all the possible upstream changes from A1111 (new samplers, schedulers, SD options, etc.), and the Comfy backend is now updated to upstream, which deprecates the old forge backend.
10
- * dev: At this point (2025-07-20), it is the same as the main branch.
11
- * dev2 and experimental: More unstable than dev; for now the same as dev.
12
  * experimental: same as dev2 but with gradio 4.
13
- * main-old: Branch with the old forge backend. Kept as a backup just in case, but it won't receive updates.
14
 
15
  # Installing Forge/reForge
16
 
@@ -183,609 +195,15 @@ Since the UI got really cluttered with built-in extensions, I have removed some
183
  * StableCascade-for-webUI-main: https://github.com/Panchovix/StableCascade-for-webUI-main.git
184
  * StableDiffusion3-for-webUI-main: https://github.com/Panchovix/StableDiffusion3-for-webUI-main.git
185
 
186
- # Original "Old" Forge (commit https://github.com/lllyasviel/stable-diffusion-webui-forge/commit/bfee03d8d9415a925616f40ede030fe7a51cbcfd) information.
187
-
188
- # Screenshots of Comparison (by Illyasviel)
189
-
190
- I tested with several devices, and this is a typical result from 8GB VRAM (3070ti laptop) with SDXL.
191
-
192
- **This is original WebUI:**
193
-
194
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/16893937-9ed9-4f8e-b960-70cd5d1e288f)
195
-
196
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/7bbc16fe-64ef-49e2-a595-d91bb658bd94)
197
-
198
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/de1747fd-47bc-482d-a5c6-0728dd475943)
199
-
200
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/96e5e171-2d74-41ba-9dcc-11bf68be7e16)
201
-
202
- (average about 7.4GB/8GB, peak at about 7.9GB/8GB)
203
-
204
- **This is WebUI Forge/reForge:**
205
-
206
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/ca5e05ed-bd86-4ced-8662-f41034648e8c)
207
-
208
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/3629ee36-4a99-4d9b-b371-12efb260a283)
209
-
210
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/6d13ebb7-c30d-4aa8-9242-c0b5a1af8c95)
211
-
212
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/c4f723c3-6ea7-4539-980b-0708ed2a69aa)
213
-
214
- (average and peak are all 6.3GB/8GB)
215
-
216
- You can see that Forge/reForge does not change WebUI results. Installing Forge/reForge is not a seed breaking change.
217
-
218
- Forge/reForge can perfectly keep WebUI unchanged even for most complicated prompts like `fantasy landscape with a [mountain:lake:0.25] and [an oak:a christmas tree:0.75][ in foreground::0.6][ in background:0.25] [shoddy:masterful:0.5]`.
219
-
220
- All your previous works still work in Forge/reForge!
221
-
222
- # Contribution
223
-
224
- # UNet Patcher
225
-
226
- The full name of the backend is `Stable Diffusion WebUI with Forge/reForge backend`, or for simplicity, the `Forge backend`. The API and Python symbols are kept similar to previous software only to reduce the learning cost for developers. The backend has a high percentage of Comfy code, roughly 80-85%.
227
-
228
- Now developing an extension is super simple. We finally have a patchable UNet.
229
-
230
- Below, one single file with 80 lines of code supports FreeU:
231
-
232
- `extensions-builtin/sd_forge_freeu/scripts/forge_freeu.py`
233
-
234
- ```python
235
- import torch
236
- import gradio as gr
237
- from modules import scripts
238
-
239
-
240
- def Fourier_filter(x, threshold, scale):
241
- x_freq = torch.fft.fftn(x.float(), dim=(-2, -1))
242
- x_freq = torch.fft.fftshift(x_freq, dim=(-2, -1))
243
- B, C, H, W = x_freq.shape
244
- mask = torch.ones((B, C, H, W), device=x.device)
245
- crow, ccol = H // 2, W // 2
246
- mask[..., crow - threshold:crow + threshold, ccol - threshold:ccol + threshold] = scale
247
- x_freq = x_freq * mask
248
- x_freq = torch.fft.ifftshift(x_freq, dim=(-2, -1))
249
- x_filtered = torch.fft.ifftn(x_freq, dim=(-2, -1)).real
250
- return x_filtered.to(x.dtype)
251
-
252
-
253
- def set_freeu_v2_patch(model, b1, b2, s1, s2):
254
- model_channels = model.model.model_config.unet_config["model_channels"]
255
- scale_dict = {model_channels * 4: (b1, s1), model_channels * 2: (b2, s2)}
256
-
257
- def output_block_patch(h, hsp, *args, **kwargs):
258
- scale = scale_dict.get(h.shape[1], None)
259
- if scale is not None:
260
- hidden_mean = h.mean(1).unsqueeze(1)
261
- B = hidden_mean.shape[0]
262
- hidden_max, _ = torch.max(hidden_mean.view(B, -1), dim=-1, keepdim=True)
263
- hidden_min, _ = torch.min(hidden_mean.view(B, -1), dim=-1, keepdim=True)
264
- hidden_mean = (hidden_mean - hidden_min.unsqueeze(2).unsqueeze(3)) / \
265
- (hidden_max - hidden_min).unsqueeze(2).unsqueeze(3)
266
- h[:, :h.shape[1] // 2] = h[:, :h.shape[1] // 2] * ((scale[0] - 1) * hidden_mean + 1)
267
- hsp = Fourier_filter(hsp, threshold=1, scale=scale[1])
268
- return h, hsp
269
-
270
- m = model.clone()
271
- m.set_model_output_block_patch(output_block_patch)
272
- return m
273
-
274
-
275
- class FreeUForForge(scripts.Script):
276
- def title(self):
277
- return "FreeU Integrated"
278
-
279
- def show(self, is_img2img):
280
- # make this extension visible in both txt2img and img2img tab.
281
- return scripts.AlwaysVisible
282
-
283
- def ui(self, *args, **kwargs):
284
- with gr.Accordion(open=False, label=self.title()):
285
- freeu_enabled = gr.Checkbox(label='Enabled', value=False)
286
- freeu_b1 = gr.Slider(label='B1', minimum=0, maximum=2, step=0.01, value=1.01)
287
- freeu_b2 = gr.Slider(label='B2', minimum=0, maximum=2, step=0.01, value=1.02)
288
- freeu_s1 = gr.Slider(label='S1', minimum=0, maximum=4, step=0.01, value=0.99)
289
- freeu_s2 = gr.Slider(label='S2', minimum=0, maximum=4, step=0.01, value=0.95)
290
-
291
- return freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2
292
-
293
- def process_before_every_sampling(self, p, *script_args, **kwargs):
294
- # This will be called before every sampling.
295
- # If you use highres fix, this will be called twice.
296
-
297
- freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2 = script_args
298
-
299
- if not freeu_enabled:
300
- return
301
-
302
- unet = p.sd_model.forge_objects.unet
303
-
304
- unet = set_freeu_v2_patch(unet, freeu_b1, freeu_b2, freeu_s1, freeu_s2)
305
-
306
- p.sd_model.forge_objects.unet = unet
307
-
308
- # Below codes will add some logs to the texts below the image outputs on UI.
309
- # The extra_generation_params does not influence results.
310
- p.extra_generation_params.update(dict(
311
- freeu_enabled=freeu_enabled,
312
- freeu_b1=freeu_b1,
313
- freeu_b2=freeu_b2,
314
- freeu_s1=freeu_s1,
315
- freeu_s2=freeu_s2,
316
- ))
317
-
318
- return
319
- ```
320
-
321
- It looks like this:
322
-
323
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/277bac6e-5ea7-4bff-b71a-e55a60cfc03c)
324
-
325
- Similar components like HyperTile, KohyaHighResFix, and SAG can all be implemented within 100 lines of code (see also the code).
326
-
327
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/06472b03-b833-4816-ab47-70712ac024d3)
328
-
329
- ControlNets can finally be called by different extensions.
330
-
331
- Implementing Stable Video Diffusion and Zero123 is also super simple now (see also the code).
332
-
333
- *Stable Video Diffusion:*
334
-
335
- `extensions-builtin/sd_forge_svd/scripts/forge_svd.py`
336
-
337
- ```python
338
- import torch
339
- import gradio as gr
340
- import os
341
- import pathlib
342
-
343
- from modules import script_callbacks
344
- from modules.paths import models_path
345
- from modules.ui_common import ToolButton, refresh_symbol
346
- from modules import shared
347
-
348
- from modules_forge.forge_util import numpy_to_pytorch, pytorch_to_numpy
349
- from ldm_patched.modules.sd import load_checkpoint_guess_config
350
- from ldm_patched.contrib.external_video_model import VideoLinearCFGGuidance, SVD_img2vid_Conditioning
351
- from ldm_patched.contrib.external import KSampler, VAEDecode
352
-
353
-
354
- opVideoLinearCFGGuidance = VideoLinearCFGGuidance()
355
- opSVD_img2vid_Conditioning = SVD_img2vid_Conditioning()
356
- opKSampler = KSampler()
357
- opVAEDecode = VAEDecode()
358
-
359
- svd_root = os.path.join(models_path, 'svd')
360
- os.makedirs(svd_root, exist_ok=True)
361
- svd_filenames = []
362
-
363
-
364
- def update_svd_filenames():
365
- global svd_filenames
366
- svd_filenames = [
367
- pathlib.Path(x).name for x in
368
- shared.walk_files(svd_root, allowed_extensions=[".pt", ".ckpt", ".safetensors"])
369
- ]
370
- return svd_filenames
371
-
372
-
373
- @torch.inference_mode()
374
- @torch.no_grad()
375
- def predict(filename, width, height, video_frames, motion_bucket_id, fps, augmentation_level,
376
- sampling_seed, sampling_steps, sampling_cfg, sampling_sampler_name, sampling_scheduler,
377
- sampling_denoise, guidance_min_cfg, input_image):
378
- filename = os.path.join(svd_root, filename)
379
- model_raw, _, vae, clip_vision = \
380
- load_checkpoint_guess_config(filename, output_vae=True, output_clip=False, output_clipvision=True)
381
- model = opVideoLinearCFGGuidance.patch(model_raw, guidance_min_cfg)[0]
382
- init_image = numpy_to_pytorch(input_image)
383
- positive, negative, latent_image = opSVD_img2vid_Conditioning.encode(
384
- clip_vision, init_image, vae, width, height, video_frames, motion_bucket_id, fps, augmentation_level)
385
- output_latent = opKSampler.sample(model, sampling_seed, sampling_steps, sampling_cfg,
386
- sampling_sampler_name, sampling_scheduler, positive,
387
- negative, latent_image, sampling_denoise)[0]
388
- output_pixels = opVAEDecode.decode(vae, output_latent)[0]
389
- outputs = pytorch_to_numpy(output_pixels)
390
- return outputs
391
-
392
-
393
- def on_ui_tabs():
394
- with gr.Blocks() as svd_block:
395
- with gr.Row():
396
- with gr.Column():
397
- input_image = gr.Image(label='Input Image', source='upload', type='numpy', height=400)
398
-
399
- with gr.Row():
400
- filename = gr.Dropdown(label="SVD Checkpoint Filename",
401
- choices=svd_filenames,
402
- value=svd_filenames[0] if len(svd_filenames) > 0 else None)
403
- refresh_button = ToolButton(value=refresh_symbol, tooltip="Refresh")
404
- refresh_button.click(
405
- fn=lambda: gr.update(choices=update_svd_filenames()),
406
- inputs=[], outputs=filename)
407
-
408
- width = gr.Slider(label='Width', minimum=16, maximum=8192, step=8, value=1024)
409
- height = gr.Slider(label='Height', minimum=16, maximum=8192, step=8, value=576)
410
- video_frames = gr.Slider(label='Video Frames', minimum=1, maximum=4096, step=1, value=14)
411
- motion_bucket_id = gr.Slider(label='Motion Bucket Id', minimum=1, maximum=1023, step=1, value=127)
412
- fps = gr.Slider(label='Fps', minimum=1, maximum=1024, step=1, value=6)
413
- augmentation_level = gr.Slider(label='Augmentation Level', minimum=0.0, maximum=10.0, step=0.01,
414
- value=0.0)
415
- sampling_steps = gr.Slider(label='Sampling Steps', minimum=1, maximum=200, step=1, value=20)
416
- sampling_cfg = gr.Slider(label='CFG Scale', minimum=0.0, maximum=50.0, step=0.1, value=2.5)
417
- sampling_denoise = gr.Slider(label='Sampling Denoise', minimum=0.0, maximum=1.0, step=0.01, value=1.0)
418
- guidance_min_cfg = gr.Slider(label='Guidance Min Cfg', minimum=0.0, maximum=100.0, step=0.5, value=1.0)
419
- sampling_sampler_name = gr.Radio(label='Sampler Name',
420
- choices=['euler', 'euler_ancestral', 'heun', 'heunpp2', 'dpm_2',
421
- 'dpm_2_ancestral', 'lms', 'dpm_fast', 'dpm_adaptive',
422
- 'dpmpp_2s_ancestral', 'dpmpp_sde', 'dpmpp_sde_gpu',
423
- 'dpmpp_2m', 'dpmpp_2m_sde', 'dpmpp_2m_sde_gpu',
424
- 'dpmpp_3m_sde', 'dpmpp_3m_sde_gpu', 'ddpm', 'lcm', 'ddim',
425
- 'uni_pc', 'uni_pc_bh2'], value='euler')
426
- sampling_scheduler = gr.Radio(label='Scheduler',
427
- choices=['normal', 'karras', 'exponential', 'sgm_uniform', 'simple',
428
- 'ddim_uniform'], value='karras')
429
- sampling_seed = gr.Number(label='Seed', value=12345, precision=0)
430
-
431
- generate_button = gr.Button(value="Generate")
432
-
433
- ctrls = [filename, width, height, video_frames, motion_bucket_id, fps, augmentation_level,
434
- sampling_seed, sampling_steps, sampling_cfg, sampling_sampler_name, sampling_scheduler,
435
- sampling_denoise, guidance_min_cfg, input_image]
436
-
437
- with gr.Column():
438
- output_gallery = gr.Gallery(label='Gallery', show_label=False, object_fit='contain',
439
- visible=True, height=1024, columns=4)
440
-
441
- generate_button.click(predict, inputs=ctrls, outputs=[output_gallery])
442
- return [(svd_block, "SVD", "svd")]
443
-
444
-
445
- update_svd_filenames()
446
- script_callbacks.on_ui_tabs(on_ui_tabs)
447
- ```
448
-
449
- Note that although the above code looks independent, it will automatically offload/unload any other models. For example, below I open the WebUI, load SDXL, generate an image, then go to SVD and generate image frames. You can see that GPU memory is perfectly managed: SDXL is moved to RAM, then SVD is moved to the GPU.
450
-
451
- Note that this management is fully automatic. This makes writing extensions super simple.
452
-
453
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/de1a2d05-344a-44d7-bab8-9ecc0a58a8d3)
454
-
455
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/14bcefcf-599f-42c3-bce9-3fd5e428dd91)
456
-
457
- Similarly, Zero123:
458
-
459
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/7685019c-7239-47fb-9cb5-2b7b33943285)
460
-
461
- ### Write a simple ControlNet:
462
-
463
- Below is a simple extension that performs a completely independent ControlNet pass that never conflicts with any other extension:
464
-
465
- `extensions-builtin/sd_forge_controlnet_example/scripts/sd_forge_controlnet_example.py`
466
-
467
- Note that this extension is hidden because it is only for developers. To see it in the UI, use `--show-controlnet-example`.
468
-
469
- The memory optimization in this example is fully automatic. You do not need to care about memory and inference speed, but you may want to cache objects if you wish.
470
-
471
- ```python
472
- # Use --show-controlnet-example to see this extension.
473
-
474
- import cv2
475
- import gradio as gr
476
- import torch
477
-
478
- from modules import scripts
479
- from modules.shared_cmd_options import cmd_opts
480
- from modules_forge.shared import supported_preprocessors
481
- from modules.modelloader import load_file_from_url
482
- from ldm_patched.modules.controlnet import load_controlnet
483
- from modules_forge.controlnet import apply_controlnet_advanced
484
- from modules_forge.forge_util import numpy_to_pytorch
485
- from modules_forge.shared import controlnet_dir
486
-
487
-
488
- class ControlNetExampleForge(scripts.Script):
489
- model = None
490
-
491
- def title(self):
492
- return "ControlNet Example for Developers"
493
-
494
- def show(self, is_img2img):
495
- # make this extension visible in both txt2img and img2img tab.
496
- return scripts.AlwaysVisible
497
-
498
- def ui(self, *args, **kwargs):
499
- with gr.Accordion(open=False, label=self.title()):
500
- gr.HTML('This is an example controlnet extension for developers.')
501
- gr.HTML('You see this extension because you used --show-controlnet-example')
502
- input_image = gr.Image(source='upload', type='numpy')
503
- funny_slider = gr.Slider(label='This slider does nothing. It just shows you how to transfer parameters.',
504
- minimum=0.0, maximum=1.0, value=0.5)
505
-
506
- return input_image, funny_slider
507
-
508
- def process(self, p, *script_args, **kwargs):
509
- input_image, funny_slider = script_args
510
 
511
- # This slider does nothing. It just shows you how to transfer parameters.
512
- del funny_slider
513
-
514
- if input_image is None:
515
- return
516
-
517
- # controlnet_canny_path = load_file_from_url(
518
- # url='https://huggingface.co/lllyasviel/sd_control_collection/resolve/main/sai_xl_canny_256lora.safetensors',
519
- # model_dir=model_dir,
520
- # file_name='sai_xl_canny_256lora.safetensors'
521
- # )
522
- controlnet_canny_path = load_file_from_url(
523
- url='https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/control_v11p_sd15_canny_fp16.safetensors',
524
- model_dir=controlnet_dir,
525
- file_name='control_v11p_sd15_canny_fp16.safetensors'
526
- )
527
- print('The model [control_v11p_sd15_canny_fp16.safetensors] download finished.')
528
-
529
- self.model = load_controlnet(controlnet_canny_path)
530
- print('Controlnet loaded.')
531
-
532
- return
533
-
534
- def process_before_every_sampling(self, p, *script_args, **kwargs):
535
- # This will be called before every sampling.
536
- # If you use highres fix, this will be called twice.
537
-
538
- input_image, funny_slider = script_args
539
-
540
- if input_image is None or self.model is None:
541
- return
542
-
543
- B, C, H, W = kwargs['noise'].shape # latent_shape
544
- height = H * 8
545
- width = W * 8
546
- batch_size = p.batch_size
547
-
548
- preprocessor = supported_preprocessors['canny']
549
-
550
- # detect control at certain resolution
551
- control_image = preprocessor(
552
- input_image, resolution=512, slider_1=100, slider_2=200, slider_3=None)
553
-
554
- # here we just use nearest neighbour to align input shape.
555
- # You may want crop and resize, or crop and fill, or others.
556
- control_image = cv2.resize(
557
- control_image, (width, height), interpolation=cv2.INTER_NEAREST)
558
-
559
- # Output preprocessor result. Now called every sampling. Cache in your own way.
560
- p.extra_result_images.append(control_image)
561
-
562
- print('Preprocessor Canny finished.')
563
-
564
- control_image_bchw = numpy_to_pytorch(control_image).movedim(-1, 1)
565
-
566
- unet = p.sd_model.forge_objects.unet
567
-
568
- # Unet has input, middle, output blocks, and we can give different weights
569
- # to each layer in all blocks.
570
- # Below is an example for stronger control in middle block.
571
- # This is helpful for some high-res fix passes. (p.is_hr_pass)
572
- positive_advanced_weighting = {
573
- 'input': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2],
574
- 'middle': [1.0],
575
- 'output': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2]
576
- }
577
- negative_advanced_weighting = {
578
- 'input': [0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95, 1.05, 1.15, 1.25],
579
- 'middle': [1.05],
580
- 'output': [0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95, 1.05, 1.15, 1.25]
581
- }
582
-
583
- # The advanced_frame_weighting is a weight applied to each image in a batch.
584
- # The length of this list must be the same as the batch size.
585
- # For example, if batch size is 5, the below list is [0.2, 0.4, 0.6, 0.8, 1.0]
586
- # If you view the 5 images as 5 frames in a video, this will lead to
587
- # progressively stronger control over time.
588
- advanced_frame_weighting = [float(i + 1) / float(batch_size) for i in range(batch_size)]
589
-
590
- # The advanced_sigma_weighting allows you to dynamically compute control
591
- # weights given diffusion timestep (sigma).
592
- # For example, the code below can softly make the beginning steps stronger than the ending steps.
593
- sigma_max = unet.model.model_sampling.sigma_max
594
- sigma_min = unet.model.model_sampling.sigma_min
595
- advanced_sigma_weighting = lambda s: (s - sigma_min) / (sigma_max - sigma_min)
596
-
597
- # You can even input a tensor to mask all control injections
598
- # The mask will be automatically resized during inference in UNet.
599
- # The size should be B 1 H W and the H and W are not important
600
- # because they will be resized automatically
601
- advanced_mask_weighting = torch.ones(size=(1, 1, 512, 512))
602
-
603
- # But in this simple example we do not use them
604
- positive_advanced_weighting = None
605
- negative_advanced_weighting = None
606
- advanced_frame_weighting = None
607
- advanced_sigma_weighting = None
608
- advanced_mask_weighting = None
609
-
610
- unet = apply_controlnet_advanced(unet=unet, controlnet=self.model, image_bchw=control_image_bchw,
611
- strength=0.6, start_percent=0.0, end_percent=0.8,
612
- positive_advanced_weighting=positive_advanced_weighting,
613
- negative_advanced_weighting=negative_advanced_weighting,
614
- advanced_frame_weighting=advanced_frame_weighting,
615
- advanced_sigma_weighting=advanced_sigma_weighting,
616
- advanced_mask_weighting=advanced_mask_weighting)
617
-
618
- p.sd_model.forge_objects.unet = unet
619
-
620
- # Below codes will add some logs to the texts below the image outputs on UI.
621
- # The extra_generation_params does not influence results.
622
- p.extra_generation_params.update(dict(
623
- controlnet_info='You should see these texts below output images!',
624
- ))
625
-
626
- return
627
-
628
-
629
- # Use --show-controlnet-example to see this extension.
630
- if not cmd_opts.show_controlnet_example:
631
- del ControlNetExampleForge
632
-
633
- ```
634
-
635
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/822fa2fc-c9f4-4f58-8669-4b6680b91063)
636
-
637
-
638
- ### Add a preprocessor
639
-
640
- Below is the full code to add a normalbae preprocessor with perfect memory management.
641
-
642
- You can use arbitrary independent extensions to add a preprocessor.
643
-
644
- Your preprocessor will be read by all other extensions using `modules_forge.shared.preprocessors`
645
-
646
- The code below is in `extensions-builtin\forge_preprocessor_normalbae\scripts\preprocessor_normalbae.py`
647
-
648
- ```python
649
- from modules_forge.supported_preprocessor import Preprocessor, PreprocessorParameter
650
- from modules_forge.shared import preprocessor_dir, add_supported_preprocessor
651
- from modules_forge.forge_util import resize_image_with_pad
652
- from modules.modelloader import load_file_from_url
653
-
654
- import types
655
- import torch
656
- import numpy as np
657
-
658
- from einops import rearrange
659
- from annotator.normalbae.models.NNET import NNET
660
- from annotator.normalbae import load_checkpoint
661
- from torchvision import transforms
662
-
663
-
664
- class PreprocessorNormalBae(Preprocessor):
665
- def __init__(self):
666
- super().__init__()
667
- self.name = 'normalbae'
668
- self.tags = ['NormalMap']
669
- self.model_filename_filters = ['normal']
670
- self.slider_resolution = PreprocessorParameter(
671
- label='Resolution', minimum=128, maximum=2048, value=512, step=8, visible=True)
672
- self.slider_1 = PreprocessorParameter(visible=False)
673
- self.slider_2 = PreprocessorParameter(visible=False)
674
- self.slider_3 = PreprocessorParameter(visible=False)
675
- self.show_control_mode = True
676
- self.do_not_need_model = False
677
- self.sorting_priority = 100 # higher goes to top in the list
678
-
679
- def load_model(self):
680
- if self.model_patcher is not None:
681
- return
682
-
683
- model_path = load_file_from_url(
684
- "https://huggingface.co/lllyasviel/Annotators/resolve/main/scannet.pt",
685
- model_dir=preprocessor_dir)
686
-
687
- args = types.SimpleNamespace()
688
- args.mode = 'client'
689
- args.architecture = 'BN'
690
- args.pretrained = 'scannet'
691
- args.sampling_ratio = 0.4
692
- args.importance_ratio = 0.7
693
- model = NNET(args)
694
- model = load_checkpoint(model_path, model)
695
- self.norm = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
696
-
697
- self.model_patcher = self.setup_model_patcher(model)
698
-
699
- def __call__(self, input_image, resolution, slider_1=None, slider_2=None, slider_3=None, **kwargs):
700
- input_image, remove_pad = resize_image_with_pad(input_image, resolution)
701
-
702
- self.load_model()
703
-
704
- self.move_all_model_patchers_to_gpu()
705
-
706
- assert input_image.ndim == 3
707
- image_normal = input_image
708
-
709
- with torch.no_grad():
710
- image_normal = self.send_tensor_to_model_device(torch.from_numpy(image_normal))
711
- image_normal = image_normal / 255.0
712
- image_normal = rearrange(image_normal, 'h w c -> 1 c h w')
713
- image_normal = self.norm(image_normal)
714
-
715
- normal = self.model_patcher.model(image_normal)
716
- normal = normal[0][-1][:, :3]
717
- normal = ((normal + 1) * 0.5).clip(0, 1)
718
-
719
- normal = rearrange(normal[0], 'c h w -> h w c').cpu().numpy()
720
- normal_image = (normal * 255.0).clip(0, 255).astype(np.uint8)
721
-
722
- return remove_pad(normal_image)
723
-
724
-
725
- add_supported_preprocessor(PreprocessorNormalBae())
726
-
727
- ```
728
-
729
- # New features (that are not available in original WebUI)
730
-
731
- Thanks to the UNet Patcher, many new things are now possible and supported in Forge/reForge, including SVD, Zero123, masked IP-Adapter, masked ControlNet, PhotoMaker, etc.
732
-
733
- Masked Ip-Adapter
734
-
735
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/d26630f9-922d-4483-8bf9-f364dca5fd50)
736
-
737
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/03580ef7-235c-4b03-9ca6-a27677a5a175)
738
-
739
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/d9ed4a01-70d4-45b4-a6a7-2f765f158fae)
740
-
741
- Masked ControlNet
742
-
743
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/872d4785-60e4-4431-85c7-665c781dddaa)
744
-
745
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/335a3b33-1ef8-46ff-a462-9f1b4f2c49fc)
746
-
747
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/b3684a15-8895-414e-8188-487269dfcada)
748
-
749
- PhotoMaker
750
-
751
- (Note that PhotoMaker is a special control that needs you to add the trigger word "photomaker". Your prompt should be like "a photo of photomaker".)
752
-
753
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/07b0b626-05b5-473b-9d69-3657624d59be)
754
-
755
- Marigold Depth
756
-
757
- ![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/bdf54148-892d-410d-8ed9-70b4b121b6e7)
758
-
759
- # New Sampler (that is not in origin)
760
-
761
- DDPM
762
-
763
- # Other samplers may be available, but after the schedulers merge, they shouldn't be needed.
764
-
765
- # About Extensions
766
-
767
- ControlNet and TiledVAE are integrated, and you should uninstall these two extensions:
768
-
769
- sd-webui-controlnet
770
- multidiffusion-upscaler-for-automatic1111
771
-
772
- Note that **AnimateDiff** is under construction by [continue-revolution](https://github.com/continue-revolution) at [sd-webui-animatediff forge/master branch](https://github.com/continue-revolution/sd-webui-animatediff/tree/forge/master) and [sd-forge-animatediff](https://github.com/continue-revolution/sd-forge-animatediff) (they are in sync). (continue-revolution's original words: "prompt travel, inf t2v, controlnet v2v have been proven to work well; motion lora, i2i batch still under construction and may be finished in a week")
773
 
774
- Other extensions should work without problems, like:
775
 
776
- canvas-zoom
777
- translations/localizations
778
- Dynamic Prompts
779
- Adetailer
780
- Ultimate SD Upscale
781
- Reactor
782
 
783
- However, if newer extensions use Forge/reForge, their codes can be much shorter.
784
 
785
- Usually if an old extension rework using Forge/reForge's unet patcher, 80% codes can be removed, especially when they need to call controlnet.
786
 
787
- # Support
788
 
789
- Some people have been asking how to donate or support the project, and I'm really grateful for that! I set up this Buy Me a Coffee link following some suggestions!
790
 
791
- [!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/Panchovix)
 
4
 
5
  The name "Forge" is inspired from "Minecraft Forge". This project is aimed at becoming SD WebUI's Forge.
6
 
7
+ # Forge2/reForge2
8
 
9
+ You can read more at https://github.com/Panchovix/stable-diffusion-webui-reForge/discussions/377#discussioncomment-14010687. Let me know there if you want to keep these branches here or do something like "reForge2".
10
+
11
+ * newmain_newforge: Based on the latest forge2 (gradio4, flux, etc.) with some small changes that I plan to add very slowly. For now it has Python 3.12 support, sage/flash attention support, all the samplers and schedulers from reForge (1), and, recently, support for CFG++ samplers.
12
+ * newforge_dendev: Based on the latest ersatzForge fork, which is based on forge2 (gradio4, flux, chroma, cosmos, longclip, and a ton more), from @DenOfEquity (https://github.com/DenOfEquity/ersatzForge). Many thanks to Den for letting me build reForge on top of your fork. I will try to add new features from the old reForge as well, like all the samplers.
13
+
14
+ # Suggestion: For stability based on old forge, use forge classic
15
+
16
+ Sadly, reForge(1) is not really stable for all tasks.
17
+
18
+ So if you want to keep using the old forge backend as it is, for SD 1.x, 2.x, and SDXL, I suggest using forge classic by @Haoming02 instead: https://github.com/Haoming02/sd-webui-forge-classic. At the moment, that is the real successor to old forge.
19
+
20
+ Other branches:
21
+ * main: Main branch with multiple changes and updates, but not as stable as the main-old branch.
22
+ * dev: Similar to main but with more unstable changes, i.e. using the comfy/ldm_patched backend for SD 1.x and SDXL instead of A1111's.
23
+ * dev2: More unstable than dev, for now same as dev.
24
  * experimental: same as dev2 but with gradio 4.
25
+ * main-old: Branch with the old forge backend. Possibly the most stable, and the oldest one (2025-03).
26
 
27
  # Installing Forge/reForge
28
 
 
195
  * StableCascade-for-webUI-main: https://github.com/Panchovix/StableCascade-for-webUI-main.git
196
  * StableDiffusion3-for-webUI-main: https://github.com/Panchovix/StableDiffusion3-for-webUI-main.git
197
 
198
+ # Last "Old" Forge commit (https://github.com/lllyasviel/stable-diffusion-webui-forge/commit/bfee03d8d9415a925616f40ede030fe7a51cbcfd) before forge2.
199
 
200
+ # Support
201
 
202
+ Some people have been asking how to donate or support the project, and I'm really grateful for that! I did this buymeacoffe link from some suggestions!
203
 
204
+ [!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/Panchovix)
 
ldm_patched/modules/model_management.py CHANGED
@@ -238,6 +238,7 @@ except:
  XFORMERS_VERSION = ""
  XFORMERS_ENABLED_VAE = True
+ XFORMERS_IS_AVAILABLE = True
  if args.disable_xformers:
      XFORMERS_IS_AVAILABLE = False
  else:
@@ -1088,6 +1089,8 @@ def flash_attention_enabled():
  def xformers_enabled():
      global directml_enabled
      global cpu_state
+     if sys.modules.get("xformers") is None:
+         return False
      if cpu_state != CPUState.GPU:
          return False
      if is_intel_xpu():
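The new `sys.modules.get("xformers")` guard makes `xformers_enabled()` report False whenever the xformers module was never actually imported, instead of relying on the availability flag alone. A minimal sketch of the pattern (the helper name here is mine, not from the codebase):

```python
import sys

def optional_module_enabled(name: str) -> bool:
    # True only if the module is already present in sys.modules;
    # .get() never triggers an import, so the check is cheap and side-effect free.
    return sys.modules.get(name) is not None

print(optional_module_enabled("sys"))                       # True: always imported
print(optional_module_enabled("module_that_never_loaded"))  # False: never imported
```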
modules/devices.py CHANGED
@@ -61,6 +61,8 @@ device_omnisr: torch.device = model_management.get_torch_device() # will be man
  device_span: torch.device = model_management.get_torch_device() # will be managed by memory management system
  device_compact: torch.device = model_management.get_torch_device() # will be managed by memory management system
  device_codeformer: torch.device = model_management.get_torch_device() # will be managed by memory management system
+ device_rcan: torch.device = model_management.get_torch_device() # will be managed by memory management system
+ device_plksr: torch.device = model_management.get_torch_device() # will be managed by memory management system
  dtype: torch.dtype = model_management.unet_dtype()
  dtype_vae: torch.dtype = model_management.vae_dtype()
  dtype_unet: torch.dtype = model_management.unet_dtype()
modules/modelloader.py CHANGED
@@ -142,7 +142,7 @@ def load_spandrel_model(
      path: str | os.PathLike,
      *,
      device: str | torch.device | None,
-     prefer_half: bool = False,
+     prefer_half: bool = None,
      dtype: str | torch.dtype | None = None,
      expected_architecture: str | None = None,
  ) -> spandrel.ModelDescriptor:
@@ -157,18 +157,24 @@ def load_spandrel_model(
          logger.warning(
              f"Model {path!r} is not a {expected_architecture!r} model (got {arch.name!r})",
          )
-     half = False
+     float16 = False
+     bfloat16 = False
      if prefer_half:
          if model_descriptor.supports_half:
              model_descriptor.model.half()
-             half = True
+             float16 = True
+             logger.info("Model %s converted to float16 precision", path)
+         #elif model_descriptor.supports_bfloat16:
+         #    model_descriptor.model.bfloat16()
+         #    bfloat16 = True
+         #    logger.info("Model %s converted to bfloat16 precision", path)
          else:
              logger.info("Model %s does not support half precision, ignoring --half", path)
      if dtype:
          model_descriptor.model.to(dtype=dtype)
      model_descriptor.model.eval()
      logger.debug(
-         "Loaded %s from %s (device=%s, half=%s, dtype=%s)",
-         arch, path, device, half, dtype,
+         "Loaded %s from %s (device=%s, float16=%s, bfloat16=%s, dtype=%s)",
+         model_descriptor, path, device, float16, bfloat16, dtype,
      )
      return model_descriptor
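The rewritten logging in this hunk tracks which precision the model actually ended up in, instead of a single `half` flag. A simplified sketch of that control flow, using a stand-in class rather than spandrel's real `ModelDescriptor`:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("modelloader-sketch")

class FakeDescriptor:
    # Stand-in for spandrel's ModelDescriptor; only the fields this sketch needs.
    def __init__(self, supports_half: bool):
        self.supports_half = supports_half
        self.dtype = "float32"

    def half(self):
        self.dtype = "float16"

def apply_precision(desc: FakeDescriptor, prefer_half: bool) -> bool:
    # Returns True only when a float16 conversion actually happened,
    # mirroring the float16 flag the diff introduces.
    if prefer_half and desc.supports_half:
        desc.half()
        logger.info("converted to float16")
        return True
    return False

print(apply_precision(FakeDescriptor(True), True))   # True
print(apply_precision(FakeDescriptor(False), True))  # False: unsupported, stays float32
```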
modules/prompt_parser.py CHANGED
@@ -383,7 +383,7 @@ re_attention = re.compile(r"""
  re_break = re.compile(r"\s*\bBREAK\b\s*", re.S)
 
  def parse_prompt_attention(text):
-     """
+     r"""
      Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
      Accepted tokens are:
      (abc) - increases attention to abc by a multiplier of 1.1
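The `r"""` prefix matters because this docstring documents escape syntax such as `\(` literally; without the raw prefix, recent Python versions emit a SyntaxWarning for such invalid escape sequences. A toy illustration:

```python
def parse_demo():
    r"""Toy docstring: \( and \) are literal backslash-parens, kept intact by the r-prefix."""

# The backslashes survive in the stored docstring exactly as written.
print(parse_demo.__doc__)
```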
modules/upscaler.py CHANGED
@@ -1,3 +1,4 @@
+ import logging
  import os
  from abc import abstractmethod
@@ -5,7 +6,10 @@ import PIL
  from PIL import Image
 
  import modules.shared
- from modules import modelloader, shared
+ from modules import modelloader, shared, devices
+
+
+ logger = logging.getLogger(__name__)
 
  LANCZOS = (Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.LANCZOS)
  NEAREST = (Image.Resampling.NEAREST if hasattr(Image, 'Resampling') else Image.NEAREST)
@@ -25,8 +29,8 @@ class Upscaler:
 
      def __init__(self, create_dirs=False):
          self.mod_pad_h = None
-         self.tile_size = modules.shared.opts.ESRGAN_tile
-         self.tile_pad = modules.shared.opts.ESRGAN_tile_overlap
+         self.tile_size = None
+         self.tile_pad = None
          self.device = modules.shared.device
          self.img = None
          self.output = None
@@ -56,8 +60,16 @@ class Upscaler:
          dest_w = int((img.width * scale) // 8 * 8)
          dest_h = int((img.height * scale) // 8 * 8)
 
+         if shared.opts.unload_sd_during_upscale:  # it is highly possible this doesn't work
+             shared.sd_model.to(devices.cpu)
+             devices.torch_gc()
+             logger.info("Stable Diffusion model weights are being unloaded from VRAM to RAM prior to upscale")
+
          for i in range(3):
-             if img.width >= dest_w and img.height >= dest_h and (i > 0 or scale != 1):
+             # Do not break the loop prior to do_upscale when img and dest are the same size.
+             # This is required for 1x scale post-processing models to produce an output image.
+             # FIXME: Only allow this behavior when a 1x scale model is selected.
+             if img.width > dest_w and img.height > dest_h and scale != 1:
                  break
 
              if shared.state.interrupted:
@@ -70,9 +82,16 @@ class Upscaler:
              if shape == (img.width, img.height):
                  break
 
+             if img.width >= dest_w and img.height >= dest_h:
+                 break
+
          if img.width != dest_w or img.height != dest_h:
              img = img.resize((int(dest_w), int(dest_h)), resample=LANCZOS)
 
+         if shared.opts.unload_sd_during_upscale:
+             shared.sd_model.to(shared.device)
+             logger.info("Stable Diffusion model weights are being reloaded from RAM to VRAM after upscale")
+
          return img
 
      @abstractmethod
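The target size computation kept in context above, `int((img.width * scale) // 8 * 8)`, floors the scaled dimension down to a multiple of 8, which the models downstream expect. A quick check of the arithmetic (the helper name is mine):

```python
def snap_down_to_multiple_of_8(size: int, scale: float) -> int:
    # Floor-divide by 8 then multiply back: drops up to 7 pixels so the
    # result is always divisible by 8 and never exceeds size * scale.
    return int((size * scale) // 8 * 8)

print(snap_down_to_multiple_of_8(513, 2))    # 1024 (513*2 = 1026, floored to 1024)
print(snap_down_to_multiple_of_8(512, 1.5))  # 768
```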