You can download and use the variant you need as shown below.

## Core ML Inference in Python

Install the following libraries to run Core ML inference in Python:

```bash
pip install huggingface_hub
pip install git+https://github.com/apple/ml-stable-diffusion
```

### Download the Model Checkpoints

To run inference in Python, use one of the versions stored in the `packages` folders, because the compiled ones are only compatible with Swift. You may choose whether you want to use `original` or `split_einsum` attention. This is how you would download the `original` attention variant:
```python
from pathlib import Path

from huggingface_hub import snapshot_download

repo_id = "apple/coreml-stable-diffusion-v1-4"
variant = "original/packages"

model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_"))
snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False)
print(f"Model downloaded at {model_path}")
```

### Inference

Once you have downloaded a snapshot of the model, you can test it using Apple's Python script:

```bash
python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i models/coreml-stable-diffusion-v1-4_original_packages
```

Pass the path of the downloaded checkpoint (the `model_path` printed in the previous step) with the `-i` flag.
## Core ML Inference in Swift

To run inference in Swift, download one of the `compiled` checkpoint versions:

```python
from pathlib import Path

from huggingface_hub import snapshot_download

repo_id = "apple/coreml-stable-diffusion-v1-4"
variant = "original/compiled"

model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_"))
snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False)
print(f"Model downloaded at {model_path}")
```

### Inference

To run inference, please clone Apple's repo:

```bash
git clone https://github.com/apple/ml-stable-diffusion
cd ml-stable-diffusion
```

Then run Apple's command line tool with Swift Package Manager:

```bash
swift run StableDiffusionSample --resource-path models/coreml-stable-diffusion-v1-4_original_compiled --compute-units all "a photo of an astronaut riding a horse on mars"
```

You have to specify in `--resource-path` one of the checkpoints downloaded in the previous step.
# Create reproducible pipelines

Reproducibility is important for testing, replicating results, and can even be used to improve image quality. However, the randomness in diffusion models is a desired property because it allows the pipeline to generate different images every time it is run. While you can't expect to get exactly the same results across platforms, you can expect results to be reproducible across releases and platforms within a certain tolerance range.

## Control randomness

During inference, pipelines rely heavily on random sampling operations, which include creating the Gaussian noise tensors to denoise and adding noise to the scheduling step.

Take a look at the tensor values in the DDIMPipeline after two inference steps:

```python
import numpy as np
from diffusers import DDIMPipeline

model_id = "google/ddpm-cifar10-32"

# load model and scheduler
ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True)

# run pipeline for just two steps and return numpy tensor
image = ddim(num_inference_steps=2, output_type="np").images
print(np.abs(image).sum())
```

Running the code above prints one value, but if you run it again you get a different value. What is going on here? Every time the pipeline is run, torch.randn uses a different random seed to create the Gaussian noise, which is denoised stepwise. This leads to a different result each time it is run, which is great for diffusion pipelines since it generates a different random image each time.
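This unseeded behavior is easy to see with plain `torch.randn` calls, independent of any pipeline (a minimal sketch):

```python
import torch

# without a seed, each call to torch.randn draws from a fresh random state,
# so two calls produce different noise tensors
a = torch.randn(4)
b = torch.randn(4)
print(torch.equal(a, b))  # almost certainly False

# seeding restores the same state before each call, making the draws identical
torch.manual_seed(0)
c = torch.randn(4)
torch.manual_seed(0)
d = torch.randn(4)
print(torch.equal(c, d))  # True
```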
### CPU

To generate reproducible results on a CPU, you need to use a PyTorch Generator and set a seed:

```python
import numpy as np
import torch
from diffusers import DDIMPipeline

model_id = "google/ddpm-cifar10-32"

# load model and scheduler
ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True)

# create a generator for reproducibility
generator = torch.Generator(device="cpu").manual_seed(0)

# run pipeline for just two steps and return numpy tensor
image = ddim(num_inference_steps=2, output_type="np", generator=generator).images
print(np.abs(image).sum())
```

Now when you run the code above, it always prints a value of 1491.1711 no matter what, because the Generator object with the seed is passed to all the random functions of the pipeline. If you run this code example on your specific hardware and PyTorch version, you should get a similar, if not identical, result.
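The reproducibility comes from the Generator carrying the random state that the pipeline's sampling operations consume. As a minimal sketch, with plain torch ops standing in for the pipeline (the shapes here are arbitrary):

```python
import torch

# two generators seeded identically produce identical noise draws
g1 = torch.Generator(device="cpu").manual_seed(0)
g2 = torch.Generator(device="cpu").manual_seed(0)
n1 = torch.randn(2, 3, generator=g1)
n2 = torch.randn(2, 3, generator=g2)
print(torch.equal(n1, n2))  # True

# each draw advances the generator's internal state, so a second draw
# from the same generator yields different values
n3 = torch.randn(2, 3, generator=g1)
print(torch.equal(n1, n3))  # False
```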
💡 It might be a bit unintuitive at first to pass Generator objects to the pipeline instead of just integer values representing the seed, but this is the recommended design when dealing with probabilistic models in PyTorch, as Generators are random states that can be passed to multiple pipelines in a sequence.

### GPU

Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication - which diffusion pipelines require a lot of - is less deterministic on a GPU than on a CPU. For example, if you run the same code example from above on a GPU:
```python
import numpy as np
import torch
from diffusers import DDIMPipeline

model_id = "google/ddpm-cifar10-32"

# load model and scheduler
ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True)
ddim.to("cuda")

# create a generator for reproducibility
generator = torch.Generator(device="cuda").manual_seed(0)

# run pipeline for just two steps and return numpy tensor
image = ddim(num_inference_steps=2, output_type="np", generator=generator).images
print(np.abs(image).sum())
```

The result is not the same even though you're using an identical seed, because the GPU uses a different random number generator than the CPU. To circumvent this problem, 🧨 Diffusers has a randn_tensor() function for creating the random noise on the CPU and then moving the tensor to the GPU if necessary.
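A minimal sketch of this create-noise-on-CPU-then-move pattern (this is not Diffusers' actual `randn_tensor` implementation, just the idea behind it):

```python
import torch

def cpu_noise(shape, seed, device):
    # draw the noise on the CPU with a seeded CPU generator...
    generator = torch.Generator(device="cpu").manual_seed(seed)
    noise = torch.randn(shape, generator=generator)
    # ...then move it to the target device; the values are the same
    # regardless of whether the target is "cpu" or "cuda"
    return noise.to(device)

device = "cuda" if torch.cuda.is_available() else "cpu"
x = cpu_noise((2, 3), seed=0, device=device)
y = cpu_noise((2, 3), seed=0, device=device)
print(torch.equal(x, y))  # True: same seed, same CPU draws
```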
The randn_tensor function is used everywhere inside the pipeline, so you can always pass a CPU Generator even if the pipeline is run on a GPU:

```python
import numpy as np
import torch
from diffusers import DDIMPipeline

model_id = "google/ddpm-cifar10-32"

# load model and scheduler
ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True)
ddim.to("cuda")

# create a generator for reproducibility; notice you don't place it on the GPU!
generator = torch.manual_seed(0)

# run pipeline for just two steps and return numpy tensor
image = ddim(num_inference_steps=2, output_type="np", generator=generator).images
print(np.abs(image).sum())
```

💡 If reproducibility is important, we recommend always passing a CPU generator. The performance loss is often negligible, and you'll generate much more similar values than if the pipeline had been run on a GPU.

Finally, more complex pipelines such as UnCLIPPipeline are often extremely susceptible to precision error propagation. Don't expect similar results across different GPU hardware or PyTorch versions. In this case, you'll need to run exactly the same hardware and PyTorch version for full reproducibility.

## Deterministic algorithms

You can also configure PyTorch to use deterministic algorithms to create a reproducible pipeline. However, you should be aware that deterministic algorithms may be slower than nondeterministic ones, and you may observe a decrease in performance.

Nondeterministic behavior occurs when operations are launched in more than one CUDA stream. To avoid this, set the environment variable CUBLAS_WORKSPACE_CONFIG to :16:8 so that only one buffer size is used during runtime. PyTorch typically benchmarks multiple algorithms to select the fastest one; if you want it to always select the same deterministic algorithm, disable benchmarking and enable deterministic algorithms:
```python
import os

import torch

os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8"

torch.backends.cudnn.benchmark = False
torch.use_deterministic_algorithms(True)
```

Now when you run the same pipeline twice, you'll get identical results:

```python
import torch
from diffusers import DDIMScheduler, StableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
g = torch.Generator(device="cuda")

prompt = "A bear is playing a guitar on Times Square"

# run the pipeline twice, reseeding the generator before each run
g.manual_seed(0)
result1 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images

g.manual_seed(0)
result2 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images

# the two latents should be identical
print("L_inf distance:", abs(result1 - result2).max())
```