import torch
from diffusers import ShapEPipeline
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16")
pipe = pipe.to(device)
guidance_scale = 15.0
prompt = ["A firecracker", "A birthday cupcake"]
images = pipe(
prompt,
guidance_scale=guidance_scale,
num_inference_steps=64,
frame_size=256,
).images

Now use the export_to_gif() function to turn the list of image frames into a gif of the 3D object.

from diffusers.utils import export_to_gif
export_to_gif(images[0], "firecracker_3d.gif")
export_to_gif(images[1], "cake_3d.gif")

Image-to-3D

To generate a 3D object from another image, use the ShapEImg2ImgPipeline. You can use an existing image or generate an entirely new one. Let's use the Kandinsky 2.1 model to generate a new image.

from diffusers import DiffusionPipeline
import torch
prior_pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
prompt = "A cheeseburger, white background"
image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale=1.0).to_tuple()
image = pipeline(
prompt,
image_embeds=image_embeds,
negative_image_embeds=negative_image_embeds,
).images[0]
image.save("burger.png") Pass the cheeseburger to the ShapEImg2ImgPipeline to generate a 3D representation of it. Copied from PIL import Image
from diffusers import ShapEImg2ImgPipeline
from diffusers.utils import export_to_gif
pipe = ShapEImg2ImgPipeline.from_pretrained("openai/shap-e-img2img", torch_dtype=torch.float16, variant="fp16").to("cuda")
guidance_scale = 3.0
image = Image.open("burger.png").resize((256, 256))
images = pipe(
image,
guidance_scale=guidance_scale,
num_inference_steps=64,
frame_size=256,
).images
gif_path = export_to_gif(images[0], "burger_3d.gif")

Generate mesh

Shap-E is a flexible model that can also generate textured mesh outputs to be rendered for downstream applications. In this example, you'll convert the output into a glb file because the 🤗 Datasets library supports mesh visualization of glb files, which can be rendered by the Dataset viewer. You can generate mesh outputs for both the ShapEPipeline and ShapEImg2ImgPipeline by specifying the output_type parameter as "mesh":

import torch
from diffusers import ShapEPipeline
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16")
pipe = pipe.to(device)
guidance_scale = 15.0
prompt = "A birthday cupcake"
images = pipe(prompt, guidance_scale=guidance_scale, num_inference_steps=64, frame_size=256, output_type="mesh").images

Use the export_to_ply() function to save the mesh output as a ply file. You can optionally save the mesh output as an obj file with the export_to_obj() function instead (see the sketch after the ply example below); the ability to save the mesh output in a variety of formats makes it more flexible for downstream usage!

from diffusers.utils import export_to_ply
ply_path = export_to_ply(images[0], "3d_cake.ply")
print(f"Saved to folder: {ply_path}") Then you can convert the ply file to a glb file with the trimesh library: Copied import trimesh
Then you can convert the ply file to a glb file with the trimesh library:

import trimesh

mesh = trimesh.load("3d_cake.ply")
mesh_export = mesh.export("3d_cake.glb", file_type="glb")

By default, the mesh output is focused from the bottom viewpoint, but you can change the default viewpoint by applying a rotation transform:

import trimesh
import numpy as np
mesh = trimesh.load("3d_cake.ply")
rot = trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0])
mesh = mesh.apply_transform(rot)
mesh_export = mesh.export("3d_cake.glb", file_type="glb")

Upload the mesh file to your dataset repository to visualize it with the Dataset viewer!
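If you prefer to script that upload instead of using the web interface, here is a minimal sketch using the huggingface_hub library; "your-username/3d-meshes" is a placeholder repo_id for your own dataset repository, and you need to be authenticated (for example with huggingface-cli login).

from huggingface_hub import HfApi

api = HfApi()
# upload the glb file to a dataset repository so the Dataset viewer can render it
api.upload_file(
    path_or_fileobj="3d_cake.glb",
    path_in_repo="3d_cake.glb",
    repo_id="your-username/3d-meshes",  # placeholder: replace with your dataset repo
    repo_type="dataset",
)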
Latent Consistency Model Multistep Scheduler

Overview

A multistep and onestep scheduler (Algorithm 3) introduced alongside latent consistency models in the paper Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. This scheduler should be able to generate good samples from LatentConsistencyModelPipeline in 1-8 steps.
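To illustrate the few-step behaviour, here is a minimal sketch of running a latent consistency model with this scheduler; the SimianLuo/LCM_Dreamshaper_v7 checkpoint, the prompt, and the 4-step / guidance_scale settings are example choices rather than part of this reference.

import torch
from diffusers import DiffusionPipeline, LCMScheduler

# example LCM checkpoint; loads as a LatentConsistencyModelPipeline
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16).to("cuda")
# LCM checkpoints usually ship with LCMScheduler already, but it can be set explicitly
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# a handful of inference steps is enough with this scheduler
image = pipe("a photo of a birthday cupcake", num_inference_steps=4, guidance_scale=8.0).images[0]
image.save("lcm_cupcake.png")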
LCMScheduler

class diffusers.LCMScheduler( num_train_timesteps: int = 1000, beta_start: float = 0.00085, beta_end: float = 0.012, beta_schedule: str = 'scaled_linear', trained_betas: Union = None, original_inference_steps: int = 50, clip_sample: bool = False, clip_sample_range: float = 1.0, set_alpha_to_one: bool = True, steps_offset: int = 0, prediction_type: str = 'epsilon', thresholding: bool = False, dynamic_thresholding_ratio: float = 0.995, sample_max_value: float = 1.0, timestep_spacing: str = 'leading', timestep_scaling: float = 10.0, rescale_betas_zero_snr: bool = False )

Parameters

num_train_timesteps (int, defaults to 1000) — The number of diffusion steps used to train the model.
beta_start (float, defaults to 0.00085) — The starting beta value of inference.
beta_end (float, defaults to 0.012) — The final beta value.
beta_schedule (str, defaults to "scaled_linear") — The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, or squaredcos_cap_v2.
trained_betas (np.ndarray, optional) — Pass an array of betas directly to the constructor to bypass beta_start and beta_end.
original_inference_steps (int, optional, defaults to 50) — The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we will ultimately take num_inference_steps evenly spaced timesteps to form the final timestep schedule.
clip_sample (bool, defaults to False) — Clip the predicted sample for numerical stability.
clip_sample_range (float, defaults to 1.0) — The maximum magnitude for sample clipping. Valid only when clip_sample=True.
set_alpha_to_one (bool, defaults to True) — Each diffusion step uses the alphas product value at that step and at the previous one. For the final step there is no previous alpha. When this option is True, the previous alpha product is fixed to 1; otherwise it uses the alpha value at step 0.
steps_offset (int, defaults to 0) — An offset added to the inference steps. You can use a combination of offset=1 and set_alpha_to_one=False to make the last step use step 0 for the previous alpha product, as in Stable Diffusion.
prediction_type (str, defaults to epsilon, optional) — Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), sample (directly predicts the noisy sample), or v_prediction (see section 2.4 of the Imagen Video paper).
thresholding (bool, defaults to False) — Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such as Stable Diffusion.
dynamic_thresholding_ratio (float, defaults to 0.995) — The ratio for the dynamic thresholding method. Valid only when thresholding=True.
sample_max_value (float, defaults to 1.0) — The threshold value for dynamic thresholding. Valid only when thresholding=True.
timestep_spacing (str, defaults to "leading") — The way the timesteps should be scaled. Refer to Table 2 of Common Diffusion Noise Schedules and Sample Steps are Flawed for more information.
timestep_scaling (float, defaults to 10.0) —