generator = torch.manual_seed(0)
inv_latents = pipeline.invert(caption, image=raw_image, generator=generator).latents
Now, generate the image with the edit directions:
# See the "Generating source and target embeddings" section below to learn how to
# automate the generation of these captions with a pre-trained model like Flan-T5.
source_prompts = ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"]
target_prompts = ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"]

source_embeds = pipeline.get_embeds(source_prompts, batch_size=2)
target_embeds = pipeline.get_embeds(target_prompts, batch_size=2)

image = pipeline(
    caption,
    source_embeds=source_embeds,
    target_embeds=target_embeds,
    num_inference_steps=50,
    cross_attention_guidance_amount=0.15,
    generator=generator,
    latents=inv_latents,
    negative_prompt=caption,
).images[0]
image.save("edited_image.png")
Generating source and target embeddings
The authors originally used the GPT-3 API to generate the source and target captions for discovering edit directions. However, we can also leverage open-source, publicly available models for the same purpose. Below, we provide an end-to-end example with the Flan-T5 model for generating captions and CLIP for computing embeddings on the generated captions.
1. Load the generation model:
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto", torch_dtype=torch.float16)
2. Construct a starting prompt:
source_concept = "cat"
target_concept = "dog"

source_text = (
    f"Provide a caption for images containing a {source_concept}. "
    "The captions should be in English and should be no longer than 150 characters."
)

target_text = (
    f"Provide a caption for images containing a {target_concept}. "
    "The captions should be in English and should be no longer than 150 characters."
)
Here, we're interested in the "cat -> dog" direction.
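The same prompt template generalizes to any edit direction. As a sketch, a small helper (the `make_caption_prompts` function below is hypothetical, not part of diffusers) can build both instruction prompts for an arbitrary concept pair:

```python
def make_caption_prompts(source_concept: str, target_concept: str) -> tuple[str, str]:
    """Build the Flan-T5 instruction prompts for an arbitrary edit direction."""
    template = (
        "Provide a caption for images containing a {concept}. "
        "The captions should be in English and should be no longer than 150 characters."
    )
    return template.format(concept=source_concept), template.format(concept=target_concept)

# For example, a "horse -> zebra" direction:
source_text, target_text = make_caption_prompts("horse", "zebra")
```

This keeps the two prompts consistent with each other, which matters because the edit direction is computed from the difference between the resulting embedding sets.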
3. Generate captions:
We can use a utility function like the following for this purpose.
def generate_captions(input_prompt):
    input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda")

    outputs = model.generate(
        input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)
Then we call it to generate our captions:
source_captions = generate_captions(source_text)
target_captions = generate_captions(target_text)
We encourage you to experiment with the different parameters supported by the generate() method to reach the generation quality you are looking for.
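Because sampling with num_return_sequences=16 can produce duplicates or captions that exceed the 150-character limit requested in the prompt, a small post-processing step can help. The `filter_captions` helper below is a hypothetical sketch, not part of the pipeline:

```python
def filter_captions(captions: list[str], max_len: int = 150) -> list[str]:
    """Deduplicate generated captions and drop empty or over-long ones."""
    seen = set()
    kept = []
    for caption in captions:
        caption = caption.strip()
        if caption and len(caption) <= max_len and caption not in seen:
            seen.add(caption)
            kept.append(caption)
    return kept

# Example with some degenerate outputs a sampling run might produce:
captions = ["a cat on a mat", "a cat on a mat", "", "x" * 200, "a cat by a window"]
filtered = filter_captions(captions)
```

Cleaner caption sets tend to give a less noisy mean embedding, and hence a sharper edit direction.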
4. Load the embedding model:
Here, we need to use the same text encoder model used by the subsequent Stable Diffusion model.
from diffusers import StableDiffusionPix2PixZeroPipeline

pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipeline = pipeline.to("cuda")
tokenizer = pipeline.tokenizer
text_encoder = pipeline.text_encoder
5. Compute embeddings:
import torch

def embed_captions(sentences, tokenizer, text_encoder, device="cuda"):
    with torch.no_grad():
        embeddings = []
        for sent in sentences:
            # Tokenize each caption to the text encoder's maximum length.
            text_inputs = tokenizer(
                sent,
                padding="max_length",
                max_length=tokenizer.model_max_length,
                truncation=True,
                return_tensors="pt",
            )
            text_input_ids = text_inputs.input_ids
            prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0]
            embeddings.append(prompt_embeds)
    # Average the per-caption embeddings into a single direction embedding.
    return torch.concatenate(embeddings, dim=0).mean(dim=0)[None, :]

source_embeddings = embed_captions(source_captions, tokenizer, text_encoder)
target_embeddings = embed_captions(target_captions, tokenizer, text_encoder)