GaumlessGraham committed
Commit ef9a030 · verified · 1 Parent(s): 6d9f285

Update outer.py

Files changed (1): outer.py (+438, -0)
outer.py CHANGED
@@ -0,0 +1,438 @@
+ # -*- coding: utf-8 -*-
+ """beths butterfly training.ipynb
+
+ Automatically generated by Colab.
+
+ Original file is located at
+     https://colab.research.google.com/drive/1SbxWXhffEnCJ6tVT6ZfTDbY2-cxb063U
+
+ # Train a diffusion model
+
+ Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the [Hub](https://huggingface.co/search/full-text?q=unconditional-image-generation&type=model), but if you can't find one you like, you can always train your own!
+
+ This tutorial will teach you how to train a [UNet2DModel](https://huggingface.co/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel) from scratch on a subset of the [Smithsonian Butterflies](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) dataset to generate your own 🦋 butterflies 🦋.
+
+ <Tip>
+
+ 💡 This training tutorial is based on the [Training with 🧨 Diffusers](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) notebook. For additional details and context about diffusion models, like how they work, check out the notebook!
+
+ </Tip>
+
+ Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate to simplify training on any number of GPUs. The following command will also install [TensorBoard](https://www.tensorflow.org/tensorboard) to visualize training metrics (you can also use [Weights & Biases](https://docs.wandb.ai/) to track your training).
+ """
+
+ # uncomment to install the necessary libraries in Colab
+ #!pip install diffusers[training]
+
+ """We encourage you to share your model with the community, and in order to do that, you'll need to log in to your Hugging Face account (create one [here](https://hf.co/join) if you don't already have one!). You can log in from a notebook and enter your token when prompted:"""
+
+ from huggingface_hub import notebook_login
+
+ notebook_login()
+
+ #!sudo apt -qq install git-lfs
+ #!git config --global credential.helper store
+
+ """Or log in from the terminal:
+
+ ```bash
+ huggingface-cli login
+ ```
+
+ Since the model checkpoints are quite large, install [Git-LFS](https://git-lfs.com/) to version these large files:
+
+ ```bash
+ !sudo apt -qq install git-lfs
+ !git config --global credential.helper store
+ ```
+
+ ## Training configuration
+
+ For convenience, create a `TrainingConfig` class containing the training hyperparameters (feel free to adjust them):
+ """
+
+ from dataclasses import dataclass
+
+
+ @dataclass
+ class TrainingConfig:
+     image_size = 256  # the generated image resolution
+     train_batch_size = 10
+     eval_batch_size = 16  # how many images to sample during evaluation
+     num_epochs = 2000
+     gradient_accumulation_steps = 1
+     learning_rate = 1e-4
+     lr_warmup_steps = 250
+     save_image_epochs = 500
+     save_model_epochs = 500
+     mixed_precision = "fp16"  # `no` for float32, `fp16` for automatic mixed precision
+     output_dir = "Outer1730_10Real"  # the model name locally and on the HF Hub
+
+     push_to_hub = True  # whether to upload the saved model to the HF Hub
+     hub_private_repo = False
+     overwrite_output_dir = False  # KEEP THIS AS FALSE
+     seed = 0
+
+
+ config = TrainingConfig()
+
+ """## Load the dataset
+
+ You can easily load the [Smithsonian Butterflies](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) dataset with the 🤗 Datasets library:
+ """
+
+ from datasets import load_dataset
+
+ config.dataset_name = "GaumlessGraham/Outer10Real"
+ dataset = load_dataset(config.dataset_name, split="train")
+
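+ """Since this run swaps in a custom dataset, a quick inspection helps confirm what the pipeline will see. The check below is an optional sketch (not part of the original notebook): it prints the dataset summary and the color mode of the first image, which matters later because the UNet is configured with a single input channel."""
+
+ print(dataset)
+ print("First image mode:", dataset[0]["image"].mode)  # e.g. "L" (grayscale) or "RGB"
+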
+ """<Tip>
+
+ 💡 You can find additional datasets from the [HugGan Community Event](https://huggingface.co/huggan) or you can use your own dataset by creating a local [`ImageFolder`](https://huggingface.co/docs/datasets/image_dataset#imagefolder). Set `config.dataset_name` to the repository id of the dataset if it is from the HugGan Community Event, or to `imagefolder` if you're using your own images.
+
+ </Tip>
+
+ 🤗 Datasets uses the [Image](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Image) feature to automatically decode the image data and load it as a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html), which we can visualize:
+ """
+
+ import matplotlib.pyplot as plt
+
+ fig, axs = plt.subplots(1, 4, figsize=(16, 4))
+ for i, image in enumerate(dataset[:4]["image"]):
+     axs[i].imshow(image)
+     axs[i].set_axis_off()
+ fig.show()
+
+ """<div class="flex justify-center">
+     <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/butterflies_ds.png"/>
+ </div>
+
+ The images are all different sizes, though, so you'll need to preprocess them first:
+
+ * `Resize` changes the image size to the one defined in `config.image_size`.
+ * `RandomHorizontalFlip` augments the dataset by randomly mirroring the images (the original tutorial uses it; it is left out of the pipeline below for this run).
+ * `Normalize` is important to rescale the pixel values into a [-1, 1] range, which is what the model expects.
+ """
+
+ from torchvision import transforms
+
+ preprocess = transforms.Compose(
+     [
+         transforms.Resize((config.image_size, config.image_size)),
+         # transforms.RandomHorizontalFlip(),  # augmentation from the original tutorial, disabled here
+         transforms.ToTensor(),
+         transforms.Normalize([0.5], [0.5]),
+     ]
+ )
+
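+ """Before wiring the transform into the dataset, it's worth a one-off sanity check (an optional sketch, not in the original notebook): the preprocessed tensor should have shape (1, 256, 256) for a grayscale source (or (3, 256, 256) for RGB) and values in [-1, 1]."""
+
+ t = preprocess(dataset[0]["image"])
+ print("Preprocessed:", t.shape, float(t.min()), float(t.max()))
+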
+ """Use 🤗 Datasets' [set_transform](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.set_transform) method to apply the `preprocess` function on the fly during training:"""
+
+ def transform(examples):
+     images = [preprocess(image) for image in examples["image"]]
+     return {"images": images}
+
+
+ dataset.set_transform(transform)
+
+ """Feel free to visualize the images again to confirm that they've been resized. Now you're ready to wrap the dataset in a [DataLoader](https://pytorch.org/docs/stable/data#torch.utils.data.DataLoader) for training!"""
+
+ import torch
+
+ train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True)
+
+ """## Create a UNet2DModel
+
+ Pretrained models in 🧨 Diffusers are easily created from their model class with the parameters you want. For example, to create a [UNet2DModel](https://huggingface.co/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel):
+ """
+
+ from diffusers import UNet2DModel
+
+ model = UNet2DModel(
+     sample_size=config.image_size,  # the target image resolution
+     in_channels=1,  # the number of input channels (1 for grayscale, 3 for RGB images)
+     out_channels=1,  # the number of output channels
+     layers_per_block=2,  # how many ResNet layers to use per UNet block
+     block_out_channels=(128, 128, 256, 256, 512, 512),  # the number of output channels for each UNet block
+     down_block_types=(
+         "DownBlock2D",  # a regular ResNet downsampling block
+         "DownBlock2D",
+         "DownBlock2D",
+         "DownBlock2D",
+         "AttnDownBlock2D",  # a ResNet downsampling block with spatial self-attention
+         "DownBlock2D",
+     ),
+     up_block_types=(
+         "UpBlock2D",  # a regular ResNet upsampling block
+         "AttnUpBlock2D",  # a ResNet upsampling block with spatial self-attention
+         "UpBlock2D",
+         "UpBlock2D",
+         "UpBlock2D",
+         "UpBlock2D",
+     ),
+ )
+
+ """It is often a good idea to quickly check that the sample image shape matches the model output shape:"""
+
+ sample_image = dataset[0]["images"].unsqueeze(0)
+ print("Input shape:", sample_image.shape)
+
+ print("Output shape:", model(sample_image, timestep=0).sample.shape)
+
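+ """At 256×256 with six UNet blocks this is a fairly large model; a quick parameter count (an optional sketch, not in the original notebook) gives a rough sense of memory cost before training starts."""
+
+ num_params = sum(p.numel() for p in model.parameters())
+ print(f"UNet parameters: {num_params / 1e6:.1f}M")
+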
+ """Great! Next, you'll need a scheduler to add some noise to the image.
+
+ ## Create a scheduler
+
+ The scheduler behaves differently depending on whether you're using the model for training or inference. During inference, the scheduler generates an image from the noise. During training, the scheduler takes a model output - or a sample - from a specific point in the diffusion process and applies noise to the image according to a *noise schedule* and an *update rule*.
+
+ Let's take a look at the [DDPMScheduler](https://huggingface.co/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler) and use the `add_noise` method to add some random noise to the `sample_image` from before:
+ """
+
+ import torch
+ from PIL import Image
+ from diffusers import DDPMScheduler
+
+ noise_scheduler = DDPMScheduler(num_train_timesteps=1000)
+ noise = torch.randn(sample_image.shape)
+ timesteps = torch.LongTensor([50])
+ noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps)
+
+ # Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0])
+
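+ """Under the hood, `add_noise` implements the closed-form DDPM forward process: for a clean sample x_0, noise eps, and cumulative product of alphas alpha_bar_t, it returns x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps. The check below (an optional sketch, not in the original notebook) verifies this against the scheduler's `alphas_cumprod` table."""
+
+ alpha_bar = noise_scheduler.alphas_cumprod[timesteps]
+ manual = alpha_bar.sqrt() * sample_image + (1 - alpha_bar).sqrt() * noise
+ print("add_noise matches closed form:", torch.allclose(manual, noisy_image, atol=1e-5))
+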
+ """<div class="flex justify-center">
+     <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/noisy_butterfly.png"/>
+ </div>
+
+ The training objective of the model is to predict the noise added to the image. The loss at this step can be calculated by:
+ """
+
+ import torch.nn.functional as F
+
+ noise_pred = model(noisy_image, timesteps).sample
+ loss = F.mse_loss(noise_pred, noise)
+
+ """## Train the model
+
+ By now, you have most of the pieces to start training the model and all that's left is putting everything together.
+
+ First, you'll need an optimizer and a learning rate scheduler:
+ """
+
+ from diffusers.optimization import get_cosine_schedule_with_warmup
+
+ optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate)
+ lr_scheduler = get_cosine_schedule_with_warmup(
+     optimizer=optimizer,
+     num_warmup_steps=config.lr_warmup_steps,
+     num_training_steps=(len(train_dataloader) * config.num_epochs),
+ )
+
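+ """The cosine schedule above is defined over the full run, so the total step count matters. A quick derived-settings print (an optional sketch, not in the original notebook) shows how many optimization steps the warmup covers relative to the whole run."""
+
+ total_steps = len(train_dataloader) * config.num_epochs
+ print(f"Total steps: {total_steps}, warmup fraction: {config.lr_warmup_steps / total_steps:.2%}")
+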
+ """Then, you'll need a way to evaluate the model. For evaluation, you can use the [DDPMPipeline](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ddpm#diffusers.DDPMPipeline) to generate a batch of sample images and save it as a grid:"""
+
+ from diffusers import DDPMPipeline
+ import math
+ import os
+
+
+ def make_grid(images, rows, cols):
+     w, h = images[0].size
+     grid = Image.new("RGB", size=(cols * w, rows * h))
+     for i, image in enumerate(images):
+         grid.paste(image, box=(i % cols * w, i // cols * h))
+     return grid
+
+ def evalfirst(config, epoch, pipeline):
+     # Sample some images from random noise (this is the backward diffusion process).
+     # The default pipeline output type is `List[PIL.Image]`
+     images = pipeline(
+         batch_size=config.eval_batch_size,
+         generator=torch.manual_seed(config.seed),
+     ).images
+
+     # Make a grid out of the images
+     image_grid = make_grid(images, rows=4, cols=4)
+
+     # Save the images
+     test_dir = os.path.join(config.output_dir, "samples")
+     os.makedirs(test_dir, exist_ok=True)
+     image_grid.save(f"{test_dir}/{epoch:04d}.png")
+
+
+ def evaluate(config, epoch, pipeline):
+     import random
+     # Sample several batches of images from random noise (the backward diffusion
+     # process) and save each image individually, reseeding between batches so
+     # that every batch is different.
+     for k in range(1, 14):
+
+         images = pipeline(
+             batch_size=config.eval_batch_size,
+             generator=torch.manual_seed(config.seed),
+         ).images
+
+         # Save the images
+         # test_dir = os.path.join(config.output_dir, "samples" + config.time_started)
+         test_dir = os.path.join(config.output_dir, "samples_generated")
+         if not os.path.exists(test_dir):
+             os.makedirs(test_dir)
+
+         for i, image in enumerate(images):
+             image.save(f"{test_dir}/{(i + (k - 1) * config.eval_batch_size):04d}.png")
+
+         config.seed = random.randint(1, 1000)
+
+
+ """Now you can wrap all these components together in a training loop with 🤗 Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, write a function to get your repository name and information and then push it to the Hub.
+
+ <Tip>
+
+ 💡 The training loop below may look intimidating and long, but it'll be worth it later when you launch your training in just one line of code! If you can't wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training loop more closely later, like when you're waiting for your model to finish training. 🤗
+
+ </Tip>
+ """
+
+ from accelerate import Accelerator
+ from huggingface_hub import HfFolder, Repository, whoami
+ from tqdm.auto import tqdm
+ from pathlib import Path
+ import os
+
+
+ def get_full_repo_name(model_id: str, organization: str = None, token: str = None):
+     if token is None:
+         token = HfFolder.get_token()
+     if organization is None:
+         username = whoami(token)["name"]
+         return f"{username}/{model_id}"
+     else:
+         return f"{organization}/{model_id}"
+
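+ """Note that `Repository` (used below) is deprecated in recent versions of `huggingface_hub` in favor of the HTTP-based API. If you hit deprecation warnings, a rough sketch of the modern equivalent - assuming a recent `huggingface_hub` release - looks like this (left commented out so the original flow is unchanged):"""
+
+ # from huggingface_hub import create_repo, upload_folder
+ #
+ # repo_id = create_repo(get_full_repo_name(Path(config.output_dir).name), exist_ok=True).repo_id
+ # upload_folder(repo_id=repo_id, folder_path=config.output_dir, commit_message="End of training")
+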
+
+ def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler):
+     import sys
+
+     # Initialize accelerator and tensorboard logging
+     accelerator = Accelerator(
+         mixed_precision=config.mixed_precision,
+         gradient_accumulation_steps=config.gradient_accumulation_steps,
+         log_with="tensorboard",
+         project_dir=os.path.join(config.output_dir, "logs"),
+     )
+     if accelerator.is_main_process:
+         if config.push_to_hub:
+             repo_name = get_full_repo_name(Path(config.output_dir).name)
+             repo = Repository(config.output_dir, clone_from=repo_name)
+         elif config.output_dir is not None:
+             os.makedirs(config.output_dir, exist_ok=True)
+         accelerator.init_trackers("train_example")
+
+     # Prepare everything
+     # There is no specific order to remember, you just need to unpack the
+     # objects in the same order you gave them to the prepare method.
+     model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
+         model, optimizer, train_dataloader, lr_scheduler
+     )
+
+     global_step = 0
+
+     # Now you train the model
+     for epoch in range(config.num_epochs):
+         progress_bar = tqdm(total=len(train_dataloader), disable=not accelerator.is_local_main_process)
+         progress_bar.set_description(f"Epoch {epoch}")
+
+         for step, batch in enumerate(train_dataloader):
+             clean_images = batch["images"]
+             # Sample noise to add to the images
+             noise = torch.randn(clean_images.shape).to(clean_images.device)
+             bs = clean_images.shape[0]
+
+             # Sample a random timestep for each image
+             timesteps = torch.randint(
+                 0, noise_scheduler.config.num_train_timesteps, (bs,), device=clean_images.device
+             ).long()
+
+             # Add noise to the clean images according to the noise magnitude at each timestep
+             # (this is the forward diffusion process)
+             noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)
+
+             with accelerator.accumulate(model):
+                 # Predict the noise residual
+                 noise_pred = model(noisy_images, timesteps, return_dict=False)[0]
+                 loss = F.mse_loss(noise_pred, noise)
+                 accelerator.backward(loss)
+
+                 accelerator.clip_grad_norm_(model.parameters(), 1.0)
+                 optimizer.step()
+                 lr_scheduler.step()
+                 optimizer.zero_grad()
+
+             progress_bar.update(1)
+             logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step}
+             progress_bar.set_postfix(**logs)
+             accelerator.log(logs, step=global_step)
+             global_step += 1
+
+         # After each epoch you optionally sample some demo images with evaluate() and save the model
+         if accelerator.is_main_process:
+             pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler)
+
+             if ((epoch + 1) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1) and epoch > 195:  # change the threshold to skip evaluation before a given epoch
+                 evalfirst(config, epoch, pipeline)
+
+                 model_dir = os.path.join(config.output_dir, str(epoch))
+                 os.makedirs(model_dir, exist_ok=True)
+
+                 if config.push_to_hub:
+                     repo.push_to_hub(commit_message=f"Sample Images Epoch {epoch}", blocking=True)
+
+             if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1:
+                 if config.push_to_hub:
+                     evaluate(config, epoch, pipeline)
+
+                     model_dir = os.path.join(config.output_dir, str(epoch))
+                     os.makedirs(model_dir, exist_ok=True)
+
+                     pipeline.save_pretrained(model_dir)
+                     repo.push_to_hub(commit_message=f"Epoch {epoch}", blocking=True)
+                     sys.exit(0)  # intentional: stop the run after the first full save and push
+                 else:
+                     pipeline.save_pretrained(config.output_dir)
+
+ """Phew, that was quite a bit of code! But you're finally ready to launch the training with 🤗 Accelerate's [notebook_launcher](https://huggingface.co/docs/accelerate/main/en/package_reference/launchers#accelerate.notebook_launcher) function. Pass the function the training loop, all the training arguments, and the number of processes (you can change this value to the number of GPUs available to you) to use for training:"""
+
+ #!/usr/local/cuda/bin/nvcc --version
+
+ #!nvidia-smi
+
+ from accelerate import notebook_launcher
+
+
+ args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler)
+ # print(model)
+
+ notebook_launcher(train_loop, args, num_processes=1)
+
+ """Once training is complete, take a look at the final 🦋 images 🦋 generated by your diffusion model!"""
+
+ import glob
+
+ sample_images = sorted(glob.glob(f"{config.output_dir}/samples/*.png"))
+ Image.open(sample_images[-1])
+
+ """<div class="flex justify-center">
+     <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/butterflies_final.png"/>
+ </div>
+
+ ## Next steps
+
+ Unconditional image generation is one example of a task that can be trained. You can explore other tasks and training techniques by visiting the [🧨 Diffusers Training Examples](https://huggingface.co/docs/diffusers/main/en/tutorials/../training/overview) page. Here are some examples of what you can learn:
+
+ * [Textual Inversion](https://huggingface.co/docs/diffusers/main/en/tutorials/../training/text_inversion), an algorithm that teaches a model a specific visual concept and integrates it into the generated image.
+ * [DreamBooth](https://huggingface.co/docs/diffusers/main/en/tutorials/../training/dreambooth), a technique for generating personalized images of a subject given several input images of the subject.
+ * [Guide](https://huggingface.co/docs/diffusers/main/en/tutorials/../training/text2image) to finetuning a Stable Diffusion model on your own dataset.
+ * [Guide](https://huggingface.co/docs/diffusers/main/en/tutorials/../training/lora) to using LoRA, a memory-efficient technique for finetuning really large models faster.
+ """