Update innerRace.py
innerRace.py  +57 −11
CHANGED
@@ -1,14 +1,23 @@
# -*- coding: utf-8 -*-
"""beths butterfly training.ipynb

Automatically generated by Colab.

Original file is located at
    https://colab.research.google.com/drive/1SbxWXhffEnCJ6tVT6ZfTDbY2-cxb063U

# Train a diffusion model

Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the [Hub](https://huggingface.co/search/full-text?q=unconditional-image-generation&type=model), but if you can't find one you like, you can always train your own!

This tutorial will teach you how to train a [UNet2DModel](https://huggingface.co/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel) from scratch on a subset of the [Smithsonian Butterflies](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) dataset to generate your own 🦋 butterflies 🦋.

<Tip>

💡 This training tutorial is based on the [Training with 🧨 Diffusers](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) notebook. For additional details and context about diffusion models like how they work, check out the notebook!

</Tip>

Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate to simplify training on any number of GPUs. The following command will also install [TensorBoard](https://www.tensorflow.org/tensorboard) to visualize training metrics (you can also use [Weights & Biases](https://docs.wandb.ai/) to track your training).
"""
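The install and login cells themselves sit in an unchanged stretch of the file, so the diff does not show them. A rough sketch of what such cells usually contain, in the commented-magic style this exported notebook already uses; the package list is an assumption, not something read from this diff:

```python
# Hypothetical install cell: the package list below is an assumption, not shown in this diff.
#!pip install diffusers[training] datasets accelerate tensorboard

# Logging in to the Hub from the notebook; notebook_login() is visible as context in the next hunk.
from huggingface_hub import notebook_login

notebook_login()
```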
@@ -25,29 +34,34 @@ notebook_login()
#!git config --global credential.helper store

"""Or log in from the terminal:

```bash
huggingface-cli login
```

Since the model checkpoints are quite large, install [Git-LFS](https://git-lfs.com/) to version these large files:

```bash
!sudo apt -qq install git-lfs
!git config --global credential.helper store
```

## Training configuration

For convenience, create a `TrainingConfig` class containing the training hyperparameters (feel free to adjust them):
"""

from dataclasses import dataclass


@dataclass
class TrainingConfig:
    image_size = 256  # the generated image resolution
    train_batch_size = 10
    eval_batch_size = 16  # how many images to sample during evaluation
    num_epochs = 250
    gradient_accumulation_steps = 1
    learning_rate = 1e-4
    lr_warmup_steps = 500
    save_image_epochs = 25
    save_model_epochs = 25
@@ -59,13 +73,11 @@ class TrainingConfig:
    overwrite_output_dir = False  # KEEP THIS AS FALSE
    seed = 0


config = TrainingConfig()

"""## Load the dataset

You can easily load the [Smithsonian Butterflies](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) dataset with the 🤗 Datasets library:
"""
@@ -75,8 +87,11 @@ config.dataset_name = "GaumlessGraham/7inchInnerRace1730AugData"
dataset = load_dataset(config.dataset_name, split="train")

"""<Tip>

💡 You can find additional datasets from the [HugGan Community Event](https://huggingface.co/huggan) or you can use your own dataset by creating a local [`ImageFolder`](https://huggingface.co/docs/datasets/image_dataset#imagefolder). Set `config.dataset_name` to the repository id of the dataset if it is from the HugGan Community Event, or `imagefolder` if you're using your own images.

</Tip>

🤗 Datasets uses the [Image](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Image) feature to automatically decode the image data and load it as a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html) which we can visualize:
"""
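As the Tip above notes, you can swap in your own images via `ImageFolder`. A minimal sketch of that variant; the local path is a placeholder, not something taken from this file:

```python
from datasets import load_dataset

# Hypothetical local-data variant of the cell above: point the ImageFolder loader
# at a directory of images instead of a Hub dataset. The path is an example.
config.dataset_name = "imagefolder"
dataset = load_dataset("imagefolder", data_dir="path/to/your/butterfly_images", split="train")
```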
@@ -91,7 +106,9 @@ fig.show()
"""<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/butterflies_ds.png"/>
</div>

The images are all different sizes though, so you'll need to preprocess them first; a sketch of a matching transform follows the list:

* `Resize` changes the image size to the one defined in `config.image_size`.
* `RandomHorizontalFlip` augments the dataset by randomly mirroring the images.
* `Normalize` is important to rescale the pixel values into a [-1, 1] range, which is what the model expects.
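The preprocessing cell itself falls in an unchanged region that this diff does not display. A sketch consistent with the three bullets above; the `"image"` column name and the `set_transform` pattern follow the standard 🤗 Datasets approach and are assumptions here, not a copy of the file's code:

```python
from torchvision import transforms

preprocess = transforms.Compose(
    [
        transforms.Resize((config.image_size, config.image_size)),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),  # maps [0, 1] tensors into [-1, 1]
    ]
)


def transform(examples):
    images = [preprocess(image.convert("RGB")) for image in examples["image"]]
    return {"images": images}


dataset.set_transform(transform)
```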
@@ -126,6 +143,7 @@ train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_
fig.show()

"""## Create a UNet2DModel

Pretrained models in 🧨 Diffusers are easily created from their model class with the parameters you want. For example, to create a [UNet2DModel](https://huggingface.co/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel):
"""
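The model-construction cell is in an unchanged stretch of the file and is not part of this diff. A sketch of a typical `UNet2DModel` at this resolution; the block widths and block types below follow the standard Diffusers tutorial and are assumptions about, not a copy of, what innerRace.py uses:

```python
from diffusers import UNet2DModel

model = UNet2DModel(
    sample_size=config.image_size,  # the target image resolution
    in_channels=3,                  # RGB input
    out_channels=3,
    layers_per_block=2,
    block_out_channels=(128, 128, 256, 256, 512, 512),
    down_block_types=(
        "DownBlock2D",
        "DownBlock2D",
        "DownBlock2D",
        "DownBlock2D",
        "AttnDownBlock2D",
        "DownBlock2D",
    ),
    up_block_types=(
        "UpBlock2D",
        "AttnUpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
    ),
)
```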
@@ -163,8 +181,11 @@ print("Input shape:", sample_image.shape)
print("Output shape:", model(sample_image, timestep=0).sample.shape)

"""Great! Next, you'll need a scheduler to add some noise to the image.

## Create a scheduler

The scheduler behaves differently depending on whether you're using the model for training or inference. During inference, the scheduler generates an image from the noise. During training, the scheduler takes a model output - or a sample - from a specific point in the diffusion process and applies noise to the image according to a *noise schedule* and an *update rule*.

Let's take a look at the [DDPMScheduler](https://huggingface.co/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler) and use the `add_noise` method to add some random noise to the `sample_image` from before:
"""
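The scheduler cell is also in an unchanged region of the file. A sketch of the usual pattern; `num_train_timesteps=1000` and the single timestep of 50 are common tutorial defaults assumed here rather than read from the diff (the `add_noise` call itself is visible as context in the next hunk):

```python
import torch
from PIL import Image
from diffusers import DDPMScheduler

noise_scheduler = DDPMScheduler(num_train_timesteps=1000)
noise = torch.randn(sample_image.shape)
timesteps = torch.LongTensor([50])
noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps)

# Convert the [-1, 1] tensor back into a viewable PIL image
Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0])
```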
@@ -182,6 +203,7 @@ noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps)
"""<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/noisy_butterfly.png"/>
</div>

The training objective of the model is to predict the noise added to the image. The loss at this step can be calculated by:
"""
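The two lines that set up this loss sit just outside the changed region; the model call is visible in the next hunk's context line, while the `torch.nn.functional` import is an assumption:

```python
import torch.nn.functional as F

# Predict the noise residual for the noisy image at the sampled timesteps
noise_pred = model(noisy_image, timesteps).sample
```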
@@ -191,7 +213,9 @@ noise_pred = model(noisy_image, timesteps).sample
loss = F.mse_loss(noise_pred, noise)

"""## Train the model

By now, you have most of the pieces to start training the model and all that's left is putting everything together.

First, you'll need an optimizer and a learning rate scheduler:
"""
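The optimizer cell is unchanged and therefore not shown in full; only the `get_cosine_schedule_with_warmup(` call appears as context in the next hunk. A sketch wired to the values in `TrainingConfig`, where the choice of `AdamW` is an assumption:

```python
import torch
from diffusers.optimization import get_cosine_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate)
lr_scheduler = get_cosine_schedule_with_warmup(
    optimizer=optimizer,
    num_warmup_steps=config.lr_warmup_steps,
    num_training_steps=(len(train_dataloader) * config.num_epochs),
)
```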
@@ -209,7 +233,7 @@ lr_scheduler = get_cosine_schedule_with_warmup(
from diffusers import DDPMPipeline
import math
import os


def make_grid(images, rows, cols):
    w, h = images[0].size
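The rest of `make_grid` and the top of `evaluate` are unchanged, so the diff skips them. Inside `evaluate`, the sampling step that produces the `images` consumed by `make_grid` in the next hunk typically looks like this; the argument names follow the standard `DDPMPipeline` call and are an assumption here:

```python
# Inside evaluate(config, epoch, pipeline): sample a batch of images from the
# current model; fixing the seed keeps evaluation grids comparable across epochs.
images = pipeline(
    batch_size=config.eval_batch_size,
    generator=torch.manual_seed(config.seed),
).images
```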
@@ -231,13 +255,16 @@ def evaluate(config, epoch, pipeline):
    image_grid = make_grid(images, rows=4, cols=4)

    # Save the images
    test_dir = os.path.join(config.output_dir, "samples")
    os.makedirs(test_dir, exist_ok=True)
    image_grid.save(f"{test_dir}/{epoch:04d}.png")

"""Now you can wrap all these components together in a training loop with 🤗 Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, write a function to get your repository name and information and then push it to the Hub.

<Tip>

💡 The training loop below may look intimidating and long, but it'll be worth it later when you launch your training in just one line of code! If you can't wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training loop more closely later, like when you're waiting for your model to finish training. 🤗

</Tip>
"""
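Most of `train_loop` falls in unchanged regions, so only its edges appear in this diff. As a reference point, the core step inside the epoch loop usually looks like the following with 🤗 Accelerate; this is a sketch of the standard pattern, relying on the `accelerator`, prepared model, dataloader, optimizer, and scheduler set up earlier in the function, not a copy of this file's loop:

```python
for step, batch in enumerate(train_dataloader):
    clean_images = batch["images"]

    # Sample noise and a random timestep for each image in the batch
    noise = torch.randn(clean_images.shape, device=clean_images.device)
    timesteps = torch.randint(
        0,
        noise_scheduler.config.num_train_timesteps,
        (clean_images.shape[0],),
        device=clean_images.device,
    ).long()

    # Forward diffusion: add noise to the clean images according to the schedule
    noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)

    with accelerator.accumulate(model):
        # Predict the noise residual and take an MSE loss against the true noise
        noise_pred = model(noisy_images, timesteps, return_dict=False)[0]
        loss = F.mse_loss(noise_pred, noise)
        accelerator.backward(loss)

        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
```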
@@ -272,7 +299,7 @@ def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_s
            repo = Repository(config.output_dir, clone_from=repo_name)
        elif config.output_dir is not None:
            os.makedirs(config.output_dir, exist_ok=True)
        accelerator.init_trackers("train_example")

    # Prepare everything
    # There is no specific order to remember, you just need to unpack the
@@ -329,6 +356,11 @@ def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_s

            if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1:
                if config.push_to_hub:
                    model_dir = os.path.join(config.output_dir, str(epoch))
                    os.makedirs(model_dir, exist_ok=True)

                    pipeline.save_pretrained(model_dir)
                    repo.push_to_hub(commit_message=f"Epoch {epoch}", blocking=True)
                else:
                    pipeline.save_pretrained(config.output_dir)
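With this change, each Hub push also snapshots the pipeline into a per-epoch folder under `config.output_dir`. A hypothetical follow-up showing how such a snapshot would be reloaded, where the epoch number is only an example:

```python
from diffusers import DDPMPipeline

# Reload the checkpoint saved after a particular epoch; "99" is an example epoch.
pipeline = DDPMPipeline.from_pretrained(os.path.join(config.output_dir, "99"))
```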
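The launch cell is unchanged and only shows up as context in the next hunk header. For reference, it follows the usual 🤗 Accelerate pattern, with the argument tuple matching `train_loop`'s signature; the exact tuple is an assumption:

```python
from accelerate import notebook_launcher

args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler)
notebook_launcher(train_loop, args, num_processes=1)
```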
@@ -352,4 +384,18 @@ notebook_launcher(train_loop, args, num_processes=1)
import glob

sample_images = sorted(glob.glob(f"{config.output_dir}/samples/*.png"))
Image.open(sample_images[-1])

"""<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/butterflies_final.png"/>
</div>

## Next steps

Unconditional image generation is one example of a task that can be trained. You can explore other tasks and training techniques by visiting the [🧨 Diffusers Training Examples](https://huggingface.co/docs/diffusers/main/en/tutorials/../training/overview) page. Here are some examples of what you can learn:

* [Textual Inversion](https://huggingface.co/docs/diffusers/main/en/tutorials/../training/text_inversion), an algorithm that teaches a model a specific visual concept and integrates it into the generated image.
* [DreamBooth](https://huggingface.co/docs/diffusers/main/en/tutorials/../training/dreambooth), a technique for generating personalized images of a subject given several input images of the subject.
* [Guide](https://huggingface.co/docs/diffusers/main/en/tutorials/../training/text2image) to finetuning a Stable Diffusion model on your own dataset.
* [Guide](https://huggingface.co/docs/diffusers/main/en/tutorials/../training/lora) to using LoRA, a memory-efficient technique for finetuning really large models faster.
"""