Loading settings from /content/LoRA/config/config_file.toml... /content/LoRA/config/config_file
prepare tokenizer
Downloading (…)olve/main/vocab.json: 100% 961k/961k [00:00<00:00, 1.13MB/s]
Downloading (…)olve/main/merges.txt: 100% 525k/525k [00:00<00:00, 823kB/s]
Downloading (…)cial_tokens_map.json: 100% 389/389 [00:00<00:00, 257kB/s]
Downloading (…)okenizer_config.json: 100% 905/905 [00:00<00:00, 399kB/s]
update token length: 225
Load dataset config from /content/LoRA/config/dataset_config.toml
prepare images.
found directory /content/LoRA/train_data contains 11 image files
2750 train images with repeating.
0 reg images.
no regularization images found
[Dataset 0]
  batch_size: 6
  resolution: (512, 512)
  enable_bucket: True
  min_bucket_reso: 256
  max_bucket_reso: 1024
  bucket_reso_steps: 64
  bucket_no_upscale: False

  [Subset 0 of Dataset 0]
    image_dir: "/content/LoRA/train_data"
    image_count: 11
    num_repeats: 250
    shuffle_caption: True
    keep_tokens: 0
    caption_dropout_rate: 0
    caption_dropout_every_n_epoches: 0
    caption_tag_dropout_rate: 0
    color_aug: False
    flip_aug: False
    face_crop_aug_range: None
    random_crop: False
    token_warmup_min: 1, token_warmup_step: 0
    is_reg: False
    class_tokens: mksks
    caption_extension: .txt

[Dataset 0]
loading image sizes.
100% 11/11 [00:00<00:00, 439.09it/s]
make buckets
number of images per bucket (including repeats)
bucket 0: resolution (384, 640), count: 250
bucket 1: resolution (512, 512), count: 1000
bucket 2: resolution (576, 448), count: 500
bucket 3: resolution (640, 384), count: 1000
mean ar error (without repeats): 0.0969292682863697
prepare accelerator
Using accelerator 0.15.0 or above.
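The dataset settings above come from the `dataset_config.toml` that the log loads, but the file itself is not shown. A plausible sketch of that file, in kohya-ss sd-scripts' dataset config format, reconstructed only from the values the log prints (every value below appears in the `[Dataset 0]` / `[Subset 0]` dump above):

```toml
# Hypothetical reconstruction of /content/LoRA/config/dataset_config.toml
# based solely on the values reported in the training log.
[general]
enable_bucket = true
min_bucket_reso = 256
max_bucket_reso = 1024
bucket_reso_steps = 64
bucket_no_upscale = false

[[datasets]]
resolution = 512
batch_size = 6

  [[datasets.subsets]]
  image_dir = "/content/LoRA/train_data"
  class_tokens = "mksks"
  num_repeats = 250
  shuffle_caption = true
  keep_tokens = 0
  caption_extension = ".txt"
```

Note that `num_repeats = 250` over 11 images is what produces the "2750 train images with repeating" line.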
loading model for process 0/1
load StableDiffusion checkpoint
loading u-net:
loading vae:
Downloading (…)lve/main/config.json: 100% 4.52k/4.52k [00:00<00:00, 3.13MB/s]
Downloading pytorch_model.bin: 100% 1.71G/1.71G [00:23<00:00, 73.3MB/s]
loading text encoder:
Replace CrossAttention.forward to use xformers
[Dataset 0]
caching latents.
100% 5/5 [00:13<00:00, 2.62s/it]
import network module: lycoris.kohya
Using rank adaptation algo: lora
Use Dropout value: 0.0
Create LyCORIS Module
create LyCORIS for Text Encoder: 72 modules.
Create LyCORIS Module
create LyCORIS for U-Net: 278 modules.
enable LyCORIS for text encoder
enable LyCORIS for U-Net
prepare optimizer, data loader etc.
Deprecated: use prepare_optimizer_params(text_encoder_lr, unet_lr, learning_rate) instead of prepare_optimizer_params(text_encoder_lr, unet_lr)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so...
use 8-bit AdamW optimizer | {}
override steps. steps for 2 epochs: 920
running training
num train images * repeats: 2750
num reg images: 0
num batches per epoch: 460
num epochs: 2
batch size per device: 6
gradient accumulation steps: 1
total optimization steps: 920
steps: 0% 0/920 [00:00
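The reported step counts are consistent with per-bucket batching: because aspect-ratio buckets are batched independently, each bucket contributes its own (possibly partial) final batch, which is why 2750 images at batch size 6 yields 460 batches per epoch rather than ceil(2750/6) = 459. A minimal sketch checking this arithmetic against the bucket counts printed earlier in the log:

```python
import math

# Per-bucket image counts (including repeats) reported by "make buckets" above.
bucket_counts = [250, 1000, 500, 1000]  # buckets 0..3
batch_size = 6
epochs = 2

# Each bucket is batched on its own, so partial batches occur per bucket.
batches_per_epoch = sum(math.ceil(c / batch_size) for c in bucket_counts)
total_steps = batches_per_epoch * epochs  # gradient accumulation steps = 1

print(batches_per_epoch)  # 460, matching "num batches per epoch"
print(total_steps)        # 920, matching "total optimization steps"
```

With gradient accumulation at 1, total optimization steps are simply batches per epoch times the epoch count, matching the "0/920" progress bar at the end of the log.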