mobled37 committed on
Commit b888171 · verified · 1 Parent(s): dc6b43d

End of training

Files changed (1):
  1. README.md +6 -6
README.md CHANGED
@@ -3,7 +3,7 @@
 license: creativeml-openrail-m
 base_model: None
 datasets:
-- uoft-cs/cifar10
+- vipseg
 tags:
 - stable-diffusion
 - stable-diffusion-diffusers
@@ -14,7 +14,7 @@ inference: true
 
 # Text-to-image finetuning - mobled37/vae-model-finetuned
 
-This pipeline was finetuned from **None** on the **uoft-cs/cifar10** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: Nothing:
+This pipeline was finetuned from **None** on the **vipseg** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: Nothing:
 
 
 ## Training info
@@ -22,11 +22,11 @@ This pipeline was finetuned from **None** on the **uoft-cs/cifar10** dataset. Be
 These are the key hyperparameters used during training:
 
 * Epochs: 100
-* Learning rate: 0.0006144
-* Batch size: 2048
+* Learning rate: 4.8e-06
+* Batch size: 16
 * Gradient accumulation steps: 2
-* Image resolution: 28
+* Image resolution: 30
 * Mixed-precision: fp16
 
 
-More information on all the CLI arguments and the environment are available on your [`wandb` run page](https://wandb.ai/wearesameasyou/vae-fine-tune/runs/8ukoviyx).
+More information on all the CLI arguments and the environment are available on your [`wandb` run page](https://wandb.ai/wearesameasyou/vae-fine-tune/runs/bgysrmyj).
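The commit lowers the per-step batch size from 2048 to 16 while keeping 2 gradient accumulation steps, so the effective batch size the optimizer sees also drops. A minimal sketch of that relationship (assuming single-process training; multi-GPU runs would multiply by the number of processes):

```python
# Effective batch size implied by the updated hyperparameters
# (assumed single process, no data parallelism).
train_batch_size = 16              # per-step batch size after this commit
gradient_accumulation_steps = 2    # unchanged by this commit
effective_batch_size = train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 32
```

For comparison, the pre-commit values (2048 × 2) gave an effective batch size of 4096, which is consistent with the much larger learning rate (0.0006144) used before.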