Update README.md
I ran 7 fine-tuning sessions for a total of 20 epochs on an NVIDIA GeForce RTX 4070 Ti SUPER with 16 GB of VRAM, taking approximately 57.89 hours to complete 254.1k training steps. Because of the large amount of data (666.7k samples), I split the small components into 5 parts and trained them at batch size 11 and resolution 384 over 5 sessions: the first for 3 epochs and the rest for 2 epochs each. The model was then trained on the big components (46.7k samples) at batch size 9 and resolution 448 for 4 epochs, and finally on the full UIs (65.5k samples) at batch size 7 and resolution 512 for 5 epochs. I first ran the final session with validation, then backed the model up and redid the session without validation so the model was trained on all of the UIs.
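As a rough sanity check, the step counts of the stages above can be tallied with steps per epoch ≈ dataset size / batch size. The figures below are approximations derived from the rounded sample counts in this README; the validation run of the final session (which was later redone) would add roughly another 46.8k steps on top, which lands in the neighborhood of the reported 254.1k total.

```python
import math

def stage_steps(n_samples, batch_size, epochs):
    """Approximate optimizer steps for one stage: ceil(samples/batch) per epoch."""
    return math.ceil(n_samples / batch_size) * epochs

part = 666_700 / 5  # small components split into 5 roughly equal parts
# 5 small-component sessions: first part for 3 epochs, remaining 4 parts for 2 epochs each
small = stage_steps(part, 11, 3) + 4 * stage_steps(part, 11, 2)
big = stage_steps(46_700, 9, 4)    # big components at batch size 9
full = stage_steps(65_500, 7, 5)   # full UIs at batch size 7
total = small + big + full

print(small, big, full, total)  # ~133.3k, ~20.8k, ~46.8k, ~200.9k
```

Adding the redone final session (another ~46.8k steps) brings the estimate to roughly 247.7k, close to the reported 254.1k; the remaining gap is consistent with the rounded dataset sizes used here.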
I have uploaded all of the training configurations used with EveryDream2 in "ed2-config". The main configuration files were used in the following order: "rico_diffusion_v2_comp.json", "rico_diffusion_v2_comp_image.json", "rico_diffusion_v2_comp_icon.json", "rico_diffusion_v2_comp_button.json", "rico_diffusion_v2_comp_list_item.json", "rico_diffusion_v2_comp_big", and finally "rico_diffusion.json". "rico_diffusion_v2_full.json" is the validation version of the final session, which I backed up before redoing it with "rico_diffusion.json".
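The session order above can be sketched as a small driver. Only the file order comes from this README; the `python train.py --config ...` invocation is an assumption about EveryDream2's command line, and "rico_diffusion_v2_comp_big" is kept exactly as it is named above.

```python
# Hypothetical driver for the session order. The train.py CLI shape is an
# assumption; only the config file order is taken from the README.
configs = [
    "rico_diffusion_v2_comp.json",
    "rico_diffusion_v2_comp_image.json",
    "rico_diffusion_v2_comp_icon.json",
    "rico_diffusion_v2_comp_button.json",
    "rico_diffusion_v2_comp_list_item.json",
    "rico_diffusion_v2_comp_big",  # listed without an extension in the README; kept as-is
    "rico_diffusion.json",
]

for cfg in configs:
    # Print the command for each session rather than launching it.
    print(f"python train.py --config ed2-config/{cfg}")
```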
The "v2" suffix exists because I also tried fine-tuning the model in a single session on just the UI screenshots using "rico_diffusion_v1.json", without first fine-tuning on individual UI components, and called that model Rico Diffusion V1. I compared it against the result of "rico_diffusion_v2_full.json", called Rico Diffusion V2, while the model you can access in this repository is simply called Rico Diffusion.
The final model turned out decently well at creating UI mockups. It is still not optimal, especially with many UI components (10 or more), but it is far better than the base model, given that I had to limit the number of training epochs and the batch size due to the limited hardware I had access to.