Commit 9073ee3 (parent: 4e4b6a7): Update README.md
#### Stable:
- [vae.sygil_muse_v0.1.pt](https://huggingface.co/Sygil/Sygil-Muse/blob/main/vae.sygil_muse_v0.1.pt): Trained from scratch for 3.0M steps with **dim: 128** and **vq_codebook_size: 256**.
- [maskgit.sygil_muse_v0.1.pt](https://huggingface.co/Sygil/Sygil-Muse/blob/main/maskgit.sygil_muse_v0.1.pt): Maskgit trained from the VAE for 3.46M steps.
- [vae.sygil_muse_v0.5.pt](https://huggingface.co/Sygil/Sygil-Muse/blob/main/vae.sygil_muse_v0.5.pt): Trained from scratch for 1.99M steps with **dim: 128** and **vq_codebook_size: 8192**.

#### Beta:
- [vae.1999500.pt](https://huggingface.co/Sygil/Sygil-Muse/blob/main/vae.1999500.pt): Trained from scratch for 1.99M steps with a higher **vq_codebook_size** than before.
- [maskgit.39000.pt](https://huggingface.co/Sygil/Sygil-Muse/blob/main/maskgit.39000.pt): Maskgit trained from the VAE for 39K steps with `heads 16` and `depth 22` for testing. These values have a large impact on performance and also increase VRAM usage, so this checkpoint is for testing only. Quality improved substantially with far less training, which is what we want, but we still need to find a balance between quality and performance.
**Hardware and others**
- **Hardware:** 1 x Nvidia RTX 3050 GPU
- **Hours Trained:** NaN.
- **Gradient Accumulations:** 10
- **Batch:** 1
- **Learning Rate:** 1e-4
- **Learning Rate Scheduler:** `cosine_with_restarts`
- **Scheduler Power:** 0.5
- **Optimizer:** Adam
- **Warmup Steps:** 10,000
- **Number of Cycles:** 10,000
- **Resolution/Image Size:** First trained at a resolution of 64x64, then increased to 256x256 and then to 512x512. Check the note down below for more details on this.
- **Dimension:** 128
- **vq_codebook_dim:** 4096
- **vq_codebook_size:** 8192
- **heads:** 8
- **depth:** 4
- **Random Crop:** True
- **Total Training Steps:** 1,999,500
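As a rough sketch of how the `cosine_with_restarts` schedule above behaves (linear warmup for the first 10,000 steps, then cosine decay split into hard restarts across the configured number of cycles), the learning rate at a given step can be approximated as follows. This is an illustration only; the exact formula used by the scheduler during training may differ.

```python
import math

def lr_at_step(step, base_lr=1e-4, warmup_steps=10_000,
               total_steps=1_999_500, num_cycles=10_000):
    """Approximate LR for linear warmup + cosine with hard restarts.

    Sketch for illustration; the library scheduler used in training
    may implement the restart boundaries slightly differently.
    """
    if step < warmup_steps:
        # Linear warmup from 0 to base_lr.
        return base_lr * step / warmup_steps
    # Fraction of the post-warmup phase completed, split into cycles.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    if progress >= 1.0:
        return 0.0
    cycle_progress = (progress * num_cycles) % 1.0
    # Each cycle decays from base_lr to 0 along a half cosine.
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * cycle_progress))
```

With 10,000 cycles over roughly 2M steps, each restart cycle lasts only a few hundred steps, so the learning rate oscillates frequently between `1e-4` and 0 after warmup.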
Note: With Muse we can change the `image_size` (resolution) at any time without having to retrain the model from scratch. This lets us first train the model at a low resolution with the same `dim` and `vq_codebook_size` to train faster, and then increase the `image_size` to a higher resolution once the model has trained enough.
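To make the note concrete: because the VAE is convolutional, the same weights produce a token grid whose size scales with the input resolution, so only the number of tokens the Maskgit sees changes when the resolution is raised. The sketch below assumes a hypothetical downsampling factor of 8 purely for illustration; the actual factor depends on the VAE architecture.

```python
def token_grid(image_size, downsample_factor=8):
    """Tokens per side and total tokens at a given training resolution.

    NOTE: downsample_factor=8 is an assumed value for illustration;
    the real factor depends on the VAE's number of downsampling stages.
    """
    assert image_size % downsample_factor == 0
    side = image_size // downsample_factor
    return side, side * side

# Training at 64x64 first means far fewer tokens per image, so each
# step is much cheaper; the resolution is raised later with the same weights.
for size in (64, 256, 512):
    print(size, token_grid(size))
```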