End of training
README.md CHANGED
@@ -21,7 +21,7 @@ This pipeline was finetuned from **None** on the **vipseg** dataset. Below are s
 
 These are the key hyperparameters used during training:
 
-* Epochs:
+* Epochs: 1000
 * Learning rate: 1.92e-05
 * Batch size: 64
 * Gradient accumulation steps: 2
@@ -29,4 +29,4 @@ These are the key hyperparameters used during training:
 * Mixed-precision: fp16
 
 
-More information on all the CLI arguments and the environment are available on your [`wandb` run page](https://wandb.ai/wearesameasyou/vae-fine-tune/runs/
+More information on all the CLI arguments and the environment are available on your [`wandb` run page](https://wandb.ai/wearesameasyou/vae-fine-tune/runs/032v2xm0).
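For reference, the batch size and gradient-accumulation values listed in the diff determine the effective batch size per optimizer step. A minimal sketch in plain Python (variable names are illustrative, not taken from the training script):

```python
# Values taken from the hyperparameter list above
batch_size = 64        # samples per forward/backward pass
grad_accum_steps = 2   # gradients accumulated before each optimizer step

# Effective batch size seen by the optimizer at each update
effective_batch = batch_size * grad_accum_steps
print(effective_batch)  # 128
```

Accumulating gradients over 2 steps doubles the effective batch without doubling per-step memory, which is a common pairing with fp16 mixed precision on memory-constrained GPUs.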