Text-to-image finetuning - mobled37/vae-model-finetuned

This pipeline was finetuned on the vipseg dataset; the base checkpoint was not recorded in the training metadata (logged as `None`). No example prompts or generated sample images were recorded for this run.
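A minimal inference sketch, assuming this checkpoint is a standard `diffusers` pipeline hosted on the Hub (the pipeline class and prompt below are illustrative, not recorded in this card):

```python
MODEL_ID = "mobled37/vae-model-finetuned"

def load_pipeline(model_id: str = MODEL_ID, device: str = "cuda"):
    """Load the finetuned pipeline in fp16, matching the training precision.

    Imports are kept local so the sketch can be read without diffusers installed.
    """
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
    return pipe.to(device)

if __name__ == "__main__":
    pipe = load_pipeline()
    # Hypothetical prompt; the card does not record the prompts actually used.
    image = pipe("a street scene with segmented objects").images[0]
    image.save("sample.png")
```

`DiffusionPipeline.from_pretrained` resolves the concrete pipeline class from the repo's `model_index.json`, so this works without knowing the exact pipeline type in advance.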

Training info

These are the key hyperparameters used during training:

  • Epochs: 1000
  • Learning rate: 1.92e-05
  • Batch size: 64
  • Gradient accumulation steps: 2
  • Image resolution: 30
  • Mixed-precision: fp16
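A quick sanity check implied by the hyperparameters above (a simple derivation, not taken from the training script): with gradient accumulation, each optimizer step effectively sees `batch_size * gradient_accumulation_steps` samples.

```python
# Values from the hyperparameter list above.
batch_size = 64
gradient_accumulation_steps = 2

# Samples contributing to each optimizer update.
effective_batch_size = batch_size * gradient_accumulation_steps
print(effective_batch_size)  # → 128
```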

More information on the full set of CLI arguments and the training environment is available on the associated wandb run page.
