Reducing Image Generation Latency on Limited GPU Resources

#4
by feel-123 - opened

The model produces good results, but inference takes about 4–5 minutes per image on a 16 GB GPU. Can you suggest ways to speed up generation?
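A minimal sketch of the usual first-line speedups for a Diffusers-style pipeline on a 16 GB card, assuming a Stable Diffusion-type model (the model id, function names, and default step count below are illustrative, not from this thread): load in fp16, reduce the number of denoising steps (latency scales roughly linearly with steps), and enable attention slicing if the slowdown is really VRAM pressure rather than compute.

```python
def fast_generation_settings(steps: int = 25):
    """Return conservative generation settings that trade a little quality for speed.

    Fewer denoising steps is usually the single biggest win: latency scales
    roughly linearly with num_inference_steps.
    """
    return {"num_inference_steps": steps, "guidance_scale": 7.0}


def build_fast_pipeline(model_id: str):
    # Imported lazily so the settings helper above works without these deps.
    import torch
    from diffusers import StableDiffusionPipeline

    # fp16 halves memory traffic and uses tensor cores on most modern GPUs.
    pipe = StableDiffusionPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")

    # Attention slicing lowers peak VRAM at a small speed cost; helpful if the
    # 4-5 minute latency is caused by memory pressure or offloading.
    pipe.enable_attention_slicing()
    return pipe
```

Usage would look roughly like `pipe = build_fast_pipeline("runwayml/stable-diffusion-v1-5")` followed by `pipe(prompt, **fast_generation_settings()).images[0]`; if the model in question is not a Diffusers pipeline, the same ideas (fp16, fewer sampling steps, memory-efficient attention) still apply but the API calls will differ.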
