* Part 1 consists of 89,785 HQ 1024x1024 curated face images. It uses "inspiration" images from the Artstation-Artistic-face-HQ (AAHQ) dataset, the Close-Up Humans dataset, and the UIBVFED dataset.
* Part 2 consists of 91,361 HQ 1024x1024 curated face images. It uses "inspiration" images from the Face Synthetics dataset and from sampling the Stable Diffusion v1.4 text-to-image generator with varied face portrait prompts (see the sampling sketch after this list).
* Part 3 consists of 118,358 HQ 1024x1024 curated face images. It uses "inspiration" images obtained by sampling the StyleGAN2 mapping network with very high truncation psi coefficients to increase the diversity of the generations. Here, the e4e encoder is essentially used as a new kind of truncation trick.
* Part 4 consists of 125,754 HQ 1024x1024 curated face images. It uses "inspiration" images obtained by sampling the Stable Diffusion v2.1 text-to-image generator with varied face portrait prompts.
* Synthetic Faces High Quality - Text 2 Image (SFHQ-T2I): this dataset consists of 122,726 high quality 1024x1024 curated face images. It was created by generating random prompt strings, sending them to multiple text-to-image models (Flux1.pro, Flux1.dev, Flux1.schnell, SDXL, DALL-E 3), and dropping bad generations via a semi-manual curation process.
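
Below is a minimal sketch of how "inspiration" candidates might be sampled from Stable Diffusion v1.4 with varied face portrait prompts, as referenced in Part 2. It assumes the Hugging Face `diffusers` library and a hypothetical prompt template; the actual prompts, settings, and generation code used for this dataset are not documented in the description above, so treat this purely as an illustration of the approach.

```python
# Sketch only: the diffusers library and the prompt template below are assumptions,
# not the actual SFHQ generation code.
import random
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Hypothetical building blocks for "varied face portrait prompts"
styles = ["photorealistic", "studio lighting", "cinematic", "oil painting"]
subjects = ["young woman", "elderly man", "teenager", "middle-aged person"]

for i in range(4):
    prompt = f"{random.choice(styles)} face portrait of a {random.choice(subjects)}"
    image = pipe(prompt).images[0]          # PIL image, 512x512 by default for SD v1.4
    image.save(f"inspiration_{i:03d}.png")  # candidates are then curated; final dataset images are 1024x1024
```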