# WaxFashionStyleGAN
A custom-trained StyleGAN2 model for generating African Wax Print-inspired fashion patterns. Trained on a synthetic dataset of African Wax Print images, the model produces vibrant, culturally rich textile designs, exploring the intersection of AI and African fashion and increasing the representation of African textile design in digital spaces. It is intended for designers, researchers, and creators who want to generate unique fashion patterns.
## Model Overview

- **Model Type:** Fine-tuned StyleGAN2
- **Framework:** PyTorch, StyleGAN2-ADA
- **Resolution:** 1024x1024 images
- **Use Case:** Fashion design, textile prototyping, creative AI pattern generation
## Use Cases

- Generate synthetic African Wax fabric designs
- Assist in textile prototyping and visual merchandising
- Enable designers to explore culturally diverse fashion patterns
## Dataset

This model was trained on a synthetic dataset of African Wax Print patterns using the StyleGAN2 architecture.

- **Name:** AfricanWaxPatterns_5KDataset
- **Type:** Image dataset
- **Size:** ~5,000 curated samples
- **Dataset Link:** paceailab/AfricanWaxPatterns_5KDataset
## How to Use

### In Python (Colab or local)
```bash
git clone https://github.com/researchpace/waxfashion.git
```

```python
import sys
sys.path.append('/content/waxfashion/stylegan2-ada-pytorch')  # make dnnlib/legacy importable

import numpy as np
import PIL.Image
import torch
from huggingface_hub import hf_hub_download

import dnnlib
import legacy

# Download the pre-trained generator checkpoint from the Hugging Face Hub
model_path = hf_hub_download("paceailab/Waxfashion_StyleGAN",
                             "selected_models/styleGAN2ada_Africanwax.pkl")

# Load the generator (G_ema = exponential moving average of the generator weights)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
with dnnlib.util.open_url(model_path) as f:
    G = legacy.load_network_pkl(f)['G_ema'].to(device)

# Generate and display an image
def generate_image(seed=42):
    z = torch.from_numpy(np.random.RandomState(seed).randn(1, G.z_dim)).to(device)
    img = G(z, None)  # None = no class label (unconditional model)
    img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)[0].cpu().numpy()
    return PIL.Image.fromarray(img)

image = generate_image(seed=100)
image.show()
```

Note: in a Colab notebook, prefix the `git clone` command with `!`.
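The `seed` argument is what makes outputs reproducible: the same seed always maps to the same latent vector. The short sketch below (NumPy only, no GPU or model weights needed; `latent_from_seed` is a hypothetical helper name) illustrates the two idioms the snippet above relies on: seeding the latent sampler, and rescaling the generator's [-1, 1] output to uint8 pixels.

```python
import numpy as np

def latent_from_seed(seed, z_dim=512):
    # Same seed -> same latent vector -> same generated image.
    # 512 is StyleGAN2's default latent dimensionality (G.z_dim).
    return np.random.RandomState(seed).randn(1, z_dim)

assert np.array_equal(latent_from_seed(100), latent_from_seed(100))

# Pixel scaling used above: generator output in [-1, 1] -> uint8 in [0, 255].
raw = np.array([-1.0, 0.0, 1.0])
pixels = np.clip(raw * 127.5 + 128, 0, 255).astype(np.uint8)
print(pixels)
```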
## GitHub Repository

For more information and code, please visit the GitHub repository: https://github.com/researchpace/waxfashion
## Training Details for the Selected StyleGAN2 Model

Training was done on a university High-Performance Computing cluster; various hyperparameter settings were explored. The selected model used:
- **GPUs:** 1
- **kimg:** 5000
- **cfg:** stylegan2
- **Gamma (R1 regularization weight):** 10
- **Metrics:** fid50k_full, kid50k_full, pr50k3_full, ppl2_wend, is50k
- **Learning rate:** 0.002
- **Training batch size:** 32
- **Mapping network depth:** 8
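Assuming the standard `stylegan2-ada-pytorch` `train.py` entry point, a training run with these settings could be launched roughly as follows (the `--outdir` and `--data` paths are placeholders; note that `--cfg=stylegan2` already sets the mapping depth of 8 and learning rate of 0.002 listed above, so they need no explicit flags):

```shell
python train.py --outdir=runs --data=datasets/africanwax.zip \
    --gpus=1 --cfg=stylegan2 --kimg=5000 --gamma=10 --batch=32 \
    --metrics=fid50k_full,kid50k_full,pr50k3_full,ppl2_wend,is50k
```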
## Sample Outputs

Sample outputs from the model above are shown at successive training checkpoints: 1000, 2000, 3000, 4000, and 5000 kimg (see the repository for the images).
## Evaluation Metrics
The performance of the WaxFashionStyleGAN model is evaluated using the following metrics:
- **fid50k_full:** Fréchet Inception Distance (FID) computed over 50,000 generated images. Lower FID indicates better quality and diversity of generated images.
- **kid50k_full:** Kernel Inception Distance (KID) computed over 50,000 images; an unbiased alternative to FID. Lower KID indicates better quality of the generated images.
- **pr50k3_full:** Precision and recall computed over 50,000 images. Higher precision indicates higher sample fidelity; higher recall indicates better coverage of the real image distribution.
- **ppl2_wend:** Perceptual Path Length (PPL) measured at path endpoints in W space. Lower values indicate a smoother, more disentangled latent space.
- **is50k:** Inception Score (IS) computed over 50,000 images. Higher IS indicates that the generated images are diverse and of high quality.
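As a rough illustration of what `fid50k_full` measures (this is not the code in `metrics/`, just a minimal sketch): FID is the Fréchet distance between two Gaussians fitted to Inception features of real and generated images. With means and covariances in hand, the distance itself is a few lines of NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2*sqrt(sigma1 @ sigma2)).
    Identical distributions give a distance of 0."""
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # sqrtm can introduce tiny imaginary parts
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

This makes the "lower is better" reading concrete: as the generated-feature statistics approach the real-feature statistics, the distance shrinks toward zero.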
## Files & Versioning
This repository includes the necessary files for using the WaxFashionStyleGAN model:
- stylegan2-ada-pytorch/: Contains the core StyleGAN2 implementation, including:
- dnnlib/: Helper library for managing configurations and other utilities.
- torch_utils/: Contains PyTorch-specific utilities for model manipulation.
- training/: Scripts for model training.
- metrics/: Contains code for calculating evaluation metrics like FID.
- generate.py: Script for generating images from the trained model.
- train.py: Main script to start training the StyleGAN2 model.
- trainv1.py: Alternate training script for different configurations.
- legacy.py: Code for compatibility with older versions of StyleGAN models.
- style_mixing.py: Script for mixing styles between different models or generated images.
- projector.py: Tool to project images into the latent space of the StyleGAN2 model.
- selected_models/: Directory for storing pre-trained model checkpoints.
- metrics_graph.ipynb: Jupyter notebook for analyzing training metrics and model performance.
- config.yml: Configuration file for training parameters.
- requirements.txt: Lists dependencies for running the model.
- environment.yml: Conda environment file to set up the development environment.
- .gitattributes: Git LFS management for large model files.
- model_index.json: Metadata file for model configuration and indexing.
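Assuming `generate.py` is the stock `stylegan2-ada-pytorch` script, images can also be generated from the command line without writing any Python (the `--outdir` path is a placeholder; `--trunc` controls the truncation trick's diversity/quality trade-off):

```shell
python stylegan2-ada-pytorch/generate.py \
    --network=selected_models/styleGAN2ada_Africanwax.pkl \
    --seeds=0-3 --trunc=0.7 --outdir=out
```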
## Related Resources

- StyleGAN2 Paper
- StyleGAN2-ADA GitHub
## Citation
If you use this model, please cite or credit:
```bibtex
@misc{stylegan2025,
  title={WaxFashionStyleGAN},
  author={Pace AI Lab},
  year={2025},
  howpublished={\url{https://huggingface.co/paceailab/WaxFashionStyleGAN}}
}
```