# DCGAN: Human Face Generation (CelebA)

A Deep Convolutional GAN trained on CelebA to generate 64×64 human face images.

## Config

| Property | Value |
|---|---|
| Architecture | DCGAN (Radford et al., 2015) |
| Dataset | CelebA, train split (162,770 images) |
| Resolution | 64×64 RGB |
| Latent dim | 100 |
| Epochs | 50 |
| Batch size | 256 |
| Optimizer | Adam, lr = 0.0002, β₁ = 0.5 |
| Mixed precision | `torch.bfloat16` |
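Only the generator ships in this repo. For reference, the standard DCGAN discriminator mirrors the generator above (strided `Conv2d` + `LeakyReLU` instead of `ConvTranspose2d` + `ReLU`); the sketch below is our reconstruction of that textbook layout, not code from this repo.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Standard DCGAN critic: 3x64x64 image -> real/fake probability."""
    def __init__(self, ndf=64, nc=3):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv2d(nc,    ndf,   4, 2, 1, bias=False), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf,   ndf*2, 4, 2, 1, bias=False), nn.BatchNorm2d(ndf*2), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf*2, ndf*4, 4, 2, 1, bias=False), nn.BatchNorm2d(ndf*4), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf*4, ndf*8, 4, 2, 1, bias=False), nn.BatchNorm2d(ndf*8), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf*8, 1,     4, 1, 0, bias=False), nn.Sigmoid()  # 4x4 -> 1x1 score
        )
    def forward(self, x):
        return self.main(x).view(-1)  # one probability per image
```

Each strided conv halves the spatial size (64 → 32 → 16 → 8 → 4), the exact inverse of the generator's upsampling path.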

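The weights were presumably trained with `torch.autocast` given the bfloat16 entry above. A minimal single-update sketch under stated assumptions: tiny stand-in networks (swap in the real Generator/Discriminator), `BCEWithLogitsLoss` computed in fp32 outside the autocast region, batch size 8, and CPU autocast (use `"cuda"` on GPU).

```python
import torch
import torch.nn as nn

# Tiny stand-in nets so the sketch runs anywhere; not the card's real models.
netG = nn.Sequential(nn.ConvTranspose2d(100, 3, 64, 1, 0), nn.Tanh())
netD = nn.Sequential(nn.Conv2d(3, 1, 64, 1, 0), nn.Flatten())  # raw logits

criterion = nn.BCEWithLogitsLoss()
optG = torch.optim.Adam(netG.parameters(), lr=2e-4, betas=(0.5, 0.999))
optD = torch.optim.Adam(netD.parameters(), lr=2e-4, betas=(0.5, 0.999))

real = torch.rand(8, 3, 64, 64) * 2 - 1   # stand-in for a CelebA batch in [-1, 1]
noise = torch.randn(8, 100, 1, 1)

# Discriminator step: push real -> 1, fake -> 0.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    fake = netG(noise)
    d_real = netD(real)
    d_fake = netD(fake.detach())          # detach: no gradient into G here
lossD = criterion(d_real.float(), torch.ones(8, 1)) + \
        criterion(d_fake.float(), torch.zeros(8, 1))
optD.zero_grad(); lossD.backward(); optD.step()

# Generator step: fool the updated D into predicting 1 on fakes.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    d_fake2 = netD(fake)
lossG = criterion(d_fake2.float(), torch.ones(8, 1))
optG.zero_grad(); lossG.backward(); optG.step()
```

Computing the loss in fp32 while the forward passes run under bf16 autocast is the usual pattern; unlike fp16, bfloat16 needs no gradient scaler.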
## Usage

```python
import torch, torch.nn as nn
from huggingface_hub import hf_hub_download

class Generator(nn.Module):
    def __init__(self, nz=100, ngf=64, nc=3):
        super().__init__()
        self.main = nn.Sequential(
            nn.ConvTranspose2d(nz,    ngf*8, 4, 1, 0, bias=False), nn.BatchNorm2d(ngf*8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf*8, ngf*4, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf*4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf*4, ngf*2, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf*2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf*2, ngf,   4, 2, 1, bias=False), nn.BatchNorm2d(ngf),   nn.ReLU(True),
            nn.ConvTranspose2d(ngf,   nc,    4, 2, 1, bias=False), nn.Tanh()
        )
    def forward(self, x): return self.main(x)

weights = hf_hub_download(repo_id="arzumanabbasov/DCGAN_CELEBA_50_EPOCH", filename="generator.pt")
netG = Generator()
netG.load_state_dict(torch.load(weights, map_location="cpu"))
netG.eval()

with torch.no_grad():
    noise = torch.randn(16, 100, 1, 1)
    faces = netG(noise)       # (16, 3, 64, 64) in [-1, 1]
    faces = (faces + 1) / 2   # -> [0, 1] for display
```
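To view or export the samples, the `[0, 1]` tensor can be converted to uint8 HWC arrays for PIL or matplotlib. The helper below is our addition, not part of this repo; it takes the raw `[-1, 1]` generator output directly.

```python
import torch

def to_uint8_images(faces: torch.Tensor) -> torch.Tensor:
    """(N, 3, H, W) in [-1, 1] -> (N, H, W, 3) uint8, ready for PIL/matplotlib."""
    imgs = ((faces.clamp(-1, 1) + 1) / 2 * 255).round().to(torch.uint8)
    return imgs.permute(0, 2, 3, 1).contiguous()

# Example with a dummy tanh-range batch standing in for netG output.
fake = torch.tanh(torch.randn(16, 3, 64, 64))
imgs = to_uint8_images(fake)
print(imgs.shape, imgs.dtype)  # torch.Size([16, 64, 64, 3]) torch.uint8
```

Alternatively, `torchvision.utils.save_image(faces, "grid.png", nrow=4)` writes a tiled grid in one call, given `faces` already rescaled to `[0, 1]`.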