# Synth.Eye GAN
Synth.Eye GAN is a data-driven extension of the Synth.Eye platform. It replaces physics-based Blender rendering with StyleGAN2-ADA to generate synthetic training images of fingerprint residue on industrial parts, which are used to train YOLO defect detection models for real-time inspection.
## Models
| File | Type | Resolution | Description |
|---|---|---|---|
| `front.pkl` | StyleGAN2-ADA generator | 256×256 px | Generates industrial part front-side images |
| `back.pkl` | StyleGAN2-ADA generator | 256×256 px | Generates industrial part back-side images |
| `fingerprint.pkl` | StyleGAN2-ADA generator | 128×128 px | Generates fingerprint residue images |
| `yolov8m_object_detection` | YOLOv8 Medium | imgsz 640 | Detects part orientation: `Cls_Obj_Front_Side` (0), `Cls_Obj_Back_Side` (1) |
| `yolov8m_defect_detection` | YOLOv8 Medium | imgsz 640 | Detects fingerprint residue defects: `Cls_Defect_Fingerprint` (0) |
## Workflow
- Synthetic data generation — GAN models generate front-side and fingerprint images. A compositing pipeline blends them with pressure simulation, motion blur, and alpha effects to produce labeled training images.
- YOLO training — Both YOLO models are trained entirely on the synthetic composite dataset.
- Real-time inspection — A PyQt5 desktop application runs both YOLO models live on a Basler camera feed for industrial surface inspection.
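The compositing step described above might look like the following minimal sketch. The function name, the alpha-mask convention, and the pressure-to-alpha mapping are illustrative assumptions, not code from the repository:

```python
import numpy as np

def composite_fingerprint(part, fingerprint, alpha_mask, top_left):
    """Blend a fingerprint patch onto a part image with an alpha mask.

    part:        (H, W, 3) uint8 part image (e.g. a GAN-generated front side)
    fingerprint: (h, w, 3) uint8 fingerprint patch
    alpha_mask:  (h, w) floats in [0, 1]; could be scaled by simulated pressure
    top_left:    (y, x) paste position on the part image
    """
    out = part.astype(np.float32).copy()
    y, x = top_left
    h, w = alpha_mask.shape
    a = alpha_mask[..., None]                      # broadcast over channels
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = (1 - a) * region + a * fingerprint.astype(np.float32)
    return out.clip(0, 255).astype(np.uint8)
```

The bounding box of the pasted mask directly yields the YOLO label for the synthetic image, which is what makes the generated dataset self-annotating.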
## Visual Examples
### GAN Output: Real vs. Synthetic
Left: real training photos captured at INTEMAC Research Center. Right: GAN-generated synthetic images from front.pkl, back.pkl, and fingerprint.pkl.
### YOLO Inference
Dual-model inference on a real industrial part — blue frame: yolov8m_object_detection (part orientation), orange box: yolov8m_defect_detection (fingerprint residue defect).
## Training Data
| Model | Dataset | Size | Availability |
|---|---|---|---|
| Front/Back GAN | Proprietary photos from INTEMAC Research Center | ~130 images per side | Not public; cropped versions available on HF Datasets |
| Fingerprint GAN | SOCOFing | 6,000 scanned fingerprint images | Public (Kaggle) |
| YOLO models | Synthetic composites from Dataset_v2 and Dataset_v3 | See HF Datasets | Public |
## Architecture
StyleGAN2-ADA generators use a custom fork at LukasMoravansky/stylegan2-ada-pytorch, based on the original NVlabs/stylegan2-ada-pytorch.
Fork additions (changelog):
- PyTorch 2.x compatibility fix (custom bilinear interpolation for R1 regularization)
- Windows support via the `STYLEGAN2_FORCE_REF_IMPL` environment variable
- Training launcher with named configuration presets
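The variable name comes from the fork's changelog; how its value is interpreted is an assumption here (treating `"1"` as "enabled"). Setting it before the StyleGAN2-ADA code is imported might look like:

```python
import os

# Force the fork's reference (non-CUDA-extension) ops, e.g. on Windows where
# the custom CUDA plugins may fail to build. Set this BEFORE importing the
# StyleGAN2-ADA modules so it takes effect; "1" meaning "enabled" is assumed.
os.environ["STYLEGAN2_FORCE_REF_IMPL"] = "1"
```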
YOLO models use Ultralytics YOLOv8 Medium (≥ 8.4.48).
## Usage
### GAN inference (requires the StyleGAN2-ADA fork)
```python
import pickle
import torch

# Unpickling resolves classes from the StyleGAN2-ADA fork's modules,
# so the fork must be importable (e.g. on PYTHONPATH) when loading.
with open("front.pkl", "rb") as f:
    G = pickle.load(f)["G_ema"].cuda()

z = torch.randn(1, G.z_dim).cuda()  # latent vector
c = torch.zeros(1, G.c_dim).cuda()  # class-conditioning labels (unused here)
img = G(z, c)                       # (1, 3, 256, 256), values in [-1, 1]
```
### YOLO inference
```python
from ultralytics import YOLO

# Weight file name as listed in the Models table above.
model = YOLO("yolov8m_object_detection")
results = model("image.jpg", imgsz=640)
```
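In the dual-model pipeline, raw class IDs from both detectors have to be turned into an inspection verdict. A small, pure-Python sketch of that step, using the class names from the Models table (the helper and its output format are illustrative, not the PyQt5 app's actual code):

```python
# Class maps taken from the Models table above.
ORIENTATION_CLASSES = {0: "Cls_Obj_Front_Side", 1: "Cls_Obj_Back_Side"}
DEFECT_CLASSES = {0: "Cls_Defect_Fingerprint"}

def summarize(orientation_ids, defect_ids):
    """Turn class IDs from the two detectors into a readable verdict."""
    sides = {ORIENTATION_CLASSES[i] for i in orientation_ids}
    n_defects = sum(1 for i in defect_ids if i in DEFECT_CLASSES)
    return {"sides": sorted(sides), "fingerprint_defects": n_defects}
```

With Ultralytics results, the ID lists would come from each result's `boxes.cls` tensor.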
## Links
| Resource | Link |
|---|---|
| GitHub (source) | LukasMoravansky/Synth_Eye_GAN |
| Training dataset | LukasMoravansky/Synth-Eye-GAN-Data |
| StyleGAN2-ADA fork | LukasMoravansky/stylegan2-ada-pytorch |
| Original StyleGAN2-ADA | NVlabs/stylegan2-ada-pytorch |
| Synth.Eye (predecessor) | LukasMoravansky/Synth_Eye |
## ⚠️ Limitations
- Small training set: Front/Back GANs were trained on ~130 real images per side. Output diversity is limited accordingly.
- Domain-specific: All models are tuned to a single industrial part type from INTEMAC Research Center. They are not general-purpose generators or detectors.
- Fingerprint domain gap: The fingerprint GAN was trained on scanned ink fingerprints (SOCOFing), which differ from optical-camera fingerprint residue on metal surfaces. Expect some visual mismatch relative to the target deployment domain.