# Pixel Diffusion Model
A conditional Denoising Diffusion Probabilistic Model (DDPM) for generating 16x16 pixel art sprites with class-based control and real-time visualization.
## Overview
This project operates in two phases: a training phase (detailed in Training.ipynb) and an inference/application phase (detailed in app.py). The model from the first phase is loaded into the second to create an interactive application for generating pixel art sprites.
## How It Works: A Detailed Breakdown
The core of this project is a conditional Denoising Diffusion Probabilistic Model (DDPM). The process can be broken down into data handling, model architecture, training, and inference.
### 1. Data and Scheduling
- Data Handling: The model is trained on 16x16 pixel art sprites. The `PixelArtDataset` class in the training notebook is custom-built for this data.
- Noise Schedule: A `DiffusionSchedule` class implements a cosine noise schedule. This defines how noise is added to an image over `T=1000` timesteps. The model's job is to learn how to reverse this process, starting from pure noise and gradually denoising it back to a clean image.
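The exact internals of `DiffusionSchedule` live in the notebook; as a minimal sketch, a cosine schedule in the style of Nichol & Dhariwal (2021) and the forward (noising) process it defines might look like this (PyTorch assumed; the names here are illustrative, not the notebook's):

```python
import math
import torch

def cosine_alpha_bar(T: int = 1000, s: float = 0.008) -> torch.Tensor:
    """Cumulative signal level alpha_bar: decays smoothly from ~1 (clean)
    at t=0 to ~0 (pure noise) at t=T, following the cosine schedule."""
    t = torch.arange(T + 1, dtype=torch.float32) / T
    f = torch.cos((t + s) / (1 + s) * math.pi / 2) ** 2
    return f / f[0]

def add_noise(x0: torch.Tensor, t: torch.Tensor, alpha_bar: torch.Tensor):
    """Forward process q(x_t | x_0): blend the clean image with Gaussian
    noise according to the schedule. Returns the noisy image and the noise."""
    noise = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, 1, 1, 1)  # broadcast over (B, C, H, W)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise, noise
```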
### 2. The Model: ContextUNet
The model's "brain" is the `ContextUNet`. This architecture is specifically designed to handle and be controlled by external information.
- U-Net Structure: It is a standard U-Net with a downsampling path, a bottleneck, and an upsampling path. Skip-connections link the downsampling layers to the upsampling layers, which helps the model preserve fine details (crucial for pixel art).
- Context Injection: The model is given three pieces of information at every step:
  - The Noisy Image (`x_t`)
  - The Timestep (`t`)
  - The Class Condition (`c`): the control mechanism (e.g., "Characters" or "Monsters").
- Embedding Combination: The time and class embeddings are combined (`emb = t_emb + c_emb`) and injected into every `ResidualBlock`. This ensures the model is constantly reminded of the target category and current noise level.
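The project's `ResidualBlock` is not reproduced here, but a common pattern for injecting the combined embedding looks like the sketch below (layer sizes and names are assumptions for illustration):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Illustrative residual block that consumes the combined context
    embedding (emb = t_emb + c_emb) injected throughout the U-Net."""
    def __init__(self, channels: int, emb_dim: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.emb_proj = nn.Linear(emb_dim, channels)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor, emb: torch.Tensor) -> torch.Tensor:
        h = self.act(self.conv1(x))
        # Inject context: project the embedding to the channel dimension
        # and broadcast it over the spatial dimensions.
        h = h + self.emb_proj(emb)[:, :, None, None]
        h = self.act(self.conv2(h))
        return x + h  # residual connection preserves fine pixel detail
```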
### 3. Training: Learning to Denoise
The training loop teaches the model to predict the noise that was added to a clean image:
1. Load a clean image `x` and its label `c`.
2. Choose a random timestep `t`.
3. Add noise according to the cosine schedule.
4. Feed the noisy image, `t`, and `c` into the `ContextUNet`.
5. Optimize using Mean Squared Error (MSE) between the predicted and actual noise.
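A sketch of one such step, assuming the `model(x_t, t, c)` call signature and a length-`T` `alpha_bar` tensor of cumulative signal levels (e.g., `cosine_alpha_bar(T)[1:]` from the earlier sketch); this is illustrative, not the notebook's exact code:

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, x0, c, alpha_bar, T=1000):
    """One DDPM training step: add noise to a clean batch, then train the
    model to predict exactly the noise that was added."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)  # random timesteps
    noise = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise  # cosine-schedule forward process
    eps_pred = model(x_t, t, c)                        # ContextUNet's noise prediction
    loss = F.mse_loss(eps_pred, noise)                 # MSE: predicted vs. actual noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```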
### 4. Inference: Guided Generation
Sampling uses Classifier-Free Guidance (CFG) for explicit control over the generated class:
- Start: Pure random noise.
- Denoising Loop: Iterate backward from `T-1` to `0`.
- CFG Step: The model runs twice per step, once conditionally and once unconditionally.
- Guidance: `eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)`.
- Step: Use the guided noise estimate to slightly clean the image.
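A minimal sketch of that loop, written as a generator so every intermediate image can drive the live UI described below. The `null_class` token for the unconditional pass and the choice of variance `sigma_t^2 = beta_t` are assumptions; `betas` and `alpha_bar` are length-`T` tensors with `alpha_bar` the running product of `1 - betas`:

```python
import torch

@torch.no_grad()
def sample_cfg(model, shape, c, betas, alpha_bar,
               guidance_scale=3.0, null_class=None):
    """DDPM sampling with classifier-free guidance; yields each intermediate
    image so the UI can render a real-time "fade-in"."""
    T = betas.shape[0]
    alphas = 1.0 - betas
    x = torch.randn(shape)  # Start: pure random noise
    for t in reversed(range(T)):
        tt = torch.full((shape[0],), t, dtype=torch.long)
        eps_cond = model(x, tt, c)             # conditional pass
        eps_uncond = model(x, tt, null_class)  # unconditional pass
        eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)
        # Reverse step: remove the predicted noise, then re-inject a little
        # fresh noise everywhere except at the final step.
        x = (x - betas[t] / (1.0 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
        yield x
```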
## Key Improvements
- Cosine Noise Schedule: Improves sample quality and training stability compared to linear schedules.
- Classifier-Free Guidance (CFG): Allows users to control how strictly the model follows the class prompt.
- Exponential Moving Average (EMA): Uses a "shadow" copy of the weights to produce more stable and higher-quality final images (see the sketch after this list).
- Nearest Neighbor Interpolation: Preserves the sharp, blocky nature of pixel art during resizing.
- Attention Blocks: Learns long-range spatial relationships in deeper U-Net layers.
- Live-Updating Generator: Yields intermediate denoising steps for a real-time "fade-in" effect in the UI (as in the sampling sketch above).
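As an illustration of the EMA idea (the decay value and class shape are assumptions, not the project's exact code):

```python
import copy
import torch

class EMA:
    """Maintains a "shadow" copy of the model whose weights are an
    exponential moving average of the training weights; sampling from
    the shadow tends to give more stable, higher-quality images."""
    def __init__(self, model: torch.nn.Module, decay: float = 0.999):
        self.decay = decay
        self.shadow = copy.deepcopy(model).eval()
        for p in self.shadow.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model: torch.nn.Module):
        # shadow = decay * shadow + (1 - decay) * current weights
        for s, p in zip(self.shadow.parameters(), model.parameters()):
            s.mul_(self.decay).add_(p, alpha=1.0 - self.decay)
```

Calling `ema.update(model)` after each optimizer step keeps the shadow weights trailing the training weights smoothly.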
## Technical Details
- Architecture: Conditional U-Net with attention blocks
- Timesteps: 1000 diffusion steps
- Resolution: 16x16 pixels (upscaled to 256x256; see the upscaling sketch below)
- Guidance: Classifier-Free Guidance (CFG)
- Noise Schedule: Cosine schedule
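For the upscaling step, nearest-neighbor resampling (shown here with Pillow purely as an illustration; the app may implement this differently) keeps each source pixel a crisp block instead of blurring it:

```python
from PIL import Image

def upscale_sprite(sprite: Image.Image, size: int = 256) -> Image.Image:
    """Upscale a 16x16 sprite so each source pixel becomes a sharp block."""
    return sprite.resize((size, size), resample=Image.NEAREST)
```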
## License
This project is licensed under the MIT License.
## Acknowledgments
Inspiration drawn from modern diffusion research including DDPM and CFG techniques.