---
license: mit
task_categories:
- image-classification
language:
- en
tags:
- mnist
- image
- digit
- synthetic
- houdini
pretty_name: MNIST Bakery Dataset
size_categories:
- 10K<n<100K
---
# 🧁 MNIST Bakery Dataset

A procedurally synthesized variant of the classic MNIST dataset, created using **SideFX Houdini** and designed for experimentation in **data augmentation**, **synthetic data generation**, and **model robustness research**.
See the [ML-Research](https://github.com/atcarter714/ML-Research) repository on GitHub for Python notebooks, experiments, and the Houdini scene files.

---
## 🎯 Purpose
This dataset demonstrates how **procedural generation pipelines** in 3D tools like Houdini can be used to create **high-quality synthetic training data** for machine learning tasks. It is intended for:
- Benchmarking model performance on synthetic vs. real data
- Training models in **low-data** or **zero-shot** settings
- Developing robust classifiers that generalize beyond typical datasets
- Evaluating augmentation and generalization strategies in vision models
---
## 🛠️ Generation Pipeline
All data was generated using the `.hip` scene:
```bash
./houdini/digitgen_v02.hip
```
## 🧪 Methodology
### 1. Procedural Digit Assembly
- Each digit `0`–`9` is generated using a random font in each frame via Houdini's **Font SOP**.
- Digits are arranged in a clean **8×8 grid**, forming sprite sheets with **64 digits per render**.
### 2. Scene Variability
- Fonts are randomly selected per frame.
- Procedural distortions are applied, including:
  - Rotation
  - Translation
  - Skew
  - Mountain noise displacement
- This ensures high variability across samples.
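For experiments outside Houdini, a comparable per-tile jitter (rotation, translation, skew, and faint noise) can be approximated in Python with PIL. This is a minimal sketch, not the Houdini pipeline itself: the `jitter_digit` helper and all parameter ranges are illustrative assumptions.

```python
import random
from PIL import Image, ImageChops

def jitter_digit(img: Image.Image, rng: random.Random) -> Image.Image:
    """Apply rotation, translation, skew, and noise roughly analogous
    to the Houdini distortions (parameter ranges are illustrative)."""
    w, h = img.size
    # Rotation: small random angle, keeping the original tile size
    img = img.rotate(rng.uniform(-15, 15), fillcolor=0)
    # Translation + skew via one affine map: x' = x + skew*y + tx, y' = y + ty
    tx, ty = rng.uniform(-2, 2), rng.uniform(-2, 2)
    skew = rng.uniform(-0.2, 0.2)
    img = img.transform((w, h), Image.AFFINE, (1, skew, tx, 0, 1, ty), fillcolor=0)
    # Faint additive noise field as a stand-in for Mountain-style displacement
    noise = Image.effect_noise((w, h), 8).point(lambda p: p // 8)
    return ImageChops.add(img, noise.convert(img.mode))
```

The same seed yields the same jitter, which makes augmented runs reproducible.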
### 3. Rendering
- Scene renders are executed via **Mantra** or **Karma**.
- Output format: **grayscale 224×224 px** sprite sheets (`.exr` or `.jpg`).
### 4. Compositing & Cropping
- A **COP2 network** slices the sprite sheet into **28×28** digit tiles.
- Each tile is labeled by its original digit and saved to:
```
./output/0/img_00001.jpg
./output/1/img_00001.jpg
...
```
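The slicing step can also be reproduced in plain Python. Below is a minimal PIL sketch that cuts a 224×224 sheet into sixty-four 28×28 tiles; how tiles map to digit labels depends on the sheet layout, so the labeling rule is left to the caller.

```python
from PIL import Image

def slice_sprite_sheet(sheet: Image.Image, grid: int = 8, tile: int = 28):
    """Cut a (grid*tile) x (grid*tile) sprite sheet into grid*grid tiles,
    returned row-major as a list of PIL images."""
    tiles = []
    for row in range(grid):
        for col in range(grid):
            box = (col * tile, row * tile, (col + 1) * tile, (row + 1) * tile)
            tiles.append(sheet.crop(box))
    return tiles

# Usage: tiles = slice_sprite_sheet(Image.open("sheet_0001.jpg").convert("L"))
# then save each tile under ./output/<digit>/ according to your sheet layout.
```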
### 🧾 Dataset Structure
```bash
mnist_bakery_data/
├── 0/
│   ├── img_00001.jpg
│   └── ...
├── 1/
│   ├── img_00001.jpg
│   └── ...
...
└── 9/
    └── img_00001.jpg
```
- All images: grayscale `.jpg`, 28×28 resolution
- Total: **40,960 samples**
- ~4,096 samples per digit
---
## 📊 Statistics
| Set       | Samples | Mean    | StdDev  |
|-----------|---------|---------|---------|
| MNIST     | 60,000  | 0.1307  | 0.3081  |
| Synthetic | 40,960  | 0.01599 | 0.07722 |

> Combine mean/std using weighted averaging if mixing both datasets.
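The weighted combination can be computed directly. Note that the pooled standard deviation must account for the difference between the two set means, not just average the stds; the sketch below treats the reported values as population statistics.

```python
import math

def pooled_stats(n1, m1, s1, n2, m2, s2):
    """Pooled mean/std of two sets from per-set counts, means, and
    (population) standard deviations."""
    n = n1 + n2
    mean = (n1 * m1 + n2 * m2) / n
    # E[x^2] per set is var + mean^2; pool those, then subtract pooled mean^2
    ex2 = (n1 * (s1**2 + m1**2) + n2 * (s2**2 + m2**2)) / n
    return mean, math.sqrt(ex2 - mean**2)

mean, std = pooled_stats(60_000, 0.1307, 0.3081, 40_960, 0.01599, 0.07722)
```

With the table's values this gives a combined mean of roughly 0.0842 and std of roughly 0.249.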
---
## 🚀 Usage Example
```python
from torchvision import transforms, datasets

transform = transforms.Compose([
    transforms.Grayscale(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.01599], std=[0.07722])  # synthetic-set statistics from the table above
])
dataset = datasets.ImageFolder('./mnist_bakery_data', transform=transform)
```
---
#### 🧠 Credits
**Author**: Aaron T. Carter
**Organization**: Arkaen Solutions
**Tools Used**: Houdini, PyTorch, PIL

---