---
license: mit
task_categories:
- image-classification
language:
- en
tags:
- mnist
- image
- digit
- synthetic
- houdini
pretty_name: MNIST Bakery Dataset
size_categories:
- 10K<n<100K
---
# MNIST Bakery Dataset

A procedurally synthesized variant of the classic MNIST dataset, created using **SideFX Houdini** and designed for experimentation in **data augmentation**, **synthetic data generation**, and **model robustness research**.
See the [ML-Research](https://github.com/atcarter714/ML-Research) repository on GitHub for Python notebooks, experiments, and the Houdini scene files.

---
## Purpose
This dataset demonstrates how **procedural generation pipelines** in 3D tools like Houdini can be used to create **high-quality, synthetic training data** for machine learning tasks. It is intended for:
- Benchmarking model performance using synthetic vs. real data
- Training models in **low-data** or **zero-shot** environments
- Developing robust classifiers that generalize beyond typical datasets
- Evaluating augmentation and generalization strategies in vision models
---
## Generation Pipeline
All data was generated using the `.hip` scene:
```bash
./houdini/digitgen_v02.hip
```
## Methodology
### 1. Procedural Digit Assembly
- Each digit `0`–`9` is generated with a randomly selected font on each frame via Houdini's **Font SOP**.
- Digits are arranged in a clean **8×8 grid**, forming sprite sheets with **64 digits per render**.
### 2. Scene Variability
- Fonts are randomly selected per frame.
- Procedural distortions are applied, including:
  - Rotation
  - Translation
  - Skew
  - Mountain noise displacement
- This ensures high variability across samples.
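
The per-frame variability above can be sketched in plain Python. The font list and parameter ranges below are illustrative assumptions, not the actual values used in the `.hip` scene:

```python
import random

# Illustrative per-frame distortion sampling. The real ranges live in the
# Houdini scene; everything here is an assumption for demonstration.
FONTS = ["Arial", "Courier", "Times"]  # placeholder font list

def sample_frame_params(rng):
    """Draw one frame's worth of randomized scene parameters."""
    return {
        "font": rng.choice(FONTS),                  # random font per frame
        "rotation_deg": rng.uniform(-15.0, 15.0),   # rotation
        "translate_px": (rng.uniform(-2.0, 2.0),    # translation (x, y)
                         rng.uniform(-2.0, 2.0)),
        "skew": rng.uniform(-0.2, 0.2),             # shear/skew factor
        "noise_amp": rng.uniform(0.0, 0.5),         # mountain-noise amplitude
    }

rng = random.Random(42)  # fixed seed for reproducible parameter sets
params = [sample_frame_params(rng) for _ in range(640)]  # e.g. 640 frames
```

Seeding the generator makes a render batch reproducible, which helps when comparing augmentation settings across experiments.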
### 3. Rendering
- Scene renders are executed via **Mantra** or **Karma**.
- Output format: **grayscale 224×224 px** sprite sheets (`.exr` or `.jpg`).

### 4. Compositing & Cropping
- A **COP2 network** slices each sprite sheet into **28×28** digit tiles.
- Each tile is labeled by its original digit and saved to:
```
./output/0/img_00001.jpg
./output/1/img_00001.jpg
...
```
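
The COP-based slicing step is equivalent to cropping an 8×8 grid of 28×28 tiles out of each 224×224 render. A minimal pure-Python sketch, using a nested list as a stand-in for the image buffer:

```python
# Slice a 224x224 grayscale "image" (nested list of pixel values) into
# 64 tiles of 28x28, mirroring the 8x8 sprite-sheet layout described above.
TILE, GRID = 28, 8  # 28x28 tiles in an 8x8 grid -> 224x224 sheet

def slice_sheet(sheet):
    """Return the 64 tiles of a sprite sheet in row-major order."""
    tiles = []
    for gy in range(GRID):
        for gx in range(GRID):
            tile = [row[gx * TILE:(gx + 1) * TILE]
                    for row in sheet[gy * TILE:(gy + 1) * TILE]]
            tiles.append(tile)
    return tiles

sheet = [[0] * (TILE * GRID) for _ in range(TILE * GRID)]  # dummy 224x224 sheet
tiles = slice_sheet(sheet)  # 64 tiles, each 28 rows of 28 pixels
```

In the actual pipeline the crop happens inside Houdini's COP network; this just shows the indexing arithmetic.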

## Dataset Structure
```bash
mnist_bakery_data/
├── 0/
│   ├── img_00001.jpg
│   └── ...
├── 1/
│   ├── img_00001.jpg
│   └── ...
...
└── 9/
    └── img_00001.jpg
```
- All images: grayscale `.jpg`, 28×28 resolution
- Total: **40,960 samples**
- ~4,096 samples per digit
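
A quick sanity check for the layout above can be written with `pathlib`; the root path is an assumption about where the dataset was extracted:

```python
from pathlib import Path

def class_counts(root):
    """Map each digit-class folder name to its number of .jpg samples."""
    root = Path(root)
    return {d.name: sum(1 for _ in d.glob("*.jpg"))
            for d in sorted(root.iterdir()) if d.is_dir()}
```

For this dataset, `class_counts("./mnist_bakery_data")` should report roughly 4,096 images in each of the folders `"0"` through `"9"`.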
---
## Statistics
| Set | Samples | Mean | StdDev |
|-----------|---------|---------|----------|
| MNIST | 60,000 | 0.1307 | 0.3081 |
| Synthetic | 40,960 | 0.01599 | 0.07722 |
> If mixing both datasets, recompute the normalization statistics by pooling the per-set values (weighted by sample count) rather than reusing either set's numbers alone.
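
One way to pool the statistics in the table above (a sketch; note that the combined standard deviation comes from pooling second moments around the combined mean, not from a simple weighted average of the stds):

```python
# Pool per-dataset statistics into combined normalization constants.
# The input values are taken from the Statistics table above.
def pool_stats(stats):
    """stats: list of (n_samples, mean, std) tuples, one per dataset."""
    total = sum(n for n, _, _ in stats)
    # Combined mean: sample-count-weighted average of the means.
    mean = sum(n * m for n, m, _ in stats) / total
    # Combined variance: weighted average of second moments (std^2 + mean^2)
    # minus the squared combined mean.
    second_moment = sum(n * (s ** 2 + m ** 2) for n, m, s in stats) / total
    return mean, (second_moment - mean ** 2) ** 0.5

mean, std = pool_stats([
    (60_000, 0.1307, 0.3081),    # MNIST
    (40_960, 0.01599, 0.07722),  # Synthetic
])
```

This yields a combined mean of roughly 0.084 and a combined std of roughly 0.249 for the merged 100,960-sample pool.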
---
## Usage Example
```python
from torchvision import transforms, datasets

transform = transforms.Compose([
    transforms.Grayscale(),
    transforms.ToTensor(),
    # Synthetic-set statistics from the table above; recompute pooled
    # values if mixing with original MNIST.
    transforms.Normalize(mean=[0.01599], std=[0.07722]),
])
dataset = datasets.ImageFolder('./mnist_bakery_data', transform=transform)
```
---
## Credits
**Author**: Aaron T. Carter
**Organization**: Arkaen Solutions
**Tools Used**: Houdini, PyTorch, PIL
---