Arkaen-AtC committed commit ccbb98e (verified · 1 parent: 56099cd)

Update README.md

---
license: mit
task_categories:
- image-classification
language:
- en
tags:
- mnist
- image
- digit
- synthetic
- houdini
pretty_name: MNIST Bakery Dataset
size_categories:
- 10K<n<100K
---
# 🧁 MNIST Bakery Dataset

A procedurally synthesized variant of the classic MNIST dataset, created using **SideFX Houdini** and designed for experimentation in **data augmentation**, **synthetic data generation**, and **model robustness research**.

---

## 🎯 Purpose

This dataset demonstrates how **procedural generation pipelines** in 3D tools like Houdini can be used to create **high-quality synthetic training data** for machine learning tasks. It is intended for:

- Benchmarking model performance on synthetic vs. real data
- Training models in **low-data** or **zero-shot** settings
- Developing robust classifiers that generalize beyond typical datasets
- Evaluating augmentation and generalization strategies in vision models

---

## 🛠️ Generation Pipeline

All data was generated using the `.hip` scene:
```bash
./houdini/digitgen_v02.hip
```

## 🧪 Methodology

### 1. Procedural Digit Assembly
- Each digit `0`–`9` is generated with a randomly selected font in each frame via Houdini's **Font SOP**.
- Digits are arranged in a clean **8×8 grid**, forming sprite sheets with **64 digits per render**.
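
The 8×8 layout implies a simple index-to-cell mapping. A minimal sketch in plain Python (the 28 px tile size follows from the 224×224 sheet described under Rendering; function names here are illustrative, not from the `.hip` scene):

```python
# Map a digit's index on the 8x8 sprite sheet to its grid cell and
# top-left pixel offset. 224 px sheet / 8 cells = 28 px tiles.
SHEET_PX, GRID = 224, 8
TILE = SHEET_PX // GRID  # 28

def cell_of(index: int) -> tuple[int, int]:
    """Return (row, col) of the index-th digit on the sheet."""
    return divmod(index, GRID)

def pixel_origin(index: int) -> tuple[int, int]:
    """Top-left pixel (y, x) of the index-th tile."""
    row, col = cell_of(index)
    return row * TILE, col * TILE

print(cell_of(63))      # (7, 7) -- last cell of the 8x8 grid
print(pixel_origin(9))  # (28, 28) -- second row, second column
```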

### 2. Scene Variability
- Fonts are randomly selected per frame.
- Procedural distortions are applied, including:
  - Rotation
  - Translation
  - Skew
  - Mountain noise displacement
- This ensures high variability across samples.
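
Seeding each frame's distortions by frame number keeps renders reproducible. A hypothetical sketch of that sampling pattern (parameter names and ranges are illustrative assumptions, not the values used in the actual `.hip` scene):

```python
import random

def sample_distortions(frame: int) -> dict:
    """Sample per-frame distortion parameters, seeded by frame number
    so the same frame always produces the same transforms.
    Ranges below are illustrative only."""
    rng = random.Random(frame)
    return {
        "rotation_deg": rng.uniform(-15.0, 15.0),
        "translate_px": (rng.uniform(-2.0, 2.0), rng.uniform(-2.0, 2.0)),
        "skew": rng.uniform(-0.2, 0.2),
        "noise_amp": rng.uniform(0.0, 0.5),  # mountain-noise amplitude
    }

params = sample_distortions(frame=1)
```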

### 3. Rendering
- Scene renders are executed via **Mantra** or **Karma**.
- Output format: **grayscale 224×224 px** sprite sheets (`.exr` or `.jpg`).

### 4. Compositing & Cropping
- A **COP2 network** slices the sprite sheet into **28×28** digit tiles.
- Each tile is labeled by its original digit and saved to:
```
./output/0/img_00001.jpg
./output/1/img_00001.jpg
...
```
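
Outside Houdini, the slicing step can be mimicked in plain Python. A rough sketch (nested lists stand in for the grayscale sheet; which digit lands in which tile depends on the scene's layout, and `tile_path` only reproduces the filename pattern shown above):

```python
TILE, GRID = 28, 8

def slice_sheet(sheet):
    """Cut a 224x224 sheet (list of rows) into 64 tiles of 28x28."""
    tiles = []
    for r in range(GRID):
        for c in range(GRID):
            tile = [row[c * TILE:(c + 1) * TILE]
                    for row in sheet[r * TILE:(r + 1) * TILE]]
            tiles.append(tile)
    return tiles

def tile_path(digit: int, index: int) -> str:
    """Build the labeled output path for one tile."""
    return f"./output/{digit}/img_{index:05d}.jpg"

sheet = [[0] * 224 for _ in range(224)]
tiles = slice_sheet(sheet)
print(len(tiles), len(tiles[0]), len(tiles[0][0]))  # 64 28 28
print(tile_path(0, 1))  # ./output/0/img_00001.jpg
```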

### 🧾 Dataset Structure

```bash
mnist_bakery_data/
├── 0/
│   ├── img_00001.jpg
│   ├── ...
├── 1/
│   ├── img_00001.jpg
│   └── ...
...
├── 9/
│   └── img_00001.jpg
```

- All images: grayscale `.jpg`, 28×28 resolution
- Total: **40,960 samples**
- ~4,096 samples per digit
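
The stated totals are self-consistent: 10 digits × 4,096 samples = 40,960, i.e. 640 sprite sheets of 64 digits each. A small helper (illustrative, using only the standard library) can verify a downloaded copy against these counts:

```python
import tempfile
from pathlib import Path

# Sanity-check the stated totals.
assert 10 * 4096 == 40960 == 640 * 64

def count_per_class(root: Path) -> dict[str, int]:
    """Count .jpg files in each digit subfolder of the dataset root."""
    return {d.name: sum(1 for _ in d.glob("*.jpg"))
            for d in sorted(root.iterdir()) if d.is_dir()}

# Demonstrate on a throwaway miniature of the layout above.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    for digit in range(10):
        sub = root / str(digit)
        sub.mkdir()
        for i in range(1, 4):
            (sub / f"img_{i:05d}.jpg").touch()
    print(count_per_class(root))  # {'0': 3, '1': 3, ..., '9': 3}
```

On the real dataset, every value returned by `count_per_class` should be close to 4,096 and the values should sum to 40,960.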

---

## 📊 Statistics

| Set       | Samples | Mean    | StdDev  |
|-----------|---------|---------|---------|
| MNIST     | 60,000  | 0.1307  | 0.3081  |
| Synthetic | 40,960  | 0.01599 | 0.07722 |

> If mixing both datasets, combine the per-set means and standard deviations with sample-weighted averaging.
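
Concretely, the pooled statistics follow from the per-set decomposition E[x²] = σ² + μ², weighted by sample count (numbers taken from the table above):

```python
# Pool the per-set mean/std from the table into combined statistics.
sets = [
    (60_000, 0.1307, 0.3081),    # MNIST: (n, mean, std)
    (40_960, 0.01599, 0.07722),  # Synthetic
]

n_total = sum(n for n, _, _ in sets)
mean = sum(n * m for n, m, _ in sets) / n_total
# E[x^2] per set is std^2 + mean^2; weight by n, then subtract mean^2.
ex2 = sum(n * (s * s + m * m) for n, m, s in sets) / n_total
std = (ex2 - mean * mean) ** 0.5

print(f"pooled mean={mean:.4f}, std={std:.4f}")  # ~0.0842, ~0.2490
```

These pooled values would replace the single-set numbers in `transforms.Normalize` when training on the combined data.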

---

## 📚 Usage Example

```python
from torchvision import transforms, datasets

transform = transforms.Compose([
    transforms.Grayscale(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.01599], std=[0.07722])  # synthetic-set statistics; use pooled values if mixing with MNIST
])

dataset = datasets.ImageFolder('./mnist_bakery_data', transform=transform)
```

---

#### 🧠 Credits

**Author**: Aaron T. Carter

**Organization**: Arkaen Solutions

**Tools Used**: Houdini, PyTorch, PIL

___