---
license: mit
task_categories:
- feature-extraction
language:
- en
tags:
- code
pretty_name: MNIST Visual Curation
size_categories:
- 10K<n<100K
---
# Curation of the Famous MNIST Dataset

The curation was done through qualitative analysis of the dataset, using visualization techniques such as **PCA** and **UMAP** together with score-based categorization of samples via metrics such as **hardness**, **mistakenness**, and **uniqueness**.

The code used for the curation is available on GitHub:
https://github.com/Conscht/MNIST_Curation_Repo/tree/main

This curated version of MNIST introduces an additional **IDK ("I Don't Know")** label for digits that are ambiguous, noisy, or of low quality. It is intended for experiments on robust classification, dataset curation, and handling uncertain or hard-to-classify examples.
---

## Overview

Compared to the original MNIST dataset, this curated version:

- keeps the original digit classes **0–9**
- adds an **11th class: `IDK`**
- moves visually ambiguous or questionable digits into the `IDK` class

Questionable digits include:

- distorted or spaghetti-like shapes
- digits that are hard even for humans to classify
- strong outliers in the embedding space
- samples often misclassified by the baseline model
| - samples often misclassified by the baseline model | |
---

## How the Curation Was Done

The curation process combined **qualitative inspection** and **quantitative metrics**:

1. Train a **LeNet-5** classifier on the original MNIST digits.
2. Extract **embeddings** from the penultimate layer of the network.
3. Visualize these embeddings with **PCA** and **UMAP** in **FiftyOne** to identify clusters, outliers, and ambiguous regions.
4. Compute several **FiftyOne Brain metrics**:
   - `hardness`
   - `mistakenness`
   - `uniqueness`
   - `representativeness`
5. Use these metrics to surface suspicious samples:
   - highly mistaken or hard examples
   - high-uniqueness outliers
   - misclassified samples
6. Inspect these subsets inside the **FiftyOne App** and manually decide which samples should be relabeled as **IDK**.
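Step 5 above amounts to filtering samples by per-sample score thresholds. A minimal sketch in plain Python, independent of FiftyOne; the dictionary field names and cutoff values are illustrative assumptions, not the thresholds actually used:

```python
# Illustrative sketch: surface "suspicious" samples by thresholding
# per-sample curation scores (field names and cutoffs are assumptions).
def surface_suspicious(samples, hardness_min=0.9, mistakenness_min=0.9,
                       uniqueness_min=0.95):
    """Return samples that exceed any score threshold or whose
    prediction disagrees with the ground-truth label."""
    suspicious = []
    for s in samples:
        if (s["hardness"] >= hardness_min
                or s["mistakenness"] >= mistakenness_min
                or s["uniqueness"] >= uniqueness_min
                or s["prediction"] != s["label"]):
            suspicious.append(s)
    return suspicious
```

The surfaced subset is a candidate pool, not a verdict: the final IDK decision was made manually per sample in the FiftyOne App.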
Example of the visualized embedding space:

![Embedding space visualization](images/example_2.png)
---

## Dataset Structure

The dataset is exported in **ImageClassificationDirectoryTree** format:

```text
root/
├── train/
│   ├── 0/
│   ├── 1/
│   ├── ...
│   ├── 9/
│   └── IDK/
└── test/
    ├── 0/
    ├── 1/
    ├── ...
    ├── 9/
    └── IDK/
```
---

## Citation

```bibtex
@article{lecun1998gradient,
  title={Gradient-based learning applied to document recognition},
  author={LeCun, Yann and Bottou, L{\'e}on and Bengio, Yoshua and Haffner, Patrick},
  journal={Proceedings of the IEEE},
  volume={86},
  number={11},
  pages={2278--2324},
  year={1998},
  publisher={IEEE}
}
```