|
|
--- |
|
|
license: apache-2.0 |
|
|
task_categories: |
|
|
- image-classification |
|
|
library_name: datasets |
|
|
language: |
|
|
- en |
|
|
tags: |
|
|
- semi-supervised-learning |
|
|
- deduplicated |
|
|
- stl-10 |
|
|
configs: |
|
|
- config_name: default |
|
|
data_files: |
|
|
- split: train_labeled |
|
|
path: train-*.tar |
|
|
- split: train_unlabeled |
|
|
path: unlabeled-*.tar |
|
|
- split: test |
|
|
path: test-*.tar |
|
|
--- |
|
|
|
|
|
# Dataset Card for STL-10 Cleaned (Deduplicated Training Set) |
|
|
|
|
|
[Paper](https://huggingface.co/papers/2506.03582) | [Code](https://github.com/Shu1L0n9/SemiOccam) |
|
|
|
|
|
## Dataset Description |
|
|
|
|
|
This dataset is a modified version of the [STL-10 dataset](https://cs.stanford.edu/~acoates/stl10/). The primary modification involves **deduplicating the training set** by removing any images that are exact byte-for-byte matches (based on SHA256 hash) with images present in the original STL-10 test set. The dataset comprises this cleaned training set and the original, unmodified STL-10 test set. |
|
|
|
|
|
The goal is to provide a cleaner separation between training and testing data, potentially leading to more reliable model evaluation for tasks such as image classification, representation learning, and self-supervised learning. |
|
|
|
|
|
**Dataset Contents:** |
|
|
* **Training Set**: Derived from the original STL-10's 5,000 labeled and 100,000 unlabeled training images, with exact duplicates of test images removed (92,455 unlabeled images remain).
|
|
* **Original Test Set**: The standard 8,000 test images from STL-10. |
|
|
|
|
|
All images are 96x96 pixels and are provided in PNG format. |
|
|
|
|
|
## 🧼 Clean STL-10 Dataset: 🔧 How to Load
|
|
|
|
|
🎉 We uploaded our cleaned STL-10 dataset to Hugging Face! You can easily load and use it with the 🤗 `datasets` or `webdataset` libraries.
|
|
|
|
|
### Data Splits |
|
|
|
|
|
* `train_labeled` |
|
|
* `train_unlabeled` |
|
|
* `test` |
|
|
|
|
|
|
|
|
### 🥸 Load with the `datasets` library (Recommended, Quick Start)
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
# Login using e.g. `huggingface-cli login` to access this dataset |
|
|
ds = load_dataset("Shu1L0n9/CleanSTL-10") |
|
|
``` |
|
|
|
|
|
### 🔧 Load with WebDataset |
|
|
|
|
|
```python |
|
|
import webdataset as wds |
|
|
from huggingface_hub import HfFileSystem, get_token, hf_hub_url |
|
|
|
|
|
splits = {'train_labeled': 'train-*.tar', 'train_unlabeled': 'unlabeled-*.tar', 'test': 'test-*.tar'} |
|
|
|
|
|
# Login using e.g. `huggingface-cli login` to access this dataset |
|
|
fs = HfFileSystem() |
|
|
files = [fs.resolve_path(path) for path in fs.glob("hf://datasets/Shu1L0n9/CleanSTL-10/" + splits["train_labeled"])] |
|
|
urls = [hf_hub_url(file.repo_id, file.path_in_repo, repo_type="dataset") for file in files] |
|
|
urls = f"pipe: curl -s -L -H 'Authorization:Bearer {get_token()}' {'::'.join(urls)}" |
|
|
|
|
|
ds = wds.WebDataset(urls).decode() |
|
|
``` |
|
|
|
|
|
> ℹ️ Requires: `webdataset`, `huggingface_hub` |
|
|
> Install with: |
|
|
|
|
|
```bash |
|
|
pip install webdataset huggingface_hub |
|
|
``` |
|
|
|
|
|
### 🔑 How to Get Your Hugging Face Token |
|
|
|
|
|
To download from Hugging Face with authentication, you’ll need a **User Access Token**: |
|
|
|
|
|
1. Visit [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens) |
|
|
2. Click **“New token”** |
|
|
3. Choose a name and select **“Read”** permission |
|
|
4. Click **“Generate”**, then copy the token |
|
|
5. Paste it into your script: |
|
|
|
|
|
```python |
|
|
token = "your_token_here" |
|
|
``` |
|
|
|
|
|
> ⚠️ **Keep your token private** and avoid hardcoding it in shared scripts. |
|
|
|
|
|
#### 💡 Optional: Use Environment Variable |
|
|
|
|
|
To avoid hardcoding your token: |
|
|
|
|
|
```bash |
|
|
export HF_TOKEN=your_token_here |
|
|
``` |
|
|
|
|
|
Then in your Python script: |
|
|
|
|
|
```python |
|
|
import os |
|
|
token = os.getenv("HF_TOKEN") |
|
|
``` |
|
|
|
|
|
## Citation |
|
|
|
|
|
**Please cite our work if you use this dataset:** |
|
|
```bibtex |
|
|
@misc{yann2025semioccam, |
|
|
title={SemiOccam: A Robust Semi-Supervised Image Recognition Network Using Sparse Labels}, |
|
|
author={Rui Yann and Tianshuo Zhang and Xianglei Xing}, |
|
|
year={2025}, |
|
|
eprint={2506.03582}, |
|
|
archivePrefix={arXiv}, |
|
|
primaryClass={cs.CV}, |
|
|
url={https://arxiv.org/abs/2506.03582} |
|
|
} |
|
|
|
|
|
@inproceedings{coates2011analysis, |
|
|
title={An analysis of single-layer networks in unsupervised feature learning}, |
|
|
author={Coates, Adam and Ng, Andrew and Lee, Honglak}, |
|
|
booktitle={Proceedings of the fourteenth international conference on artificial intelligence and statistics}, |
|
|
pages={215--223}, |
|
|
year={2011}, |
|
|
organization={JMLR Workshop and Conference Proceedings} |
|
|
} |
|
|
``` |
|
|
If you use this specific cleaned version, please also acknowledge its origin and the cleaning process. You can refer to it as "STL-10 Cleaned (Deduplicated Training Set)" derived from the work by Coates et al. |
|
|
(Example: "We used the STL-10 Cleaned (Deduplicated Training Set) version, where training images identical to test images were removed. The original STL-10 dataset was introduced by Coates et al. (2011).")
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
### Data Instances |
|
|
|
|
|
Each data instance consists of an image and its corresponding label. The data is organized into `train`, `unlabeled`, and `test` splits, with each split containing an `images` subfolder and a `metadata.csv` file.
|
|
|
|
|
**Example from `metadata.csv` (within a split like `train` or `test`):** |
|
|
``` |
|
|
file_path,label |
|
|
images/image_000000.png,0 |
|
|
images/image_000001.png,-1 |
|
|
... |
|
|
``` |
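As a sketch of how this metadata can be consumed, the standard-library `csv` module is enough to split rows into labeled and unlabeled pools (the file contents below are inlined for illustration; in practice you would open the split's `metadata.csv`):

```python
import csv
import io

# Inline stand-in for a split's metadata.csv (paths and labels are illustrative)
metadata = """file_path,label
images/image_000000.png,0
images/image_000001.png,-1
images/image_000002.png,7
"""

rows = list(csv.DictReader(io.StringIO(metadata)))

# Label -1 marks images from the original unlabeled pool
labeled = [r for r in rows if int(r["label"]) != -1]
unlabeled = [r for r in rows if int(r["label"]) == -1]
print(len(labeled), len(unlabeled))  # 2 1
```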
|
|
|
|
|
### Data Fields |
|
|
|
|
|
When loaded using Hugging Face `datasets`, the following fields are typically available: |
|
|
|
|
|
* `image` (PIL.Image.Image): The image object. |
|
|
* `label` (integer or `datasets.ClassLabel`): The class label for the image. |
|
|
* `0-9`: Corresponds to the 10 classes of STL-10 (airplane, bird, car, cat, deer, dog, horse, monkey, ship, truck). |
|
|
* `-1`: Indicates an image originating from the unlabeled portion of the original STL-10 training set. This label is explicitly mapped in the `ClassLabel` feature for clarity. |
|
|
* `file_path` (string, custom loaded): The relative path to the image file as stored in `metadata.csv` (e.g., `images/image_000000.png`). This field might need to be loaded manually if not part of the default `imagefolder` loading features. |
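A small helper can map the integer labels above to human-readable names, treating `-1` as the unlabeled sentinel (the `label_name` function is a hypothetical convenience, not part of the dataset):

```python
# Class index -> name mapping for labels 0-9, per the list above
CLASSES = ["airplane", "bird", "car", "cat", "deer",
           "dog", "horse", "monkey", "ship", "truck"]

def label_name(label: int) -> str:
    """Return the class name, or 'unlabeled' for the -1 sentinel."""
    return "unlabeled" if label == -1 else CLASSES[label]

print(label_name(0), label_name(-1))  # airplane unlabeled
```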
|
|
|
|
|
## Dataset Creation |
|
|
|
|
|
### Curation Rationale |
|
|
|
|
|
The primary motivation for creating this version of STL-10 was to mitigate potential data leakage where training samples might be identical to test samples. By removing such duplicates, this dataset aims to provide a more robust basis for evaluating machine learning models. |
|
|
|
|
|
### Source Data |
|
|
|
|
|
* **Original Dataset**: [STL-10 dataset](https://cs.stanford.edu/~acoates/stl10/) by Adam Coates, Honglak Lee, and Andrew Y. Ng (Stanford University). |
|
|
* **Image Origin**: The images in STL-10 were originally sourced from [ImageNet](https://www.image-net.org/). |
|
|
|
|
|
### Processing Steps |
|
|
|
|
|
1. The original STL-10 `train` (5,000 labeled images), `unlabeled` (100,000 images), and `test` (8,000 images) splits were loaded using `torchvision.datasets`. |
|
|
2. Hashes were computed for all images in the `test` split to create a set of unique test image hashes. |
|
|
3. The `train` and `unlabeled` images were combined to form a single training pool. |
|
|
4. Hashes were computed for each image in this combined training pool. |
|
|
5. Any training image whose hash matched a hash in the test set's hash pool was identified as a duplicate and excluded from the cleaned training set.
|
|
6. The remaining "cleaned" training images and the original test images were saved as individual PNG files within their respective split directories (`train/images/`, `unlabeled/images/`, and `test/images/`).
|
|
7. For each split, a `metadata.csv` file was generated, mapping the relative image file paths to their labels.
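The hash-based filtering in the steps above can be sketched as follows, using small byte strings as stand-ins for PNG-encoded image bytes:

```python
import hashlib

# Stand-ins for image byte content; the second train image duplicates a test image
test_images = [b"test-image-a", b"test-image-b"]
train_pool = [b"train-image-x", b"test-image-b", b"train-image-y"]

# Build the set of test-image hashes, then drop any training image whose
# SHA256 digest appears in it (byte-for-byte duplicates only)
test_hashes = {hashlib.sha256(img).hexdigest() for img in test_images}
cleaned_train = [img for img in train_pool
                 if hashlib.sha256(img).hexdigest() not in test_hashes]
print(len(cleaned_train))  # 2
```

Note this matches the limitation stated later in the card: only exact byte matches are removed; visually similar but re-encoded images would hash differently and be kept.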
|
|
|
|
|
## Intended Use |
|
|
|
|
|
### Direct Use Cases |
|
|
|
|
|
* Supervised image classification (on the 10 labeled classes). |
|
|
* Unsupervised or self-supervised representation learning (leveraging all images, including those with label -1). |
|
|
* Benchmarking computer vision models, particularly when there is a concern about direct train/test overlap in the original STL-10. |
|
|
|
|
|
### Out-of-Scope Use Cases |
|
|
|
|
|
This dataset is not intended for applications where an absolute guarantee of no visual similarity (beyond exact duplicates) between train and test sets is critical, as the deduplication method only removes byte-for-byte identical images. |
|
|
|
|
|
## Limitations and Bias |
|
|
|
|
|
* The dataset inherits the general characteristics, content, and potential biases of the original STL-10 and ImageNet datasets from which the images were sourced. |
|
|
* All images are of a fixed resolution (96x96 pixels). |
|
|
* The deduplication process is based on SHA256 hashes of the image byte content. It will not identify or remove images that are visually very similar but not byte-for-byte identical (e.g., due to different compression, minor augmentations if present in source, or slight cropping variations if they existed). |
|
|
* The number of samples in the `train` split is reduced from the original 105,000 by the number of duplicates found. The exact number should be verified from the finalized dataset. |
|
|
|
|
|
## Dataset Curators |
|
|
|
|
|
This cleaned version of the STL-10 dataset was prepared by [Shu1L0n9](https://github.com/Shu1L0n9).
|
|
|
|
|
## Licensing Information |
|
|
|
|
|
This modified dataset (including the cleaned training set, original test set prepared in this format, and associated metadata files) is released under the **Apache License 2.0**. |
|
|
The original STL-10 images are sourced from ImageNet. While STL-10 is widely used for research, be mindful of the original image sources if your use case extends beyond typical research applications. |
|
|
|
|
|
This dataset was used in the paper [SemiOccam: A Robust Semi-Supervised Image Recognition Network Using Sparse Labels](https://huggingface.co/papers/2506.03582). |