---
license: mit
task_categories:
- image-classification
- object-detection
language:
- en
tags:
- computer-vision
- polygons
- shapes
- synthetic-data
- image-generation
pretty_name: Shape Polygons Dataset
size_categories:
- 10K<n<100K
---
# Shape Polygons Dataset
A synthetic dataset containing 70,000 images of various colored polygons (triangles to octagons) rendered on black backgrounds.
## Dataset Description

This dataset consists of programmatically generated polygon images with full metadata about each shape's properties. It is designed for tasks such as:

- **Shape Classification**: Classify polygons by number of vertices (3-8)
- **Regression Tasks**: Predict shape properties (size, angle, position, color)
- **Object Detection**: Locate and identify shapes within images
- **Generative Models**: Train models to generate geometric shapes
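Because every image is defined by a handful of parameters, shapes in this style can be reproduced in a few lines of code. The sketch below computes a regular polygon's vertices from the same quantities the metadata records (`vertices`, `size`, `angle`, `center_x`, `center_y`). The image resolution and the exact generation script are not stated in this card, so treat this as an illustrative assumption, not the dataset's actual renderer.

```python
import math

def polygon_vertices(n, center_x, center_y, size, angle_deg, image_size=128):
    """Compute pixel coordinates of a regular n-gon's vertices.

    All inputs except n and image_size are normalized (0.0-1.0), matching
    the metadata columns; image_size=128 is an assumption, since the card
    does not state the rendered resolution.
    """
    cx, cy = center_x * image_size, center_y * image_size
    radius = size * image_size / 2  # treating `size` as a diameter fraction
    offset = math.radians(angle_deg)
    return [
        (cx + radius * math.cos(offset + 2 * math.pi * k / n),
         cy + radius * math.sin(offset + 2 * math.pi * k / n))
        for k in range(n)
    ]

# A centered triangle rotated 90 degrees in a 128x128 frame
verts = polygon_vertices(3, 0.5, 0.5, 0.5, 90.0)
```

With Pillow, passing such vertices to `ImageDraw.Draw(img).polygon(...)` on a black canvas would produce an image in the same style.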
## Dataset Statistics
| Split | Number of Images |
|---|---|
| Train | 60,000 |
| Test | 10,000 |
| Total | 70,000 |
## Shape Types
The dataset includes 6 different polygon types:
- Triangle (3 vertices)
- Quadrilateral (4 vertices)
- Pentagon (5 vertices)
- Hexagon (6 vertices)
- Heptagon (7 vertices)
- Octagon (8 vertices)
## Dataset Structure

```
shape-polygons-dataset/
├── train/
│   ├── images/
│   │   ├── 00001.png
│   │   ├── 00002.png
│   │   └── ... (60,000 images)
│   └── metadata.csv
├── test/
│   ├── images/
│   │   ├── 00001.png
│   │   ├── 00002.png
│   │   └── ... (10,000 images)
│   └── metadata.csv
└── README.md
```
## Metadata Fields

Each `metadata.csv` contains the following columns:

| Column | Type | Description |
|---|---|---|
| `filename` | string | Image filename (e.g., `00001.png`) |
| `size` | float | Relative size of the polygon (0.0 - 1.0) |
| `angle` | float | Rotation angle in degrees (0.0 - 360.0) |
| `vertices` | int | Number of vertices (3-8) |
| `center_x` | float | X-coordinate of center (0.0 - 1.0, normalized) |
| `center_y` | float | Y-coordinate of center (0.0 - 1.0, normalized) |
| `color_r` | float | Red color component (0.0 - 1.0) |
| `color_g` | float | Green color component (0.0 - 1.0) |
| `color_b` | float | Blue color component (0.0 - 1.0) |
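Since positions and colors are stored normalized, downstream code typically needs to rescale them. A minimal sketch, assuming a square image whose side length you pass in explicitly (the card does not fix a resolution) and treating `size` as a diameter fraction (also an assumption):

```python
def denormalize(row, image_size):
    """Convert a metadata row's normalized fields to pixel / 8-bit values.

    `row` is any mapping with the metadata columns described above;
    `image_size` (pixels) is caller-supplied since the card does not
    state the image resolution.
    """
    return {
        "center_px": (row["center_x"] * image_size, row["center_y"] * image_size),
        "radius_px": row["size"] * image_size / 2,  # assumes size = diameter fraction
        "rgb": tuple(round(row[c] * 255) for c in ("color_r", "color_g", "color_b")),
    }

example = {"center_x": 0.25, "center_y": 0.75, "size": 0.5,
           "color_r": 1.0, "color_g": 0.5, "color_b": 0.0}
out = denormalize(example, 128)
```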
## Sample Images
Here are some example images from the dataset:
## Usage

### Loading with Hugging Face Datasets
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("your-username/shape-polygons-dataset")

# Access train and test splits
train_data = dataset["train"]
test_data = dataset["test"]

# Get a sample
sample = train_data[0]
print(f"Vertices: {sample['vertices']}, Size: {sample['size']:.2f}")
```
### Loading with Pandas
```python
import os

import pandas as pd
from PIL import Image

# Load metadata
train_metadata = pd.read_csv("train/metadata.csv")
test_metadata = pd.read_csv("test/metadata.csv")

# Load an image
img_path = os.path.join("train/images", train_metadata.iloc[0]["filename"])
image = Image.open(img_path)
image.show()

# Filter by number of vertices (e.g., triangles only)
triangles = train_metadata[train_metadata["vertices"] == 3]
print(f"Number of triangles: {len(triangles)}")
```
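The same metadata supports regression targets as well as class labels. Below is a self-contained sketch of how the columns could be turned into multi-task targets; it uses a tiny inline CSV as a stand-in for the real `metadata.csv` files so the column handling is visible end to end:

```python
import io

import pandas as pd

# A two-row stand-in for train/metadata.csv with the documented columns
csv = io.StringIO(
    "filename,size,angle,vertices,center_x,center_y,color_r,color_g,color_b\n"
    "00001.png,0.40,15.0,3,0.50,0.50,1.0,0.0,0.0\n"
    "00002.png,0.75,120.0,6,0.25,0.80,0.0,1.0,0.0\n"
)
meta = pd.read_csv(csv)

# Classification label: map vertex counts 3-8 onto classes 0-5
meta["label"] = meta["vertices"] - 3

# Regression targets: stack the continuous properties into one array
reg_targets = meta[["size", "angle", "center_x", "center_y"]].to_numpy()
```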
### PyTorch DataLoader Example
```python
import os

import pandas as pd
import torch
from PIL import Image
from torch.utils.data import Dataset, DataLoader

class PolygonDataset(Dataset):
    def __init__(self, root_dir, split="train", transform=None):
        self.root_dir = root_dir
        self.split = split
        self.transform = transform
        self.metadata = pd.read_csv(os.path.join(root_dir, split, "metadata.csv"))

    def __len__(self):
        return len(self.metadata)

    def __getitem__(self, idx):
        row = self.metadata.iloc[idx]
        img_path = os.path.join(self.root_dir, self.split, "images", row["filename"])
        image = Image.open(img_path).convert("RGB")
        if self.transform:
            image = self.transform(image)
        # Number of vertices as classification label (0-5 for 3-8 vertices)
        label = row["vertices"] - 3
        return image, label

# Create dataset and dataloader
dataset = PolygonDataset("path/to/dataset", split="train")
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
```
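A dataset like the one above plugs straight into a standard training loop. Below is a minimal sketch with a toy CNN classifying the 6 vertex classes; the model, the `train_one_epoch` helper, and the assumption that `transform` yields a 3-channel float tensor (e.g. `torchvision.transforms.ToTensor()`) are all illustrative, not part of the dataset:

```python
import torch
import torch.nn as nn

# Toy classifier: 6 output classes for 3-8 vertices (illustrative only)
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),   # pool to 1x1 so any input resolution works
    nn.Flatten(),
    nn.Linear(16, 6),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_one_epoch(dataloader):
    """Hypothetical helper: one pass over (image, label) batches."""
    model.train()
    for images, labels in dataloader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The `AdaptiveAvgPool2d(1)` layer makes the classifier resolution-agnostic, which is convenient here since the card does not state the image size.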
## Use Cases

- **Beginner-Friendly ML Projects**: A simple dataset for learning image classification
- **Shape Recognition Systems**: Training models to identify geometric shapes
- **Property Regression**: Predicting continuous values (size, angle, position)
- **Multi-Task Learning**: Combining classification and regression objectives
- **Data Augmentation Research**: Studying the effects of synthetic data on model performance
- **Benchmark Dataset**: Evaluating new architectures on a controlled, balanced dataset
## License
This dataset is released under the MIT License.
## Citation
If you use this dataset in your research, please cite it as:
```bibtex
@dataset{shape_polygons_dataset,
  title={Shape Polygons Dataset},
  year={2024},
  url={https://huggingface.co/datasets/your-username/shape-polygons-dataset},
  note={A synthetic dataset of 70,000 polygon images for computer vision tasks}
}
```
## Contributing
Contributions are welcome! Feel free to:
- Report issues
- Suggest improvements
- Submit pull requests
## Contact
For questions or feedback, please open an issue on the repository.