---
license: mit
task_categories:
  - image-classification
  - object-detection
language:
  - en
tags:
  - computer-vision
  - polygons
  - shapes
  - synthetic-data
  - image-generation
pretty_name: Shape Polygons Dataset
size_categories:
  - 10K<n<100K
---

# Shape Polygons Dataset

A synthetic dataset of 70,000 images of colored polygons (triangles through octagons) rendered on black backgrounds.

## Dataset Description

This dataset consists of programmatically generated polygon images with full metadata about each shape's properties. It's designed for tasks such as:

- **Shape Classification**: Classify polygons by number of vertices (3-8)
- **Regression Tasks**: Predict shape properties (size, angle, position, color)
- **Object Detection**: Locate and identify shapes within images
- **Generative Models**: Train models to generate geometric shapes

### Dataset Statistics

| Split | Number of Images |
|-------|------------------|
| Train | 60,000 |
| Test  | 10,000 |
| **Total** | **70,000** |

### Shape Types

The dataset includes 6 different polygon types:
- **Triangle** (3 vertices)
- **Quadrilateral** (4 vertices)
- **Pentagon** (5 vertices)
- **Hexagon** (6 vertices)
- **Heptagon** (7 vertices)
- **Octagon** (8 vertices)
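
For convenience, the vertex counts above can be mapped to shape names with a small helper (this mapping is not shipped with the dataset; it simply mirrors the list above):

```python
# Map vertex counts (3-8) to the polygon names used in this dataset card.
SHAPE_NAMES = {
    3: "triangle",
    4: "quadrilateral",
    5: "pentagon",
    6: "hexagon",
    7: "heptagon",
    8: "octagon",
}

def shape_name(vertices: int) -> str:
    """Return the shape name for a given vertex count (3-8)."""
    return SHAPE_NAMES[vertices]
```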

## Dataset Structure

```
shape-polygons-dataset/
├── train/
│   ├── images/
│   │   ├── 00001.png
│   │   ├── 00002.png
│   │   └── ... (60,000 images)
│   └── metadata.csv
├── test/
│   ├── images/
│   │   ├── 00001.png
│   │   ├── 00002.png
│   │   └── ... (10,000 images)
│   └── metadata.csv
└── README.md
```

### Metadata Fields

Each `metadata.csv` contains the following columns:

| Column | Type | Description |
|--------|------|-------------|
| `filename` | string | Image filename (e.g., "00001.png") |
| `size` | float | Relative size of the polygon (0.0 - 1.0) |
| `angle` | float | Rotation angle in degrees (0.0 - 360.0) |
| `vertices` | int | Number of vertices (3-8) |
| `center_x` | float | X-coordinate of center (0.0 - 1.0, normalized) |
| `center_y` | float | Y-coordinate of center (0.0 - 1.0, normalized) |
| `color_r` | float | Red color component (0.0 - 1.0) |
| `color_g` | float | Green color component (0.0 - 1.0) |
| `color_b` | float | Blue color component (0.0 - 1.0) |
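
A quick sanity check over the documented value ranges can be sketched as follows. The demo rows are hypothetical; in practice you would pass a DataFrame loaded from `metadata.csv`:

```python
import pandas as pd

# Expected value ranges for each numeric metadata column (from the table above).
RANGES = {
    "size": (0.0, 1.0),
    "angle": (0.0, 360.0),
    "vertices": (3, 8),
    "center_x": (0.0, 1.0),
    "center_y": (0.0, 1.0),
    "color_r": (0.0, 1.0),
    "color_g": (0.0, 1.0),
    "color_b": (0.0, 1.0),
}

def validate_metadata(df: pd.DataFrame) -> bool:
    """Check that every numeric column falls within its documented range."""
    for col, (lo, hi) in RANGES.items():
        if not df[col].between(lo, hi).all():
            return False
    return True

# Hypothetical row for illustration only; real data comes from metadata.csv.
demo = pd.DataFrame([
    {"filename": "00001.png", "size": 0.4, "angle": 120.0, "vertices": 5,
     "center_x": 0.5, "center_y": 0.5,
     "color_r": 0.9, "color_g": 0.1, "color_b": 0.3},
])
```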

## Sample Images

Here are some example images from the dataset:

<div style="display: flex; gap: 10px; flex-wrap: wrap;">
  <img src="train/images/00001.png" width="64" height="64" alt="Sample 1">
  <img src="train/images/00003.png" width="64" height="64" alt="Sample 2">
  <img src="train/images/00005.png" width="64" height="64" alt="Sample 3">
  <img src="train/images/00016.png" width="64" height="64" alt="Sample 4">
</div>

## Usage

### Loading with Hugging Face Datasets

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("your-username/shape-polygons-dataset")

# Access train and test splits
train_data = dataset["train"]
test_data = dataset["test"]

# Get a sample
sample = train_data[0]
print(f"Vertices: {sample['vertices']}, Size: {sample['size']:.2f}")
```

### Loading with Pandas

```python
import pandas as pd
from PIL import Image
import os

# Load metadata
train_metadata = pd.read_csv("train/metadata.csv")
test_metadata = pd.read_csv("test/metadata.csv")

# Load an image
img_path = os.path.join("train/images", train_metadata.iloc[0]["filename"])
image = Image.open(img_path)
image.show()

# Filter by number of vertices (e.g., triangles only)
triangles = train_metadata[train_metadata["vertices"] == 3]
print(f"Number of triangles: {len(triangles)}")
```

### PyTorch DataLoader Example

```python
import torch
from torch.utils.data import Dataset, DataLoader
from PIL import Image
import pandas as pd
import os

class PolygonDataset(Dataset):
    def __init__(self, root_dir, split="train", transform=None):
        self.root_dir = root_dir
        self.split = split
        self.transform = transform
        self.metadata = pd.read_csv(os.path.join(root_dir, split, "metadata.csv"))
    
    def __len__(self):
        return len(self.metadata)
    
    def __getitem__(self, idx):
        row = self.metadata.iloc[idx]
        img_path = os.path.join(self.root_dir, self.split, "images", row["filename"])
        image = Image.open(img_path).convert("RGB")
        
        if self.transform:
            image = self.transform(image)
        
        # Number of vertices as classification label (0-5 for 3-8 vertices)
        label = int(row["vertices"]) - 3
        
        return image, label

# Create dataset and dataloader. A tensor transform is required so the
# DataLoader can batch samples; raw PIL images cannot be collated.
from torchvision import transforms

dataset = PolygonDataset("path/to/dataset", split="train",
                         transform=transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
```

## Use Cases

1. **Beginner-Friendly ML Projects**: Simple dataset for learning image classification
2. **Shape Recognition Systems**: Training models to identify geometric shapes
3. **Property Regression**: Predicting continuous values (size, angle, position)
4. **Multi-Task Learning**: Combining classification and regression objectives
5. **Data Augmentation Research**: Studying effects of synthetic data on model performance
6. **Benchmark Dataset**: Evaluating new architectures on a controlled, balanced dataset
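
As a sketch of the multi-task setup (item 4), a single metadata row can supply both a classification label and regression targets. The helper below is illustrative, not part of the dataset:

```python
def multitask_targets(row: dict):
    """Split one metadata row into (class_label, regression_targets).

    The class label is vertices - 3 (0-5); the regression targets are
    the continuous properties documented in metadata.csv, with the
    angle normalized to [0, 1] so all targets share a common scale.
    """
    class_label = int(row["vertices"]) - 3
    regression = [
        row["size"],
        row["angle"] / 360.0,  # normalize angle to [0, 1]
        row["center_x"], row["center_y"],
        row["color_r"], row["color_g"], row["color_b"],
    ]
    return class_label, regression
```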

## License

This dataset is released under the [MIT License](LICENSE).

## Citation

If you use this dataset in your research, please cite it as:

```bibtex
@dataset{shape_polygons_dataset,
  title={Shape Polygons Dataset},
  year={2024},
  url={https://huggingface.co/datasets/your-username/shape-polygons-dataset},
  note={A synthetic dataset of 70,000 polygon images for computer vision tasks}
}
```

## Contributing

Contributions are welcome! Feel free to:
- Report issues
- Suggest improvements
- Submit pull requests

## Contact

For questions or feedback, please open an issue on the repository.