---
license: mit
task_categories:
- image-classification
- visual-question-answering
tags:
- counting
- shapes
- vision
- cognitive-science
- psychology
size_categories:
- 1K<n<10K
---

# Shape Counting Dataset

A dataset for evaluating shape counting abilities in vision models and humans.

## Dataset Description

This dataset contains images with varying numbers of **squares**, **triangles**, and **stars** on a white background. Each image is provided in multiple versions: the original clean image plus several noisy variants.

### Image Specifications

- **Size**: 256×256 pixels
- **Format**: Grayscale PNG
- **Shape size**: 18 pixels
- **Background**: White (255)
- **Shapes**: Black (0)

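
The generation script is not published with this card; as an illustration of the specification above, here is a minimal Pillow sketch that draws one 18-pixel black square on a 256×256 white grayscale canvas (the position and drawing routine are hypothetical, not the dataset's actual generator):

```python
from PIL import Image, ImageDraw

# 256x256 grayscale ("L") canvas, white background (value 255)
img = Image.new("L", (256, 256), color=255)
draw = ImageDraw.Draw(img)

# One 18-pixel black square (value 0) at an arbitrary position
x, y, size = 100, 80, 18
draw.rectangle([x, y, x + size - 1, y + size - 1], fill=0)

img.save("sample_square.png")  # images are stored as grayscale PNG
```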
## Dataset Structure

### Fields

| Field | Type | Description |
|-------|------|-------------|
| `image` | Image | The shape image (original or noisy) |
| `image_id` | int | Unique ID for the base image (shared across noise variants) |
| `noise_type` | string | Type of noise applied (see below) |
| `count_squares` | int | Number of squares in the image |
| `count_triangles` | int | Number of triangles in the image |
| `count_stars` | int | Number of stars in the image |
| `total_shapes` | int | Total number of shapes (sum of the three counts) |
| `bucket` | int | Bucket number (1, 2, or 3) |
| `bucket_name` | string | Bucket description |
| `difficulty` | string | `"medium"` or `"hard"` |
| `intended_counts` | string | Originally intended counts before shape placement |

### Buckets Explained

The dataset is organized into 3 buckets based on shape-type complexity:

| Bucket | Name | Description |
|--------|------|-------------|
| **1** | `single_shape` | Only one type of shape (all squares, all triangles, or all stars) |
| **2** | `two_shapes` | Two different shape types mixed together |
| **3** | `three_shapes` | All three shape types mixed together |

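
The bucket can also be recovered directly from the per-shape counts: it equals the number of distinct shape types present in the image. A small helper (illustrative only, not part of the dataset API):

```python
def bucket_for(count_squares, count_triangles, count_stars):
    """Bucket = number of distinct shape types present (1, 2, or 3)."""
    return sum(c > 0 for c in (count_squares, count_triangles, count_stars))

print(bucket_for(12, 0, 0))  # 1 -> single_shape
print(bucket_for(5, 7, 3))   # 3 -> three_shapes
```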
### Difficulty Levels

| Difficulty | Total Shapes | Description |
|------------|--------------|-------------|
| **medium** | 11-36 | Moderate counting difficulty |
| **hard** | 37-60 | Challenging counting task |

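
Per the ranges above, the difficulty label is a deterministic function of `total_shapes`. A helper that mirrors the table (illustrative, not shipped with the dataset):

```python
def difficulty_for(total_shapes):
    """Map a total shape count to the card's difficulty label."""
    if 11 <= total_shapes <= 36:
        return "medium"
    if 37 <= total_shapes <= 60:
        return "hard"
    raise ValueError(f"total_shapes {total_shapes} is outside the documented 11-60 range")

print(difficulty_for(20))  # medium
print(difficulty_for(45))  # hard
```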
### Noise Types

Each image is provided in 8 variants:

| Noise Type | Description |
|------------|-------------|
| `original` | Clean image, no noise |
| `salt_pepper_medium` | 15% salt & pepper noise |
| `salt_pepper_heavy` | 25% salt & pepper noise |
| `salt_pepper_extreme` | 35% salt & pepper noise |
| `blur_heavy` | Gaussian blur (radius=3) |
| `blur_extreme` | Gaussian blur (radius=5) |
| `motion_blur` | Horizontal motion blur (size=9) |
| `motion_blur_heavy` | Horizontal motion blur (size=13) |

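
The exact noise pipeline is not published with the card. As a rough sketch of what, e.g., 15% salt & pepper noise means (15% of pixels replaced, half with pure black and half with pure white), assuming a NumPy-style implementation:

```python
import numpy as np

def salt_pepper(image_array, amount, rng=None):
    """Replace `amount` of pixels with 0 (pepper) or 255 (salt), half each."""
    if rng is None:
        rng = np.random.default_rng(0)
    noisy = image_array.copy()
    mask = rng.random(noisy.shape)
    noisy[mask < amount / 2] = 0                          # pepper
    noisy[(mask >= amount / 2) & (mask < amount)] = 255   # salt
    return noisy

clean = np.full((256, 256), 255, dtype=np.uint8)  # all-white canvas
noisy = salt_pepper(clean, amount=0.15)           # roughly salt_pepper_medium
```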
## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("nooranis/shape-counting-dataset")

# Filter for only original (clean) images
original_only = dataset.filter(lambda x: x['noise_type'] == 'original')

# Filter for a specific noise type
blurry = dataset.filter(lambda x: x['noise_type'] == 'blur_heavy')

# Filter by difficulty
hard_images = dataset.filter(lambda x: x['difficulty'] == 'hard')

# Filter by bucket
single_shape = dataset.filter(lambda x: x['bucket'] == 1)
```

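
Because every base image appears in 8 rows (one per noise variant), train/test splits should be made on `image_id` rather than on rows, or the same underlying scene will leak across splits. A plain-Python sketch (the 80/20 ratio and helper name are arbitrary choices, not part of the dataset):

```python
import random

def split_by_image_id(image_ids, test_fraction=0.2, seed=0):
    """Partition unique image IDs so all noise variants of an image stay together."""
    unique_ids = sorted(set(image_ids))
    rng = random.Random(seed)
    rng.shuffle(unique_ids)
    n_test = int(len(unique_ids) * test_fraction)
    test_ids = set(unique_ids[:n_test])
    train_ids = set(unique_ids[n_test:])
    return train_ids, test_ids

# Toy example: 10 base images, each repeated 8 times (one row per noise variant)
rows = [i for i in range(10) for _ in range(8)]
train_ids, test_ids = split_by_image_id(rows)
```

The resulting ID sets can then be applied with `dataset.filter(lambda x: x['image_id'] in test_ids)`.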
### Example: Get image and count

```python
from datasets import load_dataset

dataset = load_dataset("nooranis/shape-counting-dataset")

# Get a sample
sample = dataset['train'][0]

# Access the image
image = sample['image']  # PIL Image

# Access counts
print(f"Squares: {sample['count_squares']}")
print(f"Triangles: {sample['count_triangles']}")
print(f"Stars: {sample['count_stars']}")
print(f"Total: {sample['total_shapes']}")
```

### Example: Evaluate a vision model

```python
from datasets import load_dataset

dataset = load_dataset("nooranis/shape-counting-dataset")

# Get original images only
test_data = dataset['train'].filter(lambda x: x['noise_type'] == 'original')

correct = 0
for sample in test_data:
    image = sample['image']
    true_count = sample['count_stars']  # or count_squares / count_triangles

    # Your model prediction here (your_model is a placeholder)
    predicted = your_model.count_stars(image)

    correct += int(predicted == true_count)

accuracy = correct / len(test_data)
print(f"Exact-match accuracy: {accuracy:.3f}")
```

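
Beyond exact-match accuracy, counting evaluations often also report mean absolute error, which distinguishes near-misses from wildly wrong counts. A self-contained helper (the function name is my own, not part of the dataset API):

```python
def counting_metrics(predicted, actual):
    """Exact-match accuracy and mean absolute error for paired count lists."""
    if len(predicted) != len(actual):
        raise ValueError("prediction/label length mismatch")
    exact = sum(p == a for p, a in zip(predicted, actual)) / len(actual)
    mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)
    return {"accuracy": exact, "mae": mae}

print(counting_metrics([3, 5, 7, 9], [3, 4, 7, 12]))
# {'accuracy': 0.5, 'mae': 1.0}
```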
## Dataset Statistics

- **Base images**: 1795
- **Total rows (all noise variants)**: 14360 (1795 × 8)
- **Noise types**: 8
- **Buckets**: 3
- **Difficulty levels**: 2 (medium, hard)