# SAM3 Blind Spots Dataset
## Overview

This dataset documents failure cases and limitations (blind spots) observed while experimenting with the SAM3 segmentation model. Its purpose is to analyze scenarios in which the model struggles to correctly segment objects when guided by text prompts.
The dataset includes different types of scenes such as:
- simple object detection
- complex multi-object scenes
- spatial reasoning
- action-based prompts
- camouflaged objects
- sports scenes
Each data point records:
- the image scenario
- the prompt given to the model
- the expected segmentation result
- the actual model output
These examples help illustrate where the model performs well and where it fails.
## Model Tested

**Model:** facebook/sam3
**Release Date:** November 2025

SAM3 is a promptable image segmentation model that generates segmentation masks for objects in an image from prompts such as points, bounding boxes, or text descriptions.
## How the Model Was Loaded

The model was tested on Kaggle with GPU support.

Example code used to load the model:
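A minimal loading sketch, assuming SAM3 is served through the Hugging Face `transformers` library. The `Sam3Processor` and `Sam3Model` class names are assumptions modeled on the API pattern of earlier SAM releases in `transformers`; check the facebook/sam3 model card for the exact interface before running this.

```python
# Untested sketch: Sam3Processor / Sam3Model are ASSUMED class names,
# following the pattern of earlier SAM releases in transformers.
import torch
from transformers import Sam3Model, Sam3Processor

# Prefer the Kaggle GPU when available, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

processor = Sam3Processor.from_pretrained("facebook/sam3")
model = Sam3Model.from_pretrained("facebook/sam3").to(device)
model.eval()  # inference only; no gradient updates needed for testing
```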
## Dataset Structure

```
sam3-blindspots-dataset/
├── images/
│   ├── image1.jpg
│   ├── image2.jpg
│   ├── image3.jpg
│   └── ...
├── dataset.csv
├── testing_notebook.ipynb
└── README.md
```
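Since `dataset.csv` holds the per-example records, it can be loaded with the Python standard library alone. The column names below (`image`, `prompt`, `expected_output`, `model_output`) are assumptions that mirror the fields listed in the overview; adjust them to match the actual header row of `dataset.csv`.

```python
import csv
import io

# Hypothetical two-row sample in the assumed schema; the real file's
# column names and values may differ.
sample_csv = """image,prompt,expected_output,model_output
images/image1.jpg,dog,dog segmented,Correctly segmented
images/image4.jpg,ink pot,ink pot segmented,Not segmented
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))

# Filter down to the failure cases (anything not correctly segmented).
failures = [r for r in rows if r["model_output"] != "Correctly segmented"]

print(len(rows), len(failures))  # prints: 2 1
```

To use the actual file, replace the in-memory sample with `open("dataset.csv", newline="")`.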
## Dataset Examples
| Image # | Image Scenario | Prompt | Expected Output | Model Output |
|---|---|---|---|---|
| 1 | Dog sitting on a sofa | dog | dog segmented | Correctly segmented |
| 2 | Image with utensils on table | smallest size plate | smallest plate segmented | Incorrectly segmented |
| 3 | Bicycle partially hidden | bicycle | bicycle segmented | Correctly segmented |
| 4 | Complex table scene with many objects | ink pot | ink pot segmented | Not segmented |
| 5 | Table containing pen, book, cup, glasses | glasses | glasses segmented | Correctly segmented |
| 6 | Table containing pen, book, cup, glasses | pen | 3 pens segmented | Correctly segmented |
| 7 | Two dogs (black and white) | white dog | white dog segmented | Correctly segmented |
| 8 | One man holding pen and paper while others discuss | man holding pen and paper | that person segmented | Two men segmented (one incorrect) |
| 9 | Speakers where one is less visible | speaker | speaker segmented | Correctly segmented |
| 10 | Partially visible chair | chair | chair segmented | Correctly segmented |
| 11 | One girl using laptop while others discuss | girl using laptop | only that girl segmented | Two girls segmented (one incorrect) |
| 12 | Camouflaged owl in environment | owl | owl segmented | Correctly segmented |
| 13 | Three people discussing; one girl holding a pen | girl holding a pen | girl segmented | Not segmented |
| 14 | Three people discussing; girl on left side | left side girl | that girl segmented | Not segmented |
| 15 | Camouflaged chameleon | chameleon | chameleon segmented | Not segmented |
| 16 | People celebrating a trophy | man looking at trophy | that person segmented | Not segmented |
| 17 | Cricket fielders and wicket keeper | cricket wicket keeper | the wicket keeper segmented | Not segmented |
| 18 | Players with one player bowing down | man bowing down | the person bowing down segmented | Not segmented |
| 19 | Players with one referee | football referee | referee segmented | Two men segmented (one incorrect) |
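The table above can be summarized programmatically. The outcome strings below are transcribed from the "Model Output" column of the 19 test cases:

```python
from collections import Counter

# "Model Output" values for image 1 through image 19, in order.
outcomes = [
    "Correctly segmented", "Incorrectly segmented", "Correctly segmented",
    "Not segmented", "Correctly segmented", "Correctly segmented",
    "Correctly segmented", "Two men segmented (one incorrect)",
    "Correctly segmented", "Correctly segmented",
    "Two girls segmented (one incorrect)", "Correctly segmented",
    "Not segmented", "Not segmented", "Not segmented", "Not segmented",
    "Not segmented", "Not segmented", "Two men segmented (one incorrect)",
]

counts = Counter(outcomes)
correct = counts["Correctly segmented"]
print(f"{correct}/{len(outcomes)} correct")  # prints: 8/19 correct
```

The tally shows the model succeeding on 8 of 19 prompts, with "Not segmented" (7 cases) as the most common failure mode.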
## Observed Blind Spots

### 1. Action-Based Understanding
The model struggles with prompts that describe actions or interactions between people and objects.
Examples:
- man holding pen and paper
- man looking at trophy
- girl holding a pen
- man bowing down
These require understanding actions, not just object presence.
### 2. Spatial Reasoning
The model fails when prompts involve relative spatial descriptions.
Example:
- left side girl
The model has difficulty identifying objects based on relative position within the scene.
### 3. Camouflaged Objects
Objects that blend with their surroundings are difficult for the model to segment.
Examples:
- chameleon
- camouflaged animals
This indicates a weakness in detecting low-contrast or camouflaged objects.
### 4. Complex Scenes with Many Objects
In cluttered environments, the model sometimes fails to identify the correct object.
Example:
- ink pot among many objects on a table
### 5. Domain-Specific Objects (Sports Scenes)
The model struggles with sports-related roles or specific entities.
Examples:
- cricket wicket keeper
- football referee
Identifying these requires understanding contextual roles within a scene rather than recognizing simple objects.
## Recommended Datasets for Fine-Tuning
To address these limitations, the model could be fine-tuned on datasets containing:
### Referring Expression Segmentation
Datasets that map natural language descriptions to object masks.
Examples:
- RefCOCO
- RefCOCO+
- RefCOCOg
### Human Interaction Datasets
Datasets that capture human-object interactions and actions.
Examples:
- Visual Genome
- GQA
### Camouflage Detection Datasets
Datasets designed for detecting camouflaged objects.
Examples:
- CAMO
- COD10K
## Estimated Dataset Size for Improvement

A dataset of approximately 50,000–200,000 annotated samples would likely be needed to significantly improve performance.
Each sample should include:
- image
- natural language prompt
- segmentation mask
## Purpose of This Dataset
This dataset is intended to:
- highlight limitations of segmentation models
- analyze failure cases of prompt-based segmentation
- support research on multimodal reasoning and segmentation models
## License
This dataset is shared for research and educational purposes.