# Inspect Anything
This repository provides the annotation files (JSON) of the Inspect-Anything (InsA) benchmark proposed in our paper; the original images are not included.
The InsA benchmark is introduced in the paper *UniSpector: Towards Universal Open-set Defect Recognition via Spectral-Contrastive Visual Prompting* (CVPR 2026).
For instructions on how to prepare the images, see the dataset preparation section.
## Overview
To evaluate generalization ability to unseen defects, we introduce Inspect Anything (InsA), a benchmark for open-set defect detection and segmentation within a visual prompting framework.
InsA evaluates three aspects of generalization:
- (1) in-domain generalization to novel defect types that share visual similarity with the seen sets,
- (2) cross-domain transferability to unseen defects emerging under different material properties, imaging conditions, and defect morphologies, and
- (3) prompt-level generalization, quantified by aggregating results across multiple class partitions and independently sampled prompt sets to reflect stability rather than dependence on a single prompt configuration.
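Concretely, the prompt-level aggregation in (3) amounts to reporting a mean and a spread over independent runs. A minimal sketch (the per-run granularity and use of the sample standard deviation are illustrative assumptions, not part of this release):

```python
import statistics

def aggregate_runs(scores):
    """scores: one metric value per (class partition, prompt set) run.
    The mean summarizes overall performance; the sample std quantifies
    how sensitive the result is to a particular prompt configuration."""
    return statistics.mean(scores), statistics.stdev(scores)
```

Reporting both numbers distinguishes a method that is accurate on average from one that is also stable across prompt choices.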
We construct this benchmark from seven industrial inspection datasets: GC10-DET, Magnetic Tile Surface Defect, Real-IAD, MVTec AD, 3CAD, VISION, and VisA.
Among these, GC10-DET, Magnetic Tile Surface Defect, Real-IAD, and MVTec AD are used as in‑domain datasets, with defect categories partitioned into seen classes (training) and unseen classes (testing).
In contrast, 3CAD, VISION, and VisA serve as cross‑domain datasets, used exclusively for evaluating transferability to novel defect types and object appearances.
To ensure robust estimates of generalization performance, we form three independent seen–unseen splits using random seeds {42, 82, 777}, holding out roughly 25% of defect categories in each in‑domain dataset as unseen test classes.
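Such a seeded split can be sketched as follows (the actual partitioning code is not part of this release; uniform random sampling of categories is our assumption):

```python
import random

def make_split(categories, seed, unseen_frac=0.25):
    """Hold out roughly `unseen_frac` of defect categories as unseen
    test classes, reproducibly for a given seed."""
    rng = random.Random(seed)
    cats = sorted(categories)                      # fix order before sampling
    n_unseen = max(1, round(len(cats) * unseen_frac))
    unseen = set(rng.sample(cats, n_unseen))
    seen = [c for c in cats if c not in unseen]
    return seen, sorted(unseen)

# One split per seed, mirroring the three benchmark splits
splits = {seed: make_split([f"defect_{i}" for i in range(10)], seed)
          for seed in (42, 82, 777)}
```

Sorting before sampling keeps the split a pure function of the seed, independent of the order in which categories were listed.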
## Dataset Composition and Characteristics
The InsA benchmark is composed of diverse products and materials, enabling evaluation of open‑set visual inspection performance across varied distributions.
Defect images are captured under diverse imaging conditions (e.g., very bright or very dark illumination), resulting in generally large standard deviations in the HSV channels.
These distributions also vary across datasets, leading to differing mean values and further highlighting the challenge of generalizing visual inspection models across heterogeneous industrial data.
| Dataset | Images | Defect Instances | Defect Categories | In-domain Seen (Train) | In-domain Unseen (Test) | Cross-domain |
|---|---|---|---|---|---|---|
| GC10 | 2,292 | 3,563 | 10 | 7 | 3 | - |
| MagneticTile | 388 | 514 | 5 | 4 | 1 | - |
| Real-IAD | 49,237 | 55,326 | 111 | 83 | 28 | - |
| MVTec | 1,258 | 1,836 | 73 | 53 | 20 | - |
| 3CAD | 11,074 | 16,559 | 46 | - | - | 46 |
| VISION | 1,757 | 3,553 | 44 | - | - | 44 |
| VisA | 1,167 | 2,131 | 71 | - | - | 71 |
| Total | 67,173 | 83,482 | 360 | 147 | 52 | 161 |
The table below summarizes key characteristics of each dataset, including product types, materials, and color distribution statistics (mean ± std) for each HSV channel.
| Dataset | Product | Material | H (mean ± std) | S (mean ± std) | V (mean ± std) |
|---|---|---|---|---|---|
| GC10‑DET | Steel | Steel | 0.0 ± 0.0 | 0.0 ± 0.0 | 85.7 ± 39.6 |
| Magnetic Tile | Magnetic Tile | Steel | 0.0 ± 0.0 | 0.0 ± 0.0 | 109.4 ± 47.4 |
| Real‑IAD | PCB, Toy Brick, Transistor, etc. | Plastic, Rubber, Wood, etc. | 33.4 ± 49.5 | 40.6 ± 65.7 | 110.2 ± 105.5 |
| MVTec AD | Cable, Hazelnut, Tile, etc. | Glass, Metal, Fabric, etc. | 46.7 ± 55.1 | 46.0 ± 47.8 | 120.5 ± 66.7 |
| 3CAD | Camera Cover, Tablet PC, etc. | Aluminum, Copper, etc. | 7.2 ± 19.2 | 16.7 ± 49.0 | 78.6 ± 70.5 |
| VISION | Capacitor, Lens, Screw, etc. | Plastic, Steel, Wood, etc. | 25.2 ± 41.1 | 38.4 ± 70.7 | 106.3 ± 83.3 |
| VisA | Candle, Capsule, Macaroni, etc. | Plastic, Food, etc. | 47.7 ± 36.4 | 95.7 ± 71.9 | 101.9 ± 66.1 |
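Statistics of this kind can be reproduced by pooling every pixel of every image after conversion to HSV. A minimal numpy sketch (per-pixel pooling is our assumption; conversion to HSV, e.g. via OpenCV's `cv2.cvtColor`, is done beforehand):

```python
import numpy as np

def hsv_channel_stats(hsv_images):
    """hsv_images: iterable of (H, W, 3) uint8 arrays already in HSV.
    Returns per-channel (mean, std) pooled over all pixels of all images."""
    pixels = np.concatenate([img.reshape(-1, 3).astype(np.float64)
                             for img in hsv_images])
    return pixels.mean(axis=0), pixels.std(axis=0)
```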
## Detailed Description
### GC10‑DET
GC10-DET is a defect detection dataset collected from metallic surfaces in industrial environments. It contains about 2,300 defect images with bounding‑box annotations for 10 defect categories: silk spot, welding line, punching hole, water spot, crescent gap, oil spot, inclusion, waist folding, crease, and rolled pit.
### Magnetic Tile Surface Defect
The magnetic tile surface defect dataset contains images of magnetic tile surfaces collected from industrial production lines, covering both defective and defect‑free samples.
It provides pixel‑level annotations for 5 defect categories: blowhole, crack, break, fray, and uneven.
In our benchmark, we discard normal images and retain only defective samples, keeping approximately 400 defect images from the original release.
### Real‑IAD
Real-IAD is a multi‑view industrial anomaly dataset comprising 30 real‑world objects fabricated from diverse materials (e.g., plastic, rubber).
Each object is captured under 5 viewpoints and exhibits 2–5 distinct defect modes selected from 8 defect families: pit, deformation, abrasion, scratch, damage, missing parts, foreign objects, and contamination. We keep only viewpoints equipped with pixel‑level defect masks as anomaly samples and use the official 1024×1024 resolution images.
From the binary defect masks, we perform connected‑component labeling with 8‑connectivity to obtain instance‑level segments, and discard components whose width or height is smaller than 1% of the image width or height to remove tiny noisy polygons.
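This extraction step can be sketched in pure Python/numpy (a BFS flood fill stands in for a library routine such as OpenCV's `connectedComponentsWithStats`; the 1% bounding-box filter follows the description above):

```python
from collections import deque
import numpy as np

def extract_instances(mask, min_frac=0.01):
    """8-connected components of a binary (H, W) mask; drop components
    whose bounding-box width or height is under min_frac of the image."""
    h, w = mask.shape
    visited = np.zeros((h, w), dtype=bool)
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    instances = []
    for y0 in range(h):
        for x0 in range(w):
            if not mask[y0, x0] or visited[y0, x0]:
                continue
            # BFS flood fill over the 8-neighbourhood
            comp, queue = [], deque([(y0, x0)])
            visited[y0, x0] = True
            while queue:
                y, x = queue.popleft()
                comp.append((y, x))
                for dy, dx in neighbours:
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and mask[ny, nx] and not visited[ny, nx]):
                        visited[ny, nx] = True
                        queue.append((ny, nx))
            ys = [p[0] for p in comp]
            xs = [p[1] for p in comp]
            bw, bh = max(xs) - min(xs) + 1, max(ys) - min(ys) + 1
            if bw >= min_frac * w and bh >= min_frac * h:
                instances.append(comp)
    return instances
```

8-connectivity merges diagonally touching pixels into one instance, which matters for thin diagonal defects such as scratches.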
### MVTec AD
MVTec AD is a real‑world industrial anomaly dataset that contains defective images across multiple object and texture categories, with various defect types such as contamination, oil, cuts, and cracks.
In our benchmark, we discard all “good” images and use only anomalous samples with polygon masks. From these masks, we extract individual defect instances and remove components whose width or height is smaller than 1% of the corresponding image dimension to filter out tiny noisy regions.
### 3CAD
3CAD is a large‑scale anomaly detection dataset collected from real 3C product manufacturing lines, covering representative defects that arise in practical production environments.
It focuses on parts made of three common materials (Aluminum, Iron, Copper) and includes multiple types of 3C components (e.g., camera covers and tablet PCs). Since 3CAD is designed for anomaly detection, we discard the “good” class (which has no polygon annotations) and also remove the Multiple-defects class, whose defect instances cannot be clearly assigned to specific categories.
After this filtering, we obtain a total of 46 defect categories. From the remaining polygons, we extract individual defect instances and discard components whose width or height is smaller than 1% of the image dimension to remove tiny noisy regions.
### VISION
VISION is a benchmark unifying 14 industrial inspection subsets, each corresponding to a distinct object class from real manufacturing lines and captured at its native (often high) resolution; image sizes therefore vary across subsets.
It provides pixel‑level instance masks for 44 defect categories. For InsA, we retain only images in the train and val partitions that include polygon annotations and discard the inference split, whose labels are withheld.
From the polygon masks, we remove segments whose width or height is smaller than 1% of the image width or height to filter out tiny noisy regions.
### VisA
VisA is an industrial visual inspection dataset containing normal and defective images from 12 object categories, some of which exhibit large variations in object location and pose across images.
It covers both surface‑level defects (e.g., scratches, dents) and structural defects (e.g., misplacement). In our benchmark, we discard all normal images and retain only those that contain at least one annotated defective region.
We also exclude the Other defect class, whose semantics are not clearly specified. From the binary defect masks, we apply connected‑component labeling with 8‑connectivity to obtain instance‑level segments, and discard components whose width or height is smaller than 1% of the image dimension to remove tiny noisy regions.
## Acknowledgement
We gratefully acknowledge Minhoi Kim for their major contributions to the construction and refinement of this dataset.
## License
This dataset is released under the MIT License, which permits both non-commercial and commercial use.