
Wafer VQA Dataset

Overview

Wafer VQA Dataset is a multimodal benchmark for wafer map understanding, visual question answering, and defect reasoning. It is built on top of the MixedWM38 wafer-map collection and organized into two annotation styles:

  • tuple_generation: one multi-question response per image, intended for GRPO or other sequence-level optimization settings
  • stepwise_reasoning: one stepwise dialogue per image, intended for supervised fine-tuning

Summary

  • Images: 38,015 PNG wafer maps
  • Total classes: 38
  • Annotation styles: 2
  • Source benchmark questions per image: 5
  • Image resolution: 416 × 416 pixels
  • Language: English

Directory Layout

Wafer_VQA_dataset/
β”œβ”€β”€ croissant.json
β”œβ”€β”€ generate_release_metadata.py
β”œβ”€β”€ annotations/
β”‚   β”œβ”€β”€ tuple_generation/
β”‚   β”‚   β”œβ”€β”€ train_5pct.json
β”‚   β”‚   β”œβ”€β”€ train.json
β”‚   β”‚   β”œβ”€β”€ val.json
β”‚   β”‚   └── test.json
β”‚   β”œβ”€β”€ stepwise_reasoning/
β”‚   β”‚   β”œβ”€β”€ train.json
β”‚   β”‚   β”œβ”€β”€ val.json
β”‚   β”‚   └── test.json
β”‚   └── metadata/
β”‚       β”œβ”€β”€ class_taxonomy.json
β”‚       └── eval_reference_pool.json
β”œβ”€β”€ images/
β”‚   β”œβ”€β”€ single/
β”‚   β”œβ”€β”€ mixed_2/
β”‚   β”œβ”€β”€ mixed_3/
β”‚   └── mixed_4/
β”œβ”€β”€ knowledge_base/
β”‚   β”œβ”€β”€ location_targets_by_image.json
β”‚   β”œβ”€β”€ q4_q5_prompt_templates.json
β”‚   β”œβ”€β”€ root_cause_refs_by_class.json
β”‚   └── visual_description_refs_by_class.json
└── previews/
    └── class_preview_grid.png

Annotation Files

1. Tuple Generation

Files in annotations/tuple_generation/ store one consolidated response per image. Each sample keeps only:

  • id
  • image
  • conversations

2. Stepwise Reasoning

Files in annotations/stepwise_reasoning/ store a step-by-step dialogue for the same wafer image. Each sample also keeps only:

  • id
  • image
  • conversations

3. Evaluation Metadata

annotations/metadata/eval_reference_pool.json is a centralized evaluation reference file. It maps sample IDs to question-specific evaluation targets and metrics, including:

  • exact-match targets
  • multi-label reference sets
  • reference answer pools for semantic evaluation
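
The exact schema of eval_reference_pool.json is not spelled out above, so the sketch below assumes one plausible shape: sample IDs mapping to per-question entries, each carrying a metric name and reference targets. The field names (`metric`, `refs`, `q2`, `q3`) are illustrative, not confirmed by the release.

```python
# Hypothetical shape for one eval_reference_pool.json entry; the real file's
# keys may differ. It only illustrates how the three reference types listed
# above (exact-match targets, multi-label sets, answer pools) could be scored.
pool = {
    "donut_edge_ring_scratch_15558": {
        "q2": {"metric": "multi_label", "refs": ["Donut", "Edge-Ring", "Scratch"]},
        "q3": {"metric": "exact_match", "refs": ["Edge-Ring"]},
    }
}

def score(entry, prediction):
    """Score one prediction against one question's reference entry."""
    if entry["metric"] == "exact_match":
        return float(prediction.strip() == entry["refs"][0])
    if entry["metric"] == "multi_label":
        # set-level F1 between predicted and reference label sets
        pred, refs = set(prediction), set(entry["refs"])
        if not pred or not refs:
            return 0.0
        p = len(pred & refs) / len(pred)
        r = len(pred & refs) / len(refs)
        return 0.0 if p + r == 0 else 2 * p * r / (p + r)
    raise ValueError(f"unknown metric: {entry['metric']}")
```

Semantic evaluation against the reference answer pools would need an embedding or judge model and is omitted here.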

4. Class Taxonomy

annotations/metadata/class_taxonomy.json is the canonical class ontology for public release. It links:

  • folder-level class IDs such as edge_ring_scratch
  • canonical display labels such as Edge-Ring+Scratch
  • component defect patterns used for Q2 answers
  • image directories and sample counts

This file should be treated as the authoritative mapping layer for external users.

Data Schema

Each training sample follows this minimal schema:

{
  "id": "donut_edge_ring_scratch_15558",
  "image": "images/mixed_3/donut_edge_ring_scratch/wm38_15558.png",
  "conversations": [
    {"from": "system", "value": "..."},
    {"from": "human", "value": "<image>\n..."},
    {"from": "gpt", "value": "..."}
  ]
}

Notes:

  • image is a relative path from the dataset root.
  • conversations uses the common multimodal chat format adopted by LLaVA-style loaders.
  • GRPO-style tuple samples use IDs with a _multiq suffix to distinguish them from stepwise samples derived from the same image.
  • The candidate option order in Q2 and Q3 is intentionally shuffled across samples to reduce position bias and should not be interpreted as a fixed label order.
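
A minimal sketch of consuming this schema, using the sample shown above with illustrative conversation text in place of the "..." placeholders. In practice you would json.load() a split file such as annotations/stepwise_reasoning/train.json, which contains a list of these records.

```python
# One record in the minimal schema; conversation values here are illustrative
# stand-ins, not actual dataset text.
sample = {
    "id": "donut_edge_ring_scratch_15558",
    "image": "images/mixed_3/donut_edge_ring_scratch/wm38_15558.png",
    "conversations": [
        {"from": "system", "value": "You are a wafer-map analysis assistant."},
        {"from": "human", "value": "<image>\nWhat defect patterns are present?"},
        {"from": "gpt", "value": "Donut, Edge-Ring, and Scratch."},
    ],
}

def split_turns(record):
    """Separate system / user / assistant turns from a LLaVA-style record."""
    turns = record["conversations"]
    system = [t["value"] for t in turns if t["from"] == "system"]
    user = [t["value"] for t in turns if t["from"] == "human"]
    assistant = [t["value"] for t in turns if t["from"] == "gpt"]
    return system, user, assistant

system, user, assistant = split_turns(sample)
image_path = sample["image"]            # relative to the dataset root
has_image_token = "<image>" in user[0]  # LLaVA-style loaders key on this placeholder
```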

Splits

Stepwise Reasoning Split

  • train: 7,602 samples
  • val: 3,800 samples
  • test: 26,613 samples

Tuple Generation Split

  • train_5pct: 1,900 samples
  • train: 7,602 samples
  • val: 3,800 samples
  • test: 26,613 samples
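
The published split sizes can be sanity-checked against the image count: the three main splits partition all 38,015 images (roughly 20% / 10% / 70%), and the extra 1,900-sample subset is about 5% of the total, hence the train_5pct name.

```python
# Split sizes as published above; each annotation style covers every image
# exactly once across train/val/test.
TOTAL_IMAGES = 38_015
splits = {"train": 7_602, "val": 3_800, "test": 26_613}

# The three main splits partition the full image set.
assert sum(splits.values()) == TOTAL_IMAGES

# Approximate proportions (~20% / 10% / 70%).
fractions = {name: n / TOTAL_IMAGES for name, n in splits.items()}

# The tuple_generation style adds a 1,900-sample subset, ~5% of all images.
assert abs(1_900 / TOTAL_IMAGES - 0.05) < 0.001
```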

Image Categories

The dataset contains 38 image categories:

  • 1 normal category
  • 8 single-pattern categories
  • 29 mixed-pattern categories

At the folder level, images are grouped into:

  • images/single: normal and single-pattern wafer maps
  • images/mixed_2: two-pattern mixed wafer maps
  • images/mixed_3: three-pattern mixed wafer maps
  • images/mixed_4: four-pattern mixed wafer maps

Class Taxonomy

The release includes annotations/metadata/class_taxonomy.json to make the class ontology explicit and machine-readable.

The taxonomy separates three naming layers that were easy to confuse in the internal build artifacts:

  • class_id: folder-facing identifier such as center_edge_loc_scratch
  • canonical_label: public display label such as Center+Edge-Loc+Scratch
  • component_pattern_labels: the canonical defect labels used by Q2 answers
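
The three layers above can be flattened into lookup tables. The entry shape below is an assumption about class_taxonomy.json (the release does not print its fields here); the field names mirror the layer names but may differ in the real file.

```python
# Hypothetical entry shape for class_taxonomy.json, built from the three
# naming layers described above; real field names may differ.
taxonomy = [
    {
        "class_id": "center_edge_loc_scratch",
        "canonical_label": "Center+Edge-Loc+Scratch",
        "component_pattern_labels": ["Center", "Edge-Loc", "Scratch"],
    }
]

# Folder-facing class_id -> public display label.
id_to_label = {e["class_id"]: e["canonical_label"] for e in taxonomy}

# class_id -> component defect patterns (the Q2 answer vocabulary).
id_to_components = {e["class_id"]: e["component_pattern_labels"] for e in taxonomy}
```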

Auxiliary Knowledge Base

The knowledge_base/ directory contains reference resources used during dataset generation and evaluation:

  • location_targets_by_image.json: image-level location labels for the localization question
  • visual_description_refs_by_class.json: class-level visual description references
  • q4_q5_prompt_templates.json: prompt variants for Q4 and Q5
  • root_cause_refs_by_class.json: class-level root-cause analysis references
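
A sketch of assembling class-level reference context from the *_by_class files, assuming each maps a class_id to a list of reference strings. Both the shapes and the example strings below are illustrative assumptions, not actual file contents.

```python
# Assumed shapes for the *_by_class knowledge-base files: class_id -> list of
# reference strings. The example entries are illustrative only.
root_cause_refs = {
    "scratch": ["Mechanical handling damage during wafer transport."],
}
visual_refs = {
    "scratch": ["A thin linear streak of failing dies across the map."],
}

def build_reference_context(class_id):
    """Gather class-level references for grounding Q4/Q5 evaluation."""
    return {
        "visual": visual_refs.get(class_id, []),
        "root_cause": root_cause_refs.get(class_id, []),
    }

ctx = build_reference_context("scratch")
```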

Responsible AI Notes

Data Limitations

  • The dataset is derived from rendered wafer maps rather than raw fab imagery, inline sensors, or process logs.
  • Class frequencies are benchmark-oriented and do not reflect real production-line prevalence.
  • Root-cause answers are benchmark references rather than verified fab incident reports.
  • The dataset is English-only and should not be assumed valid for multilingual operator workflows.

Data Biases

  • The benchmark centers on a fixed ontology of nine defect patterns and five recurring questions.
  • Mixed-pattern cases are compositionally formed from a limited pattern vocabulary, so real rare morphologies are underrepresented.
  • The visual style is cleaner and more structured than many real manufacturing environments, which may induce overconfident model behavior.

Personal or Sensitive Information

The dataset contains no personal data, demographic attributes, medical records, or customer information. It consists of wafer-map images plus industrial defect-analysis text annotations.

Data Use Cases

  • Recommended: multimodal fine-tuning, VQA benchmarking, defect classification, localization, visual description, and root-cause reasoning research on wafer maps.
  • Recommended: robustness studies where Q2 and Q3 candidate options are intentionally shuffled to reduce position bias.
  • Not established: direct production deployment for process control, yield disposition, or autonomous fab intervention without human review and site-specific validation.

Synthetic Data and Provenance

  • The dataset contains derived, benchmark-structured data; its annotations should be treated as synthetic or semi-synthetic benchmark supervision rather than expert-authored ground truth.
  • The source lineage traces back to MixedWM38 / WaferMap and the derived VQA packaging documented in this repository.

Source

This dataset is derived from the MixedWM38 wafer map collection:

  • Repository: MixedWM38 / WaferMap
  • Original paper: Wang et al., "Deformable Convolutional Networks for Efficient Mixed-type Wafer Defect Pattern Recognition"

Citation

If you use this dataset, please cite the original MixedWM38 source and add your own dataset citation here when your release metadata is finalized.

@article{wang2020deformable,
  title={Deformable Convolutional Networks for Efficient Mixed-type Wafer Defect Pattern Recognition},
  author={Wang, Junliang and Xu, Chuqiao and Yang, Zhen and Zhang, Jie and Li, Xinyu},
  journal={IEEE Transactions on Semiconductor Manufacturing},
  year={2020}
}