
PlantVillageVQA

Associated paper: arXiv:2508.17117
GitHub repository (Currently Private): SyedNazmusSakib/PlantVillageVQA

PlantVillageVQA is a multimodal dataset for visual question answering (VQA) in plant pathology. It contains 193,609 question–answer (QA) items paired with 55,448 leaf images spanning 14 crops and 38 diseases. Questions are organized into nine categories and three cognitive levels: perception/identification, symptom grounding/verification, and higher‑order reasoning (diagnosis, causality, counterfactuals).

Images originate from the open PlantVillage corpus and are redistributed here with QA supervision in a flat image layout. We release the dataset under CC BY 4.0 to maximize reuse.

The dataset card follows Hugging Face guidance for dataset documentation and responsible use.

Summary

  • Images: 55,448 JPEGs, flat naming: images/image_000001.jpg … images/image_055448.jpg.
  • Annotations: 193,609 QA pairs in CSV (PlantVillageVQA.csv) and JSON (PlantVillageVQA.json) with identical schema.
  • Fields: image_id, question_type, question, answer, image_path, split.
  • License: CC BY 4.0 (Creative Commons Attribution 4.0 International).
  • Primary deposit (authoritative): Mendeley Data DOI: 10.17632/XXXXXX.1.
    Mirrors: Hugging Face Hub (this page) and Kaggle Datasets. Each mirror cites the DOI and provides SHA‑256 checksums for verification. Persistent DOIs and formal data citation align with Nature/Scientific Data recommendations.
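Checksum verification against the published SHA‑256 values can be done with the Python standard library alone. A minimal sketch (the file path is a placeholder; compare the printed digest to the value listed alongside the mirror you downloaded from):

```python
import hashlib


def sha256sum(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1 MiB chunks so large
    archives are never loaded into memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Compare against the checksum published with the mirror:
# print(sha256sum("PlantVillageVQA.csv"))
```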

Supported Tasks and Benchmarks

  • Visual Question Answering (VQA): binary, short, and descriptive answers across nine categories.
  • Symptom grounding: text–vision alignment for canonical symptom phrases.
  • Open diagnosis: free‑text disease naming from visual evidence.
  • Causal and counterfactual reasoning: pathogen attribution and healthy‑state contrast.

Baseline experiments (CLIP, LXMERT, FLAVA) show models exceed chance on binary tasks and capture key terms in descriptive answers, yet retain headroom on reasoning‑heavy categories (details in the paper).

Languages

  • English prompts and answers.

Dataset Structure

Files

images/                  # 55,448 JPEGs, flat naming
PlantVillageVQA.csv      # 193,609 rows
PlantVillageVQA.json     # JSON mirror

Data Fields

| Field         | Type   | Description                                                                                                      |
|---------------|--------|------------------------------------------------------------------------------------------------------------------|
| image_id      | string | Stable identifier of the image.                                                                                  |
| question_type | string | One of nine categories (see below).                                                                              |
| question      | string | Natural‑language question.                                                                                       |
| answer        | string | Ground‑truth answer (binary or descriptive). Counterfactual answers are template‑constrained and symptom‑grounded. |
| image_path    | string | Relative path to image, e.g., images/image_000123.jpg.                                                           |
| split         | string | Recommended partition: train, val, or test.                                                                      |
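The CSV can be loaded and partitioned with the standard library alone. A minimal sketch assuming the schema documented above (field names from the card; the file path is relative to the dataset root):

```python
import csv

# Schema as documented in the dataset card.
EXPECTED_FIELDS = ["image_id", "question_type", "question",
                   "answer", "image_path", "split"]


def load_annotations(csv_path):
    """Read the QA annotations and validate the documented header."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        missing = set(EXPECTED_FIELDS) - set(reader.fieldnames or [])
        if missing:
            raise ValueError(f"missing columns: {sorted(missing)}")
        return list(reader)


def by_split(rows, split):
    """Filter QA pairs by the recommended partition tag (train/val/test)."""
    return [row for row in rows if row["split"] == split]
```

Resolving each row's `image_path` against the dataset root then yields the paired JPEG for any image library of your choice.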

Splits

We provide split tags in the annotations. Users may re‑split for specific studies, but should report the split policy for comparability.

Question Taxonomy

Level 1 — Perception / Identification

  1. Existence & Sanity Check — on‑task image/leaf present.
  2. Plant Species Identification — host recognition.
  3. General Health Assessment — healthy vs. diseased triage.

Level 2 — Symptom Grounding / Verification

  1. Visual Attribute Grounding — canonical symptom phrases ↔ visual evidence.
  2. Detailed Verification — compositional check: crop + disease.

Level 3 — Reasoning / Inference

  1. Specific Disease Identification — open diagnosis.
  2. Comprehensive Description — holistic expert‑style summary.
  3. Causal Reasoning — cause/pathogen attribution.
  4. Counterfactual Reasoning — healthy contrast; remove disease‑specific symptoms.

Creation Process and Curation

We used a multi‑stage pipeline:

  1. Programmatic QA synthesis from structured labels.
  2. Linguistic diversification via expert‑phrased templates; no free‑form LLM generation.
  3. Expert pathology review (Phase 1): logical fit, medical relevance, clarity.
  4. Deterministic correction of counterfactuals: disease→symptom phrase map; regenerate only when label provenance is verifiable.
  5. Strict fix‑or‑delete policy: 9,981 counterfactual pairs corrected and kept; 17,261 unverifiable pairs deleted.
  6. Automated screening (Phase 2): flagged low‑information and low Q–A‑similarity pairs; sample checks showed high acceptance.
  7. Final release: 193,609 QA pairs across 55,448 images.

Avoiding unconstrained LLM generation minimizes hallucination risk and keeps changes auditable.
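The fix‑or‑delete logic of step 4 can be illustrated as follows. This is a hypothetical sketch, not the released pipeline: the map entries, template wording, and label strings are made up for illustration; only the policy (regenerate from a verified disease→symptom map, otherwise delete) comes from the card.

```python
# Hypothetical disease -> canonical-symptom-phrase map; the real map is
# part of the authors' pipeline and these entries are illustrative only.
SYMPTOM_MAP = {
    "Tomato___Early_blight": "concentric dark lesions with yellowing halos",
    "Apple___Cedar_apple_rust": "orange-yellow spots on the upper leaf surface",
}

# Template-constrained counterfactual answer (wording is illustrative).
COUNTERFACTUAL_TEMPLATE = "If this leaf were healthy, it would not show {symptoms}."


def regenerate_counterfactual(disease_label):
    """Apply the fix-or-delete policy: regenerate the answer only when
    the label maps to a verified symptom phrase; otherwise return None,
    signalling that the QA pair should be deleted."""
    symptoms = SYMPTOM_MAP.get(disease_label)
    if symptoms is None:
        return None  # unverifiable provenance -> delete the pair
    return COUNTERFACTUAL_TEMPLATE.format(symptoms=symptoms)
```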

Citation

Please cite both the dataset and the PlantVillage source corpus:

@article{sakib2025plantvillagevqa,
  title={PlantVillageVQA: A Visual Question Answering Dataset for Benchmarking Vision-Language Models in Plant Science},
  author={Sakib, Syed Nazmus and Haque, Nafiul and Hossain, Mohammad Zabed and Arman, Shifat E},
  journal={arXiv preprint arXiv:2508.17117},
  year={2025}
}

@article{hughes2015open,
  title={An open access repository of images on plant health to enable the development of mobile disease diagnostics},
  author={Hughes, David P. and Salath{\'e}, Marcel},
  journal={arXiv preprint arXiv:1511.08060},
  year={2015}
}

License

CC BY 4.0 (Creative Commons Attribution 4.0 International).
