---
pretty_name: PaveBench
size_categories:
- 10K<n<100K
license: mit
tags:
- computer-vision
- vision-language
- visual-question-answering
- semantic-segmentation
- object-detection
- image-classification
- multimodal-learning
- benchmark
---

# PaveBench: A Versatile Benchmark for Pavement Distress Perception and Interactive Vision-Language Analysis



## Abstract

PaveBench is a large-scale benchmark for pavement distress perception and interactive vision-language analysis on real-world highway inspection images. It supports four core tasks: classification, object detection, semantic segmentation, and vision-language question answering. On the visual side, PaveBench provides large-scale annotations on real top-down pavement images and includes a curated hard-distractor subset for robustness evaluation. On the multimodal side, it introduces PaveVQA, a real-image question-answering dataset supporting single-turn, multi-turn, and expert-corrected interactions, covering recognition, localization, quantitative estimation, and maintenance reasoning.

## About the Dataset

PaveBench is built on real-world highway inspection images collected in Liaoning Province, China, using a highway inspection vehicle equipped with a high-resolution line-scan camera. The captured images are top-down orthographic pavement views, which preserve the geometric properties of distress patterns and support reliable downstream quantification. The dataset provides unified annotations for multiple pavement distress tasks and is designed to connect visual perception with interactive vision-language analysis.

The visual subset contains **20,124** high-resolution pavement images of size **512 × 512**. It supports:
- image classification
- object detection
- semantic segmentation

In addition, the multimodal subset, **PaveVQA**, contains **32,160** question-answer pairs, including:
- **10,050** single-turn queries
- **20,100** multi-turn interactions
- **2,010** error-correction pairs

These question-answer pairs cover recognition, localization, quantitative estimation, severity assessment, and maintenance recommendation.

## Distress Categories

PaveBench includes six visual categories:
- Longitudinal Crack
- Transverse Crack
- Alligator Crack
- Patch
- Pothole
- Negative Sample

These annotations are organized through a hierarchical pipeline covering classification, detection, and segmentation.
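For classification experiments the six categories need integer ids. A minimal sketch, assuming an ordering that simply mirrors the list above (the dataset's official label mapping may differ):

```python
# Illustrative class list; PaveBench's official label order may differ.
PAVEBENCH_CLASSES = [
    "Longitudinal Crack",
    "Transverse Crack",
    "Alligator Crack",
    "Patch",
    "Pothole",
    "Negative Sample",
]

# Map each category name to an integer id for classification training.
CLASS_TO_ID = {name: i for i, name in enumerate(PAVEBENCH_CLASSES)}
```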

## Hard Distractors

A key feature of PaveBench is its curated **hard-distractor subset**. During annotation, the dataset explicitly retains visually confusing real-world patterns such as:
- pavement stains
- shadows
- road markings

These distractors often co-occur with real pavement distress and closely resemble true distress patterns, making the benchmark more realistic and more challenging for robustness evaluation.

## PaveVQA

PaveVQA is a real-image visual question answering benchmark built on top of PaveBench. It supports:
- single-turn QA
- multi-turn dialogue
- expert-corrected interactions

The questions are designed around practical pavement inspection needs, including:
- presence verification
- distress classification
- localization
- quantitative analysis
- severity assessment
- maintenance recommendation

Structured metadata derived from visual annotations, such as bounding boxes, pixel area, and skeleton length, is used to support grounded and low-hallucination question answering.
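The exact metadata schema is not documented here, but the bounding-box and pixel-area quantities can be sketched from a binary segmentation mask with plain NumPy. Field names below are illustrative, and skeleton length is omitted since it would additionally require a morphological skeletonization step:

```python
import numpy as np

def mask_metadata(mask: np.ndarray) -> dict:
    """Derive simple grounding metadata from a binary distress mask.

    Field names are illustrative, not PaveBench's actual schema.
    """
    ys, xs = np.nonzero(mask)
    return {
        "pixel_area": int(mask.sum()),
        # Bounding box as (x_min, y_min, x_max, y_max) in pixel coordinates.
        "bbox": (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())),
    }

# Toy 8x8 mask with a 3x3 block of "distress" pixels.
toy = np.zeros((8, 8), dtype=np.uint8)
toy[2:5, 3:6] = 1
meta = mask_metadata(toy)
```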

## Data Collection

Raw pavement images were collected using a highway inspection vehicle traveling at approximately **80 km/h**. The acquisition system uses a high-resolution line-scan camera to capture orthographic views of asphalt pavement. The collected continuous scans were processed using a standard enhancement pipeline including:
- denoising
- sharpening
- contrast enhancement
- histogram equalization

These steps improve the visibility of pavement distress while reducing background noise.
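Of the enhancement steps above, histogram equalization is the easiest to illustrate. The actual acquisition pipeline is not published here, so the following is a generic NumPy reference sketch, not the dataset's implementation:

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Remap grayscale intensities through the image's CDF.

    Generic sketch of one step of the enhancement pipeline; assumes a
    uint8 image that is not constant-valued.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Classic equalization: stretch the CDF to the full 0..255 range.
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[img]

# Toy 2x2 grayscale "pavement patch" with poor contrast at the low end.
patch = np.array([[10, 10], [200, 255]], dtype=np.uint8)
out = equalize_histogram(patch)
```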

## Dataset Statistics

According to the paper:
- Visual subset: **20,124** images
- Image resolution: **512 × 512**
- VQA subset: **32,160** QA pairs
- Four primary analysis tasks
- Fourteen fine-grained VQA sub-categories

PaveBench is designed to provide a unified foundation for both precise visual perception and interactive multimodal reasoning in the pavement domain.
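The reported PaveVQA composition is internally consistent; a quick arithmetic check:

```python
# Reported PaveVQA composition (from the statistics above).
single_turn = 10_050
multi_turn = 20_100
error_correction = 2_010

# The three interaction types account for the full set of QA pairs.
total = single_turn + multi_turn + error_correction  # 32,160
```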

## Benchmark Tasks

PaveBench supports four core tasks:
1. Classification
2. Object Detection
3. Semantic Segmentation
4. Vision-Language Question Answering

It also includes an agent-augmented evaluation setting where vision-language models are combined with domain-specific tools for more reliable quantitative analysis.
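The agent-augmented setting is not specified in detail here. A minimal sketch of the idea, with a hypothetical tool name and signature: a parsed tool call from the vision-language model is dispatched to a deterministic measurement routine, rather than having the model estimate quantities directly:

```python
# Hypothetical domain tool: convert a skeleton length in pixels to
# millimetres. Name, signature, and scale are illustrative only.
def measure_crack_length(pixel_length: float, mm_per_pixel: float) -> float:
    return pixel_length * mm_per_pixel

# Registry the agent dispatches into; tool names are illustrative.
TOOLS = {"measure_crack_length": measure_crack_length}

def dispatch(tool_name: str, **kwargs) -> float:
    """Route a parsed tool call from the model to its tool function."""
    return TOOLS[tool_name](**kwargs)

length_mm = dispatch("measure_crack_length", pixel_length=240.0, mm_per_pixel=1.5)
```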

## Usage

Example usage with `datasets`:

```python
from datasets import load_dataset

dataset = load_dataset("MML-Group/PaveBench")
```
|