---
pretty_name: PM25Vision
tags:
- computer-vision
- pm2.5
- regression
- classification
- air-quality
- AQI
task_categories:
- image-classification
- other
license: cc-by-4.0
language:
- en
size_categories:
- 10K<n<100K
---
# PM25Vision
## Dataset Summary
PM25Vision (PM25V) is a large-scale dataset for estimating air quality (PM2.5) from street-level imagery. It pairs **Mapillary** photos with **World Air Quality Index (WAQI)** PM2.5 records, covering 2014–2025, 3,261 monitoring stations, and 11,114 cleaned and balanced images with PM2.5 AQI labels.

## Tasks
- **Regression**: Predict continuous PM2.5 **AQI** values.
- **Classification**: Predict discrete AQI levels.
## Baseline Results
### Regression
| Model | R² | MAE | RMSE | Acc | F1 |
|-----------------|------|------|------|------|------|
| EfficientNet-B0 | 0.55 | 36.6 | 54.6 | 0.46 | 0.45 |
| ResNet50 | 0.50 | 38.6 | 57.5 | 0.44 | 0.35 |
| ViT-B/16 | 0.23 | 50.3 | 71.7 | 0.35 | 0.30 |
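The regression metrics above (R², MAE, RMSE) can be reproduced with standard tooling. The sketch below uses scikit-learn and NumPy on placeholder arrays; it is a generic illustration, not the authors' evaluation script.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

# Placeholder AQI predictions; in practice these come from a trained model.
y_true = np.array([35.0, 80.0, 120.0, 55.0])
y_pred = np.array([40.0, 70.0, 110.0, 60.0])

r2 = r2_score(y_true, y_pred)
mae = mean_absolute_error(y_true, y_pred)
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))  # RMSE computed directly

print(f"R2={r2:.3f}  MAE={mae:.3f}  RMSE={rmse:.3f}")
```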
### Classification
| Model | Acc | F1 | Precision | Recall |
|-----------------|------|------|-----------|--------|
| ResNet50 | 0.44 | 0.38 | 0.48 | 0.37 |
| ViT-B/16 | 0.40 | 0.37 | 0.41 | 0.36 |
| EfficientNet-B0 | 0.40 | 0.34 | 0.42 | 0.33 |
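Likewise, the classification metrics can be computed with scikit-learn. The snippet below assumes macro averaging over the AQI-level classes (an assumption; the paper may average differently) and uses toy labels in place of real predictions.

```python
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

# Toy 6-class AQI-level labels; real labels come from the `pm25_bin` field.
y_true = [0, 1, 2, 2, 3, 1]
y_pred = [0, 1, 2, 1, 3, 0]

acc = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred, average="macro")
prec = precision_score(y_true, y_pred, average="macro", zero_division=0)
rec = recall_score(y_true, y_pred, average="macro", zero_division=0)

print(f"Acc={acc:.2f}  F1={f1:.2f}  P={prec:.2f}  R={rec:.2f}")
```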
## Usage
### Quick Start
```python
import torch
import torch.nn as nn
import torch.optim as optim
from datasets import load_dataset
from torch.utils.data import DataLoader
import torchvision.transforms as T
from PIL import Image
from io import BytesIO

# ===== Load dataset =====
ds = load_dataset("DeadCardassian/PM25Vision")

transform = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
])

def collate_fn(batch):
    imgs = [transform(Image.open(BytesIO(x["image"])).convert("RGB")) for x in batch]
    labels = [x["pm25"] for x in batch]  # PM2.5 AQI value
    return torch.stack(imgs), torch.tensor(labels, dtype=torch.float32)

train_loader = DataLoader(ds["train"], batch_size=32, shuffle=True, collate_fn=collate_fn)

# ===== Simple CNN =====
class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, 1)  # single output for regression

    def forward(self, x):
        x = self.net(x)
        x = x.view(x.size(0), -1)
        return self.fc(x).squeeze(1)

# ===== Training loop =====
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SimpleCNN().to(device)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

for epoch in range(5):  # 5 epochs for demo
    for imgs, labels in train_loader:
        imgs, labels = imgs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(imgs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
    print(f"Epoch {epoch+1}: train loss = {loss.item():.4f}")
```
**Notes:**
To switch from continuous AQI values (regression) to discrete AQI levels (classification), add a mapping such as:
```python
def map_pm25_to_class(pm25):
    if pm25 <= 50.4: return 0
    elif pm25 <= 100.4: return 1
    elif pm25 <= 150.4: return 2
    elif pm25 <= 200.4: return 3
    elif pm25 <= 300.4: return 4
    else: return 5
```
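To use this mapping end to end, the quick-start collate function and loss can be adapted as sketched below; the sample dicts and the 6-way head are illustrative assumptions, not part of the dataset API.

```python
import torch
import torch.nn as nn

def map_pm25_to_class(pm25):  # as defined above
    if pm25 <= 50.4: return 0
    elif pm25 <= 100.4: return 1
    elif pm25 <= 150.4: return 2
    elif pm25 <= 200.4: return 3
    elif pm25 <= 300.4: return 4
    else: return 5

# Classification variant of the quick-start label collation:
# labels become class indices instead of float AQI values.
def collate_labels(batch):
    return torch.tensor([map_pm25_to_class(x["pm25"]) for x in batch],
                        dtype=torch.long)

# Swap MSE for cross-entropy and widen the head to 6 outputs,
# e.g. self.fc = nn.Linear(64, 6) in SimpleCNN.
criterion = nn.CrossEntropyLoss()

labels = collate_labels([{"pm25": 42.0}, {"pm25": 180.0}, {"pm25": 320.0}])
print(labels)  # tensor([0, 3, 5])
```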
### Label Fields
| Field | Type | Description |
|----------------|---------|----------------------------------------------------------------------|
| **`image_id`** | int64 | Unique image identifier (from Mapillary). |
| `station_id` | int64 | WAQI monitoring station ID. |
| `captured_at` | object | Date when the image was captured (YYYY-MM-DD). |
| `camera_angle` | float64 | Camera orientation (if available). |
| `longitude` | float64 | Longitude of the station. |
| `latitude` | float64 | Latitude of the station. |
| `quality_score`| float64 | Image quality score from Mapillary (if available). |
| `downloaded_at`| object | Timestamp when the sample was downloaded. |
| **`pm25`** | float64 | Average PM2.5 AQI value of the day that the image was captured. |
| `filename` | object | Image filename, located in the `images/` directory. |
| `quality` | object | ResNet18 classified label for image quality (e.g., `good` or `bad`). |
| `pm25_bin` | object | Discrete AQI level label (e.g., `0–50`, `51–100`, etc.). |
**In most use cases, only `image_id` and `pm25` are needed.**
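As a sketch of how these fields might be used during preprocessing, the snippet below filters on the `quality` label; the records are toy stand-ins mirroring the schema, not real samples.

```python
# Toy records mirroring the schema above (all values are made up).
records = [
    {"image_id": 1, "pm25": 42.0, "quality": "good", "pm25_bin": "0-50"},
    {"image_id": 2, "pm25": 180.0, "quality": "bad", "pm25_bin": "151-200"},
    {"image_id": 3, "pm25": 95.0, "quality": "good", "pm25_bin": "51-100"},
]

# Keep only images the ResNet18 quality filter labeled "good".
good = [r for r in records if r["quality"] == "good"]
pm25_values = [r["pm25"] for r in good]
print(pm25_values)  # [42.0, 95.0]
```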
### Splits
- **Train**: 80% of samples, balanced across AQI bins.
- **Test**: 20% of samples, balanced across AQI bins.
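A bin-balanced 80/20 split like the one described can be sketched with a stratified split on `pm25_bin`; this is one plausible recipe, not necessarily how the official split was produced, and the bin labels below are toy data.

```python
from sklearn.model_selection import train_test_split

# Toy bin labels; on the real dataset, stratify on the `pm25_bin` field.
bins = ["0-50"] * 10 + ["51-100"] * 10 + ["101-150"] * 10
idx = list(range(len(bins)))

# Stratification keeps each AQI bin's proportion equal in train and test.
train_idx, test_idx = train_test_split(
    idx, test_size=0.2, stratify=bins, random_state=42
)
print(len(train_idx), len(test_idx))  # 24 6
```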
## Limitations
- WAQI temporal resolution is **daily**, so intra-day PM2.5 variation may be missed.
- Spatial accuracy is limited to a 5 km radius around monitoring stations.
- Rare extreme AQI classes remain underrepresented.
## Access
- arXiv: [PM25Vision](https://arxiv.org/abs/2509.16519)
- Online demo: [pm25vision.com](http://www.pm25vision.com)
- Kaggle (the full data folder as a single zip download, suitable for extending the dataset): [PM25Vision](https://www.kaggle.com/datasets/DeadCardassian/pm25vision)
## Citation
```bibtex
@misc{han2025pm25visionlargescalebenchmarkdataset,
title={PM25Vision: A Large-Scale Benchmark Dataset for Visual Estimation of Air Quality},
author={Yang Han},
year={2025},
eprint={2509.16519},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.16519},
}
```