---
pretty_name: PM25Vision
tags:
  - computer-vision
  - pm2.5
  - regression
  - classification
  - air-quality
  - AQI
task_categories:
  - image-classification
  - other
license: cc-by-4.0
language:
  - en
size_categories:
  - 10K<n<100K
---

# PM25Vision

## Dataset Summary

PM25Vision (PM25V) is a large-scale dataset for estimating air quality (PM2.5) from street-level imagery. It pairs Mapillary photos with World Air Quality Index (WAQI) PM2.5 records, covering 2014–2025 and 3,261 monitoring stations, and contains 11,114 cleaned and balanced images with PM2.5 AQI labels.

## Tasks

- **Regression:** Predict continuous PM2.5 AQI values.
- **Classification:** Predict discrete AQI levels.

## Baseline Results

### Regression

| Model | R² | MAE | RMSE | Acc | F1 |
|---|---|---|---|---|---|
| EfficientNet-B0 | 0.55 | 36.6 | 54.6 | 0.46 | 0.45 |
| ResNet50 | 0.50 | 38.6 | 57.5 | 0.44 | 0.35 |
| ViT-B/16 | 0.23 | 50.3 | 71.7 | 0.35 | 0.30 |

### Classification

| Model | Acc | F1 | Precision | Recall |
|---|---|---|---|---|
| ResNet50 | 0.44 | 0.38 | 0.48 | 0.37 |
| ViT-B/16 | 0.40 | 0.37 | 0.41 | 0.36 |
| EfficientNet-B0 | 0.40 | 0.34 | 0.42 | 0.33 |
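The tabulated metrics follow their standard definitions; a minimal, framework-free sketch of how they are computed from predictions (function names are illustrative, not part of the dataset):

```python
import math

def regression_metrics(preds, targets):
    """MAE and RMSE between predicted and true PM2.5 AQI values."""
    n = len(preds)
    mae = sum(abs(p - t) for p, t in zip(preds, targets)) / n
    rmse = math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, targets)) / n)
    return mae, rmse

def classification_metrics(preds, targets, num_classes=6):
    """Accuracy and macro-F1 over discrete AQI levels 0..num_classes-1."""
    acc = sum(p == t for p, t in zip(preds, targets)) / len(preds)
    f1s = []
    for c in range(num_classes):
        tp = sum(1 for p, t in zip(preds, targets) if p == c and t == c)
        fp = sum(1 for p, t in zip(preds, targets) if p == c and t != c)
        fn = sum(1 for p, t in zip(preds, targets) if p != c and t == c)
        if tp == 0:
            f1s.append(0.0)  # class never correctly predicted
        else:
            prec = tp / (tp + fp)
            rec = tp / (tp + fn)
            f1s.append(2 * prec * rec / (prec + rec))
    return acc, sum(f1s) / num_classes  # macro average
```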

## Usage

### Quick Start

```python
import torch
import torch.nn as nn
import torch.optim as optim
from datasets import load_dataset
from torch.utils.data import DataLoader
import torchvision.transforms as T
from PIL import Image
from io import BytesIO

# ===== Load dataset =====
ds = load_dataset("DeadCardassian/PM25Vision")

transform = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
])

def collate_fn(batch):
    # Images are stored as raw bytes; decode and normalize to RGB.
    imgs = [transform(Image.open(BytesIO(x["image"])).convert("RGB")) for x in batch]
    labels = [x["pm25"] for x in batch]   # PM2.5 AQI value
    return torch.stack(imgs), torch.tensor(labels, dtype=torch.float32)

train_loader = DataLoader(ds["train"], batch_size=32, shuffle=True, collate_fn=collate_fn)

# ===== Simple CNN =====
class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, 1)  # single output for regression

    def forward(self, x):
        x = self.net(x)
        x = x.view(x.size(0), -1)
        return self.fc(x).squeeze(1)

# ===== Training loop =====
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SimpleCNN().to(device)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

for epoch in range(5):  # 5 epochs for demo
    for imgs, labels in train_loader:
        imgs, labels = imgs.to(device), labels.to(device)

        optimizer.zero_grad()
        outputs = model(imgs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

    print(f"Epoch {epoch+1}: last-batch train loss = {loss.item():.4f}")
```

Note: To switch from AQI values (regression) to AQI levels (classification), add a mapping like:

```python
def map_pm25_to_class(pm25):
    if pm25 <= 50.4: return 0
    elif pm25 <= 100.4: return 1
    elif pm25 <= 150.4: return 2
    elif pm25 <= 200.4: return 3
    elif pm25 <= 300.4: return 4
    else: return 5
```
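Beyond the label mapping, switching to classification also changes the label dtype, the model head, and the loss. A minimal sketch of just the parts that differ from the Quick Start (the helper `labels_to_classes` is illustrative, not part of the dataset API):

```python
import torch
import torch.nn as nn

def map_pm25_to_class(pm25):
    if pm25 <= 50.4: return 0
    elif pm25 <= 100.4: return 1
    elif pm25 <= 150.4: return 2
    elif pm25 <= 200.4: return 3
    elif pm25 <= 300.4: return 4
    else: return 5

# In collate_fn: class indices (long) instead of float AQI values.
def labels_to_classes(pm25_values):
    return torch.tensor([map_pm25_to_class(v) for v in pm25_values], dtype=torch.long)

# Model head: six logits instead of a single regression output.
head = nn.Linear(64, 6)

# Loss: cross-entropy on raw logits (no softmax needed).
criterion = nn.CrossEntropyLoss()
```

When using this head, also drop the `.squeeze(1)` in `forward` so the output keeps shape `(batch, 6)`.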

## Label Fields

| Field | Type | Description |
|---|---|---|
| **image_id** | int64 | Unique image identifier (from Mapillary). |
| station_id | int64 | WAQI monitoring station ID. |
| captured_at | object | Date the image was captured (YYYY-MM-DD). |
| camera_angle | float64 | Camera orientation (if available). |
| longitude | float64 | Longitude of the station. |
| latitude | float64 | Latitude of the station. |
| quality_score | float64 | Image quality score from Mapillary (if available). |
| downloaded_at | object | Timestamp when the sample was downloaded. |
| **pm25** | float64 | Average PM2.5 AQI value on the day the image was captured. |
| filename | object | Image filename, located in the `images/` directory. |
| quality | object | ResNet18-classified image-quality label (e.g., good or bad). |
| pm25_bin | object | Discrete AQI level label (e.g., 0–50, 51–100). |

For most use cases, only `image_id` and `pm25` are needed.

## Splits

- Train: 80% of samples, balanced across AQI bins.
- Test: 20% of samples, balanced across AQI bins.
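If you need to re-split the data (e.g., to carve a validation set out of the train split) while preserving the balance across AQI bins, a stratified split can be sketched in plain Python (the field name follows the table above; the ratio and seed are illustrative):

```python
import random
from collections import defaultdict

def stratified_split(samples, key="pm25_bin", train_frac=0.8, seed=42):
    """Split samples within each AQI bin so both splits stay balanced."""
    rng = random.Random(seed)
    by_bin = defaultdict(list)
    for s in samples:
        by_bin[s[key]].append(s)
    train, test = [], []
    for bin_samples in by_bin.values():
        rng.shuffle(bin_samples)          # shuffle within the bin only
        cut = int(len(bin_samples) * train_frac)
        train.extend(bin_samples[:cut])
        test.extend(bin_samples[cut:])
    return train, test
```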

## Limitations

- WAQI temporal resolution is daily, so intra-day variation may be missed.
- Spatial accuracy is limited to roughly 5 km around stations.
- Rare extreme AQI classes remain underrepresented.

## Access

## Citation

```bibtex
@misc{han2025pm25visionlargescalebenchmarkdataset,
      title={PM25Vision: A Large-Scale Benchmark Dataset for Visual Estimation of Air Quality},
      author={Yang Han},
      year={2025},
      eprint={2509.16519},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.16519},
}
```