DeadCardassian committed on
Commit b1d6e0c · verified · 1 Parent(s): c856e32

Update README.md

Files changed (1):
  1. README.md +78 -17
README.md CHANGED
@@ -42,17 +42,73 @@ PM25Vision (PM25V) is a large-scale dataset for estimating air quality (PM2.5) f
  | ViT-B/16 | 0.40 | 0.37 | 0.41 | 0.36 |
  | EfficientNet-B0 | 0.40 | 0.34 | 0.42 | 0.33 |

- ## Dataset Structure
-
- The dataset is organized into two main splits: **train** and **test**, each containing:
-
- - **`images/`**: all image files used in the dataset.
- - **`samples_by_bin/`**: a small set of 30 example images per AQI bin (for quick visual inspection).
- - **`metadata.csv`**: a CSV file describing metadata (including PM2.5 labels) for each image.
-
- ### Metadata Fields
-
- Each row in `metadata.csv` contains:

  | Field | Type | Description |
  |----------------|---------|----------------------------------------------------------------------|
@@ -69,6 +125,8 @@ Each row in `metadata.csv` contains:
  | `quality` | object | ResNet18 classified label for image quality (e.g., `good` or `bad`). |
  | `pm25_bin` | object | Discrete AQI level label (e.g., `0–50`, `51–100`, etc.). |

  ### Splits

  - **Train**: 80% of samples, balanced across AQI bins.
@@ -81,16 +139,19 @@ Each row in `metadata.csv` contains:
  - Rare extreme AQI classes remain underrepresented.

  ## Access
- - Arxiv: ...
  - Online demo: [pm25vision.com](http://www.pm25vision.com)

  ## Citation
  ```bibtex
- @misc{pm25vision2025,
-   title = {PM25Vision: Street-level imagery with PM2.5 annotations},
-   author = {Han, Yang},
-   year = {2025},
-   publisher = {Hugging Face Datasets},
-   url = {https://huggingface.co/datasets/DeadCardassian/PM25Vision}
  }
  ```
 
  | ViT-B/16 | 0.40 | 0.37 | 0.41 | 0.36 |
  | EfficientNet-B0 | 0.40 | 0.34 | 0.42 | 0.33 |

+ ## Usage
+
+ ### Quick Start
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.optim as optim
+ from datasets import load_dataset
+ from torch.utils.data import DataLoader
+ import torchvision.transforms as T
+ from PIL import Image
+ from io import BytesIO
+
+ # ===== Load dataset =====
+ ds = load_dataset("DeadCardassian/PM25Vision")
+
+ transform = T.Compose([
+     T.Resize((224, 224)),
+     T.ToTensor(),
+ ])
+
+ def collate_fn(batch):
+     # decode stored image bytes to RGB and stack into a batch tensor
+     imgs = [transform(Image.open(BytesIO(x["image"])).convert("RGB")) for x in batch]
+     labels = [x["pm25"] for x in batch]  # PM2.5 AQI value
+     return torch.stack(imgs), torch.tensor(labels, dtype=torch.float32)
+
+ train_loader = DataLoader(ds["train"], batch_size=32, shuffle=True, collate_fn=collate_fn)
+
+ # ===== Simple CNN =====
+ class SimpleCNN(nn.Module):
+     def __init__(self):
+         super().__init__()
+         self.net = nn.Sequential(
+             nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
+             nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
+             nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
+         )
+         self.fc = nn.Linear(64, 1)  # single-output regression head
+
+     def forward(self, x):
+         x = self.net(x)
+         x = x.view(x.size(0), -1)
+         return self.fc(x).squeeze(1)
+
+ # ===== Training loop =====
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ model = SimpleCNN().to(device)
+ optimizer = optim.Adam(model.parameters(), lr=1e-3)
+ criterion = nn.MSELoss()
+
+ for epoch in range(5):  # 5 epochs for demo
+     for imgs, labels in train_loader:
+         imgs, labels = imgs.to(device), labels.to(device)
+
+         optimizer.zero_grad()
+         outputs = model(imgs)
+         loss = criterion(outputs, labels)
+         loss.backward()
+         optimizer.step()
+
+     print(f"Epoch {epoch+1}: train loss = {loss.item():.4f}")
+ ```
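The loop above only prints the loss of the last batch in each epoch. To score predictions against ground-truth PM2.5 values on the test split, plain error metrics suffice; the `mae` and `rmse` helpers below are an illustrative sketch (not dataset tooling), shown on toy numbers:

```python
import math

def mae(y_true, y_pred):
    # Mean absolute error between paired label/prediction lists.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root mean squared error between paired label/prediction lists.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

y_true = [35.0, 80.0, 120.0]  # ground-truth PM2.5 values (toy data)
y_pred = [40.0, 70.0, 110.0]  # model outputs (toy data)
print(f"MAE  = {mae(y_true, y_pred):.3f}")   # MAE  = 8.333
print(f"RMSE = {rmse(y_true, y_pred):.3f}")  # RMSE = 8.660
```

The same functions can be fed the concatenated outputs of a test-split `DataLoader` built with the `collate_fn` above.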
+
+ ### Label Fields
+
  | Field | Type | Description |
  |----------------|---------|----------------------------------------------------------------------|
  | `quality` | object | ResNet18 classified label for image quality (e.g., `good` or `bad`). |
  | `pm25_bin` | object | Discrete AQI level label (e.g., `0–50`, `51–100`, etc.). |

+ **Only `image_id` and `pm25` are needed for most use cases.**
+
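If only `pm25` is kept, the discrete `pm25_bin` label can be recomputed from it. The helper below is a sketch that assumes the standard six-category AQI breakpoints; the dataset's exact bin edges may differ:

```python
def aqi_bin(aqi):
    # Map an AQI value to a discrete bin label like the `pm25_bin` field.
    # Bin edges assume the standard 6-category AQI scale (an assumption,
    # not taken from the dataset itself).
    edges = [(50, "0–50"), (100, "51–100"), (150, "101–150"),
             (200, "151–200"), (300, "201–300")]
    for upper, label in edges:
        if aqi <= upper:
            return label
    return "301–500"

print(aqi_bin(42))   # → 0–50
print(aqi_bin(137))  # → 101–150
```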
  ### Splits

  - **Train**: 80% of samples, balanced across AQI bins.
  - Rare extreme AQI classes remain underrepresented.

  ## Access
+ - arXiv: [PM25Vision](https://arxiv.org/abs/2509.16519)
  - Online demo: [pm25vision.com](http://www.pm25vision.com)
+ - Kaggle (the entire data folder as a single zip download, convenient if you need to extend the dataset): [PM25Vision](https://www.kaggle.com/datasets/DeadCardassian/pm25vision)

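In the Kaggle download, labels live in `metadata.csv` rather than in a `datasets` object. A stdlib-only sketch of pulling the two key columns; the inline CSV here stands in for the real file, whose exact column set may differ:

```python
import csv
from io import StringIO

# Stand-in for open("metadata.csv") from the Kaggle zip; the column
# names mirror the fields documented above (an assumption for illustration).
sample = StringIO(
    "image_id,pm25,quality,pm25_bin\n"
    "img_0001.jpg,37.5,good,0–50\n"
    "img_0002.jpg,88.0,good,51–100\n"
)

# image_id -> PM2.5 label lookup
labels = {row["image_id"]: float(row["pm25"]) for row in csv.DictReader(sample)}
print(labels)  # → {'img_0001.jpg': 37.5, 'img_0002.jpg': 88.0}
```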
  ## Citation
  ```bibtex
+ @misc{han2025pm25visionlargescalebenchmarkdataset,
+   title={PM25Vision: A Large-Scale Benchmark Dataset for Visual Estimation of Air Quality},
+   author={Yang Han},
+   year={2025},
+   eprint={2509.16519},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV},
+   url={https://arxiv.org/abs/2509.16519},
  }
  ```