gdurkin/fire_risk_properties
How to use gdurkin/cali_fire_risk with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-segmentation", model="gdurkin/cali_fire_risk")
```

```python
# Load the model directly
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

processor = AutoImageProcessor.from_pretrained("gdurkin/cali_fire_risk")
model = Mask2FormerForUniversalSegmentation.from_pretrained("gdurkin/cali_fire_risk")
```
Labels: ['background', 'road_paved', 'dirt_gravel', 'grass_dry', 'grass_healthy', 'vegetation', 'water', 'building_all']
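The label ids used in the metrics table below follow the order of this list. A minimal sketch of the id↔label mapping (mirroring what `model.config.id2label` is expected to contain; that correspondence is an assumption, not verified here):

```python
labels = ['background', 'road_paved', 'dirt_gravel', 'grass_dry',
          'grass_healthy', 'vegetation', 'water', 'building_all']

# id -> name and name -> id lookups, matching the table's `id` column
id2label = dict(enumerate(labels))
label2id = {name: i for i, name in id2label.items()}

print(id2label[7])            # building_all
print(label2id['grass_dry'])  # 3
```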
This repo hosts a Mask2Former model fine-tuned on NAIP 512×512 chips for wildfire-related landcover “superbuckets.”
Per-class IoU at revision `best-20250920_160245` of gdurkin/cali_fire_risk. FWIoU (frequency-weighted IoU) is the mean IoU weighted by each class's pixel frequency: FWIoU = Σ_c f_c · IoU_c, where f_c = support_c / total pixels. It tracks overall pixelwise accuracy, so frequent classes dominate the score, while each class's mistakes are still penalized in proportion to its support.
| id | label | IoU | support |
|---|---|---|---|
| 0 | background | 0.0000 | 172666 |
| 1 | road_paved | 0.6855 | 80261762 |
| 2 | dirt_gravel | 0.3848 | 57473062 |
| 3 | grass_dry | 0.2654 | 22281420 |
| 4 | grass_healthy | 0.4975 | 40281607 |
| 5 | vegetation | 0.6658 | 47722088 |
| 6 | water | 0.4366 | 3090841 |
| 7 | building_all | 0.7445 | 59095050 |
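Given the definition above, FWIoU follows directly from the table's IoU and support columns. A minimal sketch using those numbers:

```python
# Per-class IoU and pixel support, copied from the table (ids 0..7)
ious     = [0.0000, 0.6855, 0.3848, 0.2654, 0.4975, 0.6658, 0.4366, 0.7445]
supports = [172666, 80261762, 57473062, 22281420, 40281607, 47722088,
            3090841, 59095050]

# FWIoU = sum_c f_c * IoU_c, with f_c = support_c / total pixels
total = sum(supports)
fwiou = sum(iou * s / total for iou, s in zip(ious, supports))
print(f"FWIoU ≈ {fwiou:.4f}")  # ≈ 0.58
```

Note how the near-zero `background` IoU barely moves the score: its support is four orders of magnitude smaller than the dominant classes.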
```python
import torch
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

repo = "gdurkin/cali_fire_risk"
rev = "best-20250920_160245"  # or a tag like "v0.1"

processor = AutoImageProcessor.from_pretrained(repo, revision=rev)
model = Mask2FormerForUniversalSegmentation.from_pretrained(repo, revision=rev).eval()

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# pv: FloatTensor[B, 3, H, W] normalized per `processor`
with torch.no_grad():
    out = model(pixel_values=pv.to(device))

# Resize to the original (H, W) and take the per-pixel argmax
pred = processor.post_process_semantic_segmentation(out, target_sizes=[(H, W)])[0]
```
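`pred` is an (H, W) map of class ids. One quick way to summarize a chip is the per-label pixel fraction; a sketch below uses a synthetic stand-in for `pred.cpu().numpy()` (the real array would come from the snippet above):

```python
import numpy as np

labels = ['background', 'road_paved', 'dirt_gravel', 'grass_dry',
          'grass_healthy', 'vegetation', 'water', 'building_all']

# Synthetic stand-in for `pred.cpu().numpy()`: an (H, W) map of class ids
pred = np.random.randint(0, len(labels), size=(512, 512))

# Count pixels per class id and normalize to fractions of the chip
counts = np.bincount(pred.ravel(), minlength=len(labels))
fracs = counts / counts.sum()
for name, f in zip(labels, fracs):
    print(f"{name:>14}: {f:.3%}")
```

For wildfire screening, the `grass_dry` and `vegetation` fractions per chip are the natural quantities to aggregate over a region.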
Base model
gdurkin/cdl_mask2former_v3_mspc