Improve model card: add pipeline tag, paper link, and usage instructions
#1
by nielsr (HF Staff) - opened
README.md
CHANGED
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
pipeline_tag: image-segmentation
---

# AnomalyVFM: Transforming Vision Foundation Models into Zero-Shot Anomaly Detectors

AnomalyVFM is a general and effective framework that transforms pretrained Vision Foundation Models (VFMs), such as RADIO, DINOv2, or SigLIP2, into strong zero-shot anomaly detectors. This model was presented at CVPR 2026.

- **Paper:** [AnomalyVFM -- Transforming Vision Foundation Models into Zero-Shot Anomaly Detectors](https://huggingface.co/papers/2601.20524)
- **Repository:** [GitHub](https://github.com/MaticFuc/AnomalyVFM)
- **Project Page:** [https://maticfuc.github.io/anomaly_vfm/](https://maticfuc.github.io/anomaly_vfm/)

## Overview

Zero-shot anomaly detection aims to detect and localise abnormal regions in images without access to any in-domain training images. AnomalyVFM addresses this by combining a robust three-stage synthetic dataset generation scheme with a parameter-efficient adaptation mechanism using low-rank feature adapters (LoRA/DoRA) and a confidence-weighted pixel loss.

With RADIO as a backbone, AnomalyVFM achieves an average image-level AUROC of 94.1% across 9 diverse datasets, substantially outperforming previous methods.
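The low-rank feature adapters mentioned in the overview follow the widely used LoRA pattern. As a rough, hypothetical sketch (the real adapters act on VFM features, and the rank and scale here are made-up defaults):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base projection plus a trainable low-rank update (generic LoRA pattern)."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # backbone weights stay frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)       # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))
```

Because the up-projection starts at zero, a freshly wrapped layer reproduces the frozen backbone exactly; training then moves only the small `down`/`up` matrices.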

## Usage

This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration.

### Single Image Prediction

To test the model on a single image, you can use the prediction script provided in the [official repository](https://github.com/MaticFuc/AnomalyVFM):

```bash
python predict_single_image.py --image-path /path/to/image.png --model-path /path/to/folder/with/model.pkl
```

By default, the output will be saved as `pred.png`.

## Citation

If this work contributes to your research, please consider citing:

```bibtex
@InProceedings{fucka2026anomaly_vfm,
    title     = {AnomalyVFM -- Transforming Vision Foundation Models into Zero-Shot Anomaly Detectors},
    author    = {Fučka, Matic and Zavrtanik, Vitjan and Skočaj, Danijel},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2026}
}
```