---
{}
---
# Transfer learning using Hybrid Semantic Change Detection Data
We provide the weights used in the experiments of our CVPR'25 paper [The Change You Want To Detect](https://arxiv.org/abs/2503.15683).
## Dual U-Net
The model is a relatively simple Dual U-Net composed of two nearly identical parallel U-Nets: one responsible for semantic segmentation, the other for binary change detection. Besides the skip connections within each U-Net, features extracted by the "semantic encoder" are also fed to the "change detection decoder".
Both images are passed sequentially and independently through the "semantic U-Net", which produces a semantic map for each image; the extracted features are stored.
The two images are then concatenated and passed through the "change detection U-Net", which injects the previously stored features and produces a binary change map.
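
The two-stage forward pass can be sketched at the shape level as follows. This is only an illustrative data-flow sketch, not the actual implementation: the placeholder functions stand in for the real U-Nets, and all names and tensor layouts are assumptions.

```python
import numpy as np

def semantic_unet(image):
    """Placeholder for the semantic U-Net: returns a per-pixel class map
    and dummy "encoder features" to be injected into the change decoder."""
    h, w, _ = image.shape
    semantic_map = np.zeros((h, w), dtype=np.int64)  # dummy class ids
    features = image.mean(axis=2)                    # dummy encoder features
    return semantic_map, features

def change_unet(pair, features_a, features_b):
    """Placeholder for the change-detection U-Net: consumes the concatenated
    image pair plus the stored semantic features, outputs a binary change map."""
    injected = features_a + features_b               # dummy feature injection
    return (injected > injected.mean()).astype(np.uint8)  # dummy binary map

# Forward pass as described above: each image goes through the semantic
# U-Net independently, then the pair goes through the change-detection U-Net.
img_a = np.random.rand(256, 256, 3)
img_b = np.random.rand(256, 256, 3)

sem_a, feat_a = semantic_unet(img_a)              # first image, independently
sem_b, feat_b = semantic_unet(img_b)              # second image, independently
pair = np.concatenate([img_a, img_b], axis=2)     # channel-wise concatenation
change_map = change_unet(pair, feat_a, feat_b)    # binary change map
```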
The backbones are ResNet-50 pretrained on ImageNet.
## Model checkpoint
Here we provide the weights of our Dual U-Net obtained after pre-training on our Hybrid Semantic Change Dataset **FSC-180k**. They can be used for fine-tuning or inference on real change detection datasets with [our code](https://github.com/yb23/HySCDG), specifically by passing the `.ckpt` file as the `--run_id` argument.
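
For example, a run could be launched along these lines. Only the `--run_id` flag is documented above; the entry-point script name and the checkpoint filename are hypothetical, so adapt them to the actual repository.

```shell
# Hypothetical invocation: substitute the repo's real entry point for main.py
# and the downloaded checkpoint's real path for the .ckpt file.
python main.py --run_id path/to/dual_unet_fsc180k.ckpt
```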