|
|
--- |
|
|
license: apache-2.0 |
|
|
tags: |
|
|
- Hyperspectral image classification |
|
|
- Masked autoencoder
|
|
--- |
|
|
# HSIMAE: A Unified Masked Autoencoder with Large-Scale Pretraining for Hyperspectral Image Classification
|
## ✨ Highlights |
|
|
### Large-Scale and Diverse Dataset for HSI Pretraining |
|
|
A large and diverse HSI dataset named HSIHybrid was curated for large-scale HSI pre-training. It consists of 15 HSI datasets acquired by different hyperspectral sensors. After splitting the scenes into image patches, a total of **4 million** HSI patches with a spatial size of 9×9 were obtained.
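The patching step above can be sketched as follows. This is a minimal illustration of splitting a hyperspectral cube into non-overlapping 9×9 spatial patches while keeping the full spectral dimension; the function name and the toy scene size are illustrative, not from the HSIMAE codebase.

```python
import numpy as np

def extract_patches(cube, patch=9):
    """Split an H x W x B hyperspectral cube into non-overlapping
    patch x patch spatial patches (spectral dimension B kept whole)."""
    H, W, B = cube.shape
    ph, pw = H // patch, W // patch
    cube = cube[:ph * patch, :pw * patch]      # drop ragged border pixels
    patches = (cube
               .reshape(ph, patch, pw, patch, B)
               .transpose(0, 2, 1, 3, 4)      # group the two patch axes together
               .reshape(ph * pw, patch, patch, B))
    return patches

# Toy 90x90 scene with 200 spectral bands
cube = np.random.rand(90, 90, 200)
p = extract_patches(cube)
print(p.shape)  # (100, 9, 9, 200)
```

Applied across the 15 source datasets, this kind of tiling is what yields the 4 million 9×9 pre-training patches.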
|
|
|
|
|
### New MAE Architecture for HSI domain |
|
|
A modified MAE named HSIMAE was proposed. It uses separate spatial and spectral encoders, followed by fusion blocks, to learn the spatial and spectral correlations of HSI data.
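At the core of any MAE-style pre-training is random masking of input tokens, with only the visible subset passed to the encoder. The sketch below shows that generic mechanism on a set of spatial tokens; the mask ratio, token count, and embedding width are assumptions for illustration, not HSIMAE's actual hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(tokens, mask_ratio=0.75):
    """MAE-style random masking: keep a random subset of tokens and
    return the visible tokens plus the (sorted) kept indices.
    mask_ratio=0.75 is the common MAE default, assumed here."""
    n = tokens.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    keep = np.sort(rng.permutation(n)[:n_keep])
    return tokens[keep], keep

# 81 spatial tokens (one per pixel of a 9x9 patch), 64-dim embeddings
tokens = rng.normal(size=(81, 64))
visible, idx = random_mask(tokens)
print(visible.shape)  # (20, 64)
```

In a spatial-spectral design like HSIMAE's, the same masking idea applies per branch: the spatial encoder sees visible spatial tokens, the spectral encoder visible band groups, and the fusion blocks then combine the two streams.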
|
|
|
|
|
### Dual-branch finetuning to leverage unlabeled data of target dataset |
|
|
A dual-branch fine-tuning framework was introduced to leverage the unlabeled data of the downstream HSI dataset and suppress overfitting on small training sets.
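One common way to realize such a dual-branch objective is to sum a supervised loss on the labeled branch with a self-supervised term (e.g. reconstruction) on the unlabeled branch. The sketch below shows that combination in plain numpy; the specific loss terms and the weighting `lam` are assumptions for illustration and may differ from HSIMAE's actual fine-tuning loss.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean softmax cross-entropy over a batch."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def dual_branch_loss(logits, labels, recon, target, lam=1.0):
    """Hypothetical combined objective: supervised cross-entropy on the
    labeled branch plus an MSE reconstruction term on the unlabeled
    branch, weighted by lam (both terms assumed, not from the paper)."""
    sup = cross_entropy(logits, labels)
    unsup = ((recon - target) ** 2).mean()
    return sup + lam * unsup

# Toy batch: 2 labeled samples, plus a dummy unlabeled reconstruction
logits = np.array([[4.0, 0.0], [0.0, 4.0]])
labels = np.array([0, 1])
loss = dual_branch_loss(logits, labels, np.zeros(8), np.zeros(8))
print(float(loss))
```

The unlabeled term acts as a regularizer: when labeled samples are few, it keeps the encoder's features anchored to the target scene's statistics instead of letting them overfit the tiny labeled set.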
|
|
|
|
|
## 🧑‍💻 Contact
|
|
|
|
|
Wang Yue |
|
|
E-mail: ryanwy@csu.edu.cn |