---
license: mit
datasets:
- JTRNEO/SynRS3D
language:
- en
base_model:
- facebook/dinov2-large
---
# SynRS3D: A Synthetic Dataset for Global 3D Semantic Understanding from Monocular Remote Sensing Imagery

**Authors:**
[Jian Song](https://scholar.google.ch/citations?user=CgcMFJsAAAAJ&hl=zh-CN)<sup>1,2</sup>, [Hongruixuan Chen](https://scholar.google.ch/citations?user=XOk4Cf0AAAAJ&hl=zh-CN&oi=ao)<sup>1</sup>, [Weihao Xuan](https://weihaoxuan.com/)<sup>1,2</sup>, [Junshi Xia](https://scholar.google.com/citations?user=n1aKdTkAAAAJ&hl=en)<sup>2</sup>, [Naoto Yokoya](https://scholar.google.co.jp/citations?user=DJ2KOn8AAAAJ&hl=en)<sup>1,2</sup>

<sup>1</sup> The University of Tokyo
<sup>2</sup> RIKEN AIP

**Conference:** Neural Information Processing Systems (Spotlight), 2024

For more details, please refer to our [paper](https://arxiv.org/pdf/2406.18151) and visit our GitHub [repository](https://github.com/JTRNEO/SynRS3D).

---

### Overview

**TL;DR:**
We are excited to release two high-performing models for **height estimation** and **land cover mapping**. These models were trained on the SynRS3D dataset using our novel domain adaptation method, **RS3DAda**.

- **Encoder:** Vision Transformer (ViT-L), pretrained with **DINOv2**
- **Decoder:** [DPT](https://arxiv.org/abs/2103.13413), trained from scratch

These models are well suited to large-scale global 3D semantic understanding from high-resolution remote sensing imagery, covering both height estimation and land cover mapping. Feel free to integrate them into your own projects.
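As a rough illustration of this encoder-decoder pattern, the sketch below shows how ViT patch tokens (such as those produced by a DINOv2 ViT-L backbone) can be reassembled into a spatial grid and decoded into a dense height map. This is a minimal PyTorch stand-in, not the authors' implementation: the `SimpleDPTHead` module, its layer sizes, and the random tensor standing in for real backbone features are all illustrative; see the GitHub repository for the actual RS3DAda code and the full DPT decoder.

```python
import torch
import torch.nn as nn


class SimpleDPTHead(nn.Module):
    """Toy DPT-style regression head: project patch tokens to a
    spatial feature map, then upsample to a 1-channel height map."""

    def __init__(self, embed_dim: int = 1024, out_channels: int = 1):
        super().__init__()
        self.proj = nn.Conv2d(embed_dim, 256, kernel_size=1)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, tokens: torch.Tensor, grid: tuple[int, int]) -> torch.Tensor:
        # tokens: [B, N, C] patch embeddings; reshape to [B, C, H, W]
        h, w = grid
        b, n, c = tokens.shape
        x = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.up(self.proj(x))


# A ViT-L/14 backbone on a 224x224 image yields a 16x16 grid of
# 1024-dim patch tokens; here we fake those features with random data.
tokens = torch.randn(1, 16 * 16, 1024)
head = SimpleDPTHead()
height = head(tokens, grid=(16, 16))
print(height.shape)  # [1, 1, 64, 64]: one height value per upsampled pixel
```

The real DPT decoder additionally fuses features from several transformer depths before upsampling; this sketch keeps only the token-to-grid reassembly and convolutional upsampling that define the overall shape of the pipeline.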

---

### How to Cite

If you find the RS3DAda model useful in your research, please consider citing:

```
@misc{song2024synrs3dsyntheticdatasetglobal,
      title={SynRS3D: A Synthetic Dataset for Global 3D Semantic Understanding from Monocular Remote Sensing Imagery},
      author={Jian Song and Hongruixuan Chen and Weihao Xuan and Junshi Xia and Naoto Yokoya},
      year={2024},
      eprint={2406.18151},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2406.18151},
}
```

---

### Contact

For any questions or feedback, please reach out via email at **song@ms.k.u-tokyo.ac.jp**.

We hope you enjoy using the pretrained RS3DAda models!
