---
license: mit
tags:
  - downscaling
  - rcan
  - era5
  - cosmo-rea6
  - wind
library_name: super-image
inference: false
---

# RCAN-DSC (4× Downscaling of Wind Velocities)

This model is a custom-trained version of the [RCAN](https://arxiv.org/abs/1807.02758) model from the [`super-image`](https://github.com/eugenesiow/super-image) library.  
It is adapted for 4× downscaling of **2-channel ERA5 data** (e.g., the wind *u* and *v* components), trained with **COSMO-REA6** as the high-resolution reference data.



## 🧠 Model Description

- Based on the original RCAN architecture from `super-image`.
- The `sub_mean` and `add_mean` normalization layers have been **removed**.
- Supports **multi-channel inputs**; currently set up for **2-channel wind velocity fields**.
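
For illustration (this is not the repository's actual code): in standard RCAN, `sub_mean`/`add_mean` are frozen 1×1 convolutions that subtract and re-add per-channel RGB image means, statistics that do not apply to wind fields, which is presumably why the layers were removed. A minimal sketch of such a mean-shift layer, with hypothetical channel means:

```python
import torch
import torch.nn as nn

class MeanShift(nn.Conv2d):
    """Frozen 1x1 conv that adds/subtracts fixed per-channel means (RCAN-style)."""
    def __init__(self, means, sign=-1):
        n = len(means)
        super().__init__(n, n, kernel_size=1)
        # identity weights: each output channel copies its input channel
        self.weight.data = torch.eye(n).view(n, n, 1, 1)
        # bias shifts each channel by -mean (sub_mean) or +mean (add_mean)
        self.bias.data = sign * torch.tensor(means)
        for p in self.parameters():
            p.requires_grad = False

x = torch.randn(1, 2, 10, 10)
sub_mean = MeanShift([0.1, -0.2], sign=-1)  # hypothetical channel means
shifted = sub_mean(x)
# equivalent to subtracting the means channel-wise
print(torch.allclose(shifted, x - torch.tensor([0.1, -0.2]).view(1, 2, 1, 1)))  # True
```

Removing these layers simply makes the network operate on the raw (externally normalized) wind fields.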


## 🧪 Example

```python
from super_image import RcanConfig
from huggingface_hub import hf_hub_download
import numpy as np
import torch
import xarray as xr

# load the custom model definition (defines load_rcan) and build the model
path = hf_hub_download(repo_id="lschmidt/rcan-dsc", filename="rcan_model.py")
exec(open(path).read())
model = load_rcan()

# load config
config = RcanConfig.from_pretrained("lschmidt/rcan-dsc")

# load pretrained weights
state_dict_path = hf_hub_download(repo_id="lschmidt/rcan-dsc", filename="pytorch_model_4x.pt")
state_dict = torch.load(state_dict_path, map_location="cpu")
model.load_state_dict(state_dict, strict=False)

# generate sample data (B, C, H, W)
inputs = torch.randn(1, 2, 10, 10)

# or use test data
data_path = hf_hub_download(
    repo_id="lschmidt/rcan-dsc",
    filename="test_wind_velocities.nc",
    subfolder="test_data",
)
ds = xr.open_dataset(data_path)
u = ds["u100"].values[0]
v = ds["v100"].values[0]
inputs = torch.from_numpy(np.stack([u, v], axis=0)).unsqueeze(0).float()

# prediction
output = model(inputs)
```
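
With a 4× model, a `(1, 2, 10, 10)` input should yield a `(1, 2, 40, 40)` output. A quick sanity check against a plain bicubic baseline (a sketch that is independent of the model itself):

```python
import torch
import torch.nn.functional as F

# 4x upscaling quadruples each spatial dimension: (1, 2, 10, 10) -> (1, 2, 40, 40)
lr = torch.randn(1, 2, 10, 10)
baseline = F.interpolate(lr, scale_factor=4, mode="bicubic", align_corners=False)
print(baseline.shape)  # torch.Size([1, 2, 40, 40])
```

The model output should match this shape; the bicubic field also serves as a simple reference when evaluating the learned downscaling.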