---
license: mit
tags:
  - downscaling
  - ERA5
  - COSMO-REA6
  - reanalysis data
  - wind velocities
  - diffusion
  - superresolution
library_name: diffusers
model_type: ddim
datasets:
  - your-dataset-name
---

# DDIM-DSC: 4× Downscaling of Wind Velocities

**DDIM-DSC** is a custom-trained [Denoising Diffusion Implicit Model (DDIM)](https://github.com/huggingface/diffusers) designed for the **downscaling of wind velocity fields** from coarse- to high-resolution using reanalysis data.

It performs **4× spatial downscaling** on 2-channel wind fields (u and v components), using **ERA5** as low-resolution input and **COSMO-REA6** as the high-resolution target.

---

## 📊 Data

- **Input**: ERA5 100 m wind components (u, v), 2 channels  
- **Target**: COSMO-REA6 100 m wind components (u, v), 2 channels  
- **Sequence length**: 3 (with temporal context across 3 timesteps)  
- **Total input channels**: 8 (2 channels × 3 timesteps + 2 static channels)
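The 8-channel assembly above can be sketched as follows. This is an illustrative reconstruction, not code from the model's training pipeline, and the nature of the two static channels (e.g. orography, land-sea mask) is an assumption:

```python
import torch

# Three timesteps of (u, v) wind plus 2 static channels, all on the target grid.
# Grid size 128x128 is illustrative; the static fields are assumed examples.
u_v_sequence = torch.randn(3, 2, 128, 128)   # (timesteps, channels, H, W)
static_fields = torch.randn(2, 128, 128)     # e.g. orography + land-sea mask

# Flatten the timestep and channel axes (3 x 2 = 6), then append the statics.
model_input = torch.cat([u_v_sequence.reshape(6, 128, 128), static_fields], dim=0)
print(model_input.shape)  # torch.Size([8, 128, 128])
```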

---

## 🧠 Model Architecture

- **Model type**: DDIM (using `diffusers`)
- **Scheduler**: DDIMScheduler  
- **Conditioning**: Concatenated temporal sequences  
- **Latent noise sampling**: 10 per input  
- **Scale factor**: 4×  
- **Input channels**: 8  
- **Output channels**: 2  
- **Note**: The low-resolution input must be **resized to high-resolution shape using bilinear interpolation** before being passed into the model.
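The required resize can be done with `torch.nn.functional.interpolate`. A minimal sketch, with illustrative grid sizes:

```python
import torch
import torch.nn.functional as F

# Bilinearly upsample a low-resolution (batch, 2, 32, 32) wind field by 4x
# to the high-resolution grid shape the model expects.
lres = torch.randn(1, 2, 32, 32)
hres = F.interpolate(lres, scale_factor=4, mode="bilinear", align_corners=False)
print(hres.shape)  # torch.Size([1, 2, 128, 128])
```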

## 🚀 Usage

```python
import torch
from diffusers import DiffusionPipeline

# load the custom DDIM pipeline
pipe = DiffusionPipeline.from_pretrained(
    "lschmidt/ddim-dsc",
    custom_pipeline="cond_ddim_pipeline",
    trust_remote_code=True
)

# create a sample low-resolution input --> shape: (sequence_length, channels, height, width)
lres_image = torch.randn((3, 2, 32, 32)).to(pipe.device)

# interpolate to the high-resolution shape (4x upscaling, bilinear)
lres_image = torch.nn.functional.interpolate(
    lres_image, scale_factor=4, mode="bilinear", align_corners=False
)

# run inference
outputs = pipe(image=lres_image)
```