---
license: mit
tags:
- downscaling
- ERA5
- COSMO-REA6
- reanalysis data
- wind velocities
- diffusion
- superresolution
library_name: diffusers
model_type: ddim
datasets:
- your-dataset-name
---

# DDIM-DSC: 4× Downscaling of Wind Velocities

**DDIM-DSC** is a custom-trained [Denoising Diffusion Implicit Model (DDIM)](https://github.com/huggingface/diffusers) for **downscaling wind velocity fields** from coarse to high resolution using reanalysis data. It performs **4× spatial downscaling** of 2-channel wind fields (u and v components), using **ERA5** as the low-resolution input and **COSMO-REA6** as the high-resolution target.

---

## 📊 Data

- **Input**: ERA5 100 m wind components (u, v), 2 channels
- **Target**: COSMO-REA6 100 m wind components (u, v), 2 channels
- **Sequence length**: 3 (temporal context across 3 timesteps)
- **Total input channels**: 8 (2 channels × 3 timesteps + 2 static channels)

---

## 🧠 Model Architecture

- **Model type**: DDIM (using `diffusers`)
- **Scheduler**: DDIMScheduler
- **Conditioning**: concatenated temporal sequences
- **Latent noise sampling**: 10 samples per input
- **Scale factor**: 4×
- **Input channels**: 8
- **Output channels**: 2
- **Note**: the low-resolution input must be **resized to the high-resolution shape using bilinear interpolation** before being passed to the model.

## 🚀 Usage

```python
import torch
import torch.nn.functional as F
from diffusers import DiffusionPipeline

# load the custom DDIM pipeline
pipe = DiffusionPipeline.from_pretrained(
    "lschmidt/ddim-dsc",
    custom_pipeline="cond_ddim_pipeline",
    trust_remote_code=True,
)

# create a sample low-resolution input
# shape: (sequence_length, channels, height, width)
lres_image = torch.randn((3, 2, 32, 32)).to(pipe.device)

# upsample 4x to the high-resolution grid with bilinear interpolation,
# as required by the model
lres_image = F.interpolate(lres_image, scale_factor=4, mode="bilinear", align_corners=False)

# run inference
outputs = pipe(image=lres_image)
```
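The 8-channel conditioning described above (2 wind channels × 3 timesteps + 2 static channels) can be assembled by flattening the temporal axis into the channel axis and appending the static fields. The sketch below is illustrative only; the variable names, the 128×128 grid size, and the choice of static fields (e.g. orography, land-sea mask) are assumptions, not part of the released model's API.

```python
import torch

# illustrative shapes: 3 timesteps of (u, v) fields plus 2 static channels,
# all already on the 4x-upsampled high-resolution grid (assumed 128x128 here)
H, W = 128, 128
wind_sequence = torch.randn(3, 2, H, W)   # (timesteps, channels, H, W)
static_fields = torch.randn(2, H, W)      # hypothetical orography / land-sea mask

# flatten the temporal axis into the channel axis: 3 x 2 = 6 channels,
# then append the 2 static channels -> 8 conditioning channels in total
conditioning = torch.cat([wind_sequence.reshape(-1, H, W), static_fields], dim=0)
print(conditioning.shape)  # torch.Size([8, 128, 128])
```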
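Because the model draws 10 latent noise samples per input, a natural way to use the outputs is as a small ensemble, summarized by a per-pixel mean and spread. The sketch below uses random tensors as stand-ins for pipeline outputs; the output shape of `(2, 128, 128)` per sample and the idea of stacking separate pipeline calls are assumptions about the custom pipeline, not documented behavior.

```python
import torch

# hedged sketch: treat 10 stochastic downscaling outputs as an ensemble.
# each stand-in tensor plays the role of one `pipe(image=...)` result
# with assumed shape (channels=2, H=128, W=128)
num_samples = 10
samples = torch.stack([torch.randn(2, 128, 128) for _ in range(num_samples)])

ensemble_mean = samples.mean(dim=0)  # per-pixel ensemble mean, (2, 128, 128)
ensemble_std = samples.std(dim=0)    # per-pixel spread across the 10 samples
```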