# iRDiffAE v1.0 — Technical Report

**iR**epa **Diff**usion **A**uto**E**ncoder = **iRDiffAE**

A fast, single-GPU-trainable diffusion autoencoder with spatially structured
latents for rapid downstream model convergence. Encoding runs ~5× faster than
Flux VAE; single-step decoding runs ~3× faster.

## Contents

1. [VP Diffusion Parameterization](#1-vp-diffusion-parameterization)
   - [Forward Process](#11-forward-process) · [Log SNR](#12-log-signal-to-noise-ratio) · [Cosine Schedule](#13-cosine-interpolated-schedule) · [X-Prediction](#14-x-prediction-objective) · [Sampling](#15-sampling)
2. [Architecture](#2-architecture)
   - [Overview](#21-overview) · [DiCo Block](#22-dico-block) · [Encoder](#23-encoder) · [Decoder](#24-decoder) · [AdaLN](#25-adaln-shared-base--low-rank-deltas) · [PDG](#26-path-drop-guidance-pdg)
3. [Design Choices](#3-design-choices)
   - [Convolutional Architecture](#31-convolutional-architecture) · [Single-Stride Encoder](#32-single-stride-encoder-with-final-bottleneck) · [Diffusion vs GAN Decoding](#33-diffusion-decoding-vs-gan-based-decoding) · [Skip Connection & PDG](#34-skip-connection-and-path-drop-guidance) · [iREPA](#35-half-channel-representation-alignment-irepa)
4. [Model Configuration](#4-model-configuration)
5. [Training](#5-training)
   - [Data](#51-data) · [Timestep Sampling](#52-timestep-sampling) · [Latent Noise Sync](#53-latent-noise-synchronization-dito-regularization) · [Noise Standards](#54-pixel-vs-latent-noise-standards) · [Optimizer](#55-optimizer-and-hyperparameters) · [Loss](#56-loss)
6. [Inference](#6-inference)
   - [Sampling Pipeline](#61-sampling-pipeline) · [Recommended Settings](#62-recommended-settings) · [Usage](#63-usage)
7. [Results](#7-results)
   - [Interactive Viewer](#71-interactive-viewer) · [Inference Settings](#72-inference-settings) · [Global Metrics](#73-global-metrics) · [Per-Image PSNR](#74-per-image-psnr-db) · [Latent Smoothness](#75-latent-space-smoothness)

**References:**

- **SiD2** — Hoogeboom et al., *Simpler Diffusion (SiD2): 1.5 FID on ImageNet512 with pixel-space diffusion*, [arXiv:2410.19324](https://arxiv.org/abs/2410.19324), ICLR 2025.
- **DiTo** — Yin et al., *Diffusion Autoencoders are Scalable Image Tokenizers*, [arXiv:2501.18593](https://arxiv.org/abs/2501.18593), 2025.
- **DiCo** — Ai et al., *DiCo: Revitalizing ConvNets for Scalable and Efficient Diffusion Modeling*, [arXiv:2505.11196](https://arxiv.org/abs/2505.11196), 2025.
- **SPRINT** — Park et al., *Sprint: Sparse-Dense Residual Fusion for Efficient Diffusion Transformers*, [arXiv:2510.21986](https://arxiv.org/abs/2510.21986), 2025.
- **Z-image** — Cai et al., *Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer*, [arXiv:2511.22699](https://arxiv.org/abs/2511.22699), 2025.
- **iREPA** — Singh et al., *What matters for Representation Alignment: Global Information or Spatial Structure?*, [arXiv:2512.10794](https://arxiv.org/abs/2512.10794), 2025.

---

## 1. VP Diffusion Parameterization

iRDiffAE uses the variance-preserving (VP) diffusion framework from SiD2
with an x-prediction objective.

### 1.1 Forward Process

Given a clean image \\(x_0\\), the forward process constructs a noisy sample at
continuous time \\(t \in [0, 1]\\):

$$x_t = \alpha_t \, x_0 + \sigma_t \, \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, s^2 I)$$

where \\(s = 0.558\\) is the pixel-space noise standard deviation (estimated from
the dataset image distribution) and the VP constraint holds:

$$\alpha_t^2 + \sigma_t^2 = 1$$
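
A minimal PyTorch sketch of this forward noising step, assuming \\(\alpha_t\\) and \\(\sigma_t\\) have already been computed from the schedule in Section 1.3 (the function name is illustrative):

```python
import torch

def forward_noise(x0, alpha_t, sigma_t, s=0.558):
    """VP forward process: x_t = alpha_t * x0 + sigma_t * eps, eps ~ N(0, s^2 I).
    alpha_t and sigma_t broadcast over x0 (e.g. shape [B, 1, 1, 1])."""
    eps = s * torch.randn_like(x0)
    return alpha_t * x0 + sigma_t * eps, eps
```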

### 1.2 Log Signal-to-Noise Ratio

The schedule is parameterized through the log signal-to-noise ratio:

$$\lambda_t = \log \frac{\alpha_t^2}{\sigma_t^2}$$

which monotonically decreases as \\(t \to 1\\) (pure noise). From \\(\lambda_t\\) we
recover \\(\alpha_t\\) and \\(\sigma_t\\) via the sigmoid function:

$$\alpha_t = \sqrt{\sigma(\lambda_t)}, \qquad \sigma_t = \sqrt{\sigma(-\lambda_t)}$$

where \\(\sigma(\cdot)\\) is the logistic sigmoid.

### 1.3 Cosine-Interpolated Schedule

Following SiD2, the logSNR schedule uses cosine interpolation:

$$\lambda(t) = -2 \log \tan(a \cdot t + b)$$

where \\(a\\) and \\(b\\) are computed to satisfy the boundary conditions
\\(\lambda(0) = \lambda_\text{max}\\) and \\(\lambda(1) = \lambda_\text{min}\\):

$$b = \arctan\!\bigl(e^{-\lambda_\text{max}/2}\bigr), \qquad a = \arctan\!\bigl(e^{-\lambda_\text{min}/2}\bigr) - b$$

SiD2 also defines a "shifted cosine" variant with resolution-dependent additive
shifts \\(\Delta_\text{high}\\) and \\(\Delta_\text{low}\\):

$$\lambda_\text{shifted}(t) = (1 - t) \cdot [\lambda(t) + \Delta_\text{high}] + t \cdot [\lambda(t) + \Delta_\text{low}]$$

iRDiffAE uses \\(\lambda_\text{min} = -10\\), \\(\lambda_\text{max} = 10\\),
\\(\Delta_\text{high} = 0\\), and \\(\Delta_\text{low} = 0\\) (no resolution-dependent
shift), so the schedule reduces to the unshifted cosine interpolation.
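
A minimal sketch of the schedule in PyTorch (helper names are illustrative, not the repository's API):

```python
import math
import torch

def cosine_logsnr(t, lam_min=-10.0, lam_max=10.0):
    """Cosine-interpolated logSNR: lambda(t) = -2 log tan(a*t + b),
    with a, b chosen so lambda(0) = lam_max and lambda(1) = lam_min."""
    b = math.atan(math.exp(-lam_max / 2))
    a = math.atan(math.exp(-lam_min / 2)) - b
    return -2.0 * torch.log(torch.tan(a * t + b))

def alpha_sigma(lam):
    """Recover VP coefficients from logSNR via the logistic sigmoid."""
    return torch.sqrt(torch.sigmoid(lam)), torch.sqrt(torch.sigmoid(-lam))
```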

### 1.4 X-Prediction Objective

The model predicts the clean image \\(\hat{x}_0 = f_\theta(x_t, t, z)\\)
conditioned on the encoder latents \\(z\\).

**Schedule-invariant loss.** Following SiD2, the training loss is defined as an
integral over logSNR \\(\lambda\\), making it invariant to the choice of noise
schedule:

$$\mathcal{L}(x) = \int w(\lambda) \, \| x_0 - \hat{x}_0 \|^2 \, d\lambda$$

Since timesteps are sampled uniformly \\(t \sim \mathcal{U}(0,1)\\) rather than
integrated over \\(\lambda\\) directly, the change of variable
\\(d\lambda = \frac{d\lambda}{dt} \, dt\\) introduces a Jacobian factor:

$$\mathcal{L} = \mathbb{E}_{t \sim \mathcal{U}(0,1)} \left[ \left(-\frac{d\lambda}{dt}\right) \cdot w(\lambda(t)) \cdot \| x_0 - \hat{x}_0 \|^2 \right]$$

**Sigmoid weighting.** SiD2 defines the weighting function in \\(\varepsilon\\)-prediction
form as \\(\sigma(b - \lambda)\\) — a sigmoid centered at bias \\(b\\). Converting from
\\(\varepsilon\\)-prediction to \\(x\\)-prediction MSE via
\\(\|\varepsilon - \hat{\varepsilon}\|^2 = e^{\lambda} \|x_0 - \hat{x}_0\|^2\\)
gives:

$$\sigma(b - \lambda) \cdot e^{\lambda} = e^b \cdot \sigma(\lambda - b)$$

Combining the Jacobian with the weighting, the per-sample weight used in
training is:

$$\text{weight}(t) = -\frac{1}{2} \frac{d\lambda}{dt} \cdot e^b \cdot \sigma(\lambda(t) - b)$$

The bias \\(b = -2.0\\) controls the relative emphasis on high-SNR (low-noise) vs
low-SNR (high-noise) timesteps. A more negative \\(b\\) shifts emphasis toward
noisier timesteps.
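
Combining the schedule, Jacobian, and sigmoid weighting, a sketch of the per-sample weight under the unshifted cosine schedule (using the analytic derivative of \\(\lambda(t)\\); names are illustrative):

```python
import math
import torch

def xpred_weight(t, bias=-2.0, lam_min=-10.0, lam_max=10.0):
    """weight(t) = -(1/2) * dlambda/dt * e^bias * sigmoid(lambda(t) - bias)."""
    b = math.atan(math.exp(-lam_max / 2))
    a = math.atan(math.exp(-lam_min / 2)) - b
    u = a * t + b
    lam = -2.0 * torch.log(torch.tan(u))
    dlam_dt = -4.0 * a / torch.sin(2.0 * u)  # d/dt[-2 log tan u] = -4a / sin(2u)
    return -0.5 * dlam_dt * math.exp(bias) * torch.sigmoid(lam - bias)
```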

### 1.5 Sampling

At inference, each timestep \\(t\\) in the schedule is first mapped to logSNR via
the cosine-interpolated schedule (Section 1.3), then to diffusion coefficients:

$$t \;\xrightarrow{\text{schedule}}\; \lambda(t) \;\xrightarrow{\text{sigmoid}}\; \alpha_t = \sqrt{\sigma(\lambda)}, \quad \sigma_t = \sqrt{\sigma(-\lambda)}$$

**DDIM.** The default sampler uses a descending time schedule
\\(t_0 > t_1 > \cdots > t_N\\) with \\(N\\) denoising steps. At each step:

1. Predict \\(\hat{x}_0 = f_\theta(x_{t_i}, t_i, z)\\)
2. Reconstruct \\(\hat{\varepsilon} = \frac{x_{t_i} - \alpha_{t_i} \hat{x}_0}{\sigma_{t_i}}\\)
3. Step: \\(x_{t_{i+1}} = \alpha_{t_{i+1}} \hat{x}_0 + \sigma_{t_{i+1}} \hat{\varepsilon}\\)

**DPM++2M.** Also supported as an alternative sampler, using a half-lambda
(\\(\lambda/2\\)) exponential integrator for faster convergence with fewer steps.
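
A sketch of a single DDIM update, reusing the schedule helpers sketched in Section 1.3 (the `model` call signature is an assumption, not the repository's API):

```python
def ddim_step(model, x_t, t_cur, t_next, z):
    """One DDIM step: predict x0, back out epsilon, re-noise at t_next."""
    a_cur, s_cur = alpha_sigma(cosine_logsnr(t_cur))
    a_next, s_next = alpha_sigma(cosine_logsnr(t_next))
    x0_hat = model(x_t, t_cur, z)             # x-prediction
    eps_hat = (x_t - a_cur * x0_hat) / s_cur  # implied noise
    return a_next * x0_hat + s_next * eps_hat
```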

---

## 2. Architecture

### 2.1 Overview

iRDiffAE consists of a deterministic encoder and an iterative VP diffusion
decoder. The encoder maps an image to a compact spatial latent, and the decoder
reconstructs the image by iteratively denoising from Gaussian noise,
conditioned on both the latents and the diffusion timestep.

```
Encoder:  x ∈ ℝ^{B×3×H×W}  →  z ∈ ℝ^{B×C×h×w}     (deterministic, single pass)
Decoder:  (z, t, x_t)       →  x̂₀ ∈ ℝ^{B×3×H×W}    (iterative, N diffusion steps)
```

where \\(h = H / p\\), \\(w = W / p\\), \\(p\\) is the patch size, and \\(C\\) is the
bottleneck dimension.

### 2.2 DiCo Block

Both encoder and decoder use DiCo blocks (from the [DiCo paper](https://arxiv.org/abs/2505.11196)),
a convolution-based alternative to transformer blocks. Each block consists of
two residual paths:

**Conv path:**

$$y = \text{Conv}_{1 \times 1} \to \text{DWConv}_{k \times k} \to \text{SiLU} \to \text{CCA} \to \text{Conv}_{1 \times 1}$$

**MLP path:**

$$y = \text{Conv}_{1 \times 1} \to \text{GELU} \to \text{Conv}_{1 \times 1}$$

where \\(\text{DWConv}_{k \times k}\\) is a depthwise convolution (default \\(k = 7\\))
and \\(\text{CCA}\\) is Compact Channel Attention:

$$\text{CCA}(y) = y \odot \sigma\bigl(\text{Conv}_{1 \times 1}(\text{AvgPool}(y))\bigr)$$

Both paths use channel-wise RMSNorm (without affine parameters) as pre-norm.
Residual connections use gating:

- **Encoder (unconditioned):** learned per-channel gate parameters
  \\(x \leftarrow x + g \cdot y\\), where \\(g\\) is a learnable vector initialized to zero.
- **Decoder (conditioned):** AdaLN-Zero gating via
  \\(x \leftarrow x + \tanh(g_\text{adaln}) \cdot y\\), where \\(g_\text{adaln}\\) comes
  from the timestep conditioning.
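
A minimal PyTorch sketch of the unconditioned (encoder-style) block; class and helper names are illustrative, and the decoder variant would swap the learned gates for AdaLN-Zero gating (Section 2.5):

```python
import torch
import torch.nn as nn

class ChannelRMSNorm(nn.Module):
    """Channel-wise RMSNorm without affine parameters."""
    def __init__(self, eps=1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, x):
        return x * torch.rsqrt(x.pow(2).mean(dim=1, keepdim=True) + self.eps)

class CCA(nn.Module):
    """Compact Channel Attention: y * sigmoid(Conv1x1(GlobalAvgPool(y)))."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, y):
        return y * torch.sigmoid(self.proj(y.mean(dim=(2, 3), keepdim=True)))

class DiCoBlockSketch(nn.Module):
    def __init__(self, dim, mlp_ratio=4.0, kernel=7):
        super().__init__()
        self.norm1, self.norm2 = ChannelRMSNorm(), ChannelRMSNorm()
        self.conv_path = nn.Sequential(
            nn.Conv2d(dim, dim, 1),
            nn.Conv2d(dim, dim, kernel, padding=kernel // 2, groups=dim),  # depthwise
            nn.SiLU(),
            CCA(dim),
            nn.Conv2d(dim, dim, 1),
        )
        hidden = int(dim * mlp_ratio)
        self.mlp_path = nn.Sequential(
            nn.Conv2d(dim, hidden, 1), nn.GELU(), nn.Conv2d(hidden, dim, 1)
        )
        # learned per-channel gates, zero-initialized
        self.g1 = nn.Parameter(torch.zeros(1, dim, 1, 1))
        self.g2 = nn.Parameter(torch.zeros(1, dim, 1, 1))

    def forward(self, x):
        x = x + self.g1 * self.conv_path(self.norm1(x))
        x = x + self.g2 * self.mlp_path(self.norm2(x))
        return x
```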

### 2.3 Encoder

The encoder is deterministic — no variational posterior, no KL loss. Latent
normalization uses channel-wise RMSNorm without affine parameters, following
DiTo's finding that this outperforms KL regularization.

```
Input:       x ∈ ℝ^{B×3×H×W}
Patchify:    PixelUnshuffle(p) → Conv 1×1     →  ℝ^{B×D×h×w}
Norm:        ChannelWise RMSNorm (affine)
Blocks:      DiCoBlock × depth_enc              (unconditioned, learned gates)
Bottleneck:  Conv 1×1 (D → C)
Norm out:    ChannelWise RMSNorm (no affine)
Output:      z ∈ ℝ^{B×C×h×w}
```

### 2.4 Decoder

The decoder predicts \\(\hat{x}_0\\) from noisy input \\(x_t\\), conditioned on
encoder latents \\(z\\) and timestep \\(t\\).

```
Patchify x_t:  PixelUnshuffle(p) → Conv 1×1   →  ℝ^{B×D×h×w}
Norm:          ChannelWise RMSNorm (affine)
Upsample z:    Conv 1×1 (C → D) → RMSNorm     →  ℝ^{B×D×h×w}
Fuse:          Concat[x_feat, z_up] → Conv 1×1 →  ℝ^{B×D×h×w}

Time embed:    t → sinusoidal → MLP            →  cond ∈ ℝ^{B×D}

Start blocks:  DiCoBlock × 2                    (AdaLN conditioned)
Middle blocks: DiCoBlock × (depth - 4)          (AdaLN conditioned)
Skip fusion:   Concat[start_out, middle_out] → Conv 1×1
End blocks:    DiCoBlock × 2                    (AdaLN conditioned)

Norm:          ChannelWise RMSNorm (affine)
Output head:   Conv 1×1 (D → 3·p²) → PixelShuffle(p)  →  x̂₀ ∈ ℝ^{B×3×H×W}
```

### 2.5 AdaLN: Shared Base + Low-Rank Deltas

Timestep conditioning follows the Z-image style AdaLN
([Cai et al., 2025](https://arxiv.org/abs/2511.22699)): a shared base projection
plus a low-rank delta per layer, scale-and-gate modulation with no shift, and a
\\(\tanh\\) on the gate.

A single base projector is shared across all decoder layers, and each layer
adds a low-rank correction:

$$m_i = \text{Base}(\text{SiLU}(\text{cond})) + \Delta_i(\text{SiLU}(\text{cond}))$$

where \\(\text{Base}: \mathbb{R}^D \to \mathbb{R}^{4D}\\) is a linear projection
(zero-initialized) and \\(\Delta_i: \mathbb{R}^D \xrightarrow{\text{down}} \mathbb{R}^r \xrightarrow{\text{up}} \mathbb{R}^{4D}\\)
is a low-rank factorization with rank \\(r\\) (zero-initialized up-projection).

The packed modulation \\(m_i \in \mathbb{R}^{B \times 4D}\\) is chunked into four
vectors \\((\text{scale}_\text{conv}, \text{gate}_\text{conv}, \text{scale}_\text{mlp}, \text{gate}_\text{mlp})\\)
which modulate the conv and MLP paths (no shift term):

$$\hat{x} = \text{RMSNorm}(x) \odot (1 + \text{scale})$$
$$x \leftarrow x + \tanh(\text{gate}) \cdot f(\hat{x})$$
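
A sketch of the conditioning module (names illustrative). Because both the base and the up-projections are zero-initialized, every block starts as an identity residual, matching AdaLN-Zero behaviour:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedLowRankAdaLN(nn.Module):
    def __init__(self, dim, num_layers, rank=128):
        super().__init__()
        self.base = nn.Linear(dim, 4 * dim)  # shared across all decoder layers
        nn.init.zeros_(self.base.weight)
        nn.init.zeros_(self.base.bias)
        self.down = nn.ModuleList(
            nn.Linear(dim, rank, bias=False) for _ in range(num_layers))
        self.up = nn.ModuleList(
            nn.Linear(rank, 4 * dim, bias=False) for _ in range(num_layers))
        for u in self.up:
            nn.init.zeros_(u.weight)  # zero-init up-projection

    def forward(self, cond, i):
        c = F.silu(cond)
        m = self.base(c) + self.up[i](self.down[i](c))
        # chunk into (scale_conv, gate_conv, scale_mlp, gate_mlp), each [B, D]
        return m.chunk(4, dim=-1)
```

Each path then applies `x + tanh(gate) * f(RMSNorm(x) * (1 + scale))`, with scale and gate broadcast over the spatial dimensions.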

### 2.6 Path-Drop Guidance (PDG)

At inference, iRDiffAE supports Path-Drop Guidance — a classifier-free
guidance analogue that does not require training with conditioning dropout.
Instead, it exploits the decoder's skip connection:

1. **Conditional pass:** run all blocks normally → \\(\hat{x}_0^\text{cond}\\)
2. **Unconditional pass:** replace the middle block output with a learned
   mask feature \\(m \in \mathbb{R}^{1 \times D \times 1 \times 1}\\) (initialized
   to zero), effectively dropping the deep processing path → \\(\hat{x}_0^\text{uncond}\\)
3. **Guided prediction:** \\(\hat{x}_0 = \hat{x}_0^\text{uncond} + s \cdot (\hat{x}_0^\text{cond} - \hat{x}_0^\text{uncond})\\)

where \\(s\\) is the guidance strength.
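
Both passes share weights; only the middle-block output differs. A sketch of the combination (the `drop_middle` flag is hypothetical):

```python
def pdg_predict(decoder, x_t, t, z, strength):
    """Path-Drop Guidance: CFG-style interpolation between the full path and
    a pass whose middle-block output is replaced by the learned mask feature."""
    x0_cond = decoder(x_t, t, z)                      # full conditional path
    x0_uncond = decoder(x_t, t, z, drop_middle=True)  # deep path dropped
    return x0_uncond + strength * (x0_cond - x0_uncond)
```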

---

## 3. Design Choices

### 3.1 Convolutional Architecture

iRDiffAE uses a fully convolutional architecture rather than a
vision transformer. For an autoencoder whose goal is faithful pixel-level
reconstruction (not global semantic understanding), convolutions offer
several advantages:

- **Resolution generalization.** Convolutions operate on local patches and
  generalize naturally to arbitrary image dimensions: there are no position
  embeddings to interpolate, and no attention distribution shift when the
  sequence length changes, as happens with global attention. Convolutions are
  also more efficient than sliding-window attention for local operations.
- **Translation invariance.** The built-in inductive bias of weight sharing
  across spatial positions is well matched to reconstruction, where the same
  local patterns (edges, textures, gradients) recur throughout the image,
  conditioned on the low-frequency latent.
- **Locality.** Reconstruction quality depends on preserving fine spatial
  detail. Convolutions are inherently local operators, avoiding the
  quadratic cost of global attention while focusing computation where it
  matters most for reconstruction.

Transformers are better suited for image *generation* (where global context
and long-range dependencies are essential), but convolutions are better
suited for autoencoders. The DiCo block provides a well-tested,
strong building block for convolutional diffusion models, combining depthwise
convolutions with compact channel attention in a design that has been
validated at scale.

### 3.2 Single-Stride Encoder with Final Bottleneck

The encoder uses a single spatial stride (via PixelUnshuffle at the input)
followed by a stack of DiCo blocks operating at constant spatial resolution,
then a final 1×1 convolution to project from model dimension \\(D\\) to
bottleneck dimension \\(C\\). This differs from classical VAE encoders that use
progressive downsampling with channel expansion at each stage.

The single-stride design ensures that all encoder blocks see the full
spatial resolution and full channel width simultaneously. The information
bottleneck is imposed only at the very end, where a single linear projection
selects which \\(C\\) channels to retain. Progressive compression forces early
layers to discard information before the full feature representation has been
computed, which is both computationally heavier and representationally
suboptimal.

### 3.3 Diffusion Decoding vs. GAN-Based Decoding

Empirically, diffusion autoencoders produce a much cleaner latent space than
patch-GAN + LPIPS-driven VAEs. The iterative diffusion process acts as a
strong structural prior on the decoder, which in turn relaxes the pressure
on the encoder to encode every pixel perfectly — the latent space can focus
on semantically meaningful structure rather than adversarial reconstruction
artifacts. This makes diffusion AE latents easier for a downstream
latent-space diffusion model to learn.

**Training efficiency.** The diffusion AE training objective is a
straightforward weighted MSE loss with no adversarial component — no
discriminator, no LPIPS perceptual loss, no delicate GAN balancing. At batch
size 128, the model uses less than 30 GB of VRAM and runs at 7–10 iterations
per second, making it trainable on a single RTX 5090 in one to two days.
By contrast, GAN + LPIPS-based VAEs require many days of H100 time and are
notoriously difficult to stabilize, with no publicly known working recipe
for training from scratch at comparable quality.

### 3.4 Skip Connection and Path-Drop Guidance

The decoder's start → middle → skip-fuse → end architecture is inspired by
SPRINT's sparse-dense residual fusion. The start blocks process the fused
input (noised image + latents) at full fidelity, the middle blocks perform
deeper processing, and the skip connection concatenates the start block
output with the middle block output before the end blocks.

This design serves three purposes:

1. **Regularization.** The skip path ensures that even if the middle blocks
   are dropped or poorly conditioned, the end blocks still receive
   meaningful features from the start blocks.
2. **High-frequency preservation.** The start blocks (which see the input
   most directly) pass fine detail through the skip to the end blocks,
   preventing the middle blocks from washing out high-frequency information.
3. **Path-Drop Guidance (PDG).** At inference, replacing the middle block
   output with a learned zero-initialized mask feature creates an
   "unconditional" prediction that preserves the skip path but drops the
   deep processing. Interpolating between the conditional and unconditional
   predictions (as in classifier-free guidance) sharpens the output
   distribution — and hence the reconstructed image — without requiring
   any training-time conditioning dropout.

### 3.5 Half-Channel Representation Alignment (iREPA)

Singh et al. ([iREPA, arXiv:2512.10794](https://arxiv.org/abs/2512.10794)) show
that **spatial structure** of pretrained encoder representations — not global
semantic accuracy — drives generation quality when using representation
alignment to guide diffusion training. Their method aligns internal diffusion
features with patch tokens from a frozen vision encoder (e.g. DINOv2) using
patch-wise cosine similarity, with a conv-based projection and spatial
normalization to preserve local structure.

iRDiffAE adopts iREPA but aligns only the **first half** of the bottleneck
channels (64 of 128) to a frozen DINOv3-S teacher. The rationale: models like
DINOv3-S are trained for semantic understanding and do not preserve
high-frequency detail. Aligning all channels biases the encoder toward dropping
fine detail in favour of semantic structure. By aligning only half, the
bottleneck decomposes into:

- **Channels 0–63 (aligned):** semantic and spatial structure, guided by the
  teacher's patch tokens.
- **Channels 64–127 (free):** fine detail and high-frequency information,
  driven purely by the reconstruction loss.

The alignment operates on the encoder output **after** the final RMSNorm
(no affine), so the teacher sees unit-RMS normalized features.

**Implementation details:**

```
Encoder latents z ∈ ℝ^{B×128×h×w}  (after RMSNorm)

        z_aligned = z[:, :64, :, :]

        Conv2d 3×3 (64 → 384, padding=1)   ← iREPA conv projection

        student tokens ∈ ℝ^{B×T×384}

        patch-wise cosine similarity with DINOv3-S tokens
```

The teacher's patch tokens are spatially normalized before comparison
(\\(\gamma = 0.7\\), removing 70% of the global mean) following iREPA's
prescription. The alignment loss is weighted at 0.5 for most of training,
reduced to 0.25 toward the end to improve reconstruction fidelity.
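
A sketch of the alignment loss under these conventions (tensor shapes follow the diagram above; function and argument names are illustrative):

```python
import torch
import torch.nn.functional as F

def irepa_half_channel_loss(z, teacher_tokens, proj, gamma=0.7):
    """z: [B, 128, h, w] post-RMSNorm latents; teacher_tokens: [B, T, 384]
    frozen DINOv3-S patch tokens; proj: nn.Conv2d(64, 384, 3, padding=1)."""
    student = proj(z[:, :64])                     # align first half only
    student = student.flatten(2).transpose(1, 2)  # [B, T, 384], T = h * w
    # spatial normalization: remove gamma * global mean from teacher tokens
    teacher = teacher_tokens - gamma * teacher_tokens.mean(dim=1, keepdim=True)
    # mean patch-wise negative cosine similarity
    return -F.cosine_similarity(student, teacher, dim=-1).mean()
```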

**Tradeoff.** The alignment costs 2–3 dB of average reconstruction PSNR
compared to training without it. In exchange, downstream diffusion and flow
matching models trained on the aligned latent space converge significantly
faster — empirically validating the iREPA finding that spatial structure of
the latent representation matters more than raw reconstruction fidelity for
generation quality.

---

## 4. Model Configuration

| Parameter | Value |
|---|---|
| Patch size \\(p\\) | 16 |
| Bottleneck dim \\(C\\) | 128 |
| Compression ratio | 6× |
| Model dim \\(D\\) | 896 |
| Total parameters | 133.4M |
| Encoder depth | 4 |
| Decoder depth | 8 |
| Decoder layout | 2 start + 4 middle + 2 end |
| MLP ratio | 4.0 |
| Depthwise kernel | 7×7 |
| AdaLN rank \\(r\\) | 128 |
| \\(\lambda_\text{min}\\) | −10 |
| \\(\lambda_\text{max}\\) | +10 |
| Sigmoid bias \\(b\\) | −2.0 |
| Pixel noise std \\(s\\) | 0.558 |

**Compression ratio** = \\((3 \times p^2) / C\\): the factor by which the latent
representation is smaller than the raw pixel data. With patch size 16 and 128
bottleneck channels, the encoder produces a \\(16\times\\) spatial downsampling
(\\(256\times\\) area reduction) at 6× total compression.

---

## 5. Training

### 5.1 Data

Training uses ~5M images at various resolutions: mostly photographs, with
a significant proportion of illustrations and text-heavy images (documents,
screenshots, book covers, diagrams) to encourage crisp line and edge
reconstruction. Images are loaded via two strategies in a 50/50 mix:

- **Full-image downsampling:** images are bucketed by aspect ratio and
  downsampled to ~256² resolution (preserving aspect ratio).
- **Random 256×256 crops:** randomly positioned (deterministically seeded)
  patches extracted from images stored at ≥512 px resolution.

This mixed strategy exposes the model to both global scene composition (via
downsampled full images) and fine local detail (via crops from higher-resolution
sources).

### 5.2 Timestep Sampling

Timesteps are drawn via **stratified uniform sampling**, a variance reduction
technique from Monte Carlo integration. The base distribution is uniform over
the endpoint-trimmed domain \\([\varepsilon, 1 - \varepsilon]\\). Rather than
drawing \\(B\\) i.i.d. samples (which can cluster or leave gaps by chance),
stratified sampling divides the domain into \\(B\\) equal-mass buckets and draws
exactly one sample per bucket:

$$t_i = u_\text{lo} + (u_\text{hi} - u_\text{lo}) \cdot \frac{i + U_i}{B}, \qquad U_i \sim \mathcal{U}(0, 1), \quad i = 0, \ldots, B-1$$

where \\(u_\text{lo} = F(\varepsilon)\\), \\(u_\text{hi} = F(1 - \varepsilon)\\), and
\\(F\\) is the CDF of the base distribution (identity for uniform). This
guarantees that every batch covers the full timestep range evenly, reducing
the variance of the per-batch gradient estimate without introducing bias.

Endpoint trimming uses \\(\varepsilon = \sigma(-7.5) \approx 5.5 \times 10^{-4}\\),
keeping \\(|\lambda| \leq 15\\).
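
A sketch of the sampler for the uniform base distribution (whose CDF is the identity, so the bucket mapping is linear; names illustrative):

```python
import torch

def stratified_uniform_t(batch_size, eps=5.5e-4, device="cpu"):
    """One draw per equal-mass bucket over [eps, 1 - eps]."""
    i = torch.arange(batch_size, device=device, dtype=torch.float32)
    u = torch.rand(batch_size, device=device)
    frac = (i + u) / batch_size               # stratified draws in [0, 1)
    return eps + (1.0 - 2.0 * eps) * frac
```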

### 5.3 Latent Noise Synchronization (DiTo Regularization)

Following DiTo, encoder latents are regularized via noise synchronization
during training. With probability \\(p = 0.1\\), a subset of clean latents \\(z_0\\)
are replaced with noisy versions:

$$z_\tau = (1 - \tau_\text{fm}) \cdot z_0 + \tau_\text{fm} \cdot \varepsilon_z, \qquad \varepsilon_z \sim \mathcal{N}(0, I)$$

where \\(\tau\\) is sampled uniformly in \\([0, t]\\) (ensuring the latent is never
noisier than the pixel-space input) and converted to a flow-matching time
via the logSNR mapping, since downstream latent-space models are expected
to use flow matching:

$$\tau_\text{fm} = \sigma(-\tfrac{1}{2} \, \lambda(\tau))$$

This synchronizes the noising process in latent space with pixel space,
ensuring that the latent representation remains useful when a downstream
latent diffusion model adds noise during its own forward process.
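
A sketch of the regularizer, reusing `cosine_logsnr` from Section 1.3 (names illustrative):

```python
import torch

def sync_latent_noise(z0, t, p=0.1):
    """With probability p per sample, replace z0 with a flow-matching-noised
    version no noisier than the pixel-space input at time t ([B] tensor)."""
    B = z0.shape[0]
    replace = torch.rand(B, device=z0.device) < p
    tau = torch.rand(B, device=z0.device) * t           # tau ~ U(0, t)
    tau_fm = torch.sigmoid(-0.5 * cosine_logsnr(tau))   # logSNR -> FM time
    tau_fm = tau_fm.view(B, 1, 1, 1)
    z_noisy = (1 - tau_fm) * z0 + tau_fm * torch.randn_like(z0)
    return torch.where(replace.view(B, 1, 1, 1), z_noisy, z0)
```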

### 5.4 Pixel vs. Latent Noise Standards

The model uses different noise standard deviations in pixel space and
latent space:

- **Pixel space:** \\(s = 0.558\\), matching an estimate of the per-channel standard
  deviation of natural images over the training dataset. This ensures that at
  \\(t = 1\\) the noise distribution roughly matches the data distribution scale.
- **Latent space:** \\(s = 1.0\\), because encoder latents are RMSNorm'd to unit
  scale. Downstream latent diffusion models (which use flow matching)
  operate with this unit-variance assumption.

The conversion between pixel-space VP logSNR and latent-space flow-matching
time uses the sigmoid mapping \\(t_\text{fm} = \sigma(-\frac{1}{2}\lambda)\\),
which naturally accounts for the different noise scales.

### 5.5 Optimizer and Hyperparameters

| Hyperparameter | Value |
|---|---|
| Optimizer | AdamW |
| Learning rate | \\(1 \times 10^{-4}\\) |
| Weight decay | 0 |
| Adam \\(\varepsilon\\) | \\(1 \times 10^{-8}\\) |
| LR schedule | Constant (after warmup), halved for last 20% of training |
| Warmup steps | 2,000 |
| Batch size | 128 |
| EMA decay | 0.9999 |
| Precision | AMP bfloat16 (FP32 master weights, TF32 matmul) |
| Compilation | `torch.compile` enabled |
| Training steps | 700k |
| Training images | ~5M |
| Hardware | Single GPU |

### 5.6 Loss

$$\mathcal{L} = \mathcal{L}_\text{recon} + w_\text{repa} \cdot \mathcal{L}_\text{repa}$$

\\(\mathcal{L}_\text{recon}\\) is the SiD2 sigmoid-weighted x-prediction MSE
(Section 1.4) with bias \\(b = -2.0\\), computed in float32 for numerical
stability.

\\(\mathcal{L}_\text{repa}\\) is the iREPA half-channel alignment loss
(Section 3.5): mean patch-wise negative cosine similarity between the first
64 encoder channels (projected via 3×3 conv) and spatially-normalized
DINOv3-S tokens. \\(w_\text{repa} = 0.5\\) for the majority of training,
lowered to 0.25 toward the end to recover reconstruction fidelity.

---

## 6. Inference

### 6.1 Sampling Pipeline

Decoding proceeds by iteratively denoising from Gaussian noise
(\\(\varepsilon \sim \mathcal{N}(0, s^2 I)\\) with \\(s = 0.558\\)). A descending
time schedule \\(t_0 > t_1 > \cdots > t_N\\) is generated (linearly spaced
by default), and at each step \\(t_i\\) is mapped to logSNR and then to diffusion
coefficients:

1. Compute \\(\lambda_i = \lambda(t_i)\\) via the cosine-interpolated schedule
2. Derive \\(\alpha_i = \sqrt{\sigma(\lambda_i)}\\), \\(\sigma_i = \sqrt{\sigma(-\lambda_i)}\\)
3. Run the DDIM or DPM++2M update step (Section 1.5)

The initial state is \\(x_{t_0} = \sigma_{t_0} \cdot \varepsilon\\) (pure noise
scaled by the first-step sigma).
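
Putting the pipeline together, a sketch of the full decode loop, reusing the helpers sketched in Sections 1.3 and 1.5 (the `model` signature is an assumption):

```python
import torch

@torch.no_grad()
def decode(model, z, shape, num_steps=1, s=0.558):
    """Iterative denoising from pure noise with a linear descending schedule."""
    ts = torch.linspace(1.0, 0.0, num_steps + 1)  # t_0 > t_1 > ... > t_N
    _, s0 = alpha_sigma(cosine_logsnr(ts[0]))
    x = s0 * s * torch.randn(shape)               # x_{t_0} = sigma_{t_0} * eps
    for i in range(num_steps):
        x = ddim_step(model, x, ts[i], ts[i + 1], z)
    return x
```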

### 6.2 Recommended Settings

**1 DDIM step** with **PDG disabled** is generally recommended — it achieves
the best PSNR and is extremely fast (a single forward pass through the
decoder). For images with sharp text or fine line art, 10–20 steps can
sometimes improve edge crispness.

| Setting | Recommended | Sharp text |
|---|---|---|
| Sampler | DDIM | DDIM or DPM++2M |
| Steps | 1 | 10–20 |
| Schedule | Linear | Linear |
| PDG | Disabled | Disabled or 2.0 |

**Reconstruction PSNR vs. decode steps** (N=2000 images, 2/3 photos + 1/3 book
covers, EMA weights):

| Decode steps | Avg PSNR (dB) |
|---|---|
| 1 | 33.71 |
| 10 | 32.69 |
| 20 | 32.30 |

PSNR decreases slightly with more steps because the model is trained for
single-step x-prediction; additional sampling steps introduce accumulated
discretization error. The 128-channel bottleneck preserves enough information
that a single decoder pass suffices for high-fidelity reconstruction.

Multi-step sampling can help recover sharper edges on text and line art.
PDG (strength 2–4) further increases perceptual sharpness but tends to
hallucinate high-frequency detail — a direct manifestation of the
**perception-distortion tradeoff**.

**Inference latency** (batch of 4 × 256×256, bf16, NVIDIA RTX PRO 6000
Blackwell, 100 iterations after warmup):

| Operation | iRDiffAE | Flux.1 VAE | Flux.2 VAE |
|---|---|---|---|
| Encode | 2.1 ms | 11.6 ms | 9.1 ms |
| Decode (1 step) | 8.3 ms | 24.9 ms | 20.0 ms |
| Decode (10 steps) | 52.7 ms | — | — |
| Decode (20 steps) | 100.6 ms | — | — |
| Roundtrip (enc + 1-step dec) | 11.1 ms | 36.4 ms | 29.0 ms |

Encoding is ~5× faster than Flux.1 and ~4× faster than Flux.2. Single-step
decoding is ~3× faster than Flux.1 and ~2.4× faster than Flux.2; multi-step
decoding trades speed for perceptual sharpness.

### 6.3 Usage

```python
from ir_diffae import IRDiffAE, IRDiffAEInferenceConfig

model = IRDiffAE.from_pretrained("data-archetype/irdiffae-v1", device="cuda")  # bfloat16 by default

# Encode
latents = model.encode(images)  # [B, 3, H, W] → [B, 128, H/16, W/16]

# Decode — PSNR-optimal (1 step, single forward pass)
cfg = IRDiffAEInferenceConfig(num_steps=1, sampler="ddim")
recon = model.decode(latents, height=H, width=W, inference_config=cfg)

# Decode — perceptual sharpness (10 steps + PDG)
cfg_sharp = IRDiffAEInferenceConfig(
    num_steps=10, sampler="ddim", pdg_enabled=True, pdg_strength=2.0
)
recon_sharp = model.decode(latents, height=H, width=W, inference_config=cfg_sharp)
```

---

## Citation

```bibtex
@misc{ir_diffae,
  title   = {iRDiffAE: A Fast, Representation Aligned Diffusion Autoencoder with DiCo Blocks},
  author  = {data-archetype},
  year    = {2026},
  month   = feb,
  url     = {https://github.com/data-archetype/irdiffae},
}
```

---

## 7. Results

Reconstruction quality is evaluated on a curated set of test images covering photographs, book covers, and documents. Flux.1 VAE (patch 8, 16 channels) is included as a reference at the same 12× compression ratio as the c64 variant.

### 7.1 Interactive Viewer

**[Open full-resolution comparison viewer](https://huggingface.co/spaces/data-archetype/irdiffae-results)** — side-by-side reconstructions, RGB deltas, and latent PCA with adjustable image size.

### 7.2 Inference Settings

| Setting | Value |
|---------|-------|
| Sampler | ddim |
| Steps | 1 |
| Schedule | linear |
| Seed | 42 |
| PDG | Disabled |
| Batch size (timing) | 4 |

> All models run in bfloat16. Timings measured on an NVIDIA RTX Pro 6000 (Blackwell).

### 7.3 Global Metrics

| Metric | irdiffae_v1 (1 step) | Flux.1 VAE | Flux.2 VAE |
|--------|--------|--------|--------|
| Avg PSNR (dB) | 31.77 | 32.76 | 34.16 |
| Avg encode (ms/image) | 2.5 | 64.8 | 46.3 |
| Avg decode (ms/image) | 5.7 | 138.1 | 92.5 |

### 7.4 Per-Image PSNR (dB)

| Image | irdiffae_v1 (1 step) | Flux.1 VAE | Flux.2 VAE |
|-------|--------|--------|--------|
| p640x1536:94623 | 30.99 | 31.29 | 33.50 |
| p640x1536:94624 | 27.21 | 27.62 | 30.03 |
| p640x1536:94625 | 30.48 | 31.65 | 33.98 |
| p640x1536:94626 | 28.96 | 29.44 | 31.53 |
| p640x1536:94627 | 29.17 | 28.70 | 30.53 |
| p640x1536:94628 | 25.55 | 26.38 | 28.88 |
| p960x1024:216264 | 40.92 | 40.87 | 45.39 |
| p960x1024:216265 | 26.18 | 25.82 | 27.80 |
| p960x1024:216266 | 43.61 | 47.77 | 46.20 |
| p960x1024:216267 | 37.12 | 37.65 | 39.23 |
| p960x1024:216268 | 35.75 | 35.27 | 36.13 |
| p960x1024:216269 | 29.14 | 28.45 | 30.24 |
| p960x1024:216270 | 32.06 | 31.92 | 34.18 |
| p960x1024:216271 | 38.73 | 38.92 | 42.18 |
| p704x1472:94699 | 40.81 | 40.43 | 41.79 |
| p704x1472:94700 | 29.52 | 29.52 | 32.08 |
| p704x1472:94701 | 35.01 | 35.44 | 37.90 |
| p704x1472:94702 | 30.74 | 30.74 | 32.50 |
| p704x1472:94703 | 28.50 | 29.07 | 31.35 |
| p704x1472:94704 | 28.68 | 29.22 | 31.84 |
| p704x1472:94705 | 35.91 | 36.38 | 37.44 |
| p704x1472:94706 | 31.12 | 31.50 | 33.66 |
| r256_p1344x704:15577 | 28.10 | 28.32 | 29.98 |
| r256_p1344x704:15578 | 28.29 | 29.35 | 30.79 |
| r256_p1344x704:15579 | 29.86 | 30.44 | 31.83 |
| r256_p1344x704:15580 | 34.01 | 36.12 | 36.03 |
| r256_p1344x704:15581 | 33.41 | 37.42 | 36.94 |
| r256_p1344x704:15582 | 29.12 | 30.64 | 32.10 |
| r256_p1344x704:15583 | 32.61 | 34.67 | 34.54 |
| r256_p1344x704:15584 | 28.72 | 30.34 | 31.76 |
| r256_p896x1152:144131 | 30.73 | 33.10 | 33.60 |
| r256_p896x1152:144132 | 33.13 | 34.23 | 35.32 |
| r256_p896x1152:144133 | 35.70 | 37.85 | 37.33 |
| r256_p896x1152:144134 | 31.72 | 34.25 | 34.47 |
| r256_p896x1152:144135 | 27.34 | 28.17 | 29.87 |
| r256_p896x1152:144136 | 32.89 | 35.24 | 35.68 |
| r256_p896x1152:144137 | 29.78 | 32.70 | 32.86 |
| r256_p896x1152:144138 | 24.86 | 24.15 | 25.63 |
| VAE_accuracy_test_image | 32.62 | 36.69 | 35.25 |