# Method
Network architecture details for the full 4.8M parameter model (5.3M with the upsampling module) and the small 1.0M parameter model. The context and feature encoders share the same architecture; the only difference is that the feature encoder uses instance normalization while the context encoder uses batch normalization. In RAFT-S, we replace the residual units with bottleneck residual units. The update block takes in context features, correlation features, and flow features to update the latent hidden state; the updated hidden state is then used to predict the flow update. The full model uses two convolutional GRU update blocks with 1x5 and 5x1 filters respectively, while the small model uses a single GRU with 3x3 filters.

Illustration of the upsampling module: each pixel of the high resolution flow field (small boxes) is taken to be the convex combination of its 9 coarse resolution neighbors, using weights predicted by the network. Our upsampling module improves accuracy near motion boundaries and also allows RAFT to recover the flow of small, fast-moving objects such as the birds shown in the figure.
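The convex upsampling described above can be sketched in NumPy as follows. This is a hypothetical illustration, not the paper's implementation: `convex_upsample` and its `(9, factor, factor, H, W)` mask layout are assumed names/shapes, and the softmax over the 9 neighbor weights is what makes each output pixel a convex combination.

```python
import numpy as np

def convex_upsample(flow, mask, factor=8):
    """Upsample a coarse flow field (2, H, W) to (2, factor*H, factor*W).

    Each fine-resolution pixel is a convex combination of the 9 coarse
    neighbors of its parent cell, with weights given by a softmax over
    `mask` of shape (9, factor, factor, H, W) predicted by the network.
    """
    _, H, W = flow.shape
    # softmax over the 9 neighbor weights -> a convex combination
    w = np.exp(mask - mask.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)
    # gather the 3x3 coarse neighborhood of every pixel (zero-padded borders)
    pad = np.pad(flow, ((0, 0), (1, 1), (1, 1)))
    neigh = np.stack([pad[:, dy:dy + H, dx:dx + W]
                      for dy in range(3) for dx in range(3)])  # (9, 2, H, W)
    # weighted sum over the 9 neighbors; flow vectors scale with resolution
    up = factor * np.einsum('nijhw,nchw->cijhw', w, neigh)
    # interleave the (factor x factor) sub-grid into the spatial dimensions
    return up.transpose(0, 3, 1, 4, 2).reshape(2, factor * H, factor * W)
```

With a uniform mask (all-zero logits) and constant flow, every interior output pixel is simply the coarse value scaled by the upsampling factor, which makes the convexity easy to sanity-check.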
**Photometric Augmentation:** We perform photometric augmentation by randomly perturbing brightness, contrast, saturation, and hue. We use the Torchvision `ColorJitter` with brightness 0.4, contrast 0.4, saturation 0.4, and hue 0.5/$\pi$. On KITTI, we reduce the degree of augmentation to brightness 0.3, contrast 0.3, saturation 0.3, and hue 0.3/$\pi$. With probability 0.2, color augmentation is applied to each of the images independently.
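A minimal NumPy sketch of this scheme, assuming a simplified jitter in place of Torchvision's `ColorJitter` (the hue shift is omitted, and `jitter`/`photometric_aug` are hypothetical names):

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter(img, brightness=0.4, contrast=0.4, saturation=0.4):
    """Randomly perturb brightness, contrast, and saturation of an RGB
    image in [0, 1] with shape (H, W, 3). A simplified stand-in for
    torchvision's ColorJitter; the hue shift is omitted for brevity."""
    b = 1 + rng.uniform(-brightness, brightness)
    c = 1 + rng.uniform(-contrast, contrast)
    s = 1 + rng.uniform(-saturation, saturation)
    img = img * b                              # brightness scaling
    img = (img - img.mean()) * c + img.mean()  # contrast about the mean
    gray = img.mean(axis=2, keepdims=True)
    img = (img - gray) * s + gray              # saturation about grayscale
    return np.clip(img, 0.0, 1.0)

def photometric_aug(img1, img2, asym_prob=0.2):
    """With probability asym_prob, jitter the two frames independently;
    otherwise replay the same random draws so both get identical jitter."""
    if rng.random() < asym_prob:
        return jitter(img1), jitter(img2)
    state = rng.bit_generator.state
    out1 = jitter(img1)
    rng.bit_generator.state = state  # rewind -> identical parameters
    return out1, jitter(img2)
```

In the symmetric (probability 0.8) branch, restoring the generator state guarantees both frames receive exactly the same perturbation.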
**Spatial Augmentation:** We perform spatial augmentation by randomly rescaling and stretching the images. The degree of random scaling depends on the dataset: for FlyingChairs, we perform spatial augmentation in the range $2^{[-0.2, 1.0]}$; for FlyingThings, $2^{[-0.4, 0.8]}$; for Sintel, $2^{[-0.2, 0.6]}$; and for KITTI, $2^{[-0.2, 0.4]}$. Spatial augmentation is performed with probability 0.8.
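A sketch of such a rescale-and-stretch step, under stated assumptions: `spatial_aug` and `resize_nn` are hypothetical helpers, the per-axis stretch amount of 0.2 is illustrative, and nearest-neighbor resizing stands in for whatever interpolation the training pipeline uses. Note that the flow vectors must be rescaled along with the images.

```python
import numpy as np

rng = np.random.default_rng(0)

def resize_nn(x, sy, sx):
    """Nearest-neighbor resize of an (H, W, C) array by factors (sy, sx)."""
    H, W = x.shape[:2]
    ys = (np.arange(int(H * sy)) / sy).astype(int).clip(0, H - 1)
    xs = (np.arange(int(W * sx)) / sx).astype(int).clip(0, W - 1)
    return x[ys][:, xs]

def spatial_aug(img, flow, min_exp=-0.2, max_exp=0.6, stretch=0.2, prob=0.8):
    """Randomly rescale and stretch an image/flow pair (illustrative sketch).

    The exponent range matches, e.g., Sintel's 2^[-0.2, 0.6]; an independent
    per-axis stretch is added on top of the shared scale, and the (u, v)
    flow components are rescaled to stay consistent with the new resolution.
    """
    if rng.random() > prob:
        return img, flow
    e = rng.uniform(min_exp, max_exp)
    sy = 2.0 ** (e + rng.uniform(-stretch, stretch))
    sx = 2.0 ** (e + rng.uniform(-stretch, stretch))
    img = resize_nn(img, sy, sx)
    flow = resize_nn(flow, sy, sx) * np.array([sx, sy])  # rescale (u, v)
    return img, flow
```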
**Occlusion Augmentation:** Following HSM-Net [@hsm], we also randomly erase rectangular regions in $I_2$ with probability 0.5 to simulate occlusions.
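The occlusion augmentation can be sketched as follows; this is an assumption-laden illustration, with the rectangle size bounds chosen arbitrarily and the mean-color fill as an assumed erasure style, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def occlusion_aug(img2, prob=0.5, max_h=50, max_w=100):
    """With probability prob, erase a random rectangle in I_2 by filling it
    with the image's mean color to simulate an occlusion. The size bounds
    max_h/max_w are illustrative, not taken from the paper."""
    if rng.random() > prob:
        return img2
    H, W = img2.shape[:2]
    h = int(rng.integers(1, min(max_h, H)))
    w = int(rng.integers(1, min(max_w, W)))
    y = int(rng.integers(0, H - h + 1))
    x = int(rng.integers(0, W - w + 1))
    out = img2.copy()
    out[y:y + h, x:x + w] = img2.reshape(-1, img2.shape[-1]).mean(axis=0)
    return out
```

Only the second frame is erased, so the network sees regions of $I_1$ whose correspondences in $I_2$ have been destroyed, mimicking real occlusions.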
(Left) EPE on the Sintel set as a function of the number of iterations at inference time. (Right) Magnitude of each update, $\|\Delta f_k\|_2$, averaged over all pixels, indicating convergence to a fixed point $f_k \rightarrow f^*$.