Introduction
Deep Image Compression uses Deep Neural Networks (DNNs) for image compression. Instead of relying on handcrafted representations to capture natural image statistics, DNN methods learn the representation directly from the data. Recent results show that they indeed outperform traditional methods.
Ultimately, the compression rate of any method is limited by the rate-distortion curve, which determines, for any given rate, the minimal amount of distortion we must pay. We can break this barrier by introducing side information that helps the network compress the target image even further.
Figure 1 gives an example of results obtained by our system. The left image shows the results of a state-of-the-art deep image compression algorithm. The right image shows the results of our method that relies on side information. As can be seen, our method does a better job of restoring the details.
One can catalogue image compression schemes into three classes (see Figure 2). The first (top row) is a standard image compression scheme. Such a network makes no use of side information, and the trade-off is governed by the rate-distortion curve of the image.
Deep Video Compression (second row in Figure 2) goes one step further and, in addition to natural image statistics, also relies on previous frames as side information that is available to both the encoder and the decoder. The availability of this side information improves the compression ratio of video compared to images. The limit of this scheme is bounded by the conditional probability of the current frame given previous frames. This works well when the two frames are correlated, as is often the case in video.
We consider a different scenario in which the side information is only available at the decoder side (third row of Figure 2). This differs from deep video compression, where the side information is available to both the encoder and the decoder. It turns out that even in this case, the compression scheme can benefit from side information. That is, Distributed Source Coding (DSC) can, in theory, achieve the same compression ratios as deep video compression, even though the side information is not available to the encoder. But when does this scenario occur in practice?
It turns out that this DSC scenario occurs quite frequently, and here are a couple of examples. Consider the case of a camera array. For simplicity, we focus on a stereo camera, the simplest of camera arrays. The left and right cameras of the stereo pair are each equipped with a micro-controller that captures the image from the camera, compresses it, and sends it to the host computer. Since both cameras capture the same scene at the same time, their content is highly correlated. But the left and right cameras do not communicate with each other; they only communicate with the host computer, so they cannot exploit the fact that they capture highly correlated images to improve the compression ratio. This puts a heavy burden on the host computer, which must receive two images in the case of a stereo camera and many more in the case of a camera array.
Fig. 2: Different compression schemes. (a) Single image encoding-decoding. (b) Video coding: joint encoding-decoding. The successive frame Y is used as side information. (c) Distributed source coding - image X is encoded and then decoded using correlated side information image Y.
Now suppose that the left camera transmitted its image both to the host computer and to the right camera. Then the right camera could encode its image conditioned on the left image and transmit fewer bits to the host computer. This reduces the burden on the host computer at the cost of sending the left image to the right camera. Distributed Source Coding theory tells us that we do not have to transmit the image from the left camera to the right camera at all and can still achieve the same compression ratio. When considering a camera array with multiple cameras, the savings can be substantial.
Camera arrays are assumed to be calibrated and synchronized, but we can take a much more general approach. For example, a group of people taking pictures of some event is a common occurrence nowadays. We can treat that as a distributed, uncalibrated, and unsynchronized camera array. Instead of each person uploading their images to the cloud, we can pick, at random, a reference person who uploads their images to the cloud, and let the rest of the people upload their images conditioned on the reference images.
Taking this idea one step further, we envision a scenario in which, before uploading an image to the cloud, we first transmit the camera's position and orientation (information that is already collected by smartphones). As a result, the cloud will be able to select images already stored in the cloud to use as side information.
Our approach builds on recent advances in deep image compression, to which we add side information at the decoder side. During training, we provide the network with pairs of real-world, correlated images. The network learns to compress the input image and then use the side information image to help restore the original image. At inference time, the encoder is used to compress the image before transmitting it. The rest of the network, which lies at the receiver side, is used by the decoder to decode the original image using the compressed image and the side information image. To the best of our knowledge, this is the first time Deep Learning is used for DSC in the context of image compression.
We evaluate our system on two versions of the KITTI dataset that are designed to simulate some of the scenarios described earlier. In the first, we use the KITTI Stereo dataset to simulate the scenario of a camera array (in this case, a stereo camera). In the second case, we use pairs of images from the KITTI Stereo dataset that are taken several frames apart. This case is designed to simulate the scenario where an image is uploaded to the cloud, and some other image, from the same location, is used as side information.
Our experiments show that using the side information can help reduce the communication bandwidth by anywhere between 10% and 50%, depending on the distortion level and the correlation between the side information image and the image to be compressed.
Method
The overall architecture of the network is given in Figure 3. The encoder has access to the input image X, and the decoder has access to a correlated image Y. Our architecture consists of two sub-networks. The first is an auto-encoder designed for image compression, based on the model of Mentzer et al. [19]; it takes the input image X and produces the decoded image Xdec. The second network takes the decoded image Xdec along with the image Y and uses them to construct a synthetic side information image Ysyn. The decoded image Xdec and the synthetic side information Ysyn are then concatenated and used to produce the final output image Xˆ. The entire network, consisting of both sub-networks, is trained jointly. At inference time, the encoder uses the encoder part of the auto-encoder sub-network, while the decoder uses the rest of the network.
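To make the data flow concrete, here is a minimal sketch of the pipeline in plain Python/NumPy. The callables `encoder`, `decoder`, `si_finder`, and `si_net` are placeholders standing in for the sub-networks described above, not the paper's actual layers:

```python
import numpy as np

def compress_with_side_info(x, y, encoder, decoder, si_finder, si_net):
    """Data flow of Figure 3 (placeholder callables, not the real layers):
    encoder/decoder form the auto-encoder; si_finder builds Y_syn from
    (X_dec, Y); si_net fuses the concatenated pair into X_hat."""
    z_bar = encoder(x)            # quantized latent, transmitted to the receiver
    x_dec = decoder(z_bar)        # receiver side: reconstruct X_dec
    y_syn = si_finder(x_dec, y)   # build synthetic side information from Y
    fused = np.concatenate([x_dec, y_syn], axis=-1)  # channel concat (the "+" in Fig. 3)
    return si_net(fused)          # final reconstruction X_hat
```

Note that at inference time only `encoder` runs at the sender; `decoder`, `si_finder`, and `si_net` all run at the receiver, which is the only party holding Y.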
It should be noted that the quantized latent vector Z¯ of our auto-encoder network is not designed to reconstruct the original image X, nor is it designed to create a coset from which the decoder can recover the correct X. Its goal is to provide sufficient information to construct a good synthetic image $Y_{syn}$ that, together with the decoded image $X_{dec}$, can be used to recover the final result $\hat{X}$. This means it should reconstruct an image $X_{dec}$ that has sufficient detail to search for patches in Y that are as correlated as possible with their corresponding patches in X.
Fig. 3: Our network's architecture. The image X is encoded to $\bar{Z}$ and decoded to the image $X_{dec}$ using the auto-encoder model based on [19]. $X_{dec}$ is used to create $Y_{syn}$ using the SI-Finder block that finds, for each patch in $X_{dec}$, the closest patch in Y. $X_{dec}$ and $Y_{syn}$ are concatenated (marked as $\oplus$) and forwarded to the SI-Net block that outputs the final reconstruction $\hat{X}$. The SI-Net block is based on [9] and uses convolution layers with increasing dilation rates that approximate an enlarged convolutional receptive field. The $C \times K \times K$ notation in the SI-Net block refers to $K \times K$ convolutions with C filters. The number following the pipe indicates the rate of kernel dilation.
Formally, image compression algorithms encode an input image X to some quantized latent representation $\bar{Z}$ from which they can decode a reconstructed image $X_{dec}$. The goal of the compression is to minimize a distortion function. The trade-off between compression rate and distortion is defined by:

$$H(\bar{Z}) + \beta \cdot d(X, \hat{X}) \qquad (1)$$

where $H(\bar{Z})$ is the entropy of $\bar{Z}$ (i.e., the bit cost of encoding $\bar{Z}$), $d(X,\hat{X})$ is the distortion function, and $\beta$ is a scalar that sets the trade-off between the two.
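As a sketch of this trade-off, the following NumPy snippet approximates the rate term by the empirical entropy of the quantized symbols and the distortion by MSE. The function names and the plug-in entropy estimate are our own illustrative choices; the paper's model uses a learned entropy model, not this histogram estimate:

```python
import numpy as np

def empirical_entropy(z_bar):
    """Empirical entropy (bits/symbol) of a quantized latent: a stand-in for H(Z)."""
    _, counts = np.unique(z_bar, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def rate_distortion_loss(z_bar, x, x_hat, beta=0.1):
    """Trade-off of Eq. (1): H(Z) + beta * d(X, X_hat), with d = MSE."""
    rate = empirical_entropy(z_bar)
    distortion = float(np.mean((x - x_hat) ** 2))
    return rate + beta * distortion
```

Larger β buys lower distortion at the cost of a higher bit rate, tracing out a point on the rate-distortion curve.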
We wish to minimize (1) given a correlated image Y that is only available to the decoder. To do that, we wish to create an image $Y_{syn}$ from Y that is aligned with X. Let f encode the offset of every patch in $X_{dec}$ to its corresponding patch in $Y_{dec}$ , where $Y_{dec}$ is the result of passing Y through the auto-encoder:
$$f(i) = \arg\max_{j} \; \operatorname{corr}\big(\pi(X_{dec}(i)), \, \pi(Y_{dec}(j))\big) \qquad (2)$$
where $\operatorname{corr}(\cdot)$ is a correlation metric and $\pi(X_{dec}(i))$ is the patch around pixel $X_{dec}(i)$. Then the synthetic image $Y_{syn}$ is given by:

$$Y_{syn}(i) = Y(f(i)) \qquad (3)$$
Fig. 4: SI-Finder block illustration. This block receives the images $X_{dec}$ and Y and projects Y onto the same plane as $X_{dec}$ by passing Y through the auto-encoder in inference mode to obtain $Y_{dec}$. Each non-overlapping patch in $X_{dec}$ is compared to all possible patches in $Y_{dec}$. The location of the maximum-correlation patch in $Y_{dec}$ is chosen, and the corresponding patch is taken from the image Y. Finally, the patch is placed in $Y_{syn}$ at the location of the corresponding $X_{dec}$ patch.
That is, $Y_{syn}$ is a reconstruction of X from Y. We perform this reconstruction step in the SI-Finder block, which is illustrated in Figure 4. It receives the images $X_{dec}$ and Y. We pass Y through the auto-encoder to produce $Y_{dec}$ (this is done in inference mode only, so the encoder does not learn anything about Y), since we found that matching $Y_{dec}$ with $X_{dec}$ works better than matching Y with $X_{dec}$. The SI-Finder then compares each non-overlapping patch in $X_{dec}$ to all possible patches in $Y_{dec}$. This creates a (sparse) function f that is used to create $Y_{syn}$ from Y. It should be noted that the SI-Finder is implemented as part of the network graph using CNN layers but is non-trainable, since the CNN kernels are taken from the image $X_{dec}$.
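A brute-force reference version of this patch matching might look as follows (NumPy, single-channel, with normalized cross-correlation as the `corr` metric). The paper implements the search with CNN layers inside the graph, so this is only a readable equivalent, and details such as the patch size are our own arbitrary choices:

```python
import numpy as np

def build_y_syn(x_dec, y_dec, y, patch=4, eps=1e-12):
    """For each non-overlapping patch of x_dec, find the best-matching patch in
    y_dec (normalized cross-correlation) and copy the co-located patch from y."""
    h, w = x_dec.shape
    y_syn = np.zeros_like(x_dec)
    for i in range(0, h - patch + 1, patch):          # non-overlapping targets
        for j in range(0, w - patch + 1, patch):
            p = x_dec[i:i+patch, j:j+patch].ravel()
            pc = p - p.mean()
            best, best_corr = (0, 0), -np.inf
            for a in range(y_dec.shape[0] - patch + 1):   # all candidate offsets
                for b in range(y_dec.shape[1] - patch + 1):
                    q = y_dec[a:a+patch, b:b+patch].ravel()
                    qc = q - q.mean()
                    c = pc @ qc / (np.linalg.norm(pc) * np.linalg.norm(qc) + eps)
                    if c > best_corr:
                        best_corr, best = c, (a, b)
            a, b = best
            # the match is found in y_dec, but the pixels are copied from y
            y_syn[i:i+patch, j:j+patch] = y[a:a+patch, b:b+patch]
    return y_syn
```

The exhaustive double loop is quadratic in image size; it is meant to document the mapping f of Eq. (2), not to be fast.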
Finally, we feed $X_{dec}$ and $Y_{syn}$ to the SI-Net block and let it reconstruct X. Since we concatenate $X_{dec}$ with the side information image $Y_{syn}$ during training, we must maintain a reconstruction loss over $X_{dec}$. Therefore, the total rate-distortion trade-off from (1) is set to be:
$$H(\bar{Z}) + \beta \big( \alpha \cdot d(X, \hat{X}) + (1-\alpha) \cdot d(X, X_{dec}) \big) \qquad (4)$$
where α denotes the weight of the distortion on the final system output $\hat{X}$ (with weight $1-\alpha$ on $X_{dec}$); the distortion weights sum to 1 in order to maintain the balance between the distortion and the rate.
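As a sketch, the combined objective can be written out as follows, with MSE standing in for the distortion d. The function name and the default values for α and β are illustrative, not the paper's tuned settings:

```python
import numpy as np

def total_loss(z_entropy, x, x_dec, x_hat, beta=0.1, alpha=0.8):
    """Eq. (4): H(Z) + beta * (alpha * d(X, X_hat) + (1 - alpha) * d(X, X_dec)).
    The distortion weights alpha and (1 - alpha) sum to 1."""
    d_final = float(np.mean((x - x_hat) ** 2))   # distortion on the final output
    d_dec = float(np.mean((x - x_dec) ** 2))     # auxiliary loss on X_dec
    return z_entropy + beta * (alpha * d_final + (1 - alpha) * d_dec)
```

The auxiliary term on $X_{dec}$ keeps the auto-encoder's intermediate reconstruction detailed enough for the SI-Finder's patch search to succeed.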