Add files using upload-large-folder tool
- 2001.04753/main_diagram/main_diagram.drawio +0 -0
- 2001.04753/paper_text/intro_method.md +71 -0
- 2106.03921/main_diagram/main_diagram.drawio +1 -0
- 2106.03921/main_diagram/main_diagram.pdf +0 -0
- 2106.03921/paper_text/intro_method.md +40 -0
- 2107.08829/main_diagram/main_diagram.drawio +0 -0
- 2107.08829/main_diagram/main_diagram.pdf +0 -0
- 2107.08829/paper_text/intro_method.md +89 -0
- 2110.08851/main_diagram/main_diagram.drawio +1 -0
- 2110.08851/main_diagram/main_diagram.pdf +0 -0
- 2110.08851/paper_text/intro_method.md +118 -0
- 2110.11945/main_diagram/main_diagram.drawio +1 -0
- 2110.11945/main_diagram/main_diagram.pdf +0 -0
- 2110.11945/paper_text/intro_method.md +123 -0
- 2110.14633/main_diagram/main_diagram.drawio +1 -0
- 2110.14633/main_diagram/main_diagram.pdf +0 -0
- 2110.14633/paper_text/intro_method.md +89 -0
- 2111.12082/main_diagram/main_diagram.drawio +0 -0
- 2111.12082/paper_text/intro_method.md +67 -0
- 2204.01172/main_diagram/main_diagram.drawio +1 -0
- 2204.01172/main_diagram/main_diagram.pdf +0 -0
- 2204.01172/paper_text/intro_method.md +119 -0
- 2204.12516/main_diagram/main_diagram.drawio +1 -0
- 2204.12516/main_diagram/main_diagram.pdf +0 -0
- 2204.12516/paper_text/intro_method.md +116 -0
- 2205.13662/main_diagram/main_diagram.drawio +1 -0
- 2205.13662/main_diagram/main_diagram.pdf +0 -0
- 2205.13662/paper_text/intro_method.md +142 -0
- 2205.14962/main_diagram/main_diagram.drawio +1 -0
- 2205.14962/main_diagram/main_diagram.pdf +0 -0
- 2205.14962/paper_text/intro_method.md +63 -0
- 2205.15544/main_diagram/main_diagram.drawio +1 -0
- 2205.15544/main_diagram/main_diagram.pdf +0 -0
- 2205.15544/paper_text/intro_method.md +107 -0
- 2208.01838/main_diagram/main_diagram.drawio +0 -0
- 2208.01838/paper_text/intro_method.md +73 -0
- 2208.09170/main_diagram/main_diagram.drawio +1 -0
- 2208.09170/main_diagram/main_diagram.pdf +0 -0
- 2208.09170/paper_text/intro_method.md +92 -0
- 2209.01814/main_diagram/main_diagram.drawio +0 -0
- 2209.01814/paper_text/intro_method.md +71 -0
- 2210.14128/main_diagram/main_diagram.drawio +1 -0
- 2210.14128/main_diagram/main_diagram.pdf +0 -0
- 2210.14128/paper_text/intro_method.md +123 -0
- 2210.16613/main_diagram/main_diagram.drawio +1 -0
- 2210.16613/main_diagram/main_diagram.pdf +0 -0
- 2210.16613/paper_text/intro_method.md +57 -0
- 2211.12254/main_diagram/main_diagram.drawio +1 -0
- 2211.12254/main_diagram/main_diagram.pdf +0 -0
- 2211.12254/paper_text/intro_method.md +82 -0
2001.04753/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render. See raw diff.
2001.04753/paper_text/intro_method.md
ADDED
@@ -0,0 +1,71 @@
# Introduction

Deep image compression uses deep neural networks (DNNs) for image compression. Instead of relying on handcrafted representations to capture natural image statistics, DNN methods learn the representation directly from the data. Recent results show that they indeed outperform traditional methods.

Ultimately, there is a limit to the compression rate of all methods, governed by the rate-distortion curve. This curve determines, for any given rate, the minimal amount of distortion that we must pay. We can break this barrier by introducing side information that helps the network compress the target image even further.

Figure 1 gives an example of results obtained by our system. The left image shows the result of a state-of-the-art deep image compression algorithm; the right image shows the result of our method, which relies on side information. As can be seen, our method does a better job of restoring the details.

One can catalogue image compression schemes into three classes (see Figure 2). The first (top row) is a standard image compression scheme. Such a network makes no use of side information, and its trade-off is governed by the rate-distortion curve of the image.

Deep video compression (second row of Figure 2) goes one step further and, in addition to natural image statistics, also relies on previous frames as side information that is available to both the encoder and the decoder. The availability of this side information improves the compression ratio of video compared to images. The limit of this scheme is bounded by the conditional probability of the current frame given previous frames. This works well when the two frames are correlated, as is often the case in video.

We consider a different scenario in which the side information is available only at the decoder side (third row of Figure 2). This differs from deep video compression, where side information is available to both the decoder and the encoder. It turns out that even in this case, the compression scheme can benefit from side information. That is, Distributed Source Coding (DSC) can, in theory, achieve the same compression ratios as deep video compression, even though the side information is not available to the encoder. But when does this scenario occur in practice?

It turns out that this DSC scenario occurs quite frequently; here are a couple of examples. Consider the case of a camera array. For simplicity, we focus on a stereo camera, the simplest of camera arrays. The left and right cameras of the stereo pair are each equipped with a micro-controller that captures the image from the camera, compresses it, and sends it to the host computer. Since both cameras capture the same scene at the same time, their content is highly correlated. But the left and right cameras do not communicate with each other, only with the host computer, so they cannot exploit the fact that they capture highly correlated images to improve the compression ratio. This puts a heavy burden on the host computer, which must receive two images in the case of a stereo camera and many more in the case of a camera array.

Fig. 2: Different compression schemes. (a) Single image encoding-decoding. (b) Video coding: joint encoding-decoding. The successive frame Y is used as side information. (c) Distributed source coding: image X is encoded and then decoded using the correlated side information image Y.

Now suppose the left camera transmits its image both to the host computer and to the right camera. Then the right camera can encode its image conditioned on the left image and transmit fewer bits to the host computer. This reduces the burden on the host computer at the cost of sending the left image to the right camera. Distributed Source Coding theory tells us that we do not have to transmit the image from the left camera to the right camera at all and can still achieve the same compression ratio. For a camera array with multiple cameras, the savings can be substantial.

Camera arrays are assumed to be calibrated and synchronized, but we can take a much more general approach. For example, a group of people taking pictures of some event is a common occurrence nowadays. We can treat that as a distributed, uncalibrated, and unsynchronized camera array. Instead of each person uploading their images to the cloud, we can pick a reference person at random to upload their images, and let the rest of the people upload their images conditioned on the reference images.

Taking this idea one step further, we envision a scenario in which, before uploading an image to the cloud, we first transmit the camera's position and orientation (information already collected by smartphones). The cloud can then select existing images, stored only in the cloud, to use as side information.

Our approach builds on recent advances in deep image compression, adding side information at the decoder side. During training, we provide the network with pairs of real-world, correlated images. The network learns to compress the input image and then use the side information image to help restore the original image. At inference time, the encoder compresses the image before transmission; the rest of the network, which resides at the receiver, decodes the original image using the compressed representation and the side information image. To the best of our knowledge, this is the first time deep learning has been used for DSC in the context of image compression.

We evaluate our system on two versions of the KITTI dataset, designed to simulate some of the scenarios described earlier. In the first, we use the KITTI Stereo dataset to simulate a camera array (in this case, a stereo camera). In the second, we use pairs of images from the KITTI Stereo dataset taken several frames apart. This case simulates the scenario where an image is uploaded to the cloud and some other image from the same location is used as side information.

Our experiments show that using the side information can reduce the communication bandwidth by anywhere between 10% and 50%, depending on the distortion level and the correlation between the side information image and the image to be compressed.
# Method

The overall architecture of the network is given in Figure 3. The encoder has access to the input image X, and the decoder has access to a correlated image Y. Our architecture consists of two sub-networks. The first is an auto-encoder designed for image compression, based on the model of Mentzer et al. [19]; it takes the input image X and produces the decoded image $X_{dec}$. The second network takes the decoded image $X_{dec}$ along with the image Y and uses them to construct a synthetic side information image $Y_{syn}$. The decoded image $X_{dec}$ and the synthetic side information $Y_{syn}$ are then concatenated and used to produce the final output image $\hat{X}$. The entire network, consisting of both sub-networks, is trained jointly. At inference time, the encoder uses the encoder part of the auto-encoder sub-network, while the decoder uses the rest of the network.

It should be noted that the quantized latent vector $\bar{Z}$ of our auto-encoder network is not designed to reconstruct the original image X, nor is it designed to create a coset from which the decoder can recover the correct X. Its goal is to provide sufficient information to construct a good synthetic image $Y_{syn}$ that, together with the decoded image $X_{dec}$, can be used to recover the final result $\hat{X}$. This means the auto-encoder should reconstruct an image $X_{dec}$ with sufficient detail to search for good patches in Y that are as correlated as possible with their corresponding patches in X.

Fig. 3: Our network's architecture. The image X is encoded to $\bar{Z}$ and decoded to the image $X_{dec}$ using the auto-encoder model based on [19]. $X_{dec}$ is used to create $Y_{syn}$ using the SI-Finder block, which finds, for each patch in $X_{dec}$, the closest patch in Y. $X_{dec}$ and $Y_{syn}$ are concatenated (marked as $\oplus$) and forwarded to the SI-Net block, which outputs the final reconstruction $\hat{X}$. The SI-Net block is based on [9] and uses convolution layers with increasing dilation rates that approximate an enlarged receptive field. The $C \times K \times K$ notation in the SI-Net block refers to $K \times K$ convolutions with C filters; the number following the pipe indicates the kernel dilation rate.

Formally, image compression algorithms encode an input image X into some quantized latent representation $\bar{Z}$ from which they can decode a reconstructed image $X_{dec}$. The goal of the compression is to minimize a distortion function. The trade-off between compression rate and distortion is defined by:

$$d(X,\hat{X}) + \beta H(\bar{Z}) \tag{1}$$

where $H(\bar{Z})$ is the entropy of $\bar{Z}$ (i.e., the bit cost of encoding $\bar{Z}$), $d(X,\hat{X})$ is the distortion function, and $\beta$ is a scalar that sets the trade-off between the two.

We wish to minimize (1) given a correlated image Y that is available only to the decoder. To do that, we create an image $Y_{syn}$ from Y that is aligned with X. Let f encode the offset of every patch in $X_{dec}$ to its corresponding patch in $Y_{dec}$, where $Y_{dec}$ is the result of passing Y through the auto-encoder:

$$f(i) = \underset{j}{\operatorname{argmax}} \ \operatorname{corr}(\pi(X_{dec}(i)), \pi(Y_{dec}(j))) \tag{2}$$

where $\operatorname{corr}(\cdot)$ is a correlation metric and $\pi(X_{dec}(i))$ is the patch around pixel $X_{dec}(i)$. The synthetic image $Y_{syn}$ is then given by:

$$Y_{syn}(i) = Y(f(i)) \tag{3}$$
Fig. 4: SI-Finder block illustration. This block receives the images $X_{dec}$ and Y. It projects Y onto the same plane as $X_{dec}$ by passing Y through the auto-encoder in inference mode to obtain $Y_{dec}$. Each non-overlapping patch in $X_{dec}$ is compared to all possible patches in $Y_{dec}$. The location of the maximum-correlation patch in $Y_{dec}$ is chosen, and the corresponding patch is taken from the image Y. Finally, the patch is placed in $Y_{syn}$ at the corresponding $X_{dec}$ patch location.

That is, $Y_{syn}$ is a reconstruction of X from Y. We perform this reconstruction step in the SI-Finder block, illustrated in Figure 4. It receives the images $X_{dec}$ and Y. We pass Y through the auto-encoder to produce $Y_{dec}$ (this is done only in inference mode, so the encoder does not learn anything about Y). We do this because we found that matching $Y_{dec}$ with $X_{dec}$ works better than matching Y with $X_{dec}$. The SI-Finder then compares each non-overlapping patch in $X_{dec}$ to all possible patches in $Y_{dec}$. This creates a (sparse) function f that is used to create $Y_{syn}$ from Y. Note that the SI-Finder is implemented as part of the network graph using CNN layers but is non-trainable, since the CNN kernels are the patches of the image $X_{dec}$.
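As an illustration only (not the paper's CNN-based implementation), the brute-force patch search of Eqs. (2)-(3) can be sketched in NumPy for a single-channel image. The patch size and the normalized-correlation metric are our assumptions, and `si_finder` is a hypothetical helper name:

```python
import numpy as np

def si_finder(x_dec, y_dec, y, patch=4):
    """Build Y_syn: for each non-overlapping patch of X_dec, copy the
    best-matching patch (by normalized correlation) from Y, with the
    search performed in Y_dec. Assumes h, w are multiples of `patch`."""
    h, w = x_dec.shape
    y_syn = np.zeros_like(y)

    def normalize(p):
        p = p - p.mean()
        return p / (np.linalg.norm(p) + 1e-8)

    # Pre-extract and normalize all candidate patches of Y_dec.
    cand = []
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            cand.append(((i, j), normalize(y_dec[i:i+patch, j:j+patch].ravel())))

    # For each non-overlapping patch of X_dec, pick the best match.
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            q = normalize(x_dec[i:i+patch, j:j+patch].ravel())
            (bi, bj), _ = max(cand, key=lambda c: float(q @ c[1]))
            y_syn[i:i+patch, j:j+patch] = y[bi:bi+patch, bj:bj+patch]
    return y_syn
```

Note the quadratic search cost; the paper instead realizes this matching with (non-trainable) convolution layers inside the network graph.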
Finally, we feed $X_{dec}$ and $Y_{syn}$ to the SI-Net block and let it reconstruct X. Since we concatenate $X_{dec}$ with the side information image $Y_{syn}$ during training, we must maintain a reconstruction loss over $X_{dec}$. Therefore, the total rate-distortion trade-off from (1) becomes:

$$(1 - \alpha) \cdot d(X, X_{dec}) + \alpha \cdot d(X, \hat{X}) + \beta H(\bar{Z}) \tag{4}$$

where $\alpha$ denotes the weight of the final system output $\hat{X}$, and the total distortion weight sums to 1 in order to maintain the balance between the distortion and the rate.
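A minimal sketch of the trade-off in Eq. (4), assuming MSE as the distortion $d(\cdot,\cdot)$ and a precomputed entropy estimate for $\bar{Z}$; the function name and default weights are illustrative:

```python
import numpy as np

def rate_distortion_loss(x, x_dec, x_hat, entropy_z, alpha=0.8, beta=0.01):
    """Total trade-off of Eq. (4): weighted distortion of the intermediate
    decode X_dec and the final output X_hat, plus the rate term H(Z_bar)."""
    d_dec = np.mean((x_dec - x) ** 2)   # d(X, X_dec), here MSE
    d_hat = np.mean((x_hat - x) ** 2)   # d(X, X_hat), here MSE
    return (1 - alpha) * d_dec + alpha * d_hat + beta * entropy_z
```

In training, `entropy_z` would come from the entropy model of the auto-encoder; here it is a plain scalar for illustration.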
2106.03921/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="www.draw.io" modified="2020-06-25T07:01:59.556Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36" version="13.3.0" etag="OqMXauIYIrv-_a01AEBV" type="device"><diagram id="Fp3jIv_j2Py9scc2z906">7V1bc6s2EP41zLQP8SCu9qOv7UPONCc5M+15pEaxaTFyZXxi99dXgGQQgprYAgU7zoMtIUB837La1a4UzZxuDr9gb7v+gnwYaobuHzRzphkGMG2bfCU1x6zGtd2sYoUDnzbKK16CfyGt1GntPvDhjmsYIxTGwZavXKIogsuYq/MwRm98s1cU8nfdeisoVLwsvVCs/T3w43VWOzTcvP5XGKzW7M7AGWVHNh5rTJ9kt/Z89FaoMueaOcUIxdmvzWEKwwQ8hkt23qLm6KljGEZxkxOM7IQfXrinz0b7FR/Zw2K0j3yYtNc1c/K2DmL4svWWydE3Qi+pW8ebkJQA+fkahOEUhQin55qL9EPqV9jzA9IndixCEUyaoyim9ILk6rQ3EMfwUPtE4IQTETCINjDGR9LkwMsIFS3g0PJbTpRt0rp1gSTDppUeFY7V6dI5fuQHhbAaTlNAD/pEcmgR4XiNVijywnleO+Hxzds8IrSlqP4F4/hIcfL2MeIxJ8Dg4x/J+QObFb8Xj80O9OJZ6UhLJa5e008FV+SYM59OHatEmHUiLHlIjq4d2uMlrXIc+oZ6eAUp2E5zWjEMvTj4wV//Go6sT44acWS76jiylXB0COICRaT0nXFCfucEJYVjka3sJN3mmNUHo6Fzht209ARxQACCmFFeoM9uRt9QZM9UyJ7Tqzescgh6N9wKFZqrAm45sFm6OtiGouljT6aPL5o9k2oDjWZzfTxrx9gxeGPHdEVjp9LWkWDqjHoidcCQL2L01CcUkCueyHBtnoyyQZkJPj2pBPSpF42wZzbuzWpYRhqnYlsgsqmwA3CHgFsKxzTWn4J2/qYZTki6Mdnt/+SocP7ZJ+5y+tgPu/S5x6QB0LeH9OHZcfJrlXx/1YwpYBcjHUmvlx3qjeJ31Cl+oMbJvcA4v8CSBk5XowUwLxsuxhh7x0KzbdJg9/775GxnV7x4LFLjT1/nq7WuTiv8adNUqE7VONRqAbdUGgxOu+NX1PPxCyj0XIASh/lajeVa3Cs00IF15jU6O7vU8N1imH+U6SUwvHFlVgm4pRBwJb6+YsAthcM16w8/N/Uyf+rT3BRQ6KMYt+6vsygWpyJs+RJ7kctxMjcZ804pCP2+9le7KIbRwwHfTeBtLkPSBvsqwVI42PcspC4HcEvhYM/604bn8tzRzBswpnqadtJKyEXhsHbrbnxlaLWtKbl3D2vlAM+5Ye3/218/rN16nL0yYvxhjJyacF9jaZA8D6smDaAgDSPLeo88SDNYPljyS0VegdTxc9P38VOlWzgSuBkMBgJ05OFiHp9djNHfsJy8WgCNVnlhsIpIcUnASIR6kkAVLL1wTA9sAt9P37wqQnjKytinZdpJIIGI4Xke7JZ4MMX5jfvloUkST2tEAIGI59+epOqS0XS0qNQlFSmsp/RWGbDyqBoVeeBtqRmWqVJA9cvjF6moLhbjkVs/YsrMLegUObPXyJWjWp1CJ/rtvYJOpdTZvYauPCPRKXRinLtX0KmUOleAbjJ/Jr6C/qAlS9cwJF8/xdiLdq8IbyD+WTKsc9OYC/ZMXdaU1DRlXUTZcCtglrIkS/TH7tfWHJ4X99ZMzU/fq97m75IItuaCW/NJ5ye2XpRPKGR1yaOn+ohpjnxSwgRLD5iv4jzFt8IcRXY+f81k6oK71aLJ3aBv+YmVU77bovZuEtXl3En+GrgxxZWtMiWmJDBGdwOVJTqJT5ib0uo2r3qxmM3Gk
258ny5hFr3GWpiNnsMsOEpd4iz6mJU4J+k2PUdZpTSL7mglys83oDQEB6xLnEXftRbnLkIHreKsUp5FR/d+zdeyG1HBQ2vWq+g13y8PTazC1ohQk4V95R4Nw6FWiFEnezRo3YSpLer+cjtsDBsTLn8XFNEd79/y99OaOQWxZHbrD78A3m4hkb06xcXSO1sBb996gjdjjdMXLWTCNRZ34w4BZ/amEsBFN/VzBXxR948U6n4la54vEGqns+Xsdoe6X3R579cFsMzzr0FbLoDd7+BvWYN0Gfy1RTe2T9BZKqETA7q3GjcX/JuKwDloK3BufwZs69VslwFbdqt7iL8JOrnDKV62KW0TnLvYf6NNnAUF3iXOxqdiqVcsHU7hOqKLyWeCNHAxHeJiCv5lMh4/pInkNfked0l1aQEYqLDUAfNtilRbMqgWw7ESqR5/3Y8/ueYtN3egku4PtIE2O3J+j76mkxp0/OBm6ozGDMnf7lrSXl83udFX2dBocZqOFPN/nJHNK+X/fsSc/wc=</diagram></mxfile>
2106.03921/main_diagram/main_diagram.pdf
ADDED
Binary file (24.3 kB).
2106.03921/paper_text/intro_method.md
ADDED
@@ -0,0 +1,40 @@
# Introduction

Automatically solving math word problems has a long history dating back to the mid-sixties [\(Bo](#page-10-0)[brow,](#page-10-0) [1964\)](#page-10-0). Early approaches were rule-based matching systems that solve the problem symbolically. Even though there are some impressive symbolic systems that operate in a relatively narrow domain, the inability to successfully scale them up is sometimes presented as a critique of good old-fashioned AI, or GOFAI [\(Dreyfus et al.,](#page-10-1) [1992\)](#page-10-1). One issue is creating a formalism that covers all the aspects needed to solve these problems. Deep learning [\(LeCun et al.,](#page-11-0) [2015\)](#page-11-0), on the other hand, aims to develop artificial general intelligence that scales better to various problems.

However, despite many successes in computer vision and natural language processing [\(Devlin et al.,](#page-10-2) [2018;](#page-10-2) [He et al.,](#page-10-3) [2016;](#page-10-3) [Krizhevsky et al.,](#page-11-1) [2012;](#page-11-1) [Lan](#page-11-2) [et al.,](#page-11-2) [2019;](#page-11-2) [Mikolov et al.,](#page-11-3) [2013\)](#page-11-3), data-driven methods still evade our dream of building a system with basic, everyday mathematical skills. As large-scale natural language models become more common [\(Devlin et al.,](#page-10-2) [2018;](#page-10-2) [Brown et al.,](#page-10-4) [2020\)](#page-10-4), we would expect them to also reason mathematically.

Since natural language understanding also involves symbolic manipulation [\(Liang,](#page-11-4) [2016\)](#page-11-4), we treat mathematical reasoning as a language understanding task and revisit the data-driven paradigm. For that, we rely on a recent language model, BERT [\(Devlin et al.,](#page-10-5) [2019\)](#page-10-5), and challenge it with math word problems [\(Ling et al.,](#page-11-5) [2017\)](#page-11-5). Even though such language models have initially shown promising results, more recent investigations show they may rely on various biases in their predictions [\(Hendricks et al.,](#page-10-6) [2018;](#page-10-6) [Brown et al.,](#page-10-4) [2020;](#page-10-4) [Bhardwaj et al.,](#page-10-7) [2020;](#page-10-7) [Kurita et al.,](#page-11-6) [2019\)](#page-11-6). Here, we follow that line of investigation and show that these models can answer correctly without understanding the rationale behind the answer.

Furthermore, as directly predicting answers to math problems often requires multiple steps of reasoning, we show that we can improve BERT's generalization by exposing it to rationales [\(Ling](#page-11-5) [et al.,](#page-11-5) [2017;](#page-11-5) [Hendricks et al.,](#page-10-8) [2016;](#page-10-8) [Lei et al.,](#page-11-7) [2016\)](#page-11-7). These are, however, used only during training, similarly to a teacher who shows a student a justification for each answer; the student is then evaluated only on the ability to answer the exam questions correctly, with no access to rationales.

<span id="page-1-0"></span>

*Figure 1: BERT (right) and our novel extension (left). We use a shared architecture, but we separate question tokens (green blocks) from rationales (blue blocks) using different segment and positional embeddings. We show all three losses. MLM predicts masked tokens (depicted here as $Pr_{Q,k}$). We use ROP or NROP to predict whether the ordering of rationale steps is correct. For question answering, we fine-tune the whole model with a classification layer using softmax. We use the embedding that corresponds to the [CLS] token as the input representation.*

Finally, to learn a better representation from rationales and to improve generalization even further, we introduce novel pretext tasks and corresponding losses, which we name (Neighbor) Reasoning Order Prediction (ROP or NROP). We also show that permutation-invariant losses can lead to less biased representations. With that, we outperform other data-driven baselines and are even on par with methods that are more tailored to math word problems and the AQuA-RAT dataset.
# Method

We use the following methods, each initialized with BERT-base pre-trained on Wikipedia and the Books Corpus [\(Devlin et al.,](#page-10-2) [2018;](#page-10-2) [Zhu et al.,](#page-12-0) [2015\)](#page-12-0). Note that in fine-tuning they all have the same number of parameters.

- 1) BERT-base. We fine-tune BERT to predict the correct answer and show its transfer to math word problems.
- 2) BERT-AQuA. We use the MLM loss on the AQuA-RAT questions before training the model to predict the correct answer.
- 3) BERT-AQuA-RAT. We use the MLM loss on the AQuA-RAT questions and rationales and test whether we can inject knowledge from rationales into BERT.
- 4) BERT-(N)ROP. We use the MLM loss and the novel (N)ROP loss for coherence prediction (defined later) and test whether we can improve the results by focusing the model on rationales.

Later in this paper, we propose permutation-invariant losses that additionally reduce positional biases of the BERT-base model and can work with all the pretext tasks described above.

<span id="page-1-1"></span>

*Figure 2: ROP or NROP with positive (left) and negative (right) labels. We randomly swap two rationales and classify whether that change has happened.*

We base our architecture on BERT [\(Devlin et al.,](#page-10-5) [2019\)](#page-10-5), which has 12 transformer blocks [\(Vaswani](#page-12-1) [et al.,](#page-12-1) [2017\)](#page-12-1). As the core, we use the standard configuration described in [\(Devlin et al.,](#page-10-5) [2019\)](#page-10-5). We use three self-supervised losses. One is the standard Masked Language Modelling (MLM), extended to work on rationales. The other two are our new losses, (Neighbour) Reasoning Order Prediction (ROP or NROP). Figure [1](#page-1-0) shows two variants of our models. Note that, during fine-tuning, rationales and all the self-supervised losses are discarded.

MLM is Masked Language Modelling [\(Devlin](#page-10-5) [et al.,](#page-10-5) [2019\)](#page-10-5). We randomly mask 15% of the input tokens with the special token [MASK]. The objective of this loss is to predict each masked token from its context, cast as a classification problem over the tokenizer vocabulary. The loss is calculated only on masked tokens. We extend this loss to rationales. First, we randomly choose whether to mask a question or a rationale; next, we follow the procedure above on the chosen part. However, to encourage binding between questions and rationales, we use the whole context for the predictions. Interestingly, there are parallels between masking numbers and solving mathematical equations, where masking can be seen as solving an equation with an unknown. For example, 2+[MASK] = 4 becomes 2 + x = 4. As a consequence, models during training organically deal with mathematical calculations without a loss specifically defined for mathematics, allowing soft transitions between natural and more formal languages.
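The token-masking step described above can be sketched as follows. This is a simplification (real BERT additionally replaces a fraction of selected tokens with random or unchanged tokens, which we omit), and the names are our own:

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, p=0.15, rng=random):
    """MLM corruption: mask roughly a fraction p of tokens; the model must
    recover the originals from context. Masking a number, e.g. the second
    token of '2 + 2 = 4', turns the example into an equation with an unknown."""
    masked, targets = [], []
    for tok in tokens:
        if rng.random() < p:
            masked.append(MASK)
            targets.append(tok)   # loss is computed only on these positions
        else:
            masked.append(tok)
            targets.append(None)  # ignored by the loss
    return masked, targets
```

In our setting, `tokens` would be either the question or the rationale of an AQuA-RAT example, while the full concatenated context is fed to the model.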
ROP is our novel coherence loss. Since rationales are sequences of consecutive reasoning steps, the order of execution is critical, as shown in Figure [2.](#page-1-1) Following this intuition, we introduce Reasoning Order Prediction (ROP), which predicts whether the order of the rationale steps is preserved; it thus encourages the network to pay more attention to rationales. The loss is similar to Sentence Order Prediction (SOP) [\(Lan et al.,](#page-11-2) [2019\)](#page-11-2), but ours is focused on learning reasoning steps. NROP is an extension of ROP in which only consecutive rationale steps are swapped, making the prediction task (swap or no swap) more challenging and hence, arguably, leading to a better representation, as understanding the correct ordering is more nuanced. Indeed, we observe that our models trained with NROP correctly predict whether a swap has occurred in about 75% of cases, while with ROP in about 78% of cases (both on the validation set). This confirms our hypothesis that the NROP task is more challenging than ROP.
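The (N)ROP corruption described above can be sketched as follows (a simplified illustration; function and parameter names are our own):

```python
import random

def make_rop_example(rationale_steps, neighbour_only=False, p_swap=0.5, rng=random):
    """Create a (N)ROP training pair: possibly swap two rationale steps and
    label whether a swap happened (1 = swapped, 0 = original order).
    neighbour_only=True gives the NROP variant (swap consecutive steps)."""
    steps = list(rationale_steps)
    label = 0
    if len(steps) > 1 and rng.random() < p_swap:
        if neighbour_only:                       # NROP: adjacent steps only
            i = rng.randrange(len(steps) - 1)
            j = i + 1
        else:                                    # ROP: any two distinct steps
            i, j = rng.sample(range(len(steps)), 2)
        steps[i], steps[j] = steps[j], steps[i]
        label = 1
    return steps, label
```

During pre-training, the (possibly swapped) steps would be concatenated with the question and the [CLS] embedding classified with this binary label.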
2107.08829/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render. See raw diff.
2107.08829/main_diagram/main_diagram.pdf
ADDED
Binary file (65.2 kB).
2107.08829/paper_text/intro_method.md
ADDED
@@ -0,0 +1,89 @@
# Method

We consider the problem setting of learning in partially observed Markov decision processes (POMDPs), which can be described by the tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{X}, \mathcal{R}, \mathcal{T}, \mathcal{U}, \gamma)$, where $s \in \mathcal{S}$ is the state space, $a \in \mathcal{A}$ is the action space, $x \in \mathcal{X}$ is the observation space, and $r = \mathcal{R}(s, a)$ is the reward function. The state evolution is Markovian and governed by the dynamics $s' \sim \mathcal{T}(\cdot \mid s, a)$. Finally, the observations are generated through the observation model $x \sim \mathcal{U}(\cdot \mid s)$. The widely studied Markov decision process (MDP) is the special case of this 7-tuple in which the underlying state is directly observed.

In this work, we study imitation learning in unknown POMDPs. Thus, we have access to neither the underlying dynamics, nor the true state representation of the POMDP, nor the reward function. In place of rewards, the agent is provided with a fixed set of expert demonstrations collected by executing an expert policy $\pi^E$, which we assume is optimal under the unknown reward function. The agent can interact with the environment and must learn a policy $\pi(a_t \mid x_{\leq t})$ that mimics the expert.

In line with prior work, we interpret imitation learning as a divergence minimization problem [@GAIL2016Ho; @DIV2019Sayed; @Ke2019ImitationLA]. For simplicity of exposition, we consider the MDP case in this section and discuss POMDP extensions in Section 3.2. Let $\rho^\pi_\mathcal{M}(s, a) = (1-\gamma) \sum_{t=0}^{\infty} \gamma^t P(s_t = s, a_t = a)$ be the discounted state-action visitation distribution of a policy $\pi$ in MDP $\mathcal{M}$. Then, a divergence minimization objective for imitation learning corresponds to
$$\min_\pi \ \mathbb{D}(\rho^\pi_\mathcal{M}, \rho^E_\mathcal{M}),$$
where $\rho^E_\mathcal{M}$ is the discounted visitation distribution of the expert policy $\pi^E$, and $\mathbb{D}$ is a divergence measure between probability distributions, such as the KL divergence, the Jensen-Shannon divergence, or a generic $f$-divergence. To see why this is a reasonable objective, let $J(\pi, \mathcal{M})$ denote the expected value of a policy $\pi$ in $\mathcal{M}$. Inverse RL [@Ziebart2008MaximumEI; @GAIL2016Ho; @GAIL201Finn] interprets the expert as the optimal policy under some unknown reward function. With respect to this unknown reward function, the sub-optimality of any policy $\pi$ can be bounded as
$$\left| J(\pi^E, \mathcal{M}) - J(\pi, \mathcal{M}) \right| \leq \frac{R_{\max}}{1-\gamma} \, \mathbb{D}_{TV}(\rho^\pi_\mathcal{M}, \rho^E_\mathcal{M}),$$
since the policy performance satisfies $(1-\gamma) \cdot J(\pi, \mathcal{M}) = \mathbb{E}_{(s, a) \sim \rho^\pi_\mathcal{M}}\left[ r(s, a) \right]$. We use $\mathbb{D}_{TV}$ to denote the total variation distance. Since various divergence measures are related to the total variation distance, optimizing the divergence between visitation distributions in state space amounts to optimizing a bound on the policy sub-optimality.
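For a tabular MDP, the discounted visitation distribution $\rho^\pi_\mathcal{M}$ defined above can be estimated from rollouts. A minimal Monte Carlo sketch (illustrative only; finite-length rollouts under-count the tail of the discounted sum):

```python
import numpy as np

def empirical_visitation(trajectories, n_states, n_actions, gamma=0.99):
    """Monte Carlo estimate of the discounted state-action visitation
    rho(s, a) = (1 - gamma) * sum_t gamma^t P(s_t = s, a_t = a),
    averaged over rollouts. Each trajectory is a list of (s, a) pairs."""
    rho = np.zeros((n_states, n_actions))
    for traj in trajectories:
        for t, (s, a) in enumerate(traj):
            rho[s, a] += (1.0 - gamma) * gamma ** t
    rho /= max(len(trajectories), 1)
    return rho
```

With infinite-horizon rollouts the entries would sum to 1; truncated rollouts leave a small deficit of mass.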
| 12 |
+
|
| 13 |
+
With the divergence minimization viewpoint, any standard generative modeling technique including density estimation, VAEs, GANs etc. can in principle be used to minimize Eq. [\[eq:density_objective\]](#eq:density_objective){reference-type="ref" reference="eq:density_objective"}. However, in practice, use of certain generative modeling techniques can be difficult. A standard density estimation technique would involve directly parameterizing $\rho_\mdp^\policy$, say through auto-regressive flows, and learning the density model. However, a policy that induces the learned visitation distribution in $\mdp$ is not guaranteed to exist and may prove hard to recover. Similar challenges prevent the direct application of a VAE based generative model as well. In contrast, GANs allow for a policy based parameterization, since it only requires the ability to sample from the generative model and does not require the likelihood. This approach was followed in GAIL, leading to the optimization $$\begin{equation}
\label{eq:gail_objective}
\max_\policy \ \min_{D_\psi} \ \E_{(\state, \action) \sim \rho_\mdp^E} \left[ - \log D_\psi(\state, \action) \right] \ + \ \E_{(\state, \action) \sim \rho_\mdp^\policy} \left[ - \log \left( 1 - D_\psi(\state, \action) \right) \right],
\end{equation}$$ where $D_\psi$ is a discriminative classifier used to distinguish between samples from the expert distribution and the policy-generated distribution. Results from @Goodfellow2014GenerativeAN and @GAIL2016Ho suggest that the learning objective in Eq. [\[eq:gail_objective\]](#eq:gail_objective){reference-type="ref" reference="eq:gail_objective"} corresponds to the divergence minimization objective in Eq. [\[eq:density_objective\]](#eq:density_objective){reference-type="ref" reference="eq:density_objective"} with the Jensen-Shannon divergence. To estimate the second expectation in Eq. [\[eq:gail_objective\]](#eq:gail_objective){reference-type="ref" reference="eq:gail_objective"}, we require on-policy samples from $\policy$, which is often data-inefficient and difficult to scale to high-dimensional image observations. Some off-policy algorithms [@DAC2019Kostrikov; @SAM2019Blonde] replace the expectation under the policy distribution with the expectation under the current replay buffer distribution, which allows for off-policy training but can no longer guarantee that the induced visitation distribution of the learned policy will match that of the expert.
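The inner discriminator objective above is an ordinary binary cross-entropy loss. A minimal sketch of the loss computation with hypothetical discriminator outputs (no neural network involved; the numbers are made up):

```python
import numpy as np

def discriminator_loss(d_expert, d_policy, eps=1e-12):
    # -log D on expert samples, -log(1 - D) on policy-generated samples.
    return (-np.log(d_expert + eps)).mean() + (-np.log(1.0 - d_policy + eps)).mean()

# Hypothetical discriminator outputs D_psi(s, a) in (0, 1).
d_on_expert = np.array([0.90, 0.80, 0.95])  # ideally close to 1
d_on_policy = np.array([0.10, 0.20, 0.05])  # ideally close to 0
good = discriminator_loss(d_on_expert, d_on_policy)

# A maximally confused discriminator (0.5 everywhere) incurs loss 2*log(2).
bad = discriminator_loss(np.full(3, 0.5), np.full(3, 0.5))
```

The policy is then trained to push the discriminator toward the confused regime, which is exactly the saddle-point structure of the objective.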
Imitation learning methods based on expert distribution matching have unique challenges. Improving the generative distribution of trajectories (through policy optimization, as we do not have control over the environment dynamics) requires samples from $\rho_\mdp^\policy$, which requires rolling out $\policy$ in the environment. Furthermore, the optimization landscape of a saddle point problem (see Eq. [\[eq:gail_objective\]](#eq:gail_objective){reference-type="ref" reference="eq:gail_objective"}) can require many iterations of learning, each requiring fresh on-policy rollouts. This is different from typical generative modeling applications [@Goodfellow2014GenerativeAN; @Brock2019LargeSG] where sampling from the generator is cheap. To overcome these challenges, we present a model-based imitation learning algorithm. Model-based algorithms can utilize a large number of *synthetic on-policy* rollouts using the learned dynamics model, with periodic model correction. In addition, learning the dynamics model serves as a rich auxiliary task for state representation learning, making policy learning easier and more sample efficient. For conceptual clarity and ease of exposition, we first present our conceptual algorithm in the MDP setting in Section [3.1](#sec:algo_mdp){reference-type="ref" reference="sec:algo_mdp"}, and then extend this algorithm to the POMDP case in Section [3.2](#sec:algo_pomdp){reference-type="ref" reference="sec:algo_pomdp"}. Finally, we present a practical version of our algorithm in Sections [3.3](#sec:practical_algo){reference-type="ref" reference="sec:practical_algo"} and [3.4](#sec:zeroshottransfer){reference-type="ref" reference="sec:zeroshottransfer"}.
Model-based algorithms for RL and IL involve learning an approximate dynamics model $\widehat{\dynamics}$ using environment interactions. The learned dynamics model can be used to construct an approximate MDP $\mdphat$. In our context of imitation learning, learning a dynamics model allows us to generate samples from $\mdphat$ as a surrogate for samples from $\mdp$, leading to the objective: $$\begin{equation}
\label{eq:model_based_density_objective}
\min_\policy \ \ \mathbb{D}(\rho^\policy_{\mdphat}, \rho^E_\mdp),
\end{equation}$$ which can serve as a good proxy to Eq. [\[eq:density_objective\]](#eq:density_objective){reference-type="ref" reference="eq:density_objective"} as long as the model approximation is accurate. This intuition can be captured using the following lemma (see Appendix [7](#app:theoretical_proofs){reference-type="ref" reference="app:theoretical_proofs"} for proof).
::: lemma
**Lemma 1**. *(Simultaneous policy and model deviation) Suppose we have an $\alpha$-approximate dynamics model given by $\mathbb{D}_{TV}(\widehat{\dynamics}(\state, \action), \dynamics(\state, \action)) \leq \alpha \ \forall (\state, \action)$. Let $R_{\max} = \max_{(\state, \action)} \mathcal{R}(\state,\action)$ be the maximum of the unknown reward in the MDP with unknown dynamics $\dynamics$. For any policy $\policy$, we can bound the sub-optimality with respect to the expert policy $\policy^E$ as: $$\begin{equation}
\label{eq:TVbound}
\abs{J(\policy^E, \mdp) - J(\policy, \mdp) } \leq \frac{R_{\max}}{1-\gamma} \ \mathbb{D}_{TV}(\rho^\policy_{\mdphat}, \rho^E_\mdp) + \frac{\alpha \cdot R_{\max}}{(1-\gamma)^2}.
\end{equation}$$*
:::
Thus, the divergence minimization in Eq. [\[eq:model_based_density_objective\]](#eq:model_based_density_objective){reference-type="ref" reference="eq:model_based_density_objective"} serves as an approximate bound on the sub-optimality, with a bias proportional to the model error. Accordingly, we propose to solve the following saddle point optimization problem: $$\begin{equation}
\label{eq:our_objective_mdp}
\max_\policy \ \min_{D_\psi} \ \E_{(\state, \action) \sim \rho_\mdp^E} \left[ - \log D_\psi(\state, \action) \right] \ + \ \E_{(\state, \action) \sim \rho_{\mdphat}^\policy} \left[ - \log \left( 1 - D_\psi(\state, \action) \right) \right],
\end{equation}$$ which requires generating on-policy samples only from the learned model $\mdphat$. We can interleave policy learning according to Eq. [\[eq:our_objective_mdp\]](#eq:our_objective_mdp){reference-type="ref" reference="eq:our_objective_mdp"} with performing policy rollouts in the real environment to iteratively improve the model. @RajeswaranGameMBRL show that, provided the policy is updated sufficiently slowly, such interleaved policy and model learning is a stable and convergent algorithm, while being highly sample efficient.
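The resulting training scheme alternates between model correction on real data and many synthetic on-policy updates inside the learned model. Its control flow can be sketched as follows (all callables are placeholders to be supplied by an implementation, not the authors' code):

```python
# Skeleton of the interleaved optimization: periodic model correction on
# real rollouts, then many cheap *synthetic on-policy* updates in the model.
def train_mbil(num_outer_iters, collect_rollouts, fit_dynamics_model,
               update_discriminator, update_policy, inner_policy_steps=10):
    replay_buffer = []
    model = None
    for _ in range(num_outer_iters):
        # 1. Collect fresh real-environment rollouts for model correction.
        replay_buffer.extend(collect_rollouts())
        model = fit_dynamics_model(replay_buffer)
        # 2. Adversarial policy updates using only the learned model.
        for _ in range(inner_policy_steps):
            update_discriminator(model)
            update_policy(model)
    return model, replay_buffer

# Tiny smoke run with counting stubs in place of the real components.
counts = {"collect": 0, "fit": 0, "disc": 0, "policy": 0}
def _collect(): counts["collect"] += 1; return [object()]
def _fit(buffer): counts["fit"] += 1; return "model"
def _update_disc(model): counts["disc"] += 1
def _update_policy(model): counts["policy"] += 1

model, buf = train_mbil(3, _collect, _fit, _update_disc, _update_policy,
                        inner_policy_steps=5)
```

Note how environment interaction scales with the outer loop only, while policy and discriminator updates scale with the (much larger) inner loop, which is the source of the sample efficiency.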
In POMDPs, the underlying state is not directly observed, and thus cannot be directly used by the policy. In this case, we typically use the notion of *belief state*, which is defined to be the filtering distribution $P(\state_t | \history_t)$, where we denote history with $\history_t := (\obs_{\leq t}, \action_{<t})$. By using the historical information, the belief state provides more information about the current state, and can enable the learning of better policies. However, learning and maintaining an explicit distribution over states can be difficult. Thus, we consider learning a latent representation of the history $\latent_t = q(\history_t)$, so that $P(\state_t | \history_t) \approx P(\state_t | \latent_t)$. To develop an algorithm for the POMDP setting, we first make the key observation that imitation learning in POMDPs can be reduced to divergence minimization in the latent belief state representation. To formalize this intuition, we introduce Theorem [1](#thm:divergence_bound){reference-type="ref" reference="thm:divergence_bound"}.
::: {#thm:divergence_bound .theorem}
**Theorem 1**. *(Divergence in latent space) Consider a POMDP $\mdp$, and let $\latent_t$ be a latent space representation of the history and belief state such that $P(\state_t|\obs_{\leq t}, \action_{<t}) = P(\state_t|\latent_t)$. Let the policy class be such that $\action_t \sim \policy(\cdot | \latent_t)$, so that $P(\state_t|\latent_t, \action_t)=P(\state_t|\latent_t)$. Let $D_f$ be a generic $f-$divergence. Then the following inequalities hold: $$D_f(\rho_\mdp^{\policy}(\obs, \action)||\rho_\mdp^E(\obs, \action))\leq D_f(\rho_\mdp^{\policy}(\state, \action)||\rho_\mdp^E(\state, \action))\leq D_f(\rho_\mdp^{\policy}(\latent, \action)||\rho_\mdp^E(\latent, \action))$$*
:::
The condition $P(\state_t|\latent_t, \action_t)=P(\state_t|\latent_t)$ essentially states that the actions of both the agent and the expert do not carry additional information about the state beyond what is available in the history. This holds for all agents trained on some representation of the history, and only excludes policies trained on ground-truth states. Since we cannot hope to compete with policy classes that fundamentally have access to more information, such as the ground-truth state, we believe this is a benign assumption. Theorem [1](#thm:divergence_bound){reference-type="ref" reference="thm:divergence_bound"} shows that the divergence of visitation distributions in the latent space is an upper bound on the divergences in the state and observation spaces. This is particularly useful, since we do not have access to the ground-truth states of the POMDP, and matching the expert marginal distribution in a high-dimensional observation space (such as images) could be difficult.
Furthermore, based on the results in Section [2.1](#sec:prelim_divergence_min){reference-type="ref" reference="sec:prelim_divergence_min"}, minimizing the state divergence results in minimizing a bound on policy sub-optimality as well. These results provide a direct way to extend the results from Section [3.1](#sec:algo_mdp){reference-type="ref" reference="sec:algo_mdp"} to the POMDP setting. If we can learn an encoder $\latent_t = q(\obs_{\leq t}, \action_{<t})$ that captures sufficient statistics of the history, and a latent state space dynamics model $\latent_{t+1} \sim \widehat{\dynamics}(\cdot | \latent_t, \action_t)$, then we can learn the policy by extending Eq. [\[eq:our_objective_mdp\]](#eq:our_objective_mdp){reference-type="ref" reference="eq:our_objective_mdp"} to the induced MDP in the latent space as: $$\begin{equation}
\label{eq:our_objective_pomdp}
\max_\policy \ \min_{D_\psi} \ \E_{(\latent, \action) \sim \rho_\mdp^E (\latent, \action)} \left[ - \log D_\psi(\latent, \action) \right] \ + \ \E_{(\latent, \action) \sim \rho_{\mdphat}^\policy (\latent, \action)} \left[ - \log \left( 1 - D_\psi(\latent, \action) \right) \right].
\end{equation}$$ Once learned, the policy can be composed with the encoder for deployment in the POMDP. A similar approach was also taken by [@gangwani2019learning]; however, they use the model only for representation learning in low-dimensional domains and do not carry out model-based training.
:::: algorithm
[]{#alg:vmail label="alg:vmail"}
::: algorithmic
**Require**: Expert demos $\mathcal{B}_E$, environment buffer $\mathcal{B}_{\policy}$. Randomly initialize variational model $\{q_{\theta}, \widehat{\dynamics}_{\theta}\}$, policy $\policy_{\psi}$, and discriminator $D_{\psi}$
[`// Environment Data Collection`]{style="color: purple"}
Estimate the latent state from the belief distribution $\latent_t\sim q_{\theta}(\cdot|\obs_{t}, \latent_{t-1}, \action_{t-1})$
Sample action $\action_t\sim \policy_{\psi}(\action_t|\latent_t)$
Step the environment and get observation $\obs_{t+1}$
Add data $\{\obs_{1:T}, \action_{1:T-1}\}$ to the policy replay buffer $\mathcal{B}_{\policy}$
[`// Dynamics Learning`]{style="color: purple"}
Sample a batch of trajectories $\{\obs_{1:T}, \action_{1:T-1}\}$ from the joint buffer $\mathcal{B}_E\cup \mathcal{B}_{\policy}$
Optimize the variational model $\{q_{\theta}, \widehat{\dynamics}_{\theta}\}$ using Equation [\[eq1model\]](#eq1model){reference-type="ref" reference="eq1model"}
[`// Adversarial Policy Learning`]{style="color: purple"}
Sample trajectories from the expert buffer $\{\obs^E_{1:T}, \action^E_{1:T-1}\}\sim\mathcal{B}_E$
Infer expert latent states $\latent^E_{1:T}\sim q_{\theta}(\cdot|\obs^E_{1:T}, \action^E_{1:T-1})$ using the belief model $q_{\theta}$
Generate latent rollouts $\latent^{\policy_{\psi}}_{1:H}$ using the policy $\policy_{\psi}$ and the forward model $\widehat{\dynamics}_{\theta}$
Update the discriminator $D_{\psi}$ with data $\latent^{E}_{1:T}, \latent^{\policy_{\psi}}_{1:H}$ using Equation [\[eq:our_objective_pomdp\]](#eq:our_objective_pomdp){reference-type="ref" reference="eq:our_objective_pomdp"}
Update the policy $\policy_{\psi}$ to improve the value function in Equation [\[eq:policy_objective\]](#eq:policy_objective){reference-type="ref" reference="eq:policy_objective"}
:::
::::
The divergence bound of Theorem [1](#thm:divergence_bound){reference-type="ref" reference="thm:divergence_bound"} allows us to develop a practical algorithm if we can learn a good belief state representation. Following prior work [@watter2015embed; @zhang2019solar; @SLAC20202Lee; @gelada2019deepmdp; @PlanNet2019Hafner; @Dreamer2020Hafner] we optimize the ELBO: $$\begin{equation}
\label{eq1model}
\max_{\theta}\widehat{\mathbb{E}}_{q_{\theta}}\Big[
\sum_{t=1}^{T}\underbrace{\log\widehat{\mathcal{U}}_{\theta}(\obs_{t}|\latent_{t})}_{\text{reconstruction}}-\underbrace{\mathbb{D}_{KL}(q_{\theta}(\latent_{t}|\obs_{t}, \latent_{t-1}, \action_{t-1})||\widehat{\dynamics}_{\theta}(\latent_{t}|\latent_{t-1}, \action_{t-1}))}_{\text{forward model}}\Big].
\end{equation}$$ where $q_{\theta}$ is a state inference network, $\widehat{\dynamics}_{\theta}$ is a latent dynamics model, and $\widehat{\mathcal{U}}_{\theta}$ is an observation model. Here we jointly train a belief representation through the network $q_{\theta}$ and a latent dynamics model $\widehat{\dynamics}_{\theta}$. Given the learned latent model, we could use any on-policy RL algorithm to train the policy using Eq. [\[eq:our_objective_pomdp\]](#eq:our_objective_pomdp){reference-type="ref" reference="eq:our_objective_pomdp"}. In our setup, however, the RL objective is a differentiable function of the policy, model, and discriminator parameters, so we can optimize the policy by directly back-propagating through $\widehat{\dynamics}_{\theta}$ using the objective: $$\begin{equation}
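The KL term in the ELBO has a closed form when the posterior $q_{\theta}$ and the latent dynamics prior $\widehat{\dynamics}_{\theta}$ are both diagonal Gaussians, the usual parameterization in this line of work. A sketch of that computation (the means and standard deviations below are made-up values, standing in for the network outputs):

```python
import numpy as np

def kl_diag_gaussians(mu_q, std_q, mu_p, std_p):
    # KL( N(mu_q, diag(std_q^2)) || N(mu_p, diag(std_p^2)) ), summed over dims:
    # 0.5 * sum[ log(var_p/var_q) + (var_q + (mu_q - mu_p)^2)/var_p - 1 ]
    var_q, var_p = std_q ** 2, std_p ** 2
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

# q: posterior from the inference network; p: prior from the latent dynamics.
mu_q, std_q = np.array([0.5, -0.2]), np.array([1.0, 0.5])
mu_p, std_p = np.array([0.0, 0.0]), np.array([1.0, 1.0])
kl = kl_diag_gaussians(mu_q, std_q, mu_p, std_p)
```

Minimizing this term pulls the one-step latent prediction of $\widehat{\dynamics}_{\theta}$ toward the filtered posterior, which is what makes imagined rollouts in the latent model consistent with real trajectories.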
\label{eq:policy_objective}
\max_{\policy_{\psi}}V_{\theta, \psi}^K(\latent_t)=\max_{\policy_{\psi}}\mathbb{E}_{\pi_\psi, \widehat{\dynamics}_\theta} \left[ \sum_{\tau=t}^{t+K-1}\gamma^{\tau-t}\log D_{\psi}(\latent^{\policy_{\psi}}_\tau, \action^{\policy_{\psi}}_\tau)+\gamma^{K}V_{\psi}(\latent^{\policy_{\psi}}_{t+K}) \right]
\end{equation}$$ Finally, we train the discriminator $D_{\psi}$ using Eq. [\[eq:our_objective_pomdp\]](#eq:our_objective_pomdp){reference-type="ref" reference="eq:our_objective_pomdp"} with on-policy rollouts from the model $\widehat{\dynamics}_{\theta}$. Our full approach is outlined in Algorithm [\[alg:vmail\]](#alg:vmail){reference-type="ref" reference="alg:vmail"}; for more details, see Appendix [8](#app:practical_VMAIL){reference-type="ref" reference="app:practical_VMAIL"}.
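The policy objective above is a K-step discounted sum of discriminator log-probabilities plus a bootstrapped terminal value. A sketch of the return computation for a single imagined rollout (`log_d` and `terminal_value` are made-up numbers; in the method they would come from $D_{\psi}$ and the learned value function):

```python
import numpy as np

def imagined_value(log_d, terminal_value, gamma=0.99):
    # K-step return: sum of discounted discriminator log-probabilities
    # along an imagined rollout, plus a discounted terminal value estimate.
    K = len(log_d)
    discounts = gamma ** np.arange(K)
    return np.sum(discounts * log_d) + gamma ** K * terminal_value

# Hypothetical per-step "rewards" log D_psi(z_t, a_t) for a K=3 rollout.
log_d = np.log(np.array([0.8, 0.7, 0.9]))
v = imagined_value(log_d, terminal_value=-1.0)
```

Because the rollout is generated inside the differentiable latent model, gradients of this value with respect to the policy parameters flow directly through $\widehat{\dynamics}_{\theta}$, with no score-function estimator needed.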
{#fig:multitask width="\\textwidth"}
:::: algorithm
[]{#alg:fewshot label="alg:fewshot"}
::: algorithmic
**Require**: Expert demos $\mathcal{B}_E^i$ for each source task, expert demos $\mathcal{B}_E$ for the target task
Randomly initialize policy $\policy_{\psi}$ and discriminator $D_{\psi}$
Train Alg [\[alg:vmail\]](#alg:vmail){reference-type="ref" reference="alg:vmail"} on the source tasks, yielding the shared model $\{q_{\theta}, \widehat{\dynamics}_{\theta}\}$ and the aggregated replay buffer $\mathcal{B}_{\policy}$
[`// Dynamics Fine-Tuning using Expert Trajectories`]{style="color: purple"}
Update the variational model $\{q_{\theta}, \widehat{\dynamics}_{\theta}\}$ using Equation [\[eq1model\]](#eq1model){reference-type="ref" reference="eq1model"} with data from $\mathcal{B}_E\cup \mathcal{B}_{\policy}$
[`// Adversarial Policy Learning`]{style="color: purple"} Update discriminator $D_{\psi}$ and policy $\pi_{\psi}$ with Equations [\[eq:our_objective_pomdp\]](#eq:our_objective_pomdp){reference-type="ref" reference="eq:our_objective_pomdp"} and [\[eq:policy_objective\]](#eq:policy_objective){reference-type="ref" reference="eq:policy_objective"}.
:::
::::
Our model-based approach is well suited to the problem of zero-shot transfer to new imitation learning tasks, i.e., transferring to a new task using a modest number of demonstrations and no additional samples collected in the environment. In particular, we assume a set of source tasks $\{\mathcal{T}^i\}$, each with a buffer of expert demonstrations $\mathcal{B}^i_E$. Each source task corresponds to a different POMDP with different underlying rewards but shared dynamics. During training, the agent can interact with each source environment and collect additional data. At test time, we are presented with a new target task $\mathcal{T}$ with corresponding expert demonstrations $\mathcal{B}_E$, and the goal is to obtain a policy that achieves high reward without additional interaction with the environment.
Our proposed method is illustrated in Fig. [2](#fig:multitask){reference-type="ref" reference="fig:multitask"}. The key observation is that we can optimize Eq. [\[eq:our_objective_pomdp\]](#eq:our_objective_pomdp){reference-type="ref" reference="eq:our_objective_pomdp"} under our model and still obtain an upper bound on policy sub-optimality via Eq. [\[eq:TVbound\]](#eq:TVbound){reference-type="ref" reference="eq:TVbound"}. Furthermore, the sub-optimality is bounded by the accuracy of our model over the marginal state-action distribution of the target-task expert. Specifically, we first train on all of the source tasks using Algorithm [\[alg:vmail\]](#alg:vmail){reference-type="ref" reference="alg:vmail"}, learning a single shared variational model across the tasks. By fine-tuning this model on data that includes the target-task expert demonstrations, we aim to obtain an accurate model and thus a high-quality policy. As in Algorithm [\[alg:vmail\]](#alg:vmail){reference-type="ref" reference="alg:vmail"}, we then train a discriminator and policy for the target task using only model rollouts. This approach is outlined in Algorithm [\[alg:fewshot\]](#alg:fewshot){reference-type="ref" reference="alg:fewshot"}.
2110.08851/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" ...>...</mxfile> (compressed draw.io diagram data omitted)
2110.08851/main_diagram/main_diagram.pdf
ADDED
Binary file (54.6 kB)
2110.08851/paper_text/intro_method.md
ADDED
@@ -0,0 +1,118 @@
# Introduction
Self-supervised learning (SSL) has achieved great success with floating point (FP) networks in recent years [4, 5, 7, 9, 14, 15, 18, 20, 22, 30, 43, 44, 47, 54]. Models learned by SSL methods perform on par with or even outperform those learned by supervised pretraining, with the help of large-scale unlabeled data, on a number of downstream tasks such as image classification [1, 5], semi-supervised finetuning [5, 7, 20], and object detection [22]. While recent works [5, 7, 20, 22] from resourceful research groups have shown that the gains from SSL scale up with the model size and/or the dataset size used for pretraining, there is little work where the resulting pretrained models are small in size, i.e., quantized. SSL for such small models is important since it could expedite the deployment of AI for a wide range of applications onto models that are highly efficient in computational cost, memory cost, and energy consumption [12]. At the extreme of resource-constrained scenarios, binary networks exhibit superior efficiency, and their accuracy is being significantly improved [2, 3, 25, 33–36, 40]. Thus, developing an SSL method for binary networks could further accelerate the deployment of models to edge devices for various downstream tasks, yet this is seldom explored.

<span id="page-0-0"></span>

<sup>3</sup>Yonsei University

Figure 1. Comparison of various representation learning methods on multiple downstream tasks (pretrained with ImageNet). 'Obj. Det.' refers to object detection, 'Lin. Eval.' refers to linear evaluation, 'SS 1/10%' refers to semi-supervised fine-tuning with 1% or 10% of the data respectively, 'FS K=1' refers to few-shot learning with 1 shot, and 'Transfer (CUB)' means transfer learning to the CUB dataset. 'Tuned MoCov2' and 'S2-BNN' are SSL methods from [41]. The proposed BURN outperforms all comparable methods in various tasks, and even Supervised Pre. in certain tasks.
Providing additional supervisory signals from a pretrained FP network, via the KL divergence loss between the softmax outputs of the classifiers of the FP target network and the binary network, which we denote as 'supervised KL div.', has become a popular and effective method for training binary networks [2, 3, 34, 36]. Recently, [41] proposed an unsupervised representation learning method for binary networks based on the supervised KL div. method. To extract meaningful softmax probabilities from the FP network, they pretrain the classifier as well as the feature extractor using SSL.

<span id="page-1-0"></span>Then, the FP network is completely frozen when used as the target network, which could lead to stale targets [\[20\]](#page-8-7), or make performance dependent on how similar the pretraining dataset used for the fixed FP network is to the dataset used for training the binary network.

This work was done while DK and JC were an intern and an AI technical advisor at NAVER AI Lab., respectively. † indicates corresponding author.
Thus, to avoid the potential pitfalls of a fixed target, we are motivated to develop an SSL method for binary networks that uses a moving FP network as the target, similar to other SSL methods [\[8,](#page-8-15) [9,](#page-8-3) [20,](#page-8-7) [22\]](#page-8-8), and call our method Binary Unsupervised RepresentatioN learning, or BURN. Specifically, we first construct the FP target network by combining a fixed FP feature extractor pretrained in an SSL manner with a randomly initialized FP classifier. We then use the outputs of the randomly initialized FP classifier as targets for the binary network and *jointly optimize both the FP classifier and the binary network* with the KL divergence loss, so that the FP target network keeps being updated over time. However, the gradients provided by the randomly initialized FP classifier could have unexpectedly large magnitudes, especially during the early training phase. To alleviate this problem, we additionally propose to enforce feature similarity across both precisions, providing stable gradients that bypass the randomly initialized classifier. As the relative importance of the feature similarity loss decreases while the FP classifier is jointly trained to provide less random targets, we further propose to *dynamically balance* the KL divergence term and the feature similarity term in the loss function. Finally, we modify the multi-stage training scheme [\[36\]](#page-9-6) for BURN to further improve performance.
We conduct extensive empirical validations on a wide variety of downstream tasks, such as object detection on Pascal VOC, linear evaluation on ImageNet, semi-supervised fine-tuning on ImageNet with 1% and 10% labeled data, SVM classification and few-shot SVM classification on Pascal VOC07, and transfer learning to various datasets such as CIFAR10, CIFAR100, CUB-200-2011, Birdsnap, and Places205. In these validations, the binary networks trained by our method outperform those trained by other SSL methods by large margins (see Fig. [1](#page-0-0) and Sec. [4.1\)](#page-4-0).
We summarize our contributions as follows:
- We propose a novel SSL method for binary networks that uses a jointly trained FP classifier to obtain targets that can adapt over time to the current training scenario.
- We propose to use a feature similarity loss and dynamic balancing with modified multi-stage training to significantly improve the accuracy.
- Our BURN outperforms prior arts by large margins on a wide variety of downstream tasks.
- We analyze the proposed BURN through in-depth investigations.
# Method
The supervised KL div. method is an effective way to train binary networks [34, 36] that utilizes an FP network pretrained with labeled data. But as we are interested in self-supervised learning, with no access to labeled data at any point during training, the supervised KL div. method is not applicable. Recently, S2-BNN [41] proposed to use the supervised KL div. loss for unsupervised learning of binary networks. They pretrain the classifier and the feature extractor of the FP network to obtain meaningful softmax probabilities and use a completely fixed FP network as the target. In contrast, we propose an unsupervised representation learning method for binary networks that uses a changing FP network as the target, such that the FP network can adapt to the current dataset and binary network to provide more useful targets over time. We illustrate the supervised KL div. method [34, 36], S2-BNN [41], and our proposal in Fig. 2.
Specifically, instead of using softmax outputs from a fixed pretrained FP network [41], we propose to use softmax outputs from a randomly initialized classifier attached to a pretrained FP feature extractor, and to jointly train the classifier with the binary network using the KL divergence loss. As the supervision from the untrained classifier produces gradients with unexpectedly large magnitudes, we subdue them with an additional feature similarity loss across precisions. We further use a dynamic balancing scheme to better balance the KL divergence and feature similarity losses, and employ a modified multi-stage training [36] to improve learning efficacy.
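As an illustration of the two loss terms and their balancing, a sketch with hypothetical choices: cosine distance for the feature similarity term and a simple linear decay for the balancing weight (the paper's exact similarity measure and schedule may differ; `cosine_feature_loss` and `total_loss` are illustrative names):

```python
import numpy as np

def cosine_feature_loss(f_fp, f_bin):
    # Feature similarity across precisions: 1 - cosine similarity between
    # the FP and binary feature vectors (one common choice of similarity).
    f_fp = f_fp / np.linalg.norm(f_fp)
    f_bin = f_bin / np.linalg.norm(f_bin)
    return 1.0 - f_fp @ f_bin

def total_loss(kl_loss, feat_loss, progress):
    # Dynamic balancing (illustrative schedule): weight the feature term
    # heavily early on, while the FP classifier is still near-random, and
    # decay it as training progresses (progress in [0, 1]).
    lam = 1.0 - progress
    return kl_loss + lam * feat_loss
```

The feature term bypasses the randomly initialized classifier entirely, which is what stabilizes gradients early in training; the decaying weight then hands control back to the KL term as the classifier's targets become meaningful.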
|
| 35 |
+
|
| 36 |
+
Grill et al. [20] show that even when a randomly initialized exponential moving average (EMA) network is used as the target network, the online network improves by training with it. One possible reason for the improvement is that the randomly initialized target network is also updated in an EMA manner during training, improving it gradually. Motivated by this, we conjecture that a randomly initialized classifier combined with a pretrained FP feature extractor can be used as a moving target network for training binary networks. To gradually improve the target network, we jointly train the classifier of the target network and the binary network. Note that training just the classifier can improve the target network, as shown in the SSL literature [6, 7, 9, 18, 22, 44, 45, 47, 54]. We discuss other moving targets, e.g., the EMA target [20] or the momentum encoder [22], for binary networks in Sec. 4.2.
|
| 37 |
+
|
| 38 |
+
The joint training of the randomly initialized classifier is depicted in ① in Fig. 2-(b). Specifically, instead of a fixed FP network $f(\cdot)$ , the randomly initialized and trainable classifier $g_{\theta}(\cdot)$ and the pretrained and fixed FP feature extractor $h_{\zeta}(\cdot)$ are combined to create the target network. Then, we use the outputs of $g_{\theta}(\cdot)$ as targets for training the binary network $b_{\phi}(\cdot)$ . Our objective is to minimize the KL divergence between the outputs of $g_{\theta}(\cdot)$ and $b_{\phi}(\cdot)$ as:
|
| 39 |
+
|
| 40 |
+
$$\min_{\theta,\phi} \mathbb{E}_{x \sim \mathcal{D}}[\mathcal{L}_{KL}(g_{\theta}(h_{\zeta}(x)), b_{\phi}(x))], \tag{1}$$
|
| 41 |
+
|
| 42 |
+
where x is a sample from the dataset $\mathcal{D}$ and $\mathcal{L}_{KL} = D_{KL}(\cdot,\cdot)$ is the KL divergence between the outputs of $g_{\theta}(\cdot)$ and $b_{\phi}(\cdot)$ . However, the softmax outputs from the classifier would be close to random early on. Thus, using the random outputs as the only target for the binary network, especially in early training, could result in noisy gradients.
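To make the objective concrete, the KL term can be sketched numerically. The following is a minimal NumPy sketch, not the paper's implementation; all logits below are made-up illustrative values, and the KL direction follows Alg. 1, $D_{KL}(p_{binary} \| p_{target})$:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for rows of probability vectors."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

# Illustrative logits: target g_theta(h_zeta(x)) vs. binary network b_phi(x)
p_target = softmax(np.array([[2.0, 0.5, -1.0]]))
p_binary = softmax(np.array([[1.5, 0.7, -0.8]]))

loss = kl_divergence(p_binary, p_target).mean()
```

The loss is non-negative and vanishes exactly when the two softmax distributions agree.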
|
| 43 |
+
|
| 44 |
+
<span id="page-3-7"></span><span id="page-3-3"></span>
|
| 45 |
+
|
| 46 |
+
Figure 3. Gradient magnitude for the binary classifier (a) and the binary feature extractor (b) during early training with and without $\mathcal{L}_{FS}$ for pretraining on ImageNet. With only KL, the gradients of the classifier are extremely large, and this carries over to the feature extractor. Additionally, we observe intermediate spikes for both the classifier and the feature extractor. The addition of $\mathcal{L}_{FS}$ significantly lowers the gradient magnitudes of the classifier as well as the feature extractor at early iterations, and the surges in gradient magnitude are also subdued.
|
| 47 |
+
|
| 48 |
+
To alleviate the unreliable gradients caused by the randomly initialized classifier being the only target, particularly early in training, we propose an additional loss term that enforces feature similarity between the target and the binary network. Specifically, since the feature extractor is fixed, $g_{\theta}(\cdot)$ is updated substantially in the early phase of joint training. As the binary classifier transfers knowledge from the quickly changing $g_{\theta}(\cdot)$, it may receive large gradients. To counteract these undesirably large gradients, we augment the objective with a loss term that bypasses the classifier entirely. We call it the *feature similarity loss*.
|
| 49 |
+
|
| 50 |
+
Specifically, we use the cosine distance between the feature vectors from the FP and binary feature extractors as the feature similarity loss, $\mathcal{L}_{FS}(v_1,v_2)=1-\frac{\langle v_1,v_2\rangle}{\|v_1\|_2\cdot\|v_2\|_2}$, chosen for its smoothness and bounded range, which prevents large gradients. The cosine distance (i.e., 1 minus the cosine similarity) is widely used in representation learning [6,20,22,52] (a discussion of other choices for $\mathcal{L}_{FS}$ is in Sec. 4.2).
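The feature similarity loss is just the cosine distance between two feature vectors; a minimal NumPy sketch:

```python
import numpy as np

def feature_similarity_loss(v1, v2, eps=1e-12):
    """L_FS(v1, v2) = 1 - <v1, v2> / (||v1||_2 * ||v2||_2), bounded in [0, 2]."""
    cos_sim = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + eps)
    return 1.0 - cos_sim
```

Identical features give a loss of 0 and orthogonal features give 1, so gradients stay bounded regardless of feature magnitude.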
|
| 51 |
+
|
| 52 |
+
Augmenting the cosine distance to the KL divergence loss, we can write our new optimization problem as:
|
| 53 |
+
|
| 54 |
+
<span id="page-3-4"></span>
|
| 55 |
+
$$\min_{\theta,\phi} \mathbb{E}_{x \sim \mathcal{D}}[(1 - \lambda)\mathcal{L}_{KL}(g_{\theta}(h_{\zeta}(x)), l_{\phi}(k_{\phi}(x))) + \lambda \mathcal{L}_{FS}(h_{\zeta}(x), k_{\phi}(x))],$$
|
| 56 |
+
(2)
|
| 57 |
+
|
| 58 |
+
where the binary network $b_{\phi}(\cdot)$ is decoupled into the binary feature extractor $k_{\phi}(\cdot)$ and the binary classifier $l_{\phi}(\cdot)$, $\lambda$ is a static balancing factor, and $\mathcal{L}_{FS}(\cdot,\cdot)$ is the feature similarity loss.
|
| 59 |
+
|
| 60 |
+
The new loss provides additional supervisory signals from the feature extractor of the FP network. Since the feature extractor of the FP network is pretrained and fixed, it provides stationary and stable targets as opposed to the randomly initialized classifier. Empirically, we observe the gradients of the binary classifier and feature extractor with and without $\mathcal{L}_{FS}$ in Fig. 3. Note that with only KL, the gradients of the binary classifier are extremely large: they start at roughly 20,000, drop to roughly 3,000 for some iterations, and finally decay to a small value at around iteration 9,000. In addition, there is a surge in gradient magnitude around iteration 7,500. The binary feature extractor shows a similar trend, with a sudden spike at around iteration 7,500. Both the very high gradient magnitudes at the start and the sudden spikes occurring later would harm training stability [10,55]. However, as shown in the figures, adding the proposed $\mathcal{L}_{FS}(\cdot,\cdot)$ significantly reduces the gradient magnitudes of the binary classifier and the feature extractor at early iterations, as well as the surges throughout training, which leads to better training efficacy and accuracy.
|
| 61 |
+
|
| 62 |
+
As $g_{\theta}$ is gradually updated, it provides more meaningful targets and $\mathcal{L}_{FS}$ becomes less important. Thus, we replace the static balancing factor $\lambda$ in Eq. 2 with a temporally *dynamic balancing* strategy based on smooth cosine annealing, similar to how [20] anneals the momentum value:
|
| 63 |
+
|
| 64 |
+
<span id="page-3-5"></span>
|
| 65 |
+
$$\lambda(t) = \lambda_{T_{max}} - (\lambda_{T_{max}} - \lambda_0) \cdot (\cos(\pi t/T_{max}) + 1)/2, \tag{3}$$
|
| 66 |
+
|
| 67 |
+
where $\lambda_0$ and $\lambda_{T_{max}}$ are the initial and final values of $\lambda(t)$, $T_{max}$ is the maximum training iteration, and $t$ is the current training iteration. Thus, $\lambda(t)$ starts at $\lambda_0$ and gradually decays to $\lambda_{T_{max}}$, emphasizing the cosine distance more at the beginning and less as learning progresses. A discussion of other choices of $\lambda(t)$ is in Sec. 4.2.
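The schedule of Eq. (3) can be sketched directly. A minimal NumPy sketch; the endpoint values `lam_0` and `lam_tmax` below are illustrative defaults, not the paper's hyperparameters:

```python
import numpy as np

def dynamic_lambda(t, t_max, lam_0=0.9, lam_tmax=0.1):
    """Cosine-annealed balancing factor of Eq. (3).

    Starts at lam_0 (t = 0) and smoothly decays to lam_tmax (t = t_max),
    weighting the feature similarity loss more early in training.
    """
    return lam_tmax - (lam_tmax - lam_0) * (np.cos(np.pi * t / t_max) + 1.0) / 2.0
```

At $t=0$ the cosine factor is 1, recovering $\lambda_0$; at $t=T_{max}$ it is 0, recovering $\lambda_{T_{max}}$.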
|
| 68 |
+
|
| 69 |
+
Finally, our optimization problem can be rewritten as:
|
| 70 |
+
|
| 71 |
+
<span id="page-3-6"></span>
|
| 72 |
+
$$\min_{\theta,\phi} \mathbb{E}_{x \sim \mathcal{D}}[(1 - \lambda(t))\mathcal{L}_{KL}(g_{\theta}(h_{\zeta}(x)), l_{\phi}(k_{\phi}(x))) + \lambda(t)\mathcal{L}_{FS}(h_{\zeta}(x), k_{\phi}(x))].$$
|
| 73 |
+
(4)
|
| 74 |
+
|
| 75 |
+
Multi-stage training [2,34,36] is known to be effective for training binary networks. It trains the network with only binarized activations in the first stage, then uses the trained weights of the partially binarized network as initial values for training the fully binarized network, *i.e.*, with binarized weights and activations, in the second stage. Unfortunately, we cannot use this strategy as-is: the binary network converges quickly thanks to the good initial values learned in the first stage [36], whereas the randomly initialized FP classifier $g_{\theta}$ does not. This discrepancy in convergence speeds between the binary network and the FP classifier harms training efficacy.
|
| 76 |
+
|
| 77 |
+
<span id="page-4-3"></span><span id="page-4-1"></span>Algorithm 1 Binary Unsupervised RepresentatioN learning (BURN)
|
| 78 |
+
|
| 79 |
+
```
 1: function BURN(D, t, ζ, hζ, gθ, kφ, lφ)
 2:     θ, φ ← Pretrain(D, t, ζ, hζ, gθ, kφ, lφ, STAGE1)
 3:     W ← {ζ} ∪ {θ, φ}
 4:     θ, φ ← Pretrain(D, t, W, hζ, gθ, kφ, lφ, STAGE2)
 5:     return kφ
 6: end function
 7: function Pretrain(D, t, W, hζ, gθ, kφ, lφ, F)
 8:     if F is STAGE1 then
 9:         kφ, lφ ← Binarize Activations
10:         hζ ← W                                    ▷ Load pretrained weights
11:     else
12:         kφ, lφ ← Binarize Activations and Weights
13:         hζ, gθ, kφ, lφ ← W                        ▷ Load pretrained weights
14:     end if
15:     x = RandomSelect(D)                           ▷ Sample x ∼ D
16:     v1, v2 = hζ(x), kφ(x)                         ▷ Feature vectors v1, v2
17:     p1, p2 = gθ(v1), lφ(v2)                       ▷ Softmax probabilities p1, p2
18:     Lζ,θ,φ = AugmentedLoss(v1, v2, p1, p2, t)
19:     θ ← Optimizer(∇θ Lζ,θ,φ, η)                   ▷ Update θ
20:     φ ← Optimizer(∇φ Lζ,θ,φ, η)                   ▷ Update φ
21:     return θ, φ
22: end function
23: function AugmentedLoss(v1, v2, p1, p2, t)
24:     LKL = DKL(p2 ‖ p1)                            ▷ KL divergence
25:     LFS = 1 − ⟨v1, v2⟩ / (‖v1‖2 · ‖v2‖2)          ▷ Cosine distance
26:     λ(t) = λTmax − (λTmax − λ0) · (cos(πt/Tmax) + 1)/2   ▷ Eq. 3
27:     L = (1 − λ(t)) · LKL + λ(t) · LFS             ▷ Eq. 4
28:     return L
29: end function
```
|
| 115 |
+
|
| 116 |
+
To apply multi-stage training to BURN, we modify it to give good initial points to the FP classifier as well as the binary network. Specifically, we initialize $g_{\theta}$ in the second stage with the weights of $g_{\theta}$ obtained in the first stage, similar to the binary network. As a result, $g_{\theta}$ starts from a good initial point and converges quickly enough to provide useful targets.
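This carry-over can be sketched as follows. The dict-based weight containers and parameter names are illustrative placeholders, not the paper's implementation:

```python
import numpy as np

def stage2_init(stage1_weights):
    """Modified multi-stage training: stage 2 starts from the stage-1 weights
    of BOTH the binary network (phi) and the FP classifier g_theta, so the
    classifier also begins from a good initial point."""
    return {name: w.copy() for name, w in stage1_weights.items()}

stage1_weights = {
    "binary_net_phi": np.array([0.3, -0.7]),      # learned in stage 1
    "fp_classifier_theta": np.array([1.2, 0.4]),  # also carried over, unlike vanilla multi-stage
}
stage2_weights = stage2_init(stage1_weights)
```

Copying (rather than aliasing) the stage-1 weights keeps the two stages' parameter states independent.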
|
| 117 |
+
|
| 118 |
+
We describe the full algorithm of BURN in Alg. [1](#page-4-1).
|
2110.11945/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
2110.11945/main_diagram/main_diagram.pdf
ADDED
|
Binary file (51.5 kB). View file
|
|
|
2110.11945/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,123 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Recently the step change brought by Transformers [\[34\]](#page-11-0) in natural language processing (NLP) [\[10,](#page-10-0) [4\]](#page-10-1) seems to have arrived in vision [\[11,](#page-10-2) [42,](#page-12-0) [48,](#page-12-1) [47\]](#page-12-2). Indeed, with less inductive bias in its architecture design than convolutional neural networks (CNNs), the pure Vision Transformer (ViT) [\[11\]](#page-10-2) and its variants have been shown to outperform CNNs on various vision tasks [\[8,](#page-10-3) [16\]](#page-10-4). However, there is a bottleneck in any Transformer-based model, namely its quadratic complexity in both computation and memory usage. This is intrinsic to the self-attention mechanism: given a sequence of tokens (*e.g.*, words or image patches) as input, the self-attention module iteratively learns the feature representations by relating one token to all other tokens. This results in a quadratic complexity $O(n^2)$ in the token sequence length $n$ in both computation (time) and memory (space), since an $n \times n$ attention matrix needs to be computed and stored during inference. This problem is particularly acute in vision: a 2D image after tokenization produces a far longer sequence than those in NLP even at a moderate spatial resolution. This quadratic complexity thus prevents a ViT model from modeling images at high spatial resolutions, which are often crucial for visual recognition tasks.
|
| 4 |
+
|
| 5 |
+
<sup>∗</sup>Li Zhang (lizhangfd@fudan.edu.cn) is the corresponding author with School of Data Science, Fudan University.
|
| 6 |
+
|
| 7 |
+
<span id="page-1-1"></span><span id="page-1-0"></span>
|
| 8 |
+
|
| 9 |
+
Figure 1: Top-1 accuracy on the ImageNet [9] validation set with respect to parameters, and the memory usage corresponding to the token sequence length in practice, compared to other methods. (a) Comparison with CNN models: RegNet [27], ResNet [14] and Transformer models: PVT [36], DeiT [32], ViT [11], T2T-ViT [42], Twins-SVT [6] and SAN10 [46]; (b) Comparison with Transformer [34], Linformer [35], Nyströmformer [40] and Performer [5]. The memory usage is measured with a batch size of 1 on a 16GB Tesla V100.
|
| 10 |
+
|
| 11 |
+
A natural solution is to reduce the complexity of self-attention computation via approximation. Indeed, there have been a number of attempts in NLP [35, 5, 19, 40]. For example, [35] takes a naive approach by shortening the length of Key and Value via learnable projections. Such a coarse approximation would inevitably cause performance degradation. In contrast, [5, 18] both leverage the kernel mechanism to approximate softmax normalization to linearize the computation in self-attention. [19] instead adopts a hashing strategy to selectively compute the most similar pairs. Recently, [40] uses Nyström matrix decomposition to reconstruct the full attention matrix with polynomial iteration for approximating the pseudo-inverse of the landmark matrix. Nonetheless, softmax normalization is simply duplicated across the matrix decomposition process, which is theoretically unsound. We empirically found that none of these methods are effective when applied to vision (see Sec. 4.2).
|
| 12 |
+
|
| 13 |
+
In this work, we identify that the limitations of existing efficient Transformers are caused by the use of *softmax self-attention*, and for the first time propose a softmax-free Transformer. More specifically, in all existing Transformers (with or without linearization), a softmax normalization is needed on top of the scaled dot-product between token feature vectors [34]. Keeping this softmax operation challenges any subsequent linearization efforts. To overcome this obstacle, we introduce a novel *softmax-free self-attention* mechanism, named SOFT, with linear complexity O(n) in both space and time. Specifically, SOFT uses a Gaussian kernel to define the similarity (self-attention) function without the need for subsequent softmax normalization. With this softmax-free attention matrix, we further introduce a novel low-rank matrix decomposition algorithm for approximation. The robustness of the approximation is theoretically guaranteed by employing a Newton-Raphson method for reliably computing the Moore-Penrose inverse of the matrix.
|
| 14 |
+
|
| 15 |
+
We make the following **contributions**. (I) We introduce a novel *softmax-free Transformer* with linear space and time complexity. (II) Our attention matrix approximation is achieved through a novel matrix decomposition algorithm with theoretical guarantee. (III) To evaluate our method for visual recognition tasks, we design a family of generic backbone architectures with varying capacities using SOFT as the core self-attention component. Extensive experiments show that with a linear complexity (Figure 1b), our SOFT models can take in as input much longer image token sequences. As a result, with the same model size, our SOFT outperforms the state-of-the-art CNNs and ViT variants on ImageNet [9] classification in the accuracy/complexity trade-off (Figure 1a).
|
| 16 |
+
|
| 17 |
+
# Method
|
| 18 |
+
|
| 19 |
+
A schematic illustration of our model is given in Figure 2. We first describe our attention module design. Given a sequence of n tokens $X \in \mathbb{R}^{n \times d}$ with each token represented by a d-dimensional feature vector, self-attention [34] aims to discover the correlations of all token pairs exhaustively.
|
| 20 |
+
|
| 21 |
+
<span id="page-2-1"></span>Formally, X is first linearly projected into three $d_e$ -dimensional spaces (query, key, and value) as:
|
| 22 |
+
|
| 23 |
+
$$Q = XW_q \in \mathbb{R}^{n \times d_e}, \quad K = XW_k \in \mathbb{R}^{n \times d_e}, \quad V = XW_v \in \mathbb{R}^{n \times d_e}, \tag{1}$$
|
| 24 |
+
|
| 25 |
+
where $W_q, W_k, W_v \in \mathbb{R}^{d \times d_e}$ are learnable matrices. Self-attention can be expressed in a generic formulation as:
|
| 26 |
+
|
| 27 |
+
$$y_{i,:} = \sum_{j=1}^{n} \alpha(Q_{i,:}, K_{j,:}) \odot V_{j,:}, \tag{2}$$
|
| 28 |
+
|
| 29 |
+
where $\odot$ is the Hadamard product, and $i, j \in \{1, \cdots, n\}$ index the tokens. The key self-attention function $\alpha: \mathbb{R}^{d_e} \times \mathbb{R}^{d_e} \to \mathbb{R}$ is composed of a nonlinear function $\beta: \mathbb{R} \to \mathbb{R}$ and a relation function $\gamma: \mathbb{R}^{d_e} \times \mathbb{R}^{d_e} \to \mathbb{R}$ . A dominant instantiation of $\alpha$ is the scaled dot-product based softmax self-attention [34], defined as
|
| 30 |
+
|
| 31 |
+
<span id="page-2-0"></span>
|
| 32 |
+
$$\beta(\cdot) = \operatorname{softmax}(\cdot), \quad \gamma(Q_{i,:}, K_{j,:}) = \frac{1}{\sqrt{d_e}} \cdot Q_{i,:}^{\top} K_{j,:}. \tag{3}$$
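For reference, Eqs. (1)-(3) amount to standard scaled dot-product attention, whose $n \times n$ attention matrix is the source of the quadratic cost. A minimal NumPy sketch with random illustrative weights:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def softmax_attention(X, Wq, Wk, Wv):
    """Vanilla softmax self-attention: O(n^2) time and space in sequence length n."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_e = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d_e))  # n x n attention matrix
    return attn @ V

rng = np.random.default_rng(0)
n, d, d_e = 6, 8, 4
X = rng.standard_normal((n, d))
Y = softmax_attention(X, *(rng.standard_normal((d, d_e)) for _ in range(3)))
```

Each output row is a convex combination of the value vectors, since each row of the attention matrix sums to 1.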
|
| 33 |
+
|
| 34 |
+
Whilst this softmax self-attention has been the *de facto* choice and seldom questioned, as discussed earlier it is not necessarily suited for linearization. To facilitate the design of linear self-attention, we introduce a softmax-free self-attention function with the dot-product replaced by a Gaussian kernel:
|
| 35 |
+
|
| 36 |
+
$$\beta'(\cdot) = \exp(\cdot), \quad \gamma'(Q_{i,:}, K_{j,:}) = -\frac{1}{2\sqrt{d_e}} \cdot \|Q_{i,:} - K_{j,:}\|_2^2. \tag{4}$$
|
| 37 |
+
|
| 38 |
+
<span id="page-3-2"></span><span id="page-3-0"></span>
|
| 39 |
+
|
| 40 |
+
Figure 2: Schematic illustration of the proposed softmax-free self-attention (SOFT) method. P.E.: Position embedding. Dash lines: linear projection. dh: the hidden dim of each attention head. ◦ denotes the matrix dot product.
|
| 41 |
+
|
| 42 |
+
To preserve the symmetric property of the attention matrix as in Eq [(3)](#page-2-0), we set the projection matrices $W_q$ and $W_k$ in Eq [(1)](#page-2-1) identical (*i.e.*, Q = K). Our self-attention matrix is then written as:
|
| 43 |
+
|
| 44 |
+
<span id="page-3-1"></span>
|
| 45 |
+
$$S_{i,j} = \exp\left(-\frac{1}{2\sqrt{d_e}} \cdot \|Q_{i,:} - K_{j,:}\|_2^2\right).$$
|
| 46 |
+
(5)
|
| 47 |
+
|
| 48 |
+
For notational simplicity, we define the matrix formulation as $S = \exp(Q \ominus K)$, where $\ominus$ denotes applying Eq (5) to all token pairs.
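Computing S from Eq. (5) can be sketched with a standard pairwise-distance expansion. A minimal NumPy sketch:

```python
import numpy as np

def soft_attention_matrix(Q):
    """Softmax-free attention S_ij = exp(-||Q_i - Q_j||^2 / (2 * sqrt(d_e))), K = Q."""
    d_e = Q.shape[-1]
    sq = np.sum(Q ** 2, axis=-1)
    # ||Q_i - Q_j||^2 = ||Q_i||^2 + ||Q_j||^2 - 2 <Q_i, Q_j>
    dist2 = sq[:, None] + sq[None, :] - 2.0 * Q @ Q.T
    dist2 = np.maximum(dist2, 0.0)  # guard against tiny negative rounding errors
    return np.exp(-dist2 / (2.0 * np.sqrt(d_e)))

rng = np.random.default_rng(0)
S = soft_attention_matrix(rng.standard_normal((5, 4)))
```

The result exhibits the three properties listed in the Remarks: symmetry, entries in (0, 1], and a diagonal of 1.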
|
| 49 |
+
|
| 50 |
+
Remarks Our self-attention matrix S has three important properties: (1) It is symmetric; (2) All elements lie in the unit range [0, 1]; (3) All diagonal elements hold the largest value 1 (self-reinforced), with the smallest ones (corresponding to the most dissimilar token pairs) being close to 0. As the Gaussian kernel is a positive definite kernel [\[12\]](#page-10-12), S is deemed a Gram matrix. However, we find that when using our kernel-based self-attention matrix S without linearization, the training of a transformer fails to converge. This might explain why softmax dot-product based self-attention [\[34\]](#page-11-0) is so popular in vanilla transformers.
|
| 51 |
+
|
| 52 |
+
To solve the convergence and quadratic complexity problems, we leverage matrix decomposition as a unified solution with low-rank regularization. In particular, we consider Nyström [\[39\]](#page-11-8), which is originally a low-rank matrix approximation algorithm. This enables our model's complexity to be reduced significantly without computing the full self-attention matrix S.
|
| 53 |
+
|
| 54 |
+
We make this choice because our S is positive semi-definite (*i.e.*, a Gram matrix) without follow-up normalization, which are necessary conditions for Nyström. In contrast, [\[40\]](#page-12-4) totally ignores these requirements, leading to a theoretical flaw in its approximation.
|
| 55 |
+
|
| 56 |
+
To define the Nyström method formally, let us express $S = \exp(Q \ominus K)$ as a block matrix:
|
| 57 |
+
|
| 58 |
+
$$S = \begin{bmatrix} A & B \\ B^{\top} & C \end{bmatrix} \in \mathbb{R}^{n \times n}, \tag{6}$$
|
| 59 |
+
|
| 60 |
+
where $A \in \mathbb{R}^{m \times m}$, $B \in \mathbb{R}^{m \times (n-m)}$, $C \in \mathbb{R}^{(n-m) \times (n-m)}$ with $m \ll n$. Through Nyström decomposition (see derivation details in Appendix [A.1](#page-12-6)), an approximation can be represented as:
|
| 61 |
+
|
| 62 |
+
$$\hat{S} = \begin{bmatrix} A \\ B^{\top} \end{bmatrix} A^{\dagger} \begin{bmatrix} A & B \end{bmatrix} = P^{\top} A^{\dagger} P, \quad \text{where} \quad P = \begin{bmatrix} A & B \end{bmatrix}, \tag{7}$$
|
| 63 |
+
|
| 64 |
+
and $A^{\dagger}$ is the Moore-Penrose (generalized) inverse of $A$.
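Eq. (7) can be sketched directly; here `np.linalg.pinv` stands in for $A^{\dagger}$ (the paper instead computes it with the iterative Newton-Raphson method described below). When S is exactly rank-m and the landmark block A is invertible, the reconstruction is exact:

```python
import numpy as np

def nystrom_approx(S, m):
    """Nystrom reconstruction S_hat = P^T A^dagger P from the first m landmarks."""
    A = S[:m, :m]  # m x m landmark block
    P = S[:m, :]   # P = [A  B]
    return P.T @ np.linalg.pinv(A) @ P

rng = np.random.default_rng(0)
G = rng.standard_normal((8, 3))
S = G @ G.T                     # symmetric PSD Gram matrix of rank 3
S_hat = nystrom_approx(S, 3)    # exact here, since rank(S) = m = 3
```

In general S has a higher rank, so Ŝ is a low-rank approximation rather than an exact reconstruction.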
|
| 65 |
+
|
| 66 |
+
Sampling In the standard Nyström formulation, A and B are sub-matrices of S obtained by randomly sampling $m$ tokens, denoted as $\widetilde{Q}$. We call the sampled $\widetilde{Q}$ bottleneck tokens. However,
|
| 67 |
+
|
| 68 |
+
Input: $Q \in \mathbb{R}^{n \times d_e}$, sampling function $f_s$
Sampling: $\widetilde{Q} \leftarrow f_s(Q)$;
$A \leftarrow \exp(\widetilde{Q} \ominus \widetilde{Q})$, $P \leftarrow \exp(\widetilde{Q} \ominus Q)$;
$\hat{S} \leftarrow P^{\top} \mathrm{NR}(A) P$;

<span id="page-4-0"></span>Output: $\hat{S}$
|
| 71 |
+
|
| 72 |
+
<span id="page-4-1"></span>Input: $A \in \mathbb{R}^{m \times m}$ and $\mathcal{T} \in \mathbb{Z}^+$
$\alpha = 2/\|A\|_1^2$; initialize $A_0 \leftarrow \alpha A$;
for $k$ from 1 to $\mathcal{T}$ do
&nbsp;&nbsp;$A_k \leftarrow 2A_{k-1} - A_{k-1} A A_{k-1}$
end
Output: $A_{\mathcal{T}}$
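The Newton-Raphson iteration above translates line-for-line into NumPy. A minimal sketch; the iteration count and test matrix are illustrative:

```python
import numpy as np

def newton_raphson_pinv(A, T=30):
    """Iterative Moore-Penrose inverse (Algorithm 2): A_k -> A^dagger as k grows.

    Uses the initialization A_0 = (2 / ||A||_1^2) * A; for a sufficiently small
    alpha the iteration converges (Theorem 1)."""
    alpha = 2.0 / (np.linalg.norm(A, 1) ** 2)  # ||A||_1 = max column abs sum
    A_k = alpha * A
    for _ in range(T):
        A_k = 2.0 * A_k - A_k @ A @ A_k
    return A_k

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
A_pinv = newton_raphson_pinv(A)
```

Because the error $I - A A_k$ squares at every step, convergence is quadratic once the initial error norm is below 1, which is why a modest number of matrix products suffices on GPU where SVD is slow.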
|
| 73 |
+
|
| 74 |
+
we find empirically that random sampling is considerably sensitive to the choice of m. We hence explore two additional options by leveraging the structural prior of visual data: (1) using one convolutional layer with kernel size k and stride k to learn $\widetilde{Q}$, and (2) using average pooling with kernel size k and stride k to generate $\widetilde{Q}$. For both, we need to reshape Q to the form of $\mathbb{R}^{H\times W\times d_e}$. Each sliding step of the convolution or pooling produces one token. We set k according to the length of Q such that m tokens are obtained. Our experiments show that a convolution layer performs better in accuracy; we therefore use a convolution layer by default.
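Option (2) can be sketched as a reshape-and-mean over the 2D token grid. A minimal NumPy sketch assuming k divides H and W; shapes are illustrative:

```python
import numpy as np

def avg_pool_sampling(Q, H, W, k):
    """Generate m = (H/k) * (W/k) bottleneck tokens Q_tilde by k x k average
    pooling over the n = H * W tokens reshaped to an H x W grid."""
    d_e = Q.shape[-1]
    grid = Q.reshape(H, W, d_e)
    # Split the grid into k x k windows and average within each window
    pooled = grid.reshape(H // k, k, W // k, k, d_e).mean(axis=(1, 3))
    return pooled.reshape(-1, d_e)

Q = np.arange(16.0).reshape(16, 1)        # n = 16 tokens on a 4 x 4 grid
Q_tilde = avg_pool_sampling(Q, H=4, W=4, k=2)  # m = 4 bottleneck tokens
```

The convolutional variant replaces the fixed window mean with learned weights but produces the same number of bottleneck tokens.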
|
| 75 |
+
|
| 76 |
+
As K is identical to Q, we have $\widetilde{K} = \widetilde{Q}$ . Given these m tokens, we then compute A and P as:
|
| 77 |
+
|
| 78 |
+
$$A = \exp(\widetilde{Q} \ominus \widetilde{K}), \quad P = \exp(\widetilde{Q} \ominus K).$$
|
| 79 |
+
(8)
|
| 80 |
+
|
| 81 |
+
We finally obtain the regularized self-attention matrix $\hat{S}$ of SOFT as:
|
| 82 |
+
|
| 83 |
+
$$\hat{S} = \exp\left(Q \ominus \widetilde{K}\right) \left(\exp\left(\widetilde{Q} \ominus \widetilde{K}\right)\right)^{\dagger} \exp\left(\widetilde{Q} \ominus K\right), \tag{9}$$
|
| 84 |
+
|
| 85 |
+
leading to Algorithm 1. The low-rank regularization is conducted as follows. For computing the attention score between any two tokens, we first correlate each of them with sampled tokens using our self-attention function (Eq (5)); With this correlation representation we then compute their similarity under the modulation of the generalized inverse of $\widetilde{Q}$ 's correlation matrix. Similar as standard Nyström, our design associates the input tokens w.r.t. a small space spanned by sampled tokens, giving a proper estimation of the original attention relationships subject to a low-rank constraint. The correctness of this method is proved in Appendix A.1.
|
| 86 |
+
|
| 87 |
+
**Moore-Penrose inverse** An accurate and commonly used way to calculate the Moore-Penrose inverse is Singular Value Decomposition (SVD). Given $A \in \mathbb{R}^{m \times m}$ and its SVD form $A = U \Sigma V^{\top}$, where U, V are $m \times m$ unitary matrices and $\Sigma$ is an $m \times m$ diagonal matrix, the Moore-Penrose inverse of A is $A^{\dagger} = V \Sigma^{\dagger} U^{\top}$. Nevertheless, SVD is not friendly to GPU-based training and hence harms model training efficiency. To solve this issue, we adopt the Newton-Raphson method, an iterative algorithm whose (k+1)-th iteration is formulated given the previous iteration as:
|
| 88 |
+
|
| 89 |
+
<span id="page-4-3"></span>
|
| 90 |
+
$$A_{k+1} = 2A_k - A_k A A_k, \quad A_0 = \alpha A. \tag{10}$$
|
| 92 |
+
|
| 93 |
+
<span id="page-4-2"></span>We now prove that $A_k$ converges to the Moore-Penrose inverse of $A$, if $\alpha$ is sufficiently small [3].
|
| 94 |
+
|
| 95 |
+
**Theorem 1** When $\alpha$ is sufficiently small, the iteration $A_{k+1} = 2A_k - A_k A A_k$ with $A_0 = \alpha A$ yields $A_k$ converging to $A^{\dagger}$.
|
| 96 |
+
|
| 97 |
+
Though $\alpha=2/\|A\|_1^2$ ensures good convergence behavior in Algorithm 2 (see more details in Appendix A.2.1), in practice we find that an alternative initialization gives more stable training and faster convergence. Specifically, with $\beta=0.5$, we find the smallest $n_i$ for which $\|I-A\frac{2\beta^{n_i}}{\|A\|_1^2}\|_1 \leq 1$ holds, and then initialize $\alpha$ as $\alpha=\frac{2\beta^{n_i}}{\|A\|_1^2}$.
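This scheme can be sketched in NumPy. The sketch assumes a symmetric kernel matrix $A$ (as produced when $\widetilde{Q}$ and $\widetilde{K}$ come from the same sampled tokens); the iteration cap `max_n` is an added safeguard, not part of the paper's recipe:

```python
import numpy as np

def newton_pinv(A, beta=0.5, n_iter=20, max_n=50):
    """Moore-Penrose inverse via the Newton-Raphson iteration of Eq. (10),
    with the alternative initialization alpha = 2*beta**n / ||A||_1**2."""
    m = A.shape[0]
    I = np.eye(m)
    base = 2.0 / np.linalg.norm(A, 1) ** 2
    # smallest n with ||I - (base * beta^n) A||_1 <= 1 (capped for safety)
    n = 0
    while np.linalg.norm(I - (base * beta ** n) * A, 1) > 1 and n < max_n:
        n += 1
    Ak = (base * beta ** n) * A          # A_0 = alpha A
    for _ in range(n_iter):              # A_{k+1} = 2 A_k - A_k A A_k
        Ak = 2 * Ak - Ak @ A @ Ak
    return Ak
```

For a well-conditioned matrix, a handful of iterations already matches the SVD-based pseudoinverse to machine precision, while using only matrix multiplications that parallelize well on GPU.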
|
| 98 |
+
|
| 99 |
+
The proof of Theorem 1 relies on the following proposition:
|
| 100 |
+
|
| 101 |
+
**Proposition 1** $||AA_kA - A||$ and $||A_k - A^{\dagger}||$ decrease to 0 monotonically, if $\alpha$ is sufficiently small.
|
| 102 |
+
|
| 103 |
+
The details of Proposition 1 are given in Appendix A.2.2. This ensures that our estimated inverse is sufficiently accurate for the matrix decomposition, so that our SOFT attention remains properly regularized.
|
| 104 |
+
|
| 105 |
+
<span id="page-5-1"></span><span id="page-5-0"></span>
|
| 106 |
+
|
| 107 |
+
| Methods | Complexity | Memory | Params | FLOPs | Throughput (img/s) | Top-1 % |
|
| 108 |
+
|--------------------|--------------------|---------|--------|-------|--------------------|---------|
|
| 109 |
+
| Transformer [34] | $\mathcal{O}(n^2)$ | 19.0GB† | 13M | 3.9G | 1073 / 3240 | 79.1 |
|
| 110 |
+
| Linformer [35] | $\mathcal{O}(n)$ | 11.7GB | 13M | 1.9G | 2767 / 3779 | 78.2 |
|
| 111 |
+
| Performer [5] | $\mathcal{O}(n)$ | 15.0GB | 13M | 2.2G | 2037 / 3657 | 76.1 |
|
| 112 |
+
| Nyströmformer [40] | $\mathcal{O}(n)$ | 17.2GB | 13M | 2.0G | 1891 / 3518 | 78.6 |
|
| 113 |
+
| SOFT | $\mathcal{O}(n)$ | 15.8GB | 13M | 1.9G | 1730 / 3436 | 79.3 |
|
| 114 |
+
|
| 115 |
+
Table 1: Comparison of different linear/efficient transformer variants on ImageNet [9], based on our multi-stage Tiny configuration (see Table 2). Memory usage is measured with a batch size of 1024, our standard training setting; Transformer is tested at a batch size of 256, the maximum possible with the GPU resources at our disposal. Throughput is reported as train throughput / inference throughput.
|
| 116 |
+
|
| 117 |
+
**Complexity** We summarize the space and time complexity of SOFT. The *time complexity* involves: (1) sampling: $\mathcal{O}(nd_e)$; (2) calculating the three decomposed matrices: $\mathcal{O}(nmd_e+mnd_e+m^2d_e)=\mathcal{O}(2mnd_e+m^2d_e)$; (3) the Moore-Penrose inverse: $\mathcal{O}(\mathcal{T}\times m^3)=\mathcal{O}(\mathcal{T}m^3)$, where $\mathcal{T}$ is the number of iteration steps; (4) all matrix multiplications: $\mathcal{O}(nm^2+mnd_e+mnd_e)=\mathcal{O}(nm^2+2mnd_e)$. The total time complexity is $\mathcal{O}((d_e+4md_e+m^2)n+\mathcal{T}m^3+d_em^2)$. The space complexity is determined by the four decomposed matrices: $\mathcal{O}(n\times m)+\mathcal{O}(m\times m)+\mathcal{O}(m\times n)+\mathcal{O}(n\times d_e)=\mathcal{O}((2m+d_e)n+m^2)$. As we keep m ( $m\ll n$ ) a fixed constant in our model, both time and space complexity are $\mathcal{O}(n)$, making SOFT a linear self-attention.
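The tally above can be checked with a toy operation count (illustrative only; constants and lower-order terms are ignored, and the default iteration count is an assumption):

```python
def soft_flops(n, m, d_e, T=20):
    """Leading-term operation count of SOFT attention: a sketch of
    the complexity analysis, summing the four cost sources."""
    sample = n * d_e                          # (1) token sampling
    decompose = 2 * m * n * d_e + m * m * d_e # (2) three decomposed matrices
    pinv = T * m ** 3                         # (3) Newton-Raphson pinv
    matmul = n * m * m + 2 * m * n * d_e      # (4) remaining matmuls
    return sample + decompose + pinv + matmul
```

Doubling $n$ essentially doubles the count: only the $\mathcal{T}m^3$ and $d_em^2$ terms, which do not depend on $n$, stay fixed.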
|
| 118 |
+
|
| 119 |
+
Figure 2 shows how our proposed *softmax-free self-attention* block (**SOFT block**) can be implemented in a neural network. We replace the self-attention block with our SOFT block in the traditional Transformer, that is, we stack a SOFT block with a feed forward residual block [11] to form a *softmax-free Transformer* layer (**SOFT layer**).
|
| 120 |
+
|
| 121 |
+
Focusing on general image recognition tasks, we integrate our SOFT layer into the recent pyramidal Transformer architecture [36] to form our final model **SOFT**. Further, several improvements are introduced in patch embedding (*i.e.*, tokenization). Specifically, unlike [36], which uses a combination of non-overlapping convolution and layer normalization [1], we adopt a stack of overlapping convolutions, batch normalization [15] and ReLU non-linearity. Concretely, the STEM is implemented by 3 units of 3x3 Conv $\rightarrow$ BN $\rightarrow$ ReLU, with strides of 2, 1 and 2, respectively. One such unit with stride 2 is then applied to each of the three subsequent down-sampling operations in the multi-stage architecture.
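A quick shape check of the STEM (a sketch; padding 1 for the 3x3 convolutions is an assumption):

```python
def conv_out(size, kernel=3, stride=1, padding=1):
    # standard convolution output-size formula
    return (size + 2 * padding - kernel) // stride + 1

def soft_stem_out(size):
    """Spatial size after the STEM: three 3x3 Conv-BN-ReLU units
    with strides 2, 1, 2 (4x total spatial downsampling)."""
    for s in (2, 1, 2):
        size = conv_out(size, stride=s)
    return size
```

So a 224x224 input is reduced 4x by the STEM, and the three subsequent stride-2 units bring the total reduction to 32x, the usual pyramid factor.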
|
| 122 |
+
|
| 123 |
+
The architecture hyper-parameters of SOFT are: $d$, the input channel dimension of a SOFT layer; $d_e$, the token embedding dimension in a SOFT block (we set $d_e = d$ in practice); $h$, the number of heads in a SOFT block; $d_h$, the channel dimension of each head, with $d_h = d_e/h$; $n$, the input token sequence length of a SOFT block; $m$, the bottleneck token sequence length of a SOFT block; $sp$, the sampling ratio between the input and bottleneck token sequence lengths; $e$, the expansion ratio of the 2-layer feed-forward block. In SOFT, for all stages we set $d_h = 32$, $e = 4$ and $m = 49$; $sp$ varies per stage according to the input token sequence length. Table 2 details the family of SOFT configurations with varying capacities (depth and width).
|
2110.14633/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-05-13T09:34:47.797Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36" etag="XULXahCBust6sjMRujKL" version="14.6.12" type="google"><diagram name="Copy of Page-1" id="XwT6Tp3QymiLzB1ODUhZ">7Vxbb6M4FP41kWYfOgLMLY9tOtldaXfU3Y4606cRARPYEhw5bpPMr18TbC52EkjKrW000ggfwMD5vnN8Lk5HYLLY/I6dZfA38mA00hRvMwK3I00b6yb9PxFsU4GhaKlgjkMvFam54D78BZlQYdLn0IOr0oUEoYiEy7LQRXEMXVKSORijdfkyH0Xlpy6dOXuikgvuXSeC0mXfQ48EqdTWrFz+BwznAX+yao7TMwuHX8ymWAWOh9aFZ4EvIzDBCJH0aLGZwCjRHdeL94/yr2Kt9FmEH/SZrs2++Zur9C2np9ySfQKGMWl2aoblixM9M32xbyVbrkCMnmMPJpOoI3CzDkIC75eOm5xdU8ZQWUAWETvto5gwDqgmHbPpISZwI+BR8TFqpmHKTIgWkOAtvY/TkoG05WRj4K9ziPUxkwVFeDlLHEareTZ1rjp6wLR3giaBpMnbECeE3jHJDcJ4/jrVYkQcEqKYDq/op1Fdh1E0QRHCu8mA70PTdal8RTB6goUznjWeKUqH6GhjqwSPZtgSPGNNRof7msbB0atpDmPvOnE3dBSjmApvPGcVZOBUchx6c9HjnKfDgo6MPQzmMgwjSoeX8jP3KY494Q6FMckhAtwQMgsC5SlW6Bm7kN1VdCsVE1FJeSLi4Dkk0kQ7HLPPPh9a4314MADGfXsw82IkFdzOQHqtkeiGMFHLRmJJ0P7lbCGmIlXCuNFFyXOg7e9dlEzXhjO/Q4OzhZBBl0OGTtck+yAmr/RgbwgT1bCGBcr4ICjgw4BC3eewQOGp5B5U9IZROaL9Ml7Ta/pv2iEqwByY/1LVg6gYHwYVHQzMgalyRs9RMT8MKoYyNA8mVwfaCsAkVGxtBkxzLyqT6XV/ARiw5IynW1DkqkBbEdhwQREjsP5RkRP6tkKw4aIihmD9oyIXB9oKwcTA2IC2px9Hq6cQrH9UDuf1TYdgw0VFDMH6R+VwZt90CDZcVMQQrH9U5NReAsN9xi8ZFoVapxs5q1XolsGAm5D8oMfKZ4ONHgtnbhM9KHyw5YOYfkp6k2Xw8WPxZH7fbsRvTAHlDUyt6QIr1cKuknnkOqa+tFB55DqwnxbdFGx1VfDQqlAsr1uw1QVXr5nCRAcKtpQxzrZw2TK5YHX4hbMX5JmKCY6+F1CEbotRavrSg/QNGq0ea3L9pVfTeYXlNN6aqGE5oKblpDFWb70OHQimYzRkOrYwUVOmYwumYKvHTccSejAsy2vXdOQi2VdI1gg/UeHImHzyf46sG3qwDMKRdUsPfpNMi67MpGw/5eWddeKKsQATOVE4T4KFCPrJDMkqH7pOdM3Ei9DzkofsDTvywKTLnQOq4MAtU4oasti2aAStNUW5q6sEkASQOBcICw1q0ZEUMFT1TjGUq2/fnNWThBKdKlyuDmmzAJ6zWqZb1fxwk+g3Q8mliqOBfnfKNqWUVK50spCipGsTtKXr07ba7A8Pzlzqzwkr+g8PeGmlOj6w+owPpNDabCg+kAKNlvdCaDV2DF0Yuq/MVM1Q+8LQJhh62k6sCoaW+JnT9X0x1K7JUG1Y5Yk3y1C58PyGfGgf5bO6BB0Pip96U+Wzcb3yWWP8lEvwF34e5WfdJX5YK/yb5afcjPgzXj4TiaTlFL4qLRWz0ADh8BclkhONuq0BAEG9mlwC0PfQREShuR/n1Khgt+0P1II3yH3DUP0BZ2
ilQ+g1KQVjodZ0tkPQhWJyxw4ByHViiaBdtliSgLLA8Svls5KAcJzoyegO4pAqI3E/RzxOi5mAUZO3vSYCIm8lup3NW9EAWupTCs2Tyj4lv7/VZguQ67yne/kOW5Od24Ze0zZSI/qwxtEYH5uohb9nPtbujOsXPjbBxwFUvk+Jggcb86q9Bg/CPgbNPpOOKoef7z2yWvvl9vel//Xu3nuYT//THibKo+//CK/knGyK8NrB3g5TqkjNjJKe9wzTo3ly9AnFUaJnH+GECPTONlrtWfrcULNdazHRNuzyNkud/yqruM1yDxXP6LXTYf63TlIO5H8wBnz5Hw==</diagram></mxfile>
|
2110.14633/main_diagram/main_diagram.pdf
ADDED
|
Binary file (23.4 kB). View file
|
|
|
2110.14633/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,89 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
A central topic in the analysis of deep neural networks is the investigation of the learned representations. A good understanding of the features learned on inner layers provides means to analyse and advance deep learning systems. A fundamental question in this line of inquiry is "when are two representations similar?" We contribute to this line of research by introducing a novel approach to study the above question. The key idea of our work is to ask the following, somewhat different question: "in what way are two representations similar, once we know that they are similar?" In this paper, we present a conceptual framework and a methodology that allow us to meaningfully pose and study this question. For this purpose, we distinguish two approaches to defining similarity: representational similarity and functional similarity.
|
| 4 |
+
|
| 5 |
+
**Representational similarity** Representational similarity notions define similarity via computing statistics on two different data embeddings that capture appropriate geometrical or statistical properties. There exist several statistical measures [Raghu et al., 2017, Morcos et al., 2018, Kornblith et al., 2019], each of which serves as a different notion of similarity. These measures have been used successfully to obtain valuable insights about the inner workings of deep neural networks [Nguyen et al., 2021, Mirzadeh et al., 2021, Neyshabur et al., 2020, Wu et al., 2020].
|
| 6 |
+
|
| 7 |
+
**Functional similarity** While representational similarity concerns the data embeddings, in contrast, functional similarity concerns the functions that produce and process these embeddings, i.e., parts of the function compositions that the neural networks realize. With these elements, we can ask new types of questions that we were not able to ask using solely the data embeddings. For example, we can ask the following: "can network B achieve its task using the representations of network A?"
|
| 8 |
+
|
| 9 |
+
Our paper focuses on this specific theme. In particular, we investigate this by taking the activations of network A at a given layer, transforming them with an affine map, and using the result as the input of the same layer in network B. In other words, we stitch together the two networks with an affine stitching layer. This technique first appeared in Lenc and Vedaldi [2019] to study the invariance and equivariance properties of convolutional networks. Evaluating the performance of this combined network provides an alternative viewpoint on similarity of representations, the viewpoint of functional similarity.
|
| 10 |
+
|
| 11 |
+
We investigate this novel perspective of reflecting on representational similarity through the lens of functional similarity. A brief outline of this work is the following:
|
| 12 |
+
|
| 13 |
+
- After a detailed introduction to model stitching (Section 3) and presenting two types of stitching methods (Section 4), we empirically demonstrate that trained convolutional networks with the same architecture but different initializations can be stitched together, even with a single, affine stitching layer in many cases without significant performance loss (Section 5). We will refer to this compatibility property as the 'matchability' of representations.
|
| 14 |
+
- Observing the matchability of representations as a quite robust property of common vision models, we have a wide range of examples in our hands when we know that representations are functionally similar. This permits us to study the relation between representational similarity measures and functional similarity. In our experiments, we show that the values of similarity indices are not necessarily indicative of task performance on a stitched network, and perhaps more surprisingly, high-performance stitchings can have differences in the values of similarity indices such as Centered Kernel Alignment (CKA) [Cortes et al., 2012, Kornblith et al., 2019]. This also reflects on phenomena experienced by the 'end-users' of these indices. For instance, in the context of continual learning Mirzadeh et al. [2021] observe that CKA remains constant, while the accuracy on previous tasks drops drastically (Section 6).
|
| 15 |
+
- Constraining the space of transformations in the stitching layer provides means to analyse how the information is organised in the representations. As an example of this probing methodology, we study bottleneck stitching layers, and show that when doing SVD in the representational space, the magnitudes of the principal components are not in direct correspondence with the information content of the latent directions (Section 7). As SVD is commonly used with embeddings, and there are representational similarity measures, for example SVCCA [Raghu et al., 2017], that use SVD as an ingredient, these results might be of interest to both theorists and practitioners.
|
| 16 |
+
- The transformation matrices of well-performing stitching layers capture how representations are functionally similar. This provides the opportunity to empirically investigate *"in what way are two representations similar, once we know that they are similar?"*. As a first step on this avenue, we present results regarding the uniqueness and sparsity properties of the stitching matrices (Section 8).
|
| 17 |
+
|
| 18 |
+
# Method
|
| 19 |
+
|
| 20 |
+
Let $f_{\theta}: \mathcal{X} \to \mathcal{Y}$ denote the input-output function of a feedforward artificial neural network with $m \in \mathbb{N}$ layers, thus, $f_{\theta} = f_m \circ \cdots \circ f_1$ , where $\mathcal{X}$ is the input space, $\mathcal{Y}$ is the output space, $f_i: \mathcal{A}_{i-1} \to \mathcal{A}_i$ are maps between activation spaces $\mathcal{A}_{i-1}$ and $\mathcal{A}_i$ for $i \in [m]$ with $\mathcal{A}_0 = \mathcal{X}$ , and $\theta$ are the parameters of the network. We consider models that are trained on a dataset $D = \{(x_i, y_i)\}_{i=1}^n$ in a supervised manner with inputs $x_i \in \mathcal{X}$ and labels $y_i \in \mathcal{Y}$ , where $n \in \mathbb{N}$ is the dataset size.
|
| 21 |
+
|
| 22 |
+
The central tool of our investigation involves splitting up the network at a given layer to a representation map and a task map. The representation map at layer L is a function $R_L: \mathcal{X} \to \mathcal{A}_L$ that maps each data point $x_i$ to its activation vector $a_{i,L}$ at layer L:
|
| 23 |
+
|
| 24 |
+
$$R_L(x_i) = f_L \circ \cdots \circ f_1(x_i) = a_{i,L},$$
|
| 25 |
+
|
| 26 |
+
where $A_L$ is the activation space at layer L. The activation vectors for the dataset D at layer L are simply denoted by $A_L = (a_{i,L})_{i=1}^n$ . The task map at layer L is a function $T_L : A_L \to \mathcal{Y}$ that maps an activation vector at layer L to the final output of the network:
|
| 27 |
+
|
| 28 |
+
$$T_L(a_{i,L}) = f_m \circ \cdots \circ f_{L+1}(a_{i,L}).$$
|
| 29 |
+
|
| 30 |
+
Thus, the input-output map of the network $f_{\theta}$ is simply the composition $f_{\theta} = T_L \circ R_L : \mathcal{X} \to \mathcal{Y}$ for all $L = 1, \dots, m$ . We will omit the index of the layer L when this does not hurt clarity.
|
| 31 |
+
|
| 32 |
+
**Definition 3.1** (Frankenstein network). Fix two (pretrained) neural networks $f_{\theta}$ and $f_{\phi}$ . We will call a transformation $S: \mathcal{A}_{\theta,L} \to \mathcal{A}_{\phi,M}$ between the two activation spaces at layers L and M a stitching layer. Given a stitching layer S, the Frankenstein or stitched network corresponding to S is the composition $F = F_S: \mathcal{X} \to \mathcal{Y}$ ,
|
| 33 |
+
|
| 34 |
+
$$F(x) = T_{\phi,M} \circ S \circ R_{\theta,L}(x),$$
|
| 35 |
+
|
| 36 |
+
where $T_{\phi,M}$ is the task map at layer M for the network $f_{\phi}$ and $R_{\theta,L}$ is the representation map at layer L for network $f_{\theta}$ .
|
| 37 |
+
|
| 38 |
+
The stitching layer thus realizes a correspondence between the representations of two neural networks: it transforms the activations of one network at a particular layer to be a suitable input to the corresponding layer in the other network. For better readability, we will often call $f_{\theta}$ as Model 1 and $f_{\phi}$ as Model 2, and index the corresponding data accordingly ( $f_1 = f_{\theta}$ , $f_2 = f_{\phi}$ , etc.). In the experiments of this paper, Model 1 and Model 2 have the same architecture and L = M.
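The composition in Definition 3.1 can be sketched directly; the toy linear "networks" below are hypothetical, only the structure $F = T_{\phi,M} \circ S \circ R_{\theta,L}$ follows the definition:

```python
import numpy as np

def stitch(repr_map_1, task_map_2, S):
    """Frankenstein network F = T_{phi,M} o S o R_{theta,L} (Def. 3.1)."""
    def F(x):
        a = repr_map_1(x)        # Model 1's representation map at layer L
        return task_map_2(S(a))  # transformed activations into Model 2's task map
    return F

# toy example: two linear "models" and a linear stitching layer S(a) = a @ M
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(3, 2))
M = np.eye(3)  # identity stitcher
F = stitch(lambda x: x @ W1, lambda a: a @ W2, lambda a: a @ M)
```

With the identity stitcher, the stitched network simply reproduces the composed model, which is the sanity check that a perfect match should pass.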
|
| 39 |
+
|
| 40 |
+

|
| 41 |
+
|
| 42 |
+
(a) Direct matching of representations.
|
| 43 |
+
|
| 44 |
+
(b) Task loss matching of representations.
|
| 45 |
+
|
| 46 |
+
Figure 1: Illustrating two types of matching approaches. (a) **Direct matching**: first align the representations using only the representations themselves, and use the forward path on the task map of Model 2 — with the transformed activations — only to evaluate the performance of the stitched model. (b) **Task loss matching**: the forward path from the input starts on Model 1, then through the stitching layer it continues on Model 2, then task loss gradients are backpropagated to update the weights of the stitching layer.
|
| 47 |
+
|
| 48 |
+
**Definition 3.2** (Frankenstein learning task). Given two pretrained neural networks $f_{\theta}$ and $f_{\phi}$ , a task given by a labeled dataset $D = \{(x_i, y_i) \in \mathcal{X} \times \mathcal{Y} : i = 1, ..., n\}$ and a loss function $\mathcal{L}: \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ , we will call the Frankenstein learning task at layer L the task of finding the stitching layer $S: \mathcal{A}_{\theta,L} \to \mathcal{A}_{\phi,L}$ which minimizes the total value of the loss $\mathcal{L}$ on the dataset D.
|
| 49 |
+
|
| 50 |
+
**Frankenstein is an evaluation framework** The Frankenstein learning task primarily provides an *evaluation framework* for matching representations, and it does not tie our hands on how the transformation function S is produced. As an evaluation metric, we can either use the mean loss value of the Frankenstein learning task on an evaluation set, or any other appropriate metric (e.g., accuracy in the case of classification tasks).
|
| 51 |
+
|
| 52 |
+
**Stitching convolutional representations** Our toolset is generic, but here we only consider convolutional architectures. Thus, the activation spaces have the structure of rank-3 tensors $\mathbb{R}^{w \times h \times c}$, where w and h are the spatial width and height and c is the number of feature maps. We only consider linear stitching layers of the form $M: \mathbb{R}^c \to \mathbb{R}^c$ (1 $\times$ 1 convolutional layers). When formulating least squares problems on $n$ activation tensors, we always mean the least squares problem in $\mathbb{R}^c$, which in practice means reshaping the $n \times w \times h \times c$ tensor to an $nwh \times c$ matrix.
|
| 53 |
+
|
| 54 |
+
Here we line up several possibilities to solve the Frankenstein learning task presented in the previous section. We distinguish two approaches: (1) **Direct matching**: utilize only the outputs of representation maps; (2) **Task loss matching**: utilize the loss of the Frankenstein learning task. Figure 1 illustrates the difference between the two approaches schematically.
|
| 55 |
+
|
| 56 |
+
From the exposition of the Frankenstein learning task, a training method naturally follows: utilize the loss function in the Frankenstein learning task itself, and train an appropriately parametrised stitching layer with backpropagation (leaving the other parts of the Frankenstein network fixed).
|
| 57 |
+
|
| 58 |
+
We use the outputs of Model 2 as a soft label [Hinton et al., 2015] and define the task loss with cross-entropy. Another meaningful option would be to use the original training objective (the one that was used for the training of Model 1 and Model 2, e.g., the one-hot labels in a classification task). Both are reasonable choices as tasks, one corresponding to the stitched network imitating Model 2, the other to solving the task itself. We have experimented with both losses and found very high correlation, with no difference regarding our main observations. We often present performance in terms of relative accuracy compared to Model 2, because it is easier to interpret than cross-entropy.
|
| 59 |
+
|
| 60 |
+
Due to their close connection to similarity indices of representations, we also consider methods that arrive at the transformation S by using only the representations themselves, namely, utilizing only the outputs $A, B \in \mathbb{R}^{n \times p}$ of the representation maps of Model 1 and Model 2, respectively.
|
| 61 |
+
|
| 62 |
+
**Least squares methods** Given $A, B \in \mathbb{R}^{n \times p}$ and a class of transformations $\mathcal{C} \subseteq \text{Hom}(\mathbb{R}^p, \mathbb{R}^p)$, we will consider least squares problems of the following kind: find an optimal $M_o \in \mathcal{C}$ such that
|
| 63 |
+
|
| 64 |
+
$$||AM_o - B||_F = \min_{M \in \mathcal{C}} ||AM - B||_F. \tag{1}$$
|
| 65 |
+
|
| 66 |
+
|
| 67 |
+
We consider three variants of this problem: 1. arbitrary linear maps: $\mathcal{C} = \operatorname{Hom}(\mathbb{R}^p, \mathbb{R}^p)$ , 2. orthogonal transformations: $\mathcal{C} = O(p)$ , 3. linear maps in $\operatorname{Hom}(\mathbb{R}^p, \mathbb{R}^p)$ of rank at most k: $\mathcal{C} = \Sigma_k$ .
|
| 68 |
+
|
| 69 |
+
In each case, there is a closed-form expression relying on singular value decomposition. For $\mathcal{C} = \text{Hom}(\mathbb{R}^p, \mathbb{R}^p)$, the minimal value of (1) is obtained through an orthogonal projection in the space of matrices [Penrose, 1956],
|
| 70 |
+
|
| 71 |
+
$$M_o = M_{LS} := A^{\dagger} B, \tag{2}$$
|
| 72 |
+
|
| 73 |
+
where $A^{\dagger}$ denotes the Moore-Penrose pseudoinverse of A. We will refer to this $M_{LS}$ as the *least squares matching*. For results of this type, see Section 5.
|
| 74 |
+
|
| 75 |
+
In the case $\mathcal{C} = O(p)$, problem (1) is also known as the *orthogonal Procrustes problem*; see e.g. Golub and Van Loan [2013]. If $A^TB = USV^T$ is a singular value decomposition, then the optimal orthogonal map is $M_o = UV^T$.
|
| 76 |
+
|
| 77 |
+
Finally, the case $\mathcal{C}=\Sigma_k$ is also known as *reduced rank regression* Izenman [1975]. In terms of the SVD of $AM_{LS}=USV^T$ , the optimal rank k map $M_o$ is given by $M_o=M_{LS}V_kV_k^T$ , where $V_k$ consists of the first k columns of V. For reduced rank matching results, see Section 7.
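All three closed-form matchings can be sketched together in NumPy (a sketch of Eq. (2), the Procrustes solution, and reduced rank regression; not the exact experimental code):

```python
import numpy as np

def least_squares_match(A, B):
    """Direct-matching stitchers for activations A, B (n x p): a sketch.

    Returns the unconstrained least squares matrix (Eq. 2), the
    orthogonal Procrustes solution, and a rank-k matching factory.
    """
    M_ls = np.linalg.pinv(A) @ B              # M_LS = A^+ B  (Eq. 2)
    U, _, Vt = np.linalg.svd(A.T @ B)         # A^T B = U S V^T
    M_orth = U @ Vt                           # optimal orthogonal map

    def M_rank(k):                            # reduced rank regression
        _, _, Vt2 = np.linalg.svd(A @ M_ls)   # A M_LS = U S V^T
        Vk = Vt2[:k].T                        # first k columns of V
        return M_ls @ Vk @ Vk.T

    return M_ls, M_orth, M_rank
```

When B was in fact produced by a linear map of A and A has full column rank, the least squares matching recovers that map exactly, which is a useful unit test for any implementation.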
|
| 78 |
+
|
| 79 |
+
**Sparse matchings** It is known [Wang et al., 2018] that representations are not fully local, that is, matching cannot be achieved by permutation matrices. High quality sparse matchings between neurons can be interpreted to imply "slightly distributed" representations [Li et al., 2016].
|
| 80 |
+
|
| 81 |
+
To achieve sparsity, a natural approach is to add the regularization term $||M||_1 = \sum |m_{ij}|$ to the loss function, where M is the transformation matrix. Then one can consider the L1 regularized versions of 1) task loss matching, and 2) least squares matching (Lasso regression, as in Li et al. [2016]):
|
| 82 |
+
|
| 83 |
+
$$\mathcal{L}(A, B) = ||AM - B||_F + \alpha \cdot ||M||_1, \tag{3}$$
|
| 84 |
+
|
| 85 |
+
where A, B denote the activations of the two models. See Section 8 for these experiments.
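Eq. (3) can be minimized, for instance, with proximal gradient descent (ISTA); the sketch below uses the squared Frobenius loss for tractability, which differs slightly from Eq. (3), and is an assumption rather than the paper's solver:

```python
import numpy as np

def lasso_match(A, B, alpha=0.1, lr=None, n_iter=500):
    """Sparse stitching matrix via L1-regularized least squares,
    solved with ISTA (proximal gradient descent): a sketch."""
    if lr is None:
        lr = 1.0 / (np.linalg.norm(A, 2) ** 2)  # 1/L, L = Lipschitz constant
    M = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(n_iter):
        G = A.T @ (A @ M - B)                   # gradient of 0.5 ||AM - B||_F^2
        M = M - lr * G                          # gradient step
        M = np.sign(M) * np.maximum(np.abs(M) - lr * alpha, 0.0)  # soft-threshold
    return M
```

With `alpha=0` the iteration reduces to plain gradient descent on the least squares objective, and a large `alpha` drives the matching matrix to all zeros, the two extremes of the sparsity trade-off.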
|
| 86 |
+
|
| 87 |
+
We now continue our exposition by presenting a series of experiments regarding matching neural representations. Our results in this section can be summarized in the following statement:
|
| 88 |
+
|
| 89 |
+
Neural representations arising on a given layer of convolutional networks that share the same architecture but differ in initialization can be matched with a single affine stitching layer, achieving close to original performance on the stitched network.
|
2111.12082/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2111.12082/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,67 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
![The trajectories of rPPG signals around t1, t2, and t3 share similar properties (e.g., trends with a rising edge first and a falling edge later, and relatively high magnitudes) induced by skin color changes. This inspires long-range spatio-temporal attention (e.g., the blue tube around t1 interacting with red tubes from intra- and inter-frames) according to their local temporal difference features for quasi-periodic rPPG enhancement. Here 'tube' indicates the same regions across short-time consecutive frames.](Figures/Figure1.pdf){#fig:Figure1}
|
| 4 |
+
|
| 5 |
+
Physiological signals such as heart rate (HR), respiration frequency (RF), and heart rate variability (HRV) are important vital signs to be measured in many circumstances, especially for healthcare or medical purposes. Traditionally, Electrocardiography (ECG) and Photoplethysmography (PPG) are the two most common ways of measuring heart activity and the corresponding physiological signals. However, both ECG and PPG sensors need to be attached to body parts, which may cause discomfort and is inconvenient for long-term monitoring. To counter this issue, remote photoplethysmography (rPPG) [@yu2021facial; @chen2018video; @liu2021camera] methods, which aim to measure heart activity remotely without any contact, have been developing fast in recent years.
|
| 6 |
+
|
| 7 |
+
In earlier studies of facial rPPG measurement, most methods analyze subtle color changes in facial regions of interest (ROI) with classical signal processing approaches [@verkruysse2008remote; @poh2010non; @poh2010advancements; @li2014remote; @tulyakov2016self]. Besides, there are a few color subspace transformation methods [@de2013robust; @wang2017algorithmic] which utilize all skin pixels for rPPG measurement. Based on prior knowledge from traditional methods, a few learning-based approaches [@hsu2017deep; @qiu2018evm; @niu2018synrhythm; @niu2019rhythmnet] are designed in non-end-to-end fashion. ROI-based preprocessed signal representations (e.g., time-frequency maps [@hsu2017deep] and spatio-temporal maps [@niu2018synrhythm; @niu2019rhythmnet]) are generated first, and then learnable models capture rPPG features from these maps. However, these methods require a strict preprocessing procedure and neglect global contextual clues outside the pre-defined ROIs. Meanwhile, more and more end-to-end deep learning based rPPG methods [@vspetlik2018visual; @chen2018deepphys; @yu2019remote1; @yu2019remote2; @liu2020multi] are developed, which treat facial video frames as input and predict rPPG and other physiological signals directly. However, pure end-to-end methods are easily influenced by complex scenarios (e.g., head movement and various illumination conditions), and rPPG-unrelated features cannot be ruled out during learning, resulting in large performance decreases [@yu2020autohr] on realistic datasets (e.g., VIPL-HR [@niu2019rhythmnet]).
|
| 8 |
+
|
| 9 |
+
Recently, due to its excellent long-range attentional modeling capacity for sequence-to-sequence problems, the transformer [@lin2021survey; @han2020survey] has been successfully applied in many artificial intelligence tasks such as natural language processing (NLP) [@vaswani2017attention], image [@dosovitskiy2020image] and video [@bertasius2021space] analysis. Similarly, rPPG measurement from facial videos can be treated as a video-sequence-to-signal-sequence problem, where long-range contextual clues should be exploited for semantic modeling. As shown in Fig. [1](#fig:Figure1){reference-type="ref" reference="fig:Figure1"}, rPPG clues from different skin regions and temporal locations (e.g., signal trajectories around t1, t2, and t3) share similar properties (e.g., trends with a rising edge first and a falling edge later, and relatively high magnitudes), which can be utilized for long-range feature modeling and enhancement. However, unlike most video tasks, which aim at representing large motions, facial rPPG measurement focuses on capturing subtle skin color changes, which makes global spatio-temporal perception challenging. Furthermore, video-based rPPG measurement is usually a long-time monitoring task, and it is challenging to design and train transformers with long video sequence inputs.
|
| 10 |
+
|
| 11 |
+
Motivated by the discussions above, we propose an end-to-end video transformer architecture, namely PhysFormer, for remote physiological measurement. On one hand, the cascaded temporal difference transformer blocks in PhysFormer benefit rPPG feature enhancement via global spatio-temporal attention based on fine-grained temporal skin color differences. On the other hand, to alleviate the interference-induced overfitting issue and complement the weak temporal supervision signals, elaborate supervision in the frequency domain is designed, which helps PhysFormer learn more intrinsic rPPG-aware features.
|
| 12 |
+
|
| 13 |
+
The contributions of this work are as follows:
|
| 14 |
+
|
| 15 |
+
- We propose PhysFormer, which mainly consists of a powerful video temporal difference transformer backbone. To the best of our knowledge, this is the first work to explore the long-range spatio-temporal relationship for reliable rPPG measurement.
|
| 16 |
+
|
| 17 |
+
- We propose an elaborate recipe to supervise PhysFormer with label distribution learning and curriculum learning guided dynamic loss in frequency domain to learn efficiently and alleviate overfitting.
|
| 18 |
+
|
| 19 |
+
- We conduct intra- and cross-dataset testing and show that the proposed PhysFormer achieves performance superior or on par with the state of the art without pretraining on large-scale datasets such as ImageNet-21K.
|
| 20 |
+
|
| 21 |
+
# Method
|
| 22 |
+
|
| 23 |
+
We will first introduce the architecture of PhysFormer in Sec. [3.1](#sec:PhysFormer){reference-type="ref" reference="sec:PhysFormer"}, then introduce label distribution learning for rPPG measurement in Sec. [3.2](#sec:distribution){reference-type="ref" reference="sec:distribution"}, and at last present the curriculum learning guided dynamic supervision in Sec. [3.3](#sec:dynamic){reference-type="ref" reference="sec:dynamic"}.
|
| 24 |
+
|
| 25 |
+
As illustrated in Fig. [2](#fig:PhysFormer){reference-type="ref" reference="fig:PhysFormer"}, PhysFormer consists of a shallow stem $\mathbf{E}_{\text{stem}}$, a tube tokenizer $\mathbf{E}_{\text{tube}}$, $N$ temporal difference transformer blocks $\mathbf{E}^{i}_{\text{trans}}$ ($i=1,...,N$) and an rPPG predictor head. Inspired by the study in [@xiao2021early], we adopt a shallow stem to extract coarse local spatio-temporal features, which benefits fast convergence and clearer subsequent global self-attention. Specifically, the stem is formed by three convolutional blocks with kernel sizes (1x5x5), (3x3x3) and (3x3x3), respectively. Each convolution operator is cascaded with batch normalization (BN), ReLU and MaxPool; the pooling layers only halve the spatial dimensions. Therefore, given an RGB facial video input $X\in \mathbb{R}^{3\times T\times H\times W}$, the stem output is $X_{\text{stem}}=\mathbf{E}_{\text{stem}}(X)$, where $X_{\text{stem}}\in \mathbb{R}^{D\times T\times H/8\times W/8}$, and $D$, $T$, $W$, $H$ indicate channel, sequence length, width and height, respectively. Then $X_{\text{stem}}$ is partitioned into spatio-temporal tube tokens $X_{\text{tube}}\in \mathbb{R}^{D\times T'\times H'\times W'}$ via the tube tokenizer $\mathbf{E}_{\text{tube}}$. Subsequently, the tube tokens are forwarded through $N$ temporal difference transformer blocks to obtain the global-local refined rPPG features $X_{\text{trans}}$, which have the same dimensions as $X_{\text{tube}}$. Finally, the rPPG predictor head temporally upsamples, spatially averages, and projects the features $X_{\text{trans}}$ to a 1D signal $Y\in \mathbb{R}^{T}$.
**Tube tokenization.** Here the coarse feature $X_{\text{stem}}$ is partitioned into non-overlapping tube tokens via $\mathbf{E}_{\text{tube}}(X_{\text{stem}})$, which aggregates neighboring spatio-temporal semantics and reduces the computational cost of the subsequent transformers. Specifically, with the targeted tube size $T_{s}\times H_{s}\times W_{s}$ (equal to the partition stride in the non-overlapping setting), the tube token map $X_{\text{tube}}\in \mathbb{R}^{D\times T'\times H'\times W'}$ has length, height and width $$\begin{equation}
T'=\left \lfloor \frac{T}{T_{s}} \right \rfloor, H'=\left \lfloor \frac{H/8}{H_{s}} \right \rfloor, W'=\left \lfloor \frac{W/8}{W_{s}} \right \rfloor.
\label{eq:token}
\end{equation}$$ Please note that there are no position embeddings after the tube tokenization, as the stem at the early stage already captures relative spatio-temporal positions.
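The token-grid computation above is plain floor division; a minimal sketch (the clip and tube sizes below are hypothetical examples):

```python
def tube_token_grid(t, h, w, ts, hs, ws):
    """Number of non-overlapping tube tokens along each axis.

    (t, h, w) are the stem-output sizes, i.e. h and w are already the
    video height/width divided by 8; (ts, hs, ws) is the tube size,
    which equals the partition stride in the non-overlapping setting.
    """
    return t // ts, h // hs, w // ws

# e.g. a 160-frame clip with 128x128 faces -> stem output 160x16x16;
# a 4x4x4 tube then yields 40x4x4 = 640 tube tokens
print(tube_token_grid(160, 16, 16, 4, 4, 4))  # (40, 4, 4)
```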
**Temporal difference multi-head self-attention.** In the self-attention mechanism [@vaswani2017attention; @dosovitskiy2020image], the relationship between tokens is modeled by the similarity between the projected query-key pairs, yielding the attention scores. Instead of point-wise linear projection, we utilize temporal difference convolution (TDC) [@yu2020autohr; @yu2021searching] for the query ($Q$) and key ($K$) projections, which captures fine-grained local temporal difference features for describing subtle color changes. TDC with learnable weights $w$ can be formulated as $$\begin{equation}
\footnotesize
\begin{split}
\mathrm{TDC}(x)
&=\underbrace{\sum_{p_n\in \mathcal{R}}w(p_n)\cdot x(p_0+p_n)}_{\text{vanilla 3D convolution}}+\theta\cdot \underbrace{\left(-x(p_0)\cdot\sum_{p_n\in \mathcal{R'}}w(p_n)\right)}_{\text{temporal difference term}}, \\
\label{eq:CDC-T}
\end{split}
\end{equation}$$ where $p_0$, $\mathcal{R}$ and $\mathcal{R'}$ denote the current spatio-temporal location, the sampled local (3x3x3) neighborhood and the sampled adjacent neighborhood, respectively. The query and key are then projected as $$\begin{equation}
Q = \mathrm{BN}(\mathrm{TDC}(X_{\text{tube}})), K= \mathrm{BN}(\mathrm{TDC}(X_{\text{tube}})).
\end{equation}$$ For the value ($V$) projection, point-wise linear projection without BN is utilized. Then $Q,K,V\in \mathbb{R}^{D\times T'\times H'\times W'}$ are flattened into sequences and split into $h$ heads ($D_h=D/h$ for each head). For the $i$-th head ($i\leq h$), the self-attention (SA) can be formulated as $$\begin{equation}
\mathrm{SA}_{i}=\mathrm{Softmax}(Q_{i}K^{T}_{i}/\tau)V_{i},
\end{equation}$$ where $\tau$ controls the sparsity. We find that the default setting $\tau=\sqrt{D_h}$ in [@vaswani2017attention; @dosovitskiy2020image] performs poorly for rPPG measurement. Considering the periodicity of rPPG features, we use a smaller $\tau$ to obtain sparser attention activations. The corresponding study can be found in Table [\[tab:ablation2\]](#tab:ablation2){reference-type="ref" reference="tab:ablation2"}. The output of TD-MHSA is the concatenation of the SA from all heads, followed by a linear projection $U\in \mathbb{R}^{D\times D}$ $$\begin{equation}
\text{TD-MHSA} = \mathrm{Concat}(\mathrm{SA}_{1}; \mathrm{SA}_{2};...; \mathrm{SA}_{h})U.
\end{equation}$$ As illustrated in Fig. [2](#fig:PhysFormer){reference-type="ref" reference="fig:PhysFormer"}, residual connection and layer normalization (LN) are applied after TD-MHSA.
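A minimal numpy sketch of one attention head with the temperature $\tau$ made explicit. The TDC projections are replaced by random matrices here, so this only illustrates the sparsifying effect of a smaller $\tau$, not the full TD-MHSA:

```python
import numpy as np

def self_attention(q, k, v, tau):
    """One attention head: Softmax(Q K^T / tau) V, q/k/v of shape (tokens, D_h)."""
    scores = q @ k.T / tau
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v, attn

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(6, 8)) for _ in range(3))
_, attn_default = self_attention(q, k, v, tau=np.sqrt(8))        # tau = sqrt(D_h)
_, attn_sharp = self_attention(q, k, v, tau=0.5 * np.sqrt(8))    # smaller tau
# a smaller temperature concentrates more mass on the top entry per row
print(attn_sharp.max(axis=-1).mean() > attn_default.max(axis=-1).mean())  # True
```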
**Spatio-temporal feed-forward.** The vanilla feed-forward network consists of two linear transformation layers, where the hidden dimension $D'$ between the two layers is expanded to learn a richer feature representation. In contrast, we introduce a depthwise 3D convolution (with BN and a nonlinear activation) between these two layers, at a slight extra computational cost but with remarkable performance improvement. The benefits are two-fold: 1) as a complement to TD-MHSA, ST-FF refines local inconsistencies and suppresses part of the noisy features; 2) richer locality provides TD-MHSA with sufficient relative position cues.
Similar to the facial age estimation task [@geng2013facial; @gao2018age], where faces at close ages look quite similar, facial rPPG signals with close HR values usually have similar periodicity. Inspired by this observation, instead of treating each facial video as an instance with one label (HR), we regard each facial video as an instance associated with a label distribution. The label distribution covers a certain number of class labels, representing the degree to which each label describes the instance. In this way, one facial video can contribute to both the targeted HR value and its adjacent HRs.
To exploit the similarity among HR classes during training, we model rPPG-based HR estimation as an $L$-class multi-label classification problem, where $L$=139 in our case (each integer HR value within \[42, 180\] bpm is treated as a class). A label distribution $\mathbf{p}= \left\{ p_1,p_2,...,p_L\right\}\in \mathbb{R}^L$ is assigned to each facial video $X$, where each entry $p_k$ is a real value in the range \[0,1\] such that $\sum_{k=1}^{L}p_k=1$. We use a Gaussian distribution, centred at the ground-truth HR label $Y_{\text{HR}}$ with standard deviation $\sigma$, to construct the corresponding label distribution $\mathbf{p}$: $$\begin{equation}
p_k=\frac{1}{\sqrt{2\pi}\sigma}\exp\left ( -\frac{(k-(Y_{\text{HR}}-41))^2}{2\sigma^2} \right ).
\end{equation}$$ The label distribution loss is formulated as $\mathcal{L}_{\text{LD}}=\mathrm{KL}(\mathbf{p}, \mathrm{Softmax}(\mathbf{\hat{p}}))$, where $\mathrm{KL}(\cdot,\cdot)$ denotes the Kullback-Leibler (KL) divergence [@gao2017deep], and $\mathbf{\hat{p}}$ is the power spectral density (PSD) of the predicted rPPG signal.
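A minimal numpy sketch of the label-distribution construction and the KL loss. The explicit renormalization of the discretized Gaussian is an implementation choice, not stated in the text, and $\sigma$ is a hyperparameter:

```python
import numpy as np

def hr_label_distribution(hr_gt, sigma=1.0, lo=42, hi=180):
    """Gaussian label distribution p over the L = 139 integer HR classes.

    Class k (1-indexed) corresponds to the HR value k + 41 bpm. The raw
    discretized Gaussian only sums approximately to 1, so we renormalize.
    """
    ks = np.arange(1, hi - lo + 2)                    # k = 1 .. 139
    p = np.exp(-((ks - (hr_gt - (lo - 1))) ** 2) / (2.0 * sigma ** 2))
    p /= np.sqrt(2.0 * np.pi) * sigma
    return p / p.sum()

def kl_divergence(p, q, eps=1e-12):
    """KL(p, q), used as the label distribution loss L_LD."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

p = hr_label_distribution(75.0)
print(p.shape[0], int(p.argmax()) + 42)  # 139 75
```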
Please note that previous work [@niu2017continuous] also considers distribution learning for HR estimation. However, it differs from ours in two respects: 1) the motivation in [@niu2017continuous] is to smooth temporal HR outliers caused by facial movements across continuous video clips, while our work is more generic, aiming at efficient feature learning across adjacent labels under limited-scale training data; 2) the technique in [@niu2017continuous] is applied as post-processing after HR estimation from handcrafted rPPG signals, whereas ours designs a principled supervision signal $\mathcal{L}_{\text{LD}}$ for PhysFormer.
Curriculum learning [@bengio2009curriculum], a major machine learning regime with an easy-to-hard philosophy, is utilized to train PhysFormer. In the rPPG measurement task, supervision signals from the temporal domain (e.g., mean square error loss [@chen2018deepphys], negative Pearson loss [@yu2019remote1; @yu2019remote2]) and the frequency domain (e.g., cross-entropy loss [@niu2020video; @yu2020autohr], signal-to-noise ratio loss [@vspetlik2018visual]) provide different extents of constraint for model learning. The former gives signal-trend-level constraints, which are straightforward and ease model convergence, but leave the model prone to overfitting afterwards. In contrast, the latter imposes strong frequency-domain constraints that force the model to learn periodic features within the target frequency bands, which is hard to converge well due to realistic rPPG-irrelevant noise. Inspired by curriculum learning, we propose dynamic supervision that gradually enlarges the frequency constraints, which alleviates the overfitting issue and progressively benefits intrinsic rPPG-aware feature learning. Specifically, an exponential increment strategy is adopted; comparisons with other dynamic strategies (e.g., linear increment) are shown in Table [\[tab:ablation3\]](#tab:ablation3){reference-type="ref" reference="tab:ablation3"}. The dynamic loss $\mathcal{L}_{\text{overall}}$ can be formulated as $$\begin{equation}
\begin{split}
\mathcal{L}_{\text{overall}}&=\underbrace{\alpha\cdot\mathcal{L}_{\text{time}}}_{\text{temporal}}+\underbrace{\beta\cdot(\mathcal{L}_{\text{CE}}+\mathcal{L}_{\text{LD}})}_{\text{frequency}},\\
\beta&=\beta_{0}\cdot(\eta^{({\text{Epoch}}_{\text{current}}-1)/{\text{Epoch}}_{\text{total}}}),
\end{split}
\end{equation}$$ where the hyperparameters $\alpha$, $\beta_{0}$ and $\eta$ are set to 0.1, 1.0 and 5.0, respectively. Negative Pearson loss [@yu2019remote1; @yu2019remote2] and frequency cross-entropy loss [@niu2020video; @yu2020autohr] are adopted as $\mathcal{L}_{\text{time}}$ and $\mathcal{L}_{\text{CE}}$, respectively. With the dynamic supervision, PhysFormer perceives the signal trend better at the beginning, and this warm-up then facilitates the gradually stronger frequency knowledge learning.
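The dynamic weighting can be sketched in a few lines (plain Python; the epoch count used below is a hypothetical example):

```python
def dynamic_beta(epoch, total_epochs, beta0=1.0, eta=5.0):
    """Frequency-loss weight: beta = beta0 * eta^((epoch - 1) / total_epochs)."""
    return beta0 * eta ** ((epoch - 1) / total_epochs)

def overall_loss(l_time, l_ce, l_ld, epoch, total_epochs, alpha=0.1):
    """L_overall = alpha * L_time + beta * (L_CE + L_LD)."""
    return alpha * l_time + dynamic_beta(epoch, total_epochs) * (l_ce + l_ld)

# the frequency constraint grows exponentially from beta0 towards beta0 * eta
print(dynamic_beta(1, 25))   # 1.0
print(round(dynamic_beta(25, 25), 3))
```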
2204.01172/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-11-13T18:52:40.210Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36" etag="4QRJKD0ajTtIoaenZ9mo" version="15.7.0" type="google"><diagram id="0yXu29Yji-UsEEvizw3T" name="Page-1">7Vtbd5s4EP41fkwOd5vHYMdNzyY9Oet0u3naIxvZqMGICvmWX78SSAYBtkkCTtqTPsRoJAYx8+mbGYn2zOFy+4WAOLjDPgx7huZve+aoZxi67hrsh0t2mcRxtEywIMgXg3LBBD1DIZTDVsiHiTKQYhxSFKvCGY4iOKOKDBCCN+qwOQ7Vp8ZgIZ6o5YLJDISwMuwH8mmQSQd2YfQNRItAPlnXRM8SyMFCkATAx5uCyLzumUOCMc2ultshDLnxpF2y+8YHevcTIzCiTW64SUbTh2S3nq+nxohcRZ7zc3JhmJmaNQhX4o3FbOlOmoDgVeRDrkXvmd4mQBROYjDjvRvmdCYL6DIU3T5Igv3Y6hTFrNeQULgtiMSUv0C8hJTs2BDRa+j97BaBn77QsCk4Qxg4KPihL+4Cwv2LveLcQuxCGOklBjM+uMGkNaS9tKq9nDPaS2/ZXAkl+Gm/FI29ZIhDTFJ95njsmY7DeuYoDBX52BqaXI4jOgZLFHIb3cBwDSmaAdEhKEi3ZVtMtGa9vdg3lqYrzjF0s+IdpwbNptYVmmu844RUvLriJufXCsuOiyQ10hUboDvxNu9kV4v01/aGt5Oeze7THgLI/hKYULAiINXKmJD9hdsZmyJMJXOM/eSSXXzlrQ17UcZmtnd3Nfkr08Iak+v79DqbIHvfbI7iiWVUMZdQFTogRIuIXc/YIyHDhMcdxxwfXomOJfJ9frvHJouewTRVpbF2jFFEU9vbHp8C07WiOBFIqWAwwhEswU+KmiCv3wHyDOsk8Cz3nMirizsvRF6/FnlCSxKDSMruCZsyQBHkqLsF0WLFg76hiWxlj6fiPU1h9iLyKhHScMgoaVxHYY7jGYWeIt29G3mZKoR0uxpZLBlGihCyuoKQdTq0wMi/4klgvgIL3shdx9c4GzpGfAJpq7GV6x102u7Ql3nnAasTGAKK1moaWmdDces956iD3jI0S9WQ4BWZQXFTMWss6zEGxxVRQBaQVhSlHt2/zuudbLcTodIFXaKKr1G8ok3XOffXLZgyvugspqjAaYhBpzZrPLpeyit+XzyJ2fWK9UkdE1xol1bfVtkga70RtLIaEUpleiIV4Pk8gZ2gzPmkkiNUYhrtcImta8cVdcwl/dazXbZyZMzOhaaW/qsSzj2gjCSiP4FyeFsprdI3bkxFcr19XCq6UDF/Pioa1OJAOgkTGuAFjkB4nUtL9JOPucU4Fp79CSndCQ/y4qWUmDblqAILTkM8e3oIUKQwXn2NDreI/ivZkV0/8utLW7RG20LXaFfgzdPkl3HPEXu6ZyJJ3VVRqNuNuK2qyNWOKzrAtm3hz22dJDN6u7u9Y303kJf+3ZdWrK4yhsOju0MfqbQaWJcqibmiWSiuXJEcnaW2kpH6vDS0p4k9NTwWmeEgTXRJXcr+bDlevp6W5FlGygrvT19WKcfTy8jqODXT9ffAW9fY+Rj5e/kIRTebRZSKIt05oahrkLSwXT04sBnAfsGSx5RomvAflloTxJPQV283N90Tbp7P14VAdQ00AvSggwjGwlMpaxlUAph9zv1lvcHBZhKAmF/OViTceQTMnjgVn8o01LRkHqL45qV0UklKRMVYyyEHEpvWPeiqHnSr27tGjQPtzhxofUaEc0UEt6ThtQGhrKfreFDdHF4QCOgnOZe/oXBL5UXfqrJzzdFNd+xct+Ha/vHfa7X8A8m
Uef4ZkuSNx4O/EfI6OXY2HAV3llHBXf+sWUELW8CHMsgYJyjl5D8PGl2Q0kBFRg0jnRcZg+6QEcEF+ERGc2SoJ866DBbvBo0WtkQ7DFbpEdFnnHob5NT0yKrWPl0h7tH/8fw9+Hs5AN/8/757Bph60yYfTcradbUMR5D7ane6clXLkdpCdH+Qh5kWRLk9+vm5Xs15XcXsh3MBUz3YMPvVHLTGyJbZkZHrviz57Y1s2uoXpvqgCuWWrMya+cfrWUmX/xcA8/p/</diagram></mxfile>
2204.01172/main_diagram/main_diagram.pdf
ADDED
Binary file (17.9 kB). View file
2204.01172/paper_text/intro_method.md
ADDED
@@ -0,0 +1,119 @@
# Introduction
Recent methods for few-shot language model tuning obtain impressive performance but require careful engineering of prompts and verbalizers to convert inputs to a cloze-format [@taylor1953cloze] that can be scored with pre-trained language models (PLMs) [@radford2018improving; @radfordlanguage; @brown2020language; @PET1; @PET2]. For example, as Figure [1](#fig:pet){reference-type="ref" reference="fig:pet"} shows, a sentiment classifier can be designed by inserting the input text $\bm{x}$ in a *prompt template* "$\bm{x}$ It was [\[MASK\]]{.smallcaps}" where *verbalizers* (e.g., 'great' and 'terrible') are substituted for the [\[MASK\]]{.smallcaps} to score target task labels ('positive' or 'negative'). In this paper, we show that such engineering is not needed for few-shot learning and instead can be replaced with simple methods for data-efficient fine-tuning with as few as 32 end-task examples.
<figure id="fig:pet" data-latex-placement="tp">
<embed src="figures/PET.pdf" style="width:45.0%" />
<figcaption><span style="color: black">Existing few-shot fine-tuning methods require manual engineering to reduce new tasks to masked language modeling. <span class="smallcaps">Perfect</span> does not rely on any handcrafting, removing both patterns and verbalizers (see Figure <a href="#fig:perfect" data-reference-type="ref" data-reference="fig:perfect">3</a>).</span></figcaption>
</figure>
More specifically, we propose [Perfect]{.smallcaps}, a Prompt-free and Efficient paRadigm for FEw-shot Cloze-based fine-Tuning. To remove handcrafted patterns, [Perfect]{.smallcaps} uses *task-specific adapter layers* [@houlsby2019parameter; @pfeiffer2020adapterhub] (§[3.1](#sec:pattern_free){reference-type="ref" reference="sec:pattern_free"}). Freezing the underlying PLM with millions or billions of parameters [@liu2019roberta; @raffel2020exploring], and only tuning adapters with very few new parameters saves on memory and storage costs (§[4.2](#sec:efficiency){reference-type="ref" reference="sec:efficiency"}), while allowing very sample-efficient tuning (§[4](#sec:experiments){reference-type="ref" reference="sec:experiments"}). It also stabilizes the training by increasing the worst-case performance and decreasing variance across the choice of examples in the few shot training sets (§[4.3](#sec:analysis){reference-type="ref" reference="sec:analysis"}).
To remove handcrafted verbalizers (with variable token lengths), we introduce a new *multi-token fixed-length classifier scheme* that learns task label embeddings which are independent from the language model vocabulary during fine-tuning (§[3.2](#sec:soft_verbalizers){reference-type="ref" reference="sec:soft_verbalizers"}). We show (§[4](#sec:experiments){reference-type="ref" reference="sec:experiments"}) that this approach is sample efficient and outperforms carefully engineered verbalizers from *random initialization* (§[4](#sec:experiments){reference-type="ref" reference="sec:experiments"}). It also allows us to avoid previously used expensive auto-regressive decoding schemes [@PET2], by leveraging prototypical networks [@snell2017prototypical] over multiple tokens. Overall, these changes enable up to 100x faster learning and inference (§[4.2](#sec:efficiency){reference-type="ref" reference="sec:efficiency"}).
[Perfect]{.smallcaps} has several advantages: It avoids engineering patterns and verbalizers for each new task, which can be cumbersome. Recent work has shown that even some intentionally irrelevant or misleading prompts can perform as well as more interpretable ones [@webson2021prompt]. Unlike the zero-shot or extreme few-shot case, where prompting might be essential, we argue in this paper that all you need is tens of training examples to avoid these challenges by adopting [Perfect]{.smallcaps} or a similar data-efficient learning method. Experiments on a wide variety of NLP tasks demonstrate that [Perfect]{.smallcaps} outperforms state-of-the-art prompt-based methods while being significantly more efficient in inference and training time, storage, and memory usage (§[4.2](#sec:efficiency){reference-type="ref" reference="sec:efficiency"}). To the best of our knowledge, we are the first to propose a few-shot learning method using the MLM objective in PLMs that provide state-of-the-art results while removing all per-task manual engineering.
# Method
We consider a general problem of fine-tuning language models in a few-shot setting, on a small training set with $K$ unique classes and $N$ examples per class, such that the total number of examples is $|\mathcal{D}|=N \times K$. Let $\mathcal{D} = \cup_{k=1}^K \mathcal{D}_k$ be the given training set, where $\mathcal{D}_k= \{(\bm{x^i_k}, y^i_k)\}_{i=1}^N$ shows the set of examples labeled with class $k$ and $y^i_k \in \mathcal{Y}$ is the corresponding label, where $|\mathcal{Y}|=K$. We additionally assume access to a development set with the same size as the training data. Note that larger validation sets can grant a substantial advantage [@perez2021true], and thus it is important to use a limited validation size to be in line with the goal of few-shot learning. Unless specified otherwise, in this work, we use 16 training examples ($N = 16$) and a validation set with 16 examples, for a total of 32-shot learning.
Recent work has shown that fine-tuning *all* parameters of PLMs with a large number of parameters on low-resource datasets can lead to a sub-optimal solution [@peters-2019-tune; @dodge2020fine]. As shown in Figure [2](#fig:our_method){reference-type="ref" reference="fig:our_method"}, @rebuffi2018efficient and @houlsby2019parameter suggest an efficient alternative, inserting small task-specific modules called *adapters* within the layers of a PLM. They then train only the newly added adapters and layer normalization, while fixing the remaining parameters of the PLM.
Each layer of a transformer model is composed of two primary modules: a) an attention block, and b) a feed-forward block, where both modules are followed by a skip connection. As depicted in Figure [2](#fig:our_method){reference-type="ref" reference="fig:our_method"}, adapters are normally inserted after each of these blocks before the skip connection.
Adapters are bottleneck architectures. By keeping input and output dimensions the same, they introduce no additional architectural changes. Each adapter, $A(.) \in \mathbb{R}^H$, consists of a down-projection, $D(.)\in\mathbb{R}^{H\times B}$, a non-linearity, such as GeLU [@hendrycks2016gaussian], and an up-projection $U(.)\in\mathbb{R}^{B\times H}$, where $H$ is the dimension of input hidden states $\bm{x}$, and $B$ is the bottleneck size. Formally defined as: $$\begin{align}
A(\bm{x})=U(\text{GeLU}(D(\bm{x}))) + \bm{x}, \label{eq:adapters}
\\[-4ex]\nonumber
\end{align}$$
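The adapter computation $A(\bm{x})=U(\text{GeLU}(D(\bm{x}))) + \bm{x}$ can be sketched in numpy as follows (biases and the tuned layer norms are omitted for brevity; the near-zero weight initialization is an illustrative choice showing that the adapter starts close to the identity):

```python
import numpy as np

def gelu(x):
    # tanh approximation of GeLU (Hendrycks & Gimpel)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def adapter(x, w_down, w_up):
    """A(x) = U(GeLU(D(x))) + x with bottleneck size B.

    x: (H,) hidden state; w_down: (H, B); w_up: (B, H).
    """
    return gelu(x @ w_down) @ w_up + x

H, B = 16, 4
rng = np.random.default_rng(0)
x = rng.normal(size=H)
out = adapter(x, rng.normal(size=(H, B)) * 0.01, rng.normal(size=(B, H)) * 0.01)
print(out.shape)  # (16,)
# near-zero init keeps the adapter close to the identity at the start
print(np.allclose(out, x, atol=1e-2))  # True
```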
<figure id="fig:our_method" data-latex-placement="tp">
<embed src="figures/automat_our_method.pdf" style="width:40.0%" />
<figcaption> Left: Adapter integration in a PLM. Right: An adapter architecture. Adapters are usually inserted after the feed-forward and self-attention modules. During training, we only optimize the green components.</figcaption>
</figure>
In standard fine-tuning with PLMs [@devlin-etal-2019-bert], first a special $\text{[CLS]}$ token is appended to the input $\bm{x}$, and then the PLM maps it to a sequence of hidden representations $\bm{h}=(\bm{h_1},\dots,\bm{h_S})$ with $\bm{h_i}\in \mathbb{R}^{H}$, where $H$ is the hidden dimension, and $S$ is the maximum sequence length. Then, a classifier, $\textit{softmax}(\bm{W^T} \bm{h}_{\text{[CLS]}})$, using the embedding of the classification token ($\bm{h}_{\text{[CLS]}}$), is trained end-to-end for each downstream task. The main drawback of this approach is the discrepancy between the pre-training and fine-tuning phases since PLMs have been trained to *predict mask tokens* in a masked language modeling task [@devlin-etal-2019-bert].
To address this discrepancy, *prompt-based fine-tuning* [@PET1; @PET2; @gao2020making] formulates tasks in a cloze-format [@taylor1953cloze]. This way, the model can predict targets with a *masked language modeling (MLM) objective*. For example, as shown in Figure [1](#fig:pet){reference-type="ref" reference="fig:pet"}, for a sentiment classification task, inputs are converted to: $$\begin{align}
\bm{x}_\text{prompt} ~=~ \text{[CLS]}~ \bm{x}~ .~ \underbrace{\text{It was}}_{\text{pattern}}~ \text{[MASK]}~.~\text{[SEP]} \nonumber
\end{align}$$ Then, the PLM determines which *verbalizer* (e.g., 'great' and 'terrible') is the most likely substitute for the mask in the $\bm{x}_\text{prompt}$. This subsequently determines the score of targets ('positive' or 'negative'). In detail:
Let $\mathcal{M}: \mathcal{Y} \to \mathcal{V}$ be a mapping from target labels to individual words in a PLM's vocabulary. We refer to this mapping as *verbalizers*. Then the input is converted to $\bm{x}_\text{prompt} = \mathcal{T}(\bm{x})$ by appending a *pattern* and a *mask token* to $\bm{x}$ so that it has the format of a masked language modeling input. Then, the classification task is converted to a MLM objective [@tam2021improving; @PET1], and the PLM computes the probability of the label $y$ as:
$$\begin{align}
p(y|\bm{x}) &= p(\text{[MASK]}= \mathcal{M}(y) |\bm{x}_\text{prompt}) \nonumber \\
&= \frac{\exp (\bm{W}_{\mathcal{M}(y)}^T \bm{h}_{\text{[MASK]}})}{\sum_{v'\in \mathcal{V}} \exp(\bm{W}^T_{v'}\bm{h}_\text{[MASK]})}, \label{eqn:pet_prob}
\end{align}$$ where $\bm{h}_\text{[MASK]}$ is the last hidden representation of the mask, and $\bm{W}_{v}$ shows the output embedding of the PLM for each verbalizer $v \in \mathcal{V}$. For many tasks, verbalizers have multiple tokens. @PET2 extended [\[eqn:pet_prob\]](#eqn:pet_prob){reference-type="eqref" reference="eqn:pet_prob"} to multiple mask tokens by adding the maximum number of mask tokens $M$ needed to express the outputs (verbalizers) for a task. In that case, @PET2 computes the probability of each class as the summation of the log probabilities of each token in the corresponding verbalizer, and then they add a hinge loss to ensure a margin between the correct verbalizer and the incorrect ones.
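The multi-mask scoring just described can be sketched as follows (numpy; the vocabulary size, hidden states, and verbalizer token ids are toy stand-ins, and the per-class score is the plain sum of per-token log-probabilities without the hinge term):

```python
import numpy as np

def log_softmax(z):
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

def class_score(w_out, h_masks, verbalizer_token_ids):
    """Sum of per-mask-token log-probabilities for one verbalizer.

    w_out: (V, H) output embeddings; h_masks: (M, H) mask hidden states;
    verbalizer_token_ids: token ids of this class's verbalizer (length <= M).
    """
    score = 0.0
    for i, tok in enumerate(verbalizer_token_ids):
        log_p = log_softmax(w_out @ h_masks[i])   # distribution over the vocab
        score += log_p[tok]
    return score

rng = np.random.default_rng(0)
V, H, M = 50, 8, 2
w_out = rng.normal(size=(V, H))
h_masks = rng.normal(size=(M, H))
# hypothetical verbalizers of different token lengths
scores = {label: class_score(w_out, h_masks, toks)
          for label, toks in {"positive": [3], "negative": [7, 11]}.items()}
print(max(scores, key=scores.get))
```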
During inference, the model needs to select which verbalizer to use in the given context. @PET2 predicts the verbalizer tokens in an autoregressive fashion. They first trim the number of mask tokens from $M$ to each candidate verbalizer's token length and compute the probability of each mask token. They then choose the predicted token with the highest probability and replace the corresponding mask token. Conditioning on this new token, the probabilities of the remaining mask positions are recomputed. They repeat this autoregressive decoding until all mask positions are filled. This inference strategy is very slow, as the number of forward passes increases with the number of classes and the number of verbalizer tokens.
This formulation obtained impressive few-shot performance with PLMs. However, the success of this approach heavily relies on engineering handcrafted *patterns* and *verbalizers*. Coming up with suitable verbalizers and patterns can be difficult [@mishra2021crosstask; @mishra2021reframing]. Additionally, the performance is sensitive to the wording of patterns [@zhao2021calibrate; @perez2021true; @PET1; @jiang2020can] or to the chosen verbalizers [@webson2021prompt].
<figure id="fig:perfect" data-latex-placement="!tp">
<embed src="figures/automat_method.pdf" style="width:45.0%" />
<figcaption>We remove handcrafted patterns and verbalizers. We replace patterns using task-specific adapters and design label embeddings for the classes. We only train the green blocks (the label embeddings, adapters, and layer norms).</figcaption>
</figure>
In addition, handcrafted verbalizers cause problems for efficient training: a) they require updating the PLM embedding layer, causing large memory overhead; b) fine-tuning PLMs also requires a very small learning rate (usually $10^{-5}$), which slows down tuning the parameters of the verbalizers; c) modeling verbalizers as one of the tokens of the PLM vocabulary (perhaps unintentionally) impacts the input representation during tuning; d) verbalizers have variable token lengths, complicating the implementation in a vectorized format, thereby making it challenging to efficiently fine-tune PLMs.
We propose [Perfect]{.smallcaps}, a *verbalizer and pattern free* few-shot learning method. We design [Perfect]{.smallcaps} to be close to the pre-training phase, similar to the PET family of models [@PET2; @gao2020making], while replacing handcrafted patterns and verbalizers with new components that are designed to describe the task and learn the labels. As shown in Figure [3](#fig:perfect){reference-type="ref" reference="fig:perfect"}, we first convert each input $\bm{x}_\text{input}$ to its masked language modeling (MLM) input containing $M$ mask tokens [\[MASK\]]{.smallcaps}[^1] with no added patterns, denoted as $\bm{x}_\text{masked} =\mathcal{T^{'}}(\bm{x}_\text{input})$.[^2] [Perfect]{.smallcaps} then trains a classifier per-token and optimizes the average multi-class hinge loss over each mask position.
Three main components play a role in the success of [Perfect]{.smallcaps}: a) a pattern-free task description, where we use task-specific adapters to efficiently tell the model about the given task, replacing previously manually engineered patterns (§[3.1](#sec:pattern_free){reference-type="ref" reference="sec:pattern_free"}), b) multi-token label-embedding as an efficient mechanism to learn the label representations, removing manually designed verbalizers (§[3.2](#sec:soft_verbalizers){reference-type="ref" reference="sec:soft_verbalizers"}). c) an efficient inference strategy building on top of the idea of prototypical networks [@snell2017prototypical] (§[3.4](#sec:perfect_eval){reference-type="ref" reference="sec:perfect_eval"}), which replaces prior iterative autoregressive decoding methods [@PET2].
As shown in Figure [3](#fig:perfect){reference-type="ref" reference="fig:perfect"}, we fix the underlying PLM model and only optimize the new parameters that we add (green boxes). This includes the task-specific adapters to adapt the representations for a given task and the multi-token label representations. We detail each of these components below.
We use task-specific adapter layers to provide the model with learned, implicit task descriptions. Adapters additionally bring multiple other benefits: a) fine-tuning all weights of PLMs with millions or billions of parameters is sample-inefficient, and can be unstable in low-resource settings [@dodge2020fine]; adapters allow sample-efficient fine-tuning, by keeping the underlying PLM fixed, b) adapters reduce the storage and memory footprints (§[4.2](#sec:efficiency){reference-type="ref" reference="sec:efficiency"}), c) they also increase stability and performance (§[4](#sec:experiments){reference-type="ref" reference="sec:experiments"}), making them an excellent choice for few-shot fine-tuning. To our knowledge, this is the first approach for using *task-specific adapters* to effectively and efficiently remove patterns in few-shot learning. Experimental results in §[4](#sec:experiments){reference-type="ref" reference="sec:experiments"} show its effectiveness compared to handcrafted patterns and soft prompts [@li2021prefix; @lester2021power].
We freeze the weights of the PLM's embedding layer and introduce a separate label embedding $\bm{L}\in\mathbb{R}^{K\times M \times H}$, a multi-token label representation where $M$ is the number of tokens representing each label, $K$ is the number of classes, and $H$ is the input hidden dimension. Using a fixed number of tokens $M$ for each label, versus the variable-length verbalizers used in prior work [@PET1; @PET2], substantially simplifies the implementation and accelerates training (§[4.2](#sec:efficiency){reference-type="ref" reference="sec:efficiency"}).
As shown in Figure [3](#fig:perfect){reference-type="ref" reference="fig:perfect"}, we optimize label embeddings so that the PLM predicts the correct label, and optimize adapters to adapt the PLM for the given task. For label embeddings, [Perfect]{.smallcaps} trains a classifier per token and optimizes the average multi-class hinge loss over all mask positions. Given $\bm{x}_\text{masked}$, let $\bm{h}_{\textsc{[MASK]}_i}$ be the embedding of its $i$-th mask token from the last layer of the PLM encoder. Additionally, let $f(.): \mathbb{R}^H \to \mathbb{R}^{K}$ be a per-token classifier that computes the predictions by multiplying the mask token embedding with its corresponding label embedding. Formally defined as: $$\begin{align}
\bm{t_i} = f(\bm{h}_{\textsc{[MASK]}_i}) = \bm{L_i}^T \bm{h}_{\textsc{[MASK]}_i},
\label{eqn:label-embeddings} \nonumber
\end{align}$$ where $\bm{L_i}\in\mathbb{R}^{K\times H}$ shows the label embedding for the $i$-th mask position. Then, for each mask position, we optimize a multi-class hinge loss between their scores $\bm{t_i}$ and labels. Formally defined as: $$\begin{align}
\mathcal{L}(\bm{x}, y, i) = \frac{\sum_{k=1, k\neq y}^K \max(0, m-\bm{t_{iy}}+\bm{t_{ik}}) }{K}, \nonumber
\end{align}$$ where $\bm{t_{ik}}$ shows the $k$-th element of $\bm{t_i}$, representing the score corresponding to class $k$, and $m$ is the margin, which we fix to the default value of $m=1$. Then, the final loss is computed by averaging the loss over all mask tokens and training samples: $$\begin{align}
\mathcal{L} = \frac{1}{M|\mathcal{D}|}\sum_{(\bm{x},y)\in\mathcal{D}}\sum_{i=1}^M\mathcal{L}(\bm{x}, y, i)
\\[-5ex]\nonumber
\label{eqn:total_loss}
\end{align}$$
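A minimal numpy version of the per-position hinge loss above (the sum over $k \neq y$ is divided by $K$, following the formula, with margin $m=1$):

```python
import numpy as np

def hinge_loss(t, y, m=1.0):
    """Multi-class hinge loss for one mask position.

    t: (K,) scores from the per-token classifier; y: gold class index.
    """
    K = len(t)
    margins = np.maximum(0.0, m - t[y] + t)
    margins[y] = 0.0                      # exclude the k == y term
    return margins.sum() / K

t = np.array([2.0, 0.5, 1.5])
# k=1: max(0, 1 - 2 + 0.5) = 0; k=2: max(0, 1 - 2 + 1.5) = 0.5; sum/3
print(round(hinge_loss(t, y=0), 4))  # 0.1667
```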
During evaluation, instead of relying on the prior iterative autoregressive decoding schemes [@PET2], we classify a query point by finding the nearest class prototype to the mask token embeddings: $$\begin{align}
y = \operatornamewithlimits{argmax}_{y \in \mathcal{Y}}\hspace{0.1em} \max_{i \in \{1, \dots, M\}} \exp\left(-d(\bm{h_{i}^q}, \bm{c_{iy}})\right),
\\[-5ex]\nonumber
\end{align}$$ where $d$ is the squared Euclidean distance,[^3] $\bm{h_{i}^q}$ indicates the embedding of the $i$-th mask position for the query sample $q$, and $\bm{c_{iy}} \in \mathbb{R}^H$ is the prototype representation of the $i$-th mask token with class label $y$, i.e., the mean embedding of the $i$-th mask position in all training samples with label $y$: $$\begin{align}
\bm{c_{iy}} = \frac{1}{|\mathcal{D}_y|} \sum_{b \hspace{0.01em} \in \mathcal{D}_y} \bm{h_i^b},
|
| 87 |
+
\label{eqn:centroids} %\nonumber
|
| 88 |
+
%\\[-5ex]\nonumber
|
| 89 |
+
\end{align}$$ where $\bm{h_i^b}$ denotes the embedding of the $i$-th mask position for training sample $b$, and $\mathcal{D}_y$ is the set of training instances with label $y$. This strategy closely follows prototypical networks [@snell2017prototypical], but applied across multiple tokens. We choose this form of inference because prototypical networks are known to be sample-efficient and robust [@snell2017prototypical], and because it substantially speeds up evaluation compared to prior methods (§[4.2](#sec:efficiency){reference-type="ref" reference="sec:efficiency"}).
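The prototype construction and nearest-prototype inference can be sketched as follows. This is an illustrative NumPy sketch under assumed shapes (N samples, M mask positions, D-dimensional embeddings), not the authors' implementation:

```python
import numpy as np

def build_prototypes(H_train, y_train, K):
    """c_{iy}: mean embedding of each mask position per class.

    H_train : (N, M, D) mask-token embeddings of the training samples
    y_train : (N,) labels
    returns : (M, K, D) prototypes
    """
    _, M, D = H_train.shape
    protos = np.zeros((M, K, D))
    for y in range(K):
        protos[:, y] = H_train[y_train == y].mean(axis=0)
    return protos

def classify(h_query, protos):
    """argmax_y max_i exp(-d(h_i^q, c_{iy})) with squared Euclidean d."""
    # (M, K) squared distances between each mask embedding and each prototype
    d = ((h_query[:, None, :] - protos) ** 2).sum(-1)
    scores = np.exp(-d).max(axis=0)    # best-matching mask position per class
    return int(scores.argmax())
```

Since $\exp(-d)$ is monotone decreasing in $d$, this is equivalent to picking the class whose prototype is nearest to some mask-token embedding.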
|
| 90 |
+
|
| 91 |
+
:::: table*
|
| 92 |
+
::: adjustbox
|
| 93 |
+
max width=1.0
|
| 94 |
+
|
| 95 |
+
**Method** **SST-2** **CR** **MR** **SST-5** **Subj** **TREC** **Avg**
|
| 96 |
+
---------------------------- ----------------------- ------------------- ------------------- ------------------- ----------------------- ------------------- -----------------------
|
| 97 |
+
|
| 98 |
+
[Finetune]{.smallcaps} 81.4/70.0/4.0 80.1/72.9/4.1 77.7/66.8/4.6 39.2/34.3/2.5 **90.2**/84.1/**1.8** 87.6/75.8/3.7 76.0/67.3/3.4
|
| 99 |
+
[PET]{.smallcaps}-Average 89.7/81.0/2.4 88.4/68.8/3.0 85.9/79.0/2.1 45.9/40.3/**2.4** 88.1/79.6/2.4 85.0/70.6/4.5 80.5/69.9/2.8
|
| 100 |
+
[PET]{.smallcaps}-Best 89.1/81.0/2.6 88.8/85.8/1.9 **86.4/82.0/1.6** **46.0**/41.2/2.4 88.7/**84.6/1.8** 85.8/70.6/4.4 80.8/74.2/2.4
|
| 101 |
+
@logan2021cutting 89.8/84.1/1.7 89.9/87.2/1.1 84.9/76.2/3.2 45.7**/41.6/2.3** 81.8/73.5/4.0 84.7/81.8/1.6 79.5/74.1/2.3
|
| 102 |
+
[Perfect]{.smallcaps}-rand 90.7**/88.2/1.2** 90.0/85.5/1.4 86.3/81.4/**1.6** 42.7/35.1/2.9 89.1/82.8/2.1 **90.6**/81.6/3.2 **81.6/75.8/2.1**
|
| 103 |
+
|
| 104 |
+
[Perfect]{.smallcaps}-init **90.9**/87.6/1.5 89.7/87.4/1.2 85.4/75.8/3.3 42.8/35.9/3.5 87.6/81.6/2.8 90.4/**86.6/1.8** 81.1/**75.8**/2.4
|
| 105 |
+
prompt+mte 70.6/56.0/8.3 71.0/55.8/8.2 66.6/49.6/7.3 32.2/26.5/3.2 82.7/69.6/3.9 79.6/66.8/6.5 67.1/54.0/6.2
|
| 106 |
+
bitfit+mte 89.5/81.7/3.0 **90.1/87.8/1.0** 85.6/80.5/1.9 42.3/36.8/3.3 89.1/82.4/2.4 90.4/85.0/1.4 81.2/75.7/2.2
|
| 107 |
+
**Method** **CB** **RTE** **QNLI** **MRPC** **QQP** **WiC** **Avg**
|
| 108 |
+
|
| 109 |
+
[Finetune]{.smallcaps} 72.9/67.9/**2.5** 56.8/50.2/**3.5** 62.7/51.4/7.0 **70.1/62.7/4.7** 65.0/59.8/3.6 52.4/46.1/3.7 63.3/56.4/4.2
|
| 110 |
+
[PET]{.smallcaps}-Average 86.9/73.2/5.1 60.1/49.5/4.7 66.5/55.7/6.2 62.1/38.2/6.8 63.4/44.7/7.9 51.0/46.1/2.6 65.0/51.2/5.6
|
| 111 |
+
[PET]{.smallcaps}-Best 90.0/78.6/3.9 62.3/51.3/4.5 70.5/57.9/6.4 63.4/49.3/6.5 70.7/55.2/5.8 51.6/47.2/2.3 68.1/56.6/4.9
|
| 112 |
+
@logan2021cutting 91.0/87.5/2.7 **64.4/58.5**/3.9 71.2/**66.5/2.6** 63.9/53.7/5.3 70.4/62.7/**3.4** 52.4**/48.4/1.8** 68.9/**62.9/3.3**
|
| 113 |
+
[Perfect]{.smallcaps}-rand **90.3**/**83.9**/3.5 60.4/53.1/4.7 **74.1**/60.3/4.6 67.8/54.7/5.7 **71.2**/64.2/3.5 **53.8**/47.0/3.0 **69.6**/60.5/4.2
|
| 114 |
+
|
| 115 |
+
[Perfect]{.smallcaps}-init 87.9/75.0/4.9 60.7/52.7/4.5 72.8/56.7/6.8 65.9/56.6/6.0 71.1/**65.6**/3.5 51.7/46.6/2.8 68.4/58.9/4.8
|
| 116 |
+
prompt+mte 73.0/62.5/6.1 56.9/50.7/4.1 55.4/50.2/4.6 60.0/51.5/5.8 54.3/46.2/5.6 51.3/46.7/2.8 58.5/51.3/4.8
|
| 117 |
+
bitfit+mte 89.6/82.1/4.3 61.3/53.8/5.2 70.6/51.9/5.9 68.5/57.4/5.1 69.4/63.0/3.9 52.9/47.8/2.7 68.7/59.3/4.5
|
| 118 |
+
:::
|
| 119 |
+
::::
|
2204.12516/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-11-16T03:25:53.493Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36" version="15.7.4" etag="qaQG5Roe5w1seGkPKR4j" type="google"><diagram id="VMuRJb3lgbbz9VN9oaX8">7V1Zc5tIEP41qkoerOKag8fYzvGQVKU2tbvJUwoLJLHBQotwbO+v30HMSMyBNILhkGzlITAMA3T3901P040n7s3908csWC+/pGGUTBwrfJq4txPHsZGNyX9Fy3PZgj3asMjikHbaN3yL/4too0VbH+Iw2nAd8zRN8njNN87S1Sqa5VxbkGXpI99tnib8VdfBgl7R2jd8mwVJJHX7Ow7zJX0KUOn9KYoXS3Zl26JH7gPWmQ6xWQZh+li5lvt+4t5kaZqXW/dPN1FSCI/JpRzoQ83R3Y1l0SrXOcEpT/gdJA/02eh95c/sYbP0YRVGRX974l4/LuM8+rYOZsXRR6Je0rbM7xN6eB4nyU2apNn2XHcOin+kfZNn6a+ocgRuf8UZ6SqvtJc/0k5vLMry6Kn24eydyIitRel9lGfPpAs9gdhVeQo1MwjpEI97pXmQqmZZVRj0qbFQQ1nsxt7LkmxQcapF6160aJEgWuBNQY/C9S5auFAUrqUrWtxetECSZBQS9qO7aZYv00W6CpL3+9ZrXtb7Pp/TdE0b/4ny/JlSefCQp7z8o6c4/17Z/kG2rSkCdPe2kIzFdp7Zzoo82vdtT8B2f1SP7U/b7rHzCtXRO/G3B8N3xaRAdmdJsNnEs7LxQ5yw2ytFUsiBZ+/0IZvRJkTnoSBbRFQdTJQams+iJMjj3/zwbbQIx6NFbSWicWrRHU6LSKK5m3T1++MffxpluzCI8HymZLsZju7mTNj0aoXwC36LiUP0LokXK9J2H4fh1oaKjh+C+zgp1PRXlIXBKuCVZVtmWNLFNseSyHYlllSRpGOAI/EZouskcGlAxZehgswjZXsqAXbwXOmwTuNVvqmM/LVo2JuGZ/OmAX0gaLccca/r3a1pqd9XeB8wIU98HXNWAf99KBz5rfVfbbZ6fUc64PXTVsTsMNlalP+Ted/7/DMrN9iY5G7ifRfR7pKErHqi45gPNutyKTSPnworlFyeeQRnShIIkX9nCYyLDWHY9nlFYdmJdFQYttpjmC0mLxfEYbBZ7u71ODFjY7Mog1+VG/zhZlF2OxxeC4Td/IwZ1ARLICad84rjYbFKV5GAIdoU0BlxRh4zyg5NlSrA7u3L0CzpuBaPMCDPkp0hTF6UjR5hHL6mDjgCMWOo4rCqAzG3l+lXW9WqaEeBrE8vD2LIlpfrnUFMlmuPENvDajcdnTyJ2X0hTAdUcFzz1qChmGbK5fmzN/psqNwuIjSNViyuwzvCCCLBUtqtWNjDS/w8ATdf4zcfC5a2yPYszmbl1td4At5P0PWVPUG3P4vGN+uiF7mUFf6M376tofXTAhF7RUNDZOzzsVOEsUTGqCsyRuft7+hQsfGI2wkqPg2CclRACKtjjPghSmagZwlqPxFuZx8gGtO07CiWk4zQBpmX/XPR7slaukvS2S9JRykh8DgvxnCLUXcvra2JHEbCs0gRRgqjefCwjWaV7ey1+dbt/xXls+XkYOCxe3ZwfHSYHUqLlNhBnspdYSBxaqmhmZNdBvE6GDW6L1MuhtM+nsZZQv8EaHuYo8CpZTtHaHC79zXKYiKoYil6K+HjKLj0gSLAcFSTq+ODqRB1x9ifesja/YQxtfHkwSk52d/9gGDGaOr7wjsdUxDz4CGItYdMXWDy48XHTIT8EYyQ5KZ3FTNxBg1L8stqW5ObuGX1IG66It7odsAu2kqUA4631+8kxTZdjfqyZxOCCIeeBDlyBDt3LjS0f
vWAQG+eMrGqK2ho5FUtiEzXtU9KUy6DO9bdaisBBHqkBnD8+Vu8ceXtS8taSo3IwupDGKqYkSiMPbnQiaYiAB58Ei01e6nZhEdr+E2Ws8B5QOa8su2gPoBCH6ytpZd15fOvGjxBzbo+1ZXHj3Nl661Rmvg4cg7TmKzohJVCKysaymIcXtFuQ4MRhunQXAaNnDVzziqviYfzz1RcNWDmotMkRtZEcw3fPPEmUyPgroXZLLnN4ZdOvuUdhK7YHwNwuL/ogYIDyXOKOIA3tSo/R7xX1dFT2Ui6RRczJ9k8IzHKPO4TF1aVE82nxXraA4dcPANusvg2g+W69uAZuqq4RVdusrZMq7JzLbXw2PwF+5OVM2r/x6wXXSf2nnydqe9g33VsjG3X8yDiA96uO7WQbUGEbRtDwBLhTiUfGwnD8BRnY+Eu9EKTTYhJo8LsYjzrUVvWMZ3rm5ZgsJ2ZjirOU0Sc5yNP12sSQjmSgu7wMX7k6iXIsuTWVnODKtxUSD97cWrwoDOcGuqSox5fnhqEkule1aCKFBXSX788NYhoUGSxdaaGEeUunVMBMAupVoMGwNHWvPEIjHs2WUrjKgFW6lF/eW5cj54cf3gtAmZECfzhioA9W1LA+AFmugrYc2S0eB2gpVGoFDq8dZiuA2YP/1oI3BrHQv1Dn4XAnjskji+thspTvIhiOBlk+jzDCjmOo0dVRKXSbhdObiO6By5PIqaLqLy6cM2lFVEBa7giKu9svlxkDHdNab0GrxzzdlXgKINPsBmMwdSv/BqGxEVMY83Uk7bcgTE0yx1nUxv4WjN0gs2LJThmbN7rp4JIuo5pm28fSXytCDJQEXS+5A+K0h9HAAPsrN6HXK9bQAwakj3TyhOgiscO+KkbIMdjL6LyBMIhK0+ARkpZ15UnogT6rDxhi+jxVJ6AGmH1IYxxZzaZzZkDKs5TRNXLtqEyoM6w8gRoFLNdTH5cvRUNZTHnVnkCRvS1rXOqPFFy1YDvy4FG2eIA+fpQrHjvMV8f6NTgDZqvXypNFl7/+foAj3rWMOx71Ih9FFnVl5avD1SfDR+PZRn2R8ZsWWeXr8/mitd8fbI2HC5fH6qCFC8zXx8iZzg1qGIlLzNfH4qfO+1TDXXf/H55+foSGnrM14eymE9Zt5p8saa5aq26Jw1zHKoLUFaNPJYXBHDQQEIfecG9/mGRrhMJazxIH05B7XcMIGy4GHERmPIvGXaZrh14jWeTSlWX+jiuv7/RS467/LGL/cctdkUO/rTpG3XoHx/MoAUOkvXUq1F0bwCQxfbbqx8JHw1BHWUW1d9y3Z1hMeVUPKN18gVsl43Um5+k4fF04d3oGQzGcOpomUwTDV3aN6O6eA3SDI8ur8Zj34wS+5/2DSgXWeLVTHz1SU5Q6fCrT+wRxu87afAF6m2uEvniiOoN6qsuNBeOOhQxaR97kD6lzUJC5l+Wkd0sLSoV90oiD7X8koZR0eN/</diagram></mxfile>
|
2204.12516/main_diagram/main_diagram.pdf
ADDED
|
Binary file (37.1 kB). View file
|
|
|
2204.12516/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,116 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Given an RGB or RGB-D image containing a set of object instances of known 3D shapes, 6D multi-object pose estimation is the task of detecting each object instance and estimating its 6D pose---position and orientation. Accurate poses are important for robotics tasks such as grasping and for augmented reality applications involving shape manipulation.
|
| 4 |
+
|
| 5 |
+
In the standard 6D multi-object pose setup, we are given a set of 3D models of known object instances. Given an RGB or RGB-D input image, the goal is to jointly detect object instances and estimate their 6D object pose. Early work solved this problem by first estimating correspondences between the 3D model and the image [@lowe1999object], producing a set of 2D-3D correspondences, which are then used to obtain 6D object pose using Perspective-n-Point (PnP) solvers [@horaud1989analytic; @epnp] or iterative algorithms like Levenberg-Marquardt.
|
| 6 |
+
|
| 7 |
+
While 2D-3D correspondences are sufficient to solve for 6D pose, they are difficult to obtain accurately in practice. In many applications, we wish to estimate the pose of poorly textured objects for which local feature matching is unreliable. Furthermore, problems such as heavy occlusion, object symmetry, and lighting variation can make detecting and matching local features nearly impossible. These problems make classical systems too brittle for many use cases that require a greater degree of robustness.
|
| 8 |
+
|
| 9 |
+
Recently, many of these issues have been partially addressed using deep learning. A simple approach is to train a network to directly regress 6D poses [@cosypose; @li2018deepim; @xiang2017posecnn]. Direct pose regression simply learns to map input to output and makes no use of the fact that the pixels are a perspective projection of a known 3D object. Although direct pose regression can be quite effective in practice, an intriguing question is whether there exist better deep learning methods that take advantage of projective geometry.
|
| 10 |
+
|
| 11 |
+
Many works on 6D pose have attempted to combine deep learning and projective geometry. One approach is to train a deep network to detect keypoints of a known 3D object [@pix2pose; @tekin2018real; @bb8; @pavlakos20176; @pvn3d; @peng2019pvnet], producing a set of 2D-3D correspondences which can serve as input to a Perspective-n-Point (PnP) solver. Another approach is to impose geometric knowledge in the form of implicit or declarative layers [@blindpnp; @bpnp]. These works showed that PnP could be implemented as a modular component in end-to-end differentiable architectures. However, both approaches are "one-shot" in the sense that correspondence is predicted once and then used to solve for pose through a PnP solver (differentiable or not); this makes the approaches sensitive to outliers and errors in the correspondences.
|
| 12 |
+
|
| 13 |
+
<figure id="fig:per_obj" data-latex-placement="t">
|
| 14 |
+
<img src="Figures/main.png" />
|
| 15 |
+
<figcaption>Given an image and collection of 3D models, our method outputs the position and orientation of each object instance.</figcaption>
|
| 16 |
+
</figure>
|
| 17 |
+
|
| 18 |
+
We propose a new approach to 6D object pose estimation. Our approach consists of an end-to-end differentiable architecture that makes use of geometric knowledge. The main novelty of our approach over prior work on 6D pose is the use of "coupled iterative refinement": unlike prior work which operates in a single shot setting, we iteratively refine pose and correspondence in a tightly coupled manner, allowing us to dynamically remove outliers to improve accuracy.
|
| 19 |
+
|
| 20 |
+
Our approach builds on top of the RAFT [@teed2020raft] architecture developed for optical flow (i.e. dense correspondence). The basic idea is to estimate flow between the input image and a set of rendered images of the known 3D object, generating 2D-3D correspondences that are used to solve for pose. Like RAFT, we use a GRU to perform recurrent iterative updates, but at each iteration we update not only flow but also object pose. The flow update and pose update are tightly coupled: the flow update is conditioned on the current pose, and the pose update is conditioned on the flow.
|
| 21 |
+
|
| 22 |
+
To perform the pose update, we introduce a novel differentiable layer we call "Bidirectional Depth-Augmented PnP (BD-PnP)". This layer is similar to a differentiable PnP solver in that it produces a Gauss-Newton update to object pose by minimizing reprojection error. However, it is novel in two aspects. First, it is bidirectional: it solves for a single pose update to simultaneously satisfy two sets of 2D-3D correspondences, one set defined on the input image, the other set defined on a rendered image. Second, our layer is "depth-augmented": the optimization objective also includes the reprojection error on inverse depth, which we show to be important for improving accuracy.
|
| 23 |
+
|
| 24 |
+
Our method achieves state-of-the-art accuracy on the YCB-V [@ycb], T-LESS[@tless] and Linemod (Occluded) [@linemod_occlusion] RGB-D multi-object BOP [@bopchallenge] pose benchmarks, significantly outperforming prior work. A variant of our method can handle RGB-only input, with performance on par with the current state-of-the-art.
|
| 25 |
+
|
| 26 |
+
# Method
|
| 27 |
+
|
| 28 |
+
Our method operates on a single input image and produces a set of object pose estimates (Fig. [1](#fig:per_obj){reference-type="ref" reference="fig:per_obj"}). For simplicity of exposition, we assume RGB-D input unless otherwise noted. Our method can be decomposed into 3 stages: (1) object detection, (2) pose initialization, and (3) pose refinement. The first two stages (object detection and pose initialization) follow the method proposed by CosyPose [@cosypose]. Our primary contribution concerns the pose refinement stage, where we seek to transform the initial coarse pose estimates into refined poses with subpixel reprojection error.
|
| 29 |
+
|
| 30 |
+
**Preliminaries** Given a textured 3D mesh of an object, we can render images and depth maps of the object from different viewpoints using PyTorch3D [@pytorch3d], with views parameterized by intrinsic and extrinsic parameters $$\begin{equation}
|
| 31 |
+
\mathbf{G}_i = \begin{pmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0} & 1 \end{pmatrix} \qquad
|
| 32 |
+
\mathbf{K}_i = \begin{pmatrix}
|
| 33 |
+
f_x & 0 & c_x \\
|
| 34 |
+
0 & f_y & c_y \\
|
| 35 |
+
0 & 0 & 1 \\
|
| 36 |
+
\end{pmatrix}.
|
| 37 |
+
\end{equation}$$ where $\mathbf{G}_{i}$ is the object pose in camera coordinates. Letting $\mathbf{G}_0$ be the pose for the image and $\{\mathbf{G}_1, ..., \mathbf{G}_{N}\}$ be the poses for a set of renders, we can define a function which maps points in a render to points in the image $$\begin{equation}
|
| 38 |
+
\mathbf{x}_{i \rightarrow 0}' = \Pi\left(\mathbf{G}_0 \mathbf{G}_i^{-1} \Pi^{-1}(\mathbf{x}_i) \right)
|
| 39 |
+
\label{eqn:forw}
|
| 40 |
+
\end{equation}$$ or from the image to a render $$\begin{equation}
|
| 41 |
+
\mathbf{x}_{0 \rightarrow i}' = \Pi\left(\mathbf{G}_i \mathbf{G}_0^{-1} \Pi^{-1}(\mathbf{x}_0) \right)
|
| 42 |
+
\label{eqn:back}
|
| 43 |
+
\end{equation}$$ We use depth-augmented pinhole projection functions $\Pi$ and $\Pi^{-1}$, which map not only a point's image coordinates but also its *inverse depth* between frames $$\begin{equation}
|
| 44 |
+
\Pi(\mathbf{X}) = \begin{bmatrix}
|
| 45 |
+
X/Z \\
|
| 46 |
+
Y/Z \\
|
| 47 |
+
1/Z
|
| 48 |
+
\end{bmatrix} \ \
|
| 49 |
+
\Pi^{-1}(\mathbf{x}) = \begin{bmatrix}
|
| 50 |
+
x/d \\
|
| 51 |
+
y/d \\
|
| 52 |
+
1/d
|
| 53 |
+
\end{bmatrix}
|
| 54 |
+
\ \
|
| 55 |
+
\mathbf{x} = \begin{bmatrix} x \\ y \\ d \end{bmatrix}
|
| 56 |
+
\end{equation}$$ where pixel coordinates are assumed to be normalized using the camera intrinsics.
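The depth-augmented projections and the point transfer of Eqn. 2 can be sketched directly in NumPy. Function and variable names here are ours, for illustration only:

```python
import numpy as np

def Pi(X):
    """Depth-augmented projection: (X, Y, Z) -> (x, y, d) with inverse depth d = 1/Z."""
    X, Y, Z = X
    return np.array([X / Z, Y / Z, 1.0 / Z])

def Pi_inv(x):
    """Inverse projection: (x, y, d) -> (X, Y, Z) = (x/d, y/d, 1/d)."""
    u, v, d = x
    return np.array([u / d, v / d, 1.0 / d])

def transfer(x_i, G_0, G_i):
    """Map a point (normalized coords + inverse depth) from render i to the image:
    x_{i->0} = Pi(G_0 G_i^{-1} Pi^{-1}(x_i)), with G_0, G_i as 4x4 rigid-body matrices."""
    X = np.append(Pi_inv(x_i), 1.0)            # homogeneous 3D point
    X0 = G_0 @ np.linalg.inv(G_i) @ X
    return Pi(X0[:3])
```

Round-tripping $\Pi(\Pi^{-1}(\mathbf{x})) = \mathbf{x}$, and identity poses leave a point unchanged.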
|
| 57 |
+
|
| 58 |
+
The goal is to solve for pose $\mathbf{G}_0$ such that Eqn [\[eqn:forw\]](#eqn:forw){reference-type="ref" reference="eqn:forw"} correctly maps points between the image and renders. We can return the object pose in world coordinates by inverting $\mathbf{G}_0$.
|
| 59 |
+
|
| 60 |
+
**Object Candidate Detection** Given an input image, we first apply Mask-RCNN [@maskrcnn] to generate a set of object detections and associated labels. We use the pretrained Mask-RCNN weights from CosyPose [@cosypose] which were trained on the BOP [@bopchallenge] object classes. We then use the detected bounding boxes to generate crops from the image, segmentation mask, and depth map (in the RGB-D setting). We resize crops to $320 \times 240$ and adjust intrinsics accordingly.
|
| 61 |
+
|
| 62 |
+
**Pose Initialization** Following detection, our system operates in parallel for each object candidate. Given an object, we start by generating an initial pose estimate $\mathbf{G}^{(0)}$.
|
| 63 |
+
|
| 64 |
+
We first compute a translation vector $\mathbf{t}_\text{bbox}$ which aligns the bounding box of the 3D model to the detected object mask, such that the diameter of the mesh aligns with the projected bounding box. We then render the 3D model using the estimated translation and concatenate the render with the image crop. This input is fed directly to a ResNet-based architecture which regresses a rotation and translation update ($\mathbf{R}$, $\Delta \mathbf{t}$), where the rotation is predicted using the continuous 6D parameterization proposed by Zhou et al. [@zhou2019continuity]. The initial pose estimate can be written as a $4\times4$ matrix $$\begin{equation}
|
| 65 |
+
\mathbf{G}_0^{(0)} = \begin{pmatrix} \mathbf{R} & \mathbf{t}_\text{bbox} + \Delta \mathbf{t} \\ \mathbf{0} & 1 \end{pmatrix}.
|
| 66 |
+
\end{equation}$$
|
| 67 |
+
|
| 68 |
+
We use the pretrained EfficientNet [@tan2019efficientnet] weights from CosyPose [@cosypose] for this pose initialization step.
|
| 69 |
+
|
| 70 |
+
**Feature Extraction and Correlation** Given our initial pose estimate, we render the object at that pose as well as at viewpoints centered around it, obtained by adding or subtracting $22.5^{\circ}$ from the pitch, yaw, or roll (7 rendered views in total). For each render, our network estimates bidirectional, dense correspondence between the render and the image crop. The object pose in each render is known; the pose of the object in the image crop needs to be estimated.
|
| 71 |
+
|
| 72 |
+
For all $N$ renders we extract dense $\frac{H}{4} \times \frac{W}{4}$ feature maps. We also apply the same feature extraction network to the image crop using shared weights.
|
| 73 |
+
|
| 74 |
+
We then build two correlation volumes for each image-render pair, one from the image to the render and another from the render to the image. The correlation volume is computed by taking the dot product between all pairs of feature vectors. Like RAFT [@teed2020raft], we pool the last two dimensions of each correlation volume to produce a set of 4-level correlation pyramids. These pyramids contain correlation features useful for matching.
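A minimal NumPy sketch of the all-pairs correlation volume and the RAFT-style pooled pyramid (shapes, names, and the pooling depth are our assumptions for illustration):

```python
import numpy as np

def correlation_volume(f1, f2):
    """All-pairs dot products between two feature maps.

    f1 : (H1, W1, C), f2 : (H2, W2, C)
    returns (H1, W1, H2, W2): corr[u, v, p, q] = <f1[u, v], f2[p, q]>
    """
    return np.einsum('uvc,pqc->uvpq', f1, f2)

def correlation_pyramid(corr, levels=4):
    """Average-pool the last two dimensions repeatedly, as in RAFT,
    to build a multi-scale correlation pyramid."""
    pyramid = [corr]
    for _ in range(levels - 1):
        H, W, h, w = pyramid[-1].shape
        pooled = pyramid[-1].reshape(H, W, h // 2, 2, w // 2, 2).mean(axis=(3, 5))
        pyramid.append(pooled)
    return pyramid
```

The first two dimensions (the source pixel) stay at full resolution; only the target dimensions are pooled, so each lookup can see matches at several scales.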
|
| 75 |
+
|
| 76 |
+
<figure id="fig:update_op" data-latex-placement="t">
|
| 77 |
+
<embed src="Figures/pose_update.pdf" />
|
| 78 |
+
<figcaption>The update operator. A GRU produces revisions <span class="math inline"><strong>r</strong></span> and confidence weights <span class="math inline"><strong>w</strong></span>. The revisions and confidence weights are used to solve for a pose update.</figcaption>
|
| 79 |
+
</figure>
|
| 80 |
+
|
| 81 |
+
We use a GRU-based update operator (Fig. [2](#fig:update_op){reference-type="ref" reference="fig:update_op"}) to produce a sequence of updates to our pose estimates. The GRU also has a hidden state which gets updated with each iteration.
|
| 82 |
+
|
| 83 |
+
Let $\mathbf{G}$ be the set of all poses, including both the renders and the image. The poses of the renders are fixed, while the first pose $\mathbf{G}_0$, the pose of the image, is a variable.
|
| 84 |
+
|
| 85 |
+
Using Eqn. [\[eqn:forw\]](#eqn:forw){reference-type="ref" reference="eqn:forw"}, we compute the dense correspondence field bidirectionally between the image and each render. We compute $\mathbf{x}_{i\rightarrow 0}$ using Eqn. [\[eqn:forw\]](#eqn:forw){reference-type="ref" reference="eqn:forw"} and $\mathbf{x}_{0\rightarrow i}$ using Eqn. [\[eqn:back\]](#eqn:back){reference-type="ref" reference="eqn:back"}. The correspondence field $\mathbf{x}_{i\rightarrow0} \in \mathbb{R}^{H \times W \times 3}$ tells us, for every pixel in render $i$, its estimated 2D location in the image. It is worth noting that the correspondence field is augmented with inverse depth, that is, $\mathbf{x}_{i\rightarrow0}$ contains not just 2D coordinates but also inverse depth.
|
| 86 |
+
|
| 87 |
+
**Correlation Lookup** We use $\mathbf{x}_{i\rightarrow 0}$ to index from the corresponding correlation pyramid using the lookup operator defined in RAFT [@teed2020raft]. The lookup operator constructs a local grid around each point with radius $r$ and uses the grid to index from each level in the correlation pyramid, producing a total of $L$ correlation features. The result of the lookup operation is a map of correlation features $\mathbf{s}_{i\rightarrow 0} \in \mathbb{R}^{H\times W \times L}$. Similarly, we use $\mathbf{x}_{0\rightarrow i}$ to produce the correlation features $\mathbf{s}_{0\rightarrow i}\in \mathbb{R}^{H\times W \times L}$.
|
| 88 |
+
|
| 89 |
+
**GRU Update** For each image-render pair, the correlation features $\mathbf{s}_{i\rightarrow 0}$ and the hidden state $\mathbf{h}_{i\rightarrow 0}$, together with additional context and depth features described in the appendix, are fed to a $3\times3$ convolutional GRU, which outputs (1) a new hidden state, (2) revisions $\mathbf{r}_{i\rightarrow 0} \in \mathbb{R}^{H \times W \times 3}$ to each of the dense correspondence fields, and (3) a dense map of confidence weights $\mathbf{w}_{i\rightarrow 0}$ for the predicted revisions. The revision $\mathbf{r}_{i\rightarrow 0}$ represents a new flow estimate in the form of a dense map of corrections to be applied to the correspondences produced by the current pose estimate. Note that $\mathbf{r}_{i\rightarrow 0}$ includes revisions not just for the 2D coordinates but also for the inverse depth. The depth revisions are necessary to compensate for the fact that the input sensor depth may be noisy and the corresponding point may be occluded.
|
| 90 |
+
|
| 91 |
+
We also apply the same GRU for the other direction of the image-render pair. That is, we use the correlation features $\mathbf{s}_{0\rightarrow i}$ to produce revisions $\mathbf{r}_{0\rightarrow i}$ and confidence map $\mathbf{w}_{0\rightarrow i}$. Note that the weights of the GRU are shared across all image-render pairs in both directions.
|
| 92 |
+
|
| 93 |
+
**Bidirectional Depth-Augmented PnP (BD-PnP)** The BD-PnP layer converts the predicted revisions $\mathbf{r}$ and confidences $\mathbf{w}$ to a camera pose update $\Delta \mathbf{G}_0$. We first use the revisions to update the correspondence fields $$\begin{equation}
|
| 94 |
+
\begin{split}
|
| 95 |
+
\mathbf{x}_{i\rightarrow 0}' = \mathbf{x}_{i\rightarrow 0} + \mathbf{r}_{i\rightarrow 0} \\
|
| 96 |
+
\mathbf{x}_{0\rightarrow i}' = \mathbf{x}_{0\rightarrow i} + \mathbf{r}_{0\rightarrow i}
|
| 97 |
+
\end{split}
|
| 98 |
+
\end{equation}$$ and define an objective function to minimize the distance between the reprojected coordinates and the revised correspondence $$\begin{equation}
|
| 99 |
+
\begin{split}
|
| 100 |
+
\label{eq:obj}
|
| 101 |
+
\mathbf{E}(\mathbf{G}_0) = \sum_{i=1}^N \left\| \mathbf{x}_{i\rightarrow 0}' - \Pi \left(\mathbf{G}_0\mathbf{G}_i^{-1}\Pi^{-1}(\mathbf{x}_i)\right) \right\|_{\Sigma_{i\rightarrow0}}^2 + \\
|
| 102 |
+
\sum_{i=1}^N \left\| \mathbf{x}_{0\rightarrow i}' - \Pi \left(\mathbf{G}_i\mathbf{G}_0^{-1}\Pi^{-1}(\mathbf{x}_0)\right) \right\|_{\Sigma_{0\rightarrow i}}^2
|
| 103 |
+
\end{split}
|
| 104 |
+
\end{equation}$$ where $\|\cdot\|_{\Sigma}$ is the Mahalanobis distance with $\Sigma_{i\rightarrow 0} = \operatorname{diag}(\mathbf{w}_{i \rightarrow 0})$. The objective in Eqn. [\[eq:obj\]](#eq:obj){reference-type="ref" reference="eq:obj"} states that we want a camera pose $\mathbf{G}_0$ such that the reprojected points match the revised correspondences $\mathbf{x}_{i\rightarrow 0}'$ and $\mathbf{x}_{0\rightarrow i}'$. This objective is similar to conventional PnP in that it optimizes reprojection error. But unlike conventional PnP, which optimizes a single set of 2D-3D correspondences, our objective is bidirectional: it optimizes two sets of 2D-3D correspondences, one defined on the render and the other defined on the input image. In addition, unlike conventional PnP, our objective also includes the reprojection error of inverse depth, which experiments show to be important for improving accuracy.
|
| 105 |
+
|
| 106 |
+
We linearize Eqn. [\[eq:obj\]](#eq:obj){reference-type="ref" reference="eq:obj"} around the current pose and perform a fixed number of Gauss-Newton updates (3 during training and 10 during inference). Each Gauss-Newton update produces a pose update $\delta \xi \in \mathfrak{se}(3)$ which is applied to the current pose estimate using retraction on the SE(3) manifold $$\begin{equation}
|
| 107 |
+
\mathbf{G}_0^{(t+1)} = \exp(\delta \xi) \cdot \mathbf{G}_0^{(t)}.
|
| 108 |
+
\end{equation}$$
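The retraction step can be sketched in NumPy. `se3_exp` below maps a twist $\delta\xi$ (ordered here as rotation then translation, an assumption) to a $4\times4$ transform via a truncated matrix-exponential power series, which is adequate for the small incremental updates Gauss-Newton produces; the names are ours, not the authors':

```python
import numpy as np

def se3_exp(xi, terms=20):
    """exp: se(3) -> SE(3). xi = (wx, wy, wz, vx, vy, vz) is a twist;
    the 4x4 matrix exponential is computed by a truncated power series."""
    wx, wy, wz, vx, vy, vz = xi
    A = np.array([[0.0, -wz,  wy, vx],
                  [wz,  0.0, -wx, vy],
                  [-wy, wx,  0.0, vz],
                  [0.0, 0.0, 0.0, 0.0]])
    G, term = np.eye(4), np.eye(4)
    for k in range(1, terms):
        term = term @ A / k     # A^k / k!
        G = G + term
    return G

def retract(delta_xi, G_old):
    """Apply the pose update on the left: G^(t+1) = exp(delta_xi) . G^(t)."""
    return se3_exp(delta_xi) @ G_old
```

A zero twist leaves the pose unchanged, and a pure-translation twist reduces to an exact translation since the twist matrix is then nilpotent.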
|
| 109 |
+
|
| 110 |
+
**Inner and Outer Update Loops** For a given set of renders, we run 40 iterations of the update operator. Upon completion, we use the refined pose estimate to re-render a new set of 7 viewpoints and repeat the process. As we show in our experiments, we can trade speed for accuracy by increasing the number of inner and outer iterations. []{#sec:loops label="sec:loops"}
|
| 111 |
+
|
| 112 |
+
To handle RGB input, we can use the current pose $\mathbf{G}_0^{(t)}$ to render the depth from the known 3D model, and proceed as if we have RGB-D input. However, this basic approach is not mathematically sound because the rendered depth is a function of the object pose but is treated as a constant in the optimization. On the other hand, a fully principled treatment is difficult to implement because it requires computing the Jacobian of the rendering function, as well as the derivatives of the Jacobian during backpropagation. As a middle ground, we use the rendered depth to linearize the optimization objective, and introduce depth as a variable in the optimization so that we jointly optimize pose and depth but discard the depth update (full details in the appendix). This revised approach gives better results.
|
| 113 |
+
|
| 114 |
+
At each training step, we randomly sample a visible object in the input image and randomly perturb the ground-truth rotation and translation to initialize the pose. Our model is trained to recover the ground-truth pose from this initialization. To save GPU memory, we use 10 inner loops and one outer loop during training, and render only one viewpoint at each training step.
|
| 115 |
+
|
| 116 |
+
**Supervision** We supervise on the predicted correspondence revisions and the updated pose estimates from all update iterations in the forward pass, with exponentially increasing loss weights similar to RAFT [@teed2020raft]. Specifically, we supervise the pose using the geodesic L1 distance between the estimated pose and the ground-truth pose. The flow is supervised using an L1 endpoint-error loss, as is standard for optical flow problems. All ground-truth poses in the BOP benchmark [@bopchallenge], which we use for experiments, have a set of discretized symmetries that are considered equivalent with regard to the MSSD and MSPD error metrics. To align the loss with these error metrics, we compute the loss under all discretized symmetries and backpropagate the minimum.
|
2205.13662/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-05-13T16:03:30.024Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.64 Safari/537.36" etag="rh9bWijxJP2_CSMtSuEY" version="18.0.3" type="google"><diagram id="f8dyB8DnOjgrg-9P_dfu" name="Page-2">7VpLc5swEP41HMMIvTkGJ2kPbaczOfTYwSAwU4wyGMdOf31FEGBetR07sZI2hwxaiZW0+33a1WILzZbbT7n/sPgqQ5FaEIRbC91YEGIK1f9S8FQJIOaVIM6TsBI5reA++S20EGjpOgnFqjOwkDItkoeuMJBZJoKiI/PzXG66wyKZdmd98GM9I2gF94GfisGwH0lYLCoph6yVfxZJvKhndqhb9Sz9erBWsVr4odzszIVuLTTLpSyqp+V2JtLSdrVdqgXdTfQ2C8tFVhzyArmR31bLn4/ucn4/u7qax1/y+ZXW8uina73ha73a4qk2gVKjrK0a3maRFOL+wQ/Kno3yt5ItimWqWo561LpEXojt5CKdZusKMkIuRZE/qSH6BeSQ6hUNFwdrt2xa40M9ZLFj91rma3fHjebWIupBG+UYAw0t5JllIX5pC8GBhWZGWaih9sUshExjmQsMYxk2jWV9C12cZcQ0lvUsdHmW0YGFhgbKwusyKVCtTGaia5RcrrNQlFMA1YpkVuh0hOvmTKYyV+1QRP46VfvxCj+PRfFd5Inag8hLgydZrDWsilz+arKGcoYoSdNaiQVRxAMRBM3InZ45J5iAxlEirJOUKTeprOh5KXs5NulPYDsu5F2XDhxKwNChtSwXqV8kj92FjnlZL+G7TNQWWjzhHp4Q6KpYyXUeCP3Wbp7TVzRF3VqRdtq0onqgjKKV6Ix5BmZjuxOwyt4KqwpMaAbp3d1xaIVDtIa+4NEoWmnAxTw6Cq2VM/cnpvtRjcxGNZmItMeiGgPHZhxzyjgBCCKGu3oBt13EACOUMMDrs9AkyLsjkFebwFuL3FrMs4gHLHJjsZtKPOCDClVFlwTTQJyCrp8mcaZkgcKjUJ1eGQITddu71h3LJAzTqQDakq6cPfXnIvX84Ff8LK+n01zdoSQbsAvY5ExBGPbgNYJuOIJup4fCswXherIpL197JzmYwzmidMTBRPAQf0QHY2SYg4d3GdOzrEjQ8SwrZO4cHJdlnS9ukXcVt+ArxS1ofNyqvWAo4F83UTMXn8i13Z2/7koQsnu3zIMvD7yrF+5TbABCkdlH8r+K0N59Vq3GRnAHWvyFCO1fb/foNQGgw+remQBKTgHoWCZweDZxzpwBHpo0TFx2L1Shgeeq0LDDKjRnw+OwlmrSgfnalcL9ePyfxI4lsQ5CHENOXATHkljHcamr+jHuLdSEE/jd1cabG/10DeAiiDe83Ii5TTCkDAPoAoBB70uoAioiDDscucx167j8EjpgBgjnimLQxU6PDoja1GG0GdKbxgQ+vFn9/Z3y4VA6mP5N6a90UAe2TTEk3KEAAkBfeGfcQweHKtLVXSUtzKtxoLEax27VdnZS1bbJrqdy6Q9XtaW0C4ER2L9p1Rbt+fjinebgJgmdSls/nIM5fysHq2b7G8GK8O0PLdHtHw==</diagram></mxfile>
|
2205.13662/main_diagram/main_diagram.pdf
ADDED
|
Binary file (12.7 kB). View file
|
|
|
2205.13662/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,142 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Preference learning [1] is a classical problem in machine learning, where one is interested in learning the order relations on a collection of data items. Preference learning algorithms [2–5] often assume that there is a latent utility function $f: \mathcal{X} \to \mathbb{R}$ dictating the outcome of preferences, where $\mathcal{X}$ denotes the domain of item covariates. Explicit feedback such as item ratings or rankings from recommender systems can be treated as noisy evaluations of $f$, whereas pairwise comparison data (also known as duelling data) arising from, e.g., sports match outcomes [6, 7] can be used to implicitly infer $f$, i.e. item $\mathbf{x}^{(\ell)}$ is preferred over (beats) item $\mathbf{x}^{(r)}$ when $f(\mathbf{x}^{(\ell)}) > f(\mathbf{x}^{(r)})$. As shown by Kahneman and Tversky [8], humans often struggle to evaluate absolute quantities when eliciting preferences, but are broadly capable of evaluating relative differences, a core observation often exploited in preference learning. Motivated by this, our work focuses on explaining preferences inferred from duelling data.
|
| 4 |
+
|
| 5 |
+
Explaining preference models is crucial when they are applied in areas such as recommendation systems [9], finance [10], and sports science [11], so that practitioners can trust, debug, and understand the value of their findings [12]. However, despite its importance, no prior work has studied this problem to the best of our knowledge. While one may suggest applying existing explainability tools such as LIME [13] or SHAP [14] to a learned utility function $f$, we reason that this approach only explains the utility, not the mechanism of eliciting preferences itself. We highlight the important differences between these two viewpoints in our numerical experiments. Moreover, the utility-based model places a strong *rankability* assumption on the underlying preferences: if we define $\mathbf{x}^{(\ell)} \preceq \mathbf{x}^{(r)} \iff f(\mathbf{x}^{(\ell)}) \leq f(\mathbf{x}^{(r)})$, then $\preceq$ is a total order on all the items. However, as Pahikkala et al. [15] and Chau et al. [16] have discussed, there are many departures from rankability in practice, e.g. we might easily see a preference of A over B, B over C, but C over A – conforming to the *rock-paper-scissors* relation. Such inconsistent preferences are under frequent study in social choice theory [17, 18], and are of wider interest in both healthcare [19] and retail [20], where data are both large and noisy.
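The rock-paper-scissors case can be made concrete: rankability by some utility is equivalent to the win relation being acyclic. Below is an illustrative sketch (the duel matrix and the `rankable` helper are ours, not part of the paper):

```python
import numpy as np

# Rock-paper-scissors duel matrix: P[i, j] = 1 if item i beats item j.
# No utility f with f(i) > f(j) <=> i beats j can reproduce this cycle.
P_rps = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [1, 0, 0]])

def rankable(P):
    """Check whether some total order explains every duel, i.e. whether
    the win relation is acyclic (admits a topological order)."""
    n = len(P)
    wins = P.astype(bool).copy()
    # Transitive closure via Floyd-Warshall on booleans
    for k in range(n):
        wins |= np.outer(wins[:, k], wins[k, :])
    return not wins.diagonal().any()   # a cycle puts an item above itself
```

An acyclic duel matrix, by contrast, is consistent with some utility function.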
|
| 6 |
+
|
| 7 |
+
<sup>∗</sup>Equal contribution, order decided by coinflip.

<sup>†</sup>Work primarily done at the University of Oxford and finished at Amazon.

To move beyond the rankability assumption, we will utilise the *Generalised Preferential Kernel* from [16] to model the underlying preferences, and develop PREF-SHAP, a novel Shapley value [21]-based explainability toolbox, to explain the inferred preferences. Our contributions can be summarised as follows:
- 1. We propose PREF-SHAP, a novel Shapley value-based explainability algorithm, to explain preferences based on duelling data.
- 2. We empirically demonstrate that PREF-SHAP gives more informative explanations compared to the naive approach of applying SHAP to the inferred utility function f.
- 3. We release a high-performance implementation of PREF-SHAP at [22].
We will first give a brief overview of preference learning and Shapley Additive Explanations (SHAP) [14], which are the two core concepts of our contribution, PREF-SHAP, described in Section 3.
**Notation** Scalars are denoted by lower case letters, while vectors and matrices are denoted by bold lower case and upper case letters, respectively. Random variables are denoted by upper case letters. $\mathcal{X} \subseteq \mathbb{R}^d$ denotes the item space with d features and $\mathcal{Y} = \{-1, 1\}$ is the binary preference outcome space<sup>3</sup>. We let $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be a kernel function and $\mathcal{H}_k$ the corresponding reproducing kernel Hilbert space (RKHS).
In this section, we introduce the two approaches to modelling preferences from duelling data, namely the *utility-based approach* and the more general approach of Chau et al. [16]. Formally, preference feedback is called *duelling* when a pair of items $(\mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}) \in \mathcal{X} \times \mathcal{X}$ is given to a user, and a binary outcome $y \in \mathcal{Y}$, telling us whether $\mathbf{x}^{(\ell)}$ or $\mathbf{x}^{(r)}$ won the duel, is observed. In general, we observe m binary preferences among n items, giving the data $D = (\mathbf{y}, \mathbf{X}^{(\ell)}, \mathbf{X}^{(r)}) = \left\{ (y_j, \mathbf{x}_j^{(\ell)}, \mathbf{x}_j^{(r)}) \right\}_{j=1}^m$.
We also use $\mathbf{X} \in \mathbb{R}^{n \times d}$ to denote the full item covariate matrix.
**Utility-based Preference model (UPM)** The following likelihood model is often used [2-5, 7] to model duelling feedback using a latent utility function f:
$$p\left(y \mid \mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}\right) = \sigma\left(y\left(f\left(\mathbf{x}^{(\ell)}\right) - f\left(\mathbf{x}^{(r)}\right)\right)\right),\tag{1}$$
where $\sigma$ is the logistic CDF, i.e. $\sigma(z) = (1 + \exp(-z))^{-1}$ . Maximum likelihood approaches are then deployed to learn the latent utility function f. Consequently, preferences between items can be inferred accordingly from $\mathbf{f} = \{f(\mathbf{x}_i)\}_{i=1}^n$ , i.e. $\mathbf{x}_i$ is on average preferred over $\mathbf{x}_j$ if $\mathbf{f}_i \geq \mathbf{f}_j$ .
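For illustration, the likelihood (1) can be maximised directly on toy duelling data. The minimal NumPy sketch below (hypothetical data, step size and iteration count; not the paper's pipeline) recovers item utilities by gradient ascent on the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy duelling data: 4 items with known "true" utilities; duel winners are
# drawn from the likelihood in Eq. (1).
true_f = np.array([2.0, 1.0, 0.0, -1.0])
n_items, m = 4, 2000
left = rng.integers(0, n_items, m)
right = rng.integers(0, n_items, m)
keep = left != right
left, right = left[keep], right[keep]
p_left_wins = 1.0 / (1.0 + np.exp(-(true_f[left] - true_f[right])))
y = np.where(rng.random(left.size) < p_left_wins, 1.0, -1.0)  # +1: left item wins

# Maximum likelihood for f by gradient ascent on sum_j log sigma(y_j (f_l - f_r)).
f = np.zeros(n_items)
for _ in range(500):
    margin = y * (f[left] - f[right])
    w = y / (1.0 + np.exp(margin))       # y * sigma(-margin): per-duel gradient weight
    grad = np.zeros(n_items)
    np.add.at(grad, left, w)
    np.add.at(grad, right, -w)
    f += 0.1 * grad / left.size
f -= f.mean()                            # utilities are only identified up to a shift
```

The centring in the last line reflects that the likelihood only depends on utility differences.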
Albeit elegant, this approach has several drawbacks for modelling preferences. As mentioned, deriving preferences from a single utility score per item, i.e. the vector $\mathbf{f}$, assumes that the items $\{\mathbf{x}_i\}_{i=1}^n$ are perfectly rankable: there is a total ordering on $\mathcal{X}$ with which the true preferences are consistent. This is a strong assumption that often does not hold in practice. For example, it is well studied in behavioural economics that cognitive biases often lead to inconsistent human preferences [8]. Moreover, the ranking community has challenged this assumption by devising rankability metrics [23, 24] to test it in practice.
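A minimal self-contained check makes the failure of rankability concrete: for cyclic preferences, no total order on the items (equivalently, no assignment of scalar utilities) is consistent with the observed duels.

```python
from itertools import permutations

# Cyclic "rock-paper-scissors" preferences among three items.
beats = {("A", "B"), ("B", "C"), ("C", "A")}   # (winner, loser) pairs

# A utility model induces a total order; check whether ANY total order on
# {A, B, C} reproduces all three observed duels.
consistent_orders = [
    order for order in permutations("ABC")
    if all(order.index(w) < order.index(l) for (w, l) in beats)
]
# consistent_orders is empty: no utility function can explain a preference cycle.
```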
**Generalised Preference Model (GPM)** Chau et al. [16] proposed to model preferences directly using a more general $g: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ that captures the preference within any pair of items, using the likelihood
$$p\left(y \mid \mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}\right) = \sigma\left(yg\left(\mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}\right)\right). \tag{2}$$
We note that g has to be a skew-symmetric function to ensure the natural property $p(y \mid \mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}) = 1 - p(y \mid \mathbf{x}^{(r)}, \mathbf{x}^{(\ell)})$. The utility-based approach can be obtained as a special case of this model, i.e. by setting $g(\mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}) = f(\mathbf{x}^{(\ell)}) - f(\mathbf{x}^{(r)})$. We propose that when one is interested in modelling (and thus explaining) pairwise preferences, one should consider the preference function g directly, instead of explaining preferences based on a restrictive utility model f.
<sup>&</sup>lt;sup>3</sup>Thus, we do not model 'draws' in match outcomes, but the model can be straightforwardly extended to include them by specifying the appropriate likelihood function.
We follow Chau et al. [16] and model g non-parametrically using kernel methods [25]. We assume that g lives in the following RKHS of skew-symmetric functions: given a kernel $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ defined on the item space $\mathcal{X}$, the generalised preferential kernel $k_E$ on $\mathcal{X} \times \mathcal{X}$ is constructed as follows:
$$k_E\left(\left(\mathbf{x}_i^{(\ell)},\mathbf{x}_i^{(r)}\right),\left(\mathbf{x}_j^{(\ell)},\mathbf{x}_j^{(r)}\right)\right) = k\left(\mathbf{x}_i^{(\ell)},\mathbf{x}_j^{(\ell)}\right)k\left(\mathbf{x}_i^{(r)},\mathbf{x}_j^{(r)}\right) - k\left(\mathbf{x}_i^{(\ell)},\mathbf{x}_j^{(r)}\right)k\left(\mathbf{x}_i^{(r)},\mathbf{x}_j^{(\ell)}\right).$$
This kernel allows us to model the similarity across pairs of items. Moreover, if k is a universal kernel [26], then $k_E$ also satisfies the corresponding notion of universality, meaning that the corresponding RKHS $\mathcal{H}_{k_E}$ is rich enough to approximate any bounded continuous skew-symmetric function arbitrarily well [16, Theorem 1]. To infer $g \in \mathcal{H}_{k_E}$ using likelihood (2), one simply runs kernel logistic regression with data $\mathbf{y}$ as labels and $\left(\mathbf{X}^{(\ell)}, \mathbf{X}^{(r)}\right)$ as inputs. We will refer to this approach as the *Generalised Preference Model* (GPM).
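To make the construction concrete, the following sketch (an RBF base kernel, random covariates and illustrative hyperparameters, chosen for demonstration only) implements $k_E$ and checks two of its defining properties: flipping either pair negates the kernel, and a pair duelling an item against itself carries no preference signal.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Base item kernel k on X (RBF, assumed lengthscale)."""
    return np.exp(-np.sum((a - b) ** 2) / (2 * ls ** 2))

def k_E(pair_i, pair_j, k=rbf):
    """Generalised preferential kernel on pairs of items."""
    (xl_i, xr_i), (xl_j, xr_j) = pair_i, pair_j
    return k(xl_i, xl_j) * k(xr_i, xr_j) - k(xl_i, xr_j) * k(xr_i, xl_j)

rng = np.random.default_rng(1)
a, b, c, d = rng.standard_normal((4, 3))

base = k_E((a, b), (c, d))
flip_first = k_E((b, a), (c, d))    # = -base: skew-symmetry in the first pair
flip_second = k_E((a, b), (d, c))   # = -base: skew-symmetry in the second pair
self_duel = k_E((a, a), (c, d))     # = 0: an item duelling itself is uninformative
```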
We emphasize that *explaining* GPM allows us to specifically explain *inconsistent preferences*: in contrast to *explaining rank*, we can infer and explain preferences even when transitivity is violated. Such insights can be of great importance in broader contexts such as decision theory [27] and utility theory [28], where transitivity need not hold.
**Incorporating context variables** Besides item-level covariates $\mathbf{x} \in \mathcal{X}$, when there exist additional context covariates $\mathbf{u} \in \mathcal{U} \subseteq \mathbb{R}^{d'}$ describing the context in which a specific pairwise comparison is made, they can be incorporated into the kernel design as discussed in Chau et al. [16, Appendix B]. Examples of such context covariates are the court type on which a tennis match is played, or the identity of the user comparing two clothing items in e-commerce. Considering the enriched dataset $D = \left\{ \left( y_j, \mathbf{u}_j, \mathbf{x}_j^{(\ell)}, \mathbf{x}_j^{(r)} \right) \right\}_{j=1}^m$, we can now model the preference incorporating the context as $p(y \mid \mathbf{u}, \mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}) = \sigma \left( y\, g_U \left( \mathbf{u}, \mathbf{x}^{(\ell)}, \mathbf{x}^{(r)} \right) \right)$. Now, given a kernel $k_U$ defined on the context space $\mathcal{U}$, the context-specific preference function $g_U : \mathcal{U} \times \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ can be learnt non-parametrically with the following kernel,
$$k_E^{(U)}\left(\left(\mathbf{u}_i, \mathbf{x}_i^{(\ell)}, \mathbf{x}_i^{(r)}\right), \left(\mathbf{u}_j, \mathbf{x}_j^{(\ell)}, \mathbf{x}_j^{(r)}\right)\right) = k_U\left(\mathbf{u}_i, \mathbf{u}_j\right) k_E\left(\left(\mathbf{x}_i^{(\ell)}, \mathbf{x}_i^{(r)}\right), \left(\mathbf{x}_j^{(\ell)}, \mathbf{x}_j^{(r)}\right)\right).$$
We refer to this approach as the Context-specific Generalised Preference Model (C-GPM).
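The context-specific kernel is simply the product of $k_U$ on contexts and $k_E$ on item pairs; a short sketch (RBF kernels and random covariates for illustration) verifies that skew-symmetry in the items is preserved for every context:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * ls ** 2))

def k_E(pair_i, pair_j, k=rbf):
    (xl_i, xr_i), (xl_j, xr_j) = pair_i, pair_j
    return k(xl_i, xl_j) * k(xr_i, xr_j) - k(xl_i, xr_j) * k(xr_i, xl_j)

def k_E_U(tri_i, tri_j, k_item=rbf, k_ctx=rbf):
    """Context-specific preferential kernel: k_U on contexts times k_E on item pairs."""
    (u_i, xl_i, xr_i), (u_j, xl_j, xr_j) = tri_i, tri_j
    return k_ctx(u_i, u_j) * k_E((xl_i, xr_i), (xl_j, xr_j), k_item)

rng = np.random.default_rng(2)
u1, u2 = rng.standard_normal((2, 2))
a, b, c, d = rng.standard_normal((4, 3))

val = k_E_U((u1, a, b), (u2, c, d))
flipped = k_E_U((u1, b, a), (u2, c, d))   # skew-symmetry in the items is preserved
```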
To explain preferences, we will utilise the popular SHAP (SHapley Additive exPlanations) paradigm, which is based on the concept of Shapley values (SV). SV [21] were originally proposed as a credit allocation scheme for a group of d players in the context of cooperative games, which are characterised by a value function $\nu:[0,1]^d\to\mathbb{R}$ that measures *utility* of subsets of players. Formally, the Shapley value for player j in game $\nu$ is defined as:
$$\phi_j(\nu) = \sum_{S \subseteq \Omega \setminus \{j\}} (|S|!(d - |S| - 1)!/d!) \left(\nu(S \cup j) - \nu(S)\right),\tag{3}$$
where $\Omega = \{1,...,d\}$ is the set of players of the game. Given a value function $\nu$, the Shapley values are the only credit allocation scheme satisfying a particular set of favourable and fair game-theoretic axioms, commonly known as *efficiency*, the *null player property*, *symmetry* and *additivity* [21]. Štrumbelj and Kononenko [29] later connected Shapley values to the field of *explainable machine learning* by drawing an analogy between model fitting and cooperative games: given a specific data point, by considering its *features* as *players* participating in a game that measures the features' utilities, the resulting Shapley values can be treated as *local feature importance scores*. Such games are typically specified through value functions of the form defined below.
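Eq. 3 can be evaluated exactly when d is small. The toy cooperative game below (illustrative payoffs, not from our experiments) shows the axioms at work: the additive part of the payoff is returned unchanged, the synergy between players 0 and 1 is split equally by symmetry, and efficiency holds.

```python
from itertools import combinations
from math import factorial

def shapley(value, d):
    """Exact Shapley values for a d-player game via Eq. (3)."""
    phi = []
    for j in range(d):
        others = [p for p in range(d) if p != j]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                w = factorial(len(S)) * factorial(d - len(S) - 1) / factorial(d)
                total += w * (value(set(S) | {j}) - value(set(S)))
        phi.append(total)
    return phi

# Toy game: additive payoffs plus a synergy of 2.0 between players 0 and 1.
unit = [3.0, 1.0, 2.0]
v = lambda S: sum(unit[p] for p in S) + (2.0 if {0, 1} <= set(S) else 0.0)

phi = shapley(v, 3)   # phi = [4.0, 2.0, 2.0]: synergy shared between 0 and 1
```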
**Definition 2.1** (Value functions). Let X be a random variable on $\mathcal{X} \subseteq \mathbb{R}^d$ and $f: \mathcal{X} \to \mathbb{R}$ a model from hypothesis space $\mathcal{H}$. The value function $\nu: \mathcal{X} \times [0,1]^d \times \mathcal{H} \to \mathbb{R}$ is given by
$$\nu_{\mathbf{x},S}(f) = \mathbb{E}_{r(X_{S^c}|X_S = \mathbf{x}_S)} \left[ f(\{X_S, X_{S^c}\}) \mid X_S = \mathbf{x}_S \right] \tag{4}$$
where r is an appropriate reference distribution, $X_S$ is the subvector of X corresponding to the feature set S, $S^c$ is the complement of the feature set S and $\{X_S, X_{S^c}\} = X$ denotes the concatenation of $X_S$ and $X_{S^c}$ .
In other words, given a data point x, the utility of the feature subset S is defined as the impact on the model prediction after "removing" the contribution from $S^c$ via integration with respect to the reference distribution r. These "removal-based" strategies are common in the explainability literature [30]. Nonetheless, the correct choice of the reference distribution has been a long-standing debate [31]. Janzing et al. [32] argued from a causality perspective that the feature marginal distribution should be used as the reference, i.e. $r(X_{S^c} \mid X_S = x_S) = p(X_{S^c})$ where p is the data distribution. On the other hand, Frye et al. [33] disagreed, pointing out that these "marginal" value functions ignore feature correlations and lead to unintelligible explanations in higher-dimensional data; they instead advocate using the conditional distribution as reference, i.e. $r(X_{S^c} \mid X_S = x_S) = p(X_{S^c} \mid X_S = x_S)$. Thus there is no consensus; indeed, Chen et al. [31] took a neutral stance and argued that the choice depends on the application at hand. This has also led to the design of value functions for specific problems, e.g. improving local estimation [34], incorporating causal knowledge [35, 36] and modelling structured data [37]. In this paper, we will design an appropriate value function for preference learning and show that naive application of existing value functions to preference learning leads to unintuitive results.
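The difference between the two references is easy to exhibit numerically. In the Monte Carlo sketch below (toy data with two strongly correlated features, where the model reads only the second), the marginal reference assigns the observed feature essentially no influence, while the conditional reference credits it through the correlation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two strongly correlated features; the model uses only feature 2.
n = 50_000
x1 = rng.standard_normal(n)
x2 = 0.95 * x1 + np.sqrt(1 - 0.95 ** 2) * rng.standard_normal(n)
X = np.stack([x1, x2], axis=1)
f = lambda Z: Z[:, 1]                    # model ignores feature 1 entirely

x = np.array([2.0, 2.0])                 # point to explain, with S = {feature 1}

# Marginal reference: replace the unobserved feature 2 by draws from p(X2).
v_marginal = f(np.stack([np.full(n, x[0]), X[:, 1]], axis=1)).mean()

# Conditional reference: draws from p(X2 | X1 close to 2) instead.
mask = np.abs(X[:, 0] - x[0]) < 0.1
v_conditional = f(np.stack([np.full(mask.sum(), x[0]), X[mask, 1]], axis=1)).mean()
```

Here `v_marginal` is close to the prior mean of the output, whereas `v_conditional` is close to $0.95 \times 2$: the two references answer genuinely different questions.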
**Shapley value estimation** Given a data point x and a model f, estimating Shapley values consists of two main steps. First, for each feature subset $S \subseteq \Omega$, estimate the value function $\nu_{\mathbf{x},S}(f)$, either by Monte Carlo sampling from the reference distribution r, or by utilising model-specific structure to speed up the estimation, as in LinearSHAP [29], DeepSHAP [14], TreeSHAP [38], and RKHS-SHAP [12]. The sampling procedure is straightforward when r is the marginal distribution, but computationally heavy and difficult when r is the conditional distribution, as it involves estimating an exponential number of conditional densities [39]. Second, after estimating the value functions, one computes the Shapley values based on Eq. 3, or via the efficient weighted least squares approach proposed by Lundberg and Lee [14].
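The weighted least squares route can be sketched for a small game. With the Shapley kernel weights, and the efficiency constraints on the empty and full coalitions enforced softly via large weights (an illustrative simplification of the exact constrained solve), the regression recovers the same attributions as Eq. 3; the game below is the same toy example used above, not from our experiments.

```python
import numpy as np
from itertools import combinations
from math import comb

d = 3
unit = [3.0, 1.0, 2.0]
v = lambda S: sum(unit[p] for p in S) + (2.0 if {0, 1} <= set(S) else 0.0)

rows, targets, wts = [], [], []
for r in range(d + 1):
    for S in combinations(range(d), r):
        z = np.zeros(d + 1)
        z[0] = 1.0                       # intercept phi_0 = v(empty set)
        for p in S:
            z[1 + p] = 1.0
        rows.append(z)
        targets.append(v(set(S)))
        if r in (0, d):
            wts.append(1e6)              # soft-enforce v(empty), v(full) (efficiency)
        else:
            wts.append((d - 1) / (comb(d, r) * r * (d - r)))  # Shapley kernel weight

A, b = np.array(rows), np.array(targets)
w = np.sqrt(np.array(wts))
sol, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
phi = sol[1:]                            # close to the exact Shapley values [4, 2, 2]
```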
**Estimating value functions when $f \in \mathcal{H}_k$** We review the recently introduced RKHS-SHAP algorithm of Chau et al. [12], as it is another core component of PREF-SHAP. RKHS-SHAP is an SV estimation method for functions in a given RKHS. It circumvents the need for any density estimation and utilises the arsenal of kernel mean embeddings [40] to estimate the value functions non-parametrically. Assume k takes a product kernel structure across dimensions; then for any $f \in \mathcal{H}_k$, by applying the *reproducing property* [25], the value function can be decomposed as:
$$\nu_{\mathbf{x},S}(f) = \left\langle f, \ \mathbb{E}_{r(X_{S^c}|X_S = \mathbf{x}_S)} \left[ k\left( \{X_S, X_{S^c}\}, \cdot \right) \mid X_S = \mathbf{x}_S \right] \right\rangle_{\mathcal{H}_k} \tag{5}$$
$$= \left\langle f, k_{X_S} \otimes \mu_{r(X_{S^c}|X_S = \mathbf{x}_S)} \right\rangle_{\mathcal{H}_k}, \tag{6}$$
where $k_{X_S}$ is the product of kernels belonging to the feature set S, and $\mu_{r(X_{S^c}|X_S=\mathbf{x}_S)}:=\int k_{X_{S^c}}r(X_{S^c}\mid X_S=\mathbf{x}_S)dX_{S^c}$ is the kernel mean embedding [40] of the reference distribution r. Depending on the choice of the reference distribution, one recovers either the standard kernel mean embedding or the conditional mean embedding. This allows us to arrive at a closed form expression of the value function and circumvents the need for fitting an exponential number of conditional densities.
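A small sketch of this estimator (toy data, an RBF product kernel and illustrative regularisation $\lambda$; not the released implementation) computes Eq. 6 with the empirical conditional mean embedding:

```python
import numpy as np

def rbf_gram(A, B, ls=1.0):
    """RBF Gram matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def value_function(alpha, X, S, Sc, x, lam=1e-3):
    """nu_{x,S}(f) for f = sum_i alpha_i k(., x_i), via Eq. (6): the inner product
    of f with k(., x_S) tensored with the empirical conditional mean embedding."""
    n = X.shape[0]
    K_S = rbf_gram(X[:, S], X[:, S])
    k_q = rbf_gram(X[:, S], x[None, S])                  # k_S evaluated at x_S
    beta = np.linalg.solve(K_S + n * lam * np.eye(n), k_q)
    cme = rbf_gram(X[:, Sc], X[:, Sc]) @ beta            # CME weights on X_{S^c}
    return float(alpha @ (k_q[:, 0] * cme[:, 0]))

rng = np.random.default_rng(4)
n, d = 200, 4
X = rng.standard_normal((n, d))
alpha = rng.standard_normal(n) / n
x = rng.standard_normal(d)

nu = value_function(alpha, X, [0, 1], [2, 3], x)
```

Note the estimator is linear in f (through the dual coefficients), exactly as the inner-product form of Eq. 6 requires.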
# Method
In this section, we will present PREF-SHAP, a new Shapley explainability toolbox designed to explain preferences by attributing contribution scores over item-level and context-level covariates for our preference models. Recall the likelihood model for C-GPM from Sec. 2.1:
$$p\left(y \mid \mathbf{u}, \mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}\right) = \sigma\left(yg_U\left(\mathbf{u}, \mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}\right)\right),\tag{7}$$
where $g_U$ is the context-included preference function that denotes the strength of preference of item $\mathbf{x}^{(\ell)}$ over item $\mathbf{x}^{(r)}$ under context $\mathbf{u}$ . As there are two distinct sets of covariates present, we will propose two different value functions to capture the influences from items and context variables respectively, and show how they could be estimated non-parametrically using tools from the kernel methods literature, as in RKHS-SHAP.
To explain a general preference model $g: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ , we propose the following *preferential value function for items*.
**Definition 3.1** (Preferential value function for items). Given a preference function $g \in \mathcal{H}$ and a pair of items $(\mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}) \in \mathcal{X} \times \mathcal{X}$ to compare, we define the preferential value function for items as $\nu^{(p_I)} : \mathcal{X} \times \mathcal{X} \times [0, 1]^d \times \mathcal{H} \to \mathbb{R}$ such that:
$$\nu_{\mathbf{x}^{(\ell)},\mathbf{x}^{(r)},S}^{(p_I)}(g) = \mathbb{E}_q\left[g(\{X_S^{(\ell)},X_{S^c}^{(\ell)}\},\{X_S^{(r)},X_{S^c}^{(r)}\}) \mid X_S^{(\ell)} = \mathbf{x}_S^{(\ell)},X_S^{(r)} = \mathbf{x}_S^{(r)}\right] \tag{8}$$
where the expectation is taken over the reference $q\left(X_{S^c}^{(\ell)},X_{S^c}^{(r)}\mid X_S^{(\ell)}=\mathbf{x}_S^{(\ell)},X_S^{(r)}=\mathbf{x}_S^{(r)}\right)$.
We note that $\nu^{(p_I)}$ is also applicable to the context-specific preference models. For example, applying $\nu^{(p_I)}$ to $g_{\mathbf{u}} := g_U(\mathbf{u},\cdot,\cdot)$ allows one to quantify the item covariate's influences under a specific context $\mathbf{u}$ , while applying $\nu^{(p_I)}$ to $\bar{g} := \mathbb{E}_{p(U)}[g_U(U,\cdot,\cdot)]$ quantifies the average influence from each of the item covariates instead.
|
| 99 |
+
|
| 100 |
+
Similar to standard value functions, the influence of a feature set S shared by the items $\mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}$ is measured as the impact on the preference model after "removing" contributions from features in $S^c$, via integration with respect to the reference distribution q. Similar to g, this value function is skew-symmetric in its first two arguments, i.e. $\nu^{(p_I)}(\mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}, S, g) = -\nu^{(p_I)}(\mathbf{x}^{(r)}, \mathbf{x}^{(\ell)}, S, g)$. This is justified, since features that "encourage" preference of $\mathbf{x}^{(\ell)}$ over $\mathbf{x}^{(r)}$ should naturally be the ones that "discourage" preference of $\mathbf{x}^{(r)}$ over $\mathbf{x}^{(\ell)}$, to ensure consistency. In this paper, we assume the items are sampled i.i.d. from some distribution p, and we use the observational data distribution as reference, as in [33], i.e. we take $q\left(X_{S^c}^{(\ell)}, X_{S^c}^{(r)} \mid X_S^{(\ell)} = \mathbf{x}_S^{(\ell)}, X_S^{(r)} = \mathbf{x}_S^{(r)}\right)$ to be $p\left(X_{S^c}^{(\ell)}\mid X_S^{(\ell)}=\mathbf{x}_S^{(\ell)}\right)p\left(X_{S^c}^{(r)}\mid X_S^{(r)}=\mathbf{x}_S^{(r)}\right)$. Although we use the observational distribution as reference here, the corresponding estimation procedure follows analogously if one instead uses the marginal distribution approach of Janzing et al. [32].
**Problems with direct application of SHAP to the preference model g** A naive way of explaining a general preference model g (which assumes no rankability) with SHAP would be to concatenate the items' covariates: set $\mathbf{z} = (\mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}) \in \mathbb{R}^{2d}$ and apply SHAP to the function $g(\mathbf{z})$ directly, giving 2d Shapley values for each observed preference, i.e. two Shapley values per feature. Not only does this approach require us to consider a larger number of feature coalitions during computation (squaring the original amount), but it also ignores that $\mathbf{x}^{(\ell)}$ and $\mathbf{x}^{(r)}$ in fact consist of the same features, leading to inconsistent explanations: the same feature in $\mathbf{x}^{(\ell)}$ and $\mathbf{x}^{(r)}$ can receive a different influence, giving different explanations simply due to the ordering of items. We illustrate the pitfalls of this naive approach in Appendix B.
**Empirical estimation of the preferential value function $\nu_{\mathbf{x}^{(\ell)},\mathbf{x}^{(r)},S}^{(p_I)}(g)$** While the *preferential value function* is general in the sense that it can be applied to any preference function g, we restrict our attention to functions in $\mathcal{H}_{k_E}$, where $k_E$ is the *generalised preferential kernel* introduced in Sec 2.1. This allows us to adapt the recently introduced RKHS-SHAP to our setting, and we can thus circumvent learning an exponential number of conditional densities as in [33]. In the following segment, we prove the existence of the Riesz representation of the *preferential value functional*, a necessary step for adapting the RKHS-SHAP framework to our setting.
**Proposition 3.1** (Preferential value functional for items). Let k be a product kernel on $\mathcal{X}$, i.e. $k(\mathbf{x}, \mathbf{x}') = \prod_{j=1}^d k^{(j)}(x^{(j)}, x'^{(j)})$. Assume $k^{(j)}$ is bounded for all j; then the Riesz representation of the functional $\nu_{\mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}, S}^{(p_I)}$ exists and takes the form:
$$\nu_{\mathbf{x}^{(\ell)},\mathbf{x}^{(r)},S}^{(p_I)} = \frac{1}{\sqrt{2}} \left( \mathcal{K}(\mathbf{x}^{(\ell)},S) \otimes \mathcal{K}(\mathbf{x}^{(r)},S) - \mathcal{K}(\mathbf{x}^{(r)},S) \otimes \mathcal{K}(\mathbf{x}^{(\ell)},S) \right)$$
where $\mathcal{K}(\mathbf{x}, S) = k_S(\cdot, \mathbf{x}_S) \otimes \mu_{X_{S^c}|X_S = \mathbf{x}_S}$ and $k_S(\cdot, \mathbf{x}_S) = \bigotimes_{j \in S} k^{(j)}(\cdot, x^{(j)})$ is the sub-product kernel defined analogously to $X_S$.
All proofs are included in the appendix. By representing the functionals as elements in the corresponding RKHS, we can now estimate the value function non-parametrically using kernel mean embeddings.
**Proposition 3.2** (Non-parametric Estimation). Given $\hat{g} = \sum_{j=1}^{m} \alpha_j k_E((\mathbf{x}_j^{(\ell)}, \mathbf{x}_j^{(r)}), \cdot)$, datasets $\mathbf{X}^{(\ell)}, \mathbf{X}^{(r)}$, and test items $\mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}$, the preferential value function at the test items for coalition S and preference function $\hat{g}$ can be estimated as

$$\hat{\nu}_{\mathbf{x}^{(\ell)},\mathbf{x}^{(r)},S}^{(p_I)}(\hat{g}) = \pmb{\alpha}^\top \left( \Gamma(\mathbf{X}_S^{(\ell)},\mathbf{x}_S^{(\ell)}) \odot \Gamma(\mathbf{X}_S^{(r)},\mathbf{x}_S^{(r)}) - \Gamma(\mathbf{X}_S^{(\ell)},\mathbf{x}_S^{(r)}) \odot \Gamma(\mathbf{X}_S^{(r)},\mathbf{x}_S^{(\ell)}) \right),$$

where $\Gamma(\mathbf{X}_S^{(\ell)}, \mathbf{x}_S^{(\ell)}) = \mathbf{K}_{\mathbf{X}_S^{(\ell)}, \mathbf{x}_S^{(\ell)}} \odot \mathbf{K}_{\mathbf{X}_{S^c}, \mathbf{X}_{S^c}} \mathbf{K}_{\mathbf{X}_S, \lambda}^{-1} \mathbf{K}_{\mathbf{X}_S, \mathbf{x}_S^{(\ell)}}$, $\mathbf{K}_{\mathbf{X}_S, \lambda} = \mathbf{K}_{\mathbf{X}_S, \mathbf{X}_S} + n\lambda I$, $\pmb{\alpha} = \{\alpha_j\}_{j=1}^m$, and $\lambda > 0$ is a regularisation parameter.

Table 1: A summary of how our preference value functions can tackle different explanation tasks

| Candidate | Explanation of interest | Value function | Preference function |
|---|---|---|---|
| $\mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}$ | Which item features contributed most to this duel? | $\nu_{\mathbf{x}^{(\ell)},\mathbf{x}^{(r)},S}^{(p_I)}$ | $g, \mathbb{E}_U[g_U(U,\cdot,\cdot)]$ |
| $\mathbf{x}^{(\ell)}$ | Which item features contributed most to $\mathbf{x}^{(\ell)}$'s matches? | $\frac{1}{n} \sum_{i=1}^{n} \nu_{\mathbf{x}^{(\ell)}, \mathbf{x}_{i}, S}^{(p_{I})}$ | $g, \mathbb{E}_U[g_U(U,\cdot,\cdot)]$ |
| $\mathbf{u},\mathbf{x}^{(\ell)},\mathbf{x}^{(r)}$ | Which context features contributed most to this duel? | $\nu_{\mathbf{u},\mathbf{x}^{(\ell)},\mathbf{x}^{(r)},S'}^{(p_U)}$ | $g_U$ |
| $\mathbf{u}$ | Which context features contributed most on average? | $\frac{1}{m} \sum_{j=1}^{m} \nu_{\mathbf{u}, \mathbf{x}_{j}^{(\ell)}, \mathbf{x}_{j}^{(r)}, S'}^{(p_{U})}$ | $g_U$ |
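A minimal NumPy sketch of this estimator (random toy duels, an RBF base kernel, illustrative $\lambda$, and one simplification: the conditional mean embeddings are fitted on each duel side's items rather than on the pooled item matrix) illustrates the computation and checks that the estimated value function inherits skew-symmetry in the two test items:

```python
import numpy as np

def rbf_gram(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

rng = np.random.default_rng(5)
m, d = 100, 4
Xl, Xr = rng.standard_normal((2, m, d))   # left/right items of the m observed duels
alpha = rng.standard_normal(m) / m        # dual coefficients of a fitted g-hat
S, Sc = [0, 2], [1, 3]                    # coalition and its complement
lam = 1e-3

def gamma(X_side, x_q):
    """Gamma(X_S, x_S): observed-feature similarity, Hadamard-multiplied with
    conditional-mean-embedding weights for the removed features X_{S^c}."""
    K_S = rbf_gram(X_side[:, S], X_side[:, S])
    k_q = rbf_gram(X_side[:, S], x_q[None, S])
    beta = np.linalg.solve(K_S + m * lam * np.eye(m), k_q)
    cme = rbf_gram(X_side[:, Sc], X_side[:, Sc]) @ beta
    return k_q[:, 0] * cme[:, 0]

def nu_hat(xl, xr):
    """Estimated preferential value function in the form of Proposition 3.2."""
    return float(alpha @ (gamma(Xl, xl) * gamma(Xr, xr) - gamma(Xl, xr) * gamma(Xr, xl)))

xl, xr = rng.standard_normal((2, d))
val = nu_hat(xl, xr)                      # skew-symmetric: nu_hat(xr, xl) == -val
```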
The influence an individual context feature in U has on a C-GPM function $g_U$ can be measured by the following value function.
**Proposition 3.3** (Preferential value function for contexts). Given a preference function $g_U \in \mathcal{H}_{k_E^{(U)}}$, denote $\Omega' = \{1, ..., d'\}$; then the utility of context features $S' \subseteq \Omega'$ on $\{\mathbf{u}, \mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}\}$ is measured by $\nu_{\mathbf{u}, \mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}, S'}^{(p_U)} = \mathbb{E}[g_U(\{\mathbf{u}_{S'}, U_{S'^c}\}, \mathbf{x}^{(\ell)}, \mathbf{x}^{(r)}) \mid U_{S'} = \mathbf{u}_{S'}]$, where the expectation is taken over the observational distribution of U. Now, given a test triplet $(\mathbf{u}, \mathbf{x}^{(\ell)}, \mathbf{x}^{(r)})$, if $\hat{g}_U = \sum_{j=1}^m \alpha_j k_E^{(U)}((\mathbf{u}_j, \mathbf{x}_j^{(\ell)}, \mathbf{x}_j^{(r)}), \cdot)$, the non-parametric estimator is:
$$\begin{split} \hat{\nu}_{\mathbf{u},\mathbf{x}^{(\ell)},\mathbf{x}^{(r)},S'}^{(p_U)}(\hat{g}_U) &= \boldsymbol{\alpha}^\top \left( \left( \mathbf{K}_{\mathbf{U}_{S'},\mathbf{u}_{S'}} \odot \mathbf{K}_{\mathbf{U}_{S'^c},\mathbf{U}_{S'^c}} \left( \mathbf{K}_{\mathbf{U}_{S'},\mathbf{U}_{S'}} + m \lambda' I \right)^{-1} \mathbf{K}_{\mathbf{U}_{S'},\mathbf{u}_{S'}} \right) \odot \Xi_{\mathbf{x}^{(\ell)},\mathbf{x}^{(r)}} \right), \\ \text{where } \Xi_{\mathbf{x}^{(\ell)},\mathbf{x}^{(r)}} &= \left( \mathbf{K}_{\mathbf{X}^{(\ell)},\mathbf{x}^{(\ell)}} \odot \mathbf{K}_{\mathbf{X}^{(r)},\mathbf{x}^{(r)}} - \mathbf{K}_{\mathbf{X}^{(r)},\mathbf{x}^{(\ell)}} \odot \mathbf{K}_{\mathbf{X}^{(\ell)},\mathbf{x}^{(r)}} \right). \end{split}$$
Analogously, the average influence of a specific context feature can be computed by taking an average over all pairs of matches, i.e. by using a modified value function $\frac{1}{m} \sum_{j=1}^{m} \nu_{\mathbf{u},\mathbf{x}_{j}^{(\ell)},\mathbf{x}_{j}^{(r)},S'}^{(p_U)}$ . We summarise different ways to modify the proposed preferential value functions to interrogate the preference models in Table 1.
**Computational complexity of PREF-SHAP** Fitting GPM is fundamentally a kernel ridge regression (KRR) problem, which naïvely has $\mathcal{O}(n^3)$ complexity. There exists a multitude of approximation techniques for KRR, the most common being the Nyström approximation [41]. For all our experiments, we use FALKON [42], a large-scale library for solving kernel (logistic) regression using preconditioned conjugate gradient descent and Nyström approximations; this reduces the complexity of fitting GPM to $\mathcal{O}(n\sqrt{n})$. As the value function for GPM requires estimating conditional mean embeddings, which are in turn also KRRs, one can appeal to FALKON again to reduce their complexity to $\mathcal{O}(n\sqrt{n})$. We summarise the procedure of PREF-SHAP in Algorithm 1, and detail further computational considerations, pertaining to computing coalitions S and batched conjugate gradient descent (BatchedCGD), in Appendix A.
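For intuition, the generic Nyström idea can be sketched in a few lines of NumPy (toy regression data and illustrative sizes; FALKON's actual implementation adds preconditioning and conjugate-gradient solvers on top of this):

```python
import numpy as np

def rbf_gram(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

rng = np.random.default_rng(6)
n, p, lam = 2000, 100, 1e-3
X = rng.standard_normal((n, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# Nystrom KRR: restrict the solution to p << n inducing points, replacing the
# O(n^3) exact solve by an O(n p^2) one.
idx = rng.choice(n, p, replace=False)
K_nm = rbf_gram(X, X[idx])                # (n, p) cross Gram matrix
K_mm = rbf_gram(X[idx], X[idx])           # (p, p) Gram on inducing points
beta = np.linalg.solve(K_nm.T @ K_nm + n * lam * K_mm, K_nm.T @ y)
pred = K_nm @ beta
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```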
Lkpbd9y/GblkFrBEiRCbHwBgXK98JZfMWgHjGukD6So9wqEtlOHyZBY/gsNMOsrND91CHyhxIiWzCOuEC94OPyao0/aTlMziCUOmp8kQYDrsrHJHNvgjQSjB61Ay5NKgbX/FkRM0K46d9LgHP2YOm5O05VhrxfXE9eM0iUO2jBLScoIGslnSsXeK0/jnBJhNejmaTvqoEUrM8YMBtoH4ZpVLdpEe6DzpeEk+hfuc3KENm9tQRhtF3If2qJ1E0lGXNgh64DNbp362CT/bVHObOCaPS4Dlc0r6hbQh8WySCzQeJWbXcjKQ7AWuUeIy4O1QsjTXDh2opiQftyN0/F+F3HNjIZvf2WxEGBz7STeW7LZLNp9RAh78o0fEdDE/+TgPlYI7PDlJj5SgDV/elynT0GNVIorPUnladLClWdhO5ZsTLvks2jiDn6FY4qUEGN+88zRZ1/c/puB3YoKyMnY2de6YdRaWl7Gz3yntzye+xLvWWl4LgZpv7f0QCqdm9bjndlY97hG7Rm6FEPevHSUtaiCLZGfFDJvPabE5rSjbXMM6PDUmrTT5ESQ7v0rZpeIsMYWXS9pVz3WsnRPjk6XAtmmto2bA1EJe5HNmwMbrYHGze0w1vvU2D6nG590Tq+nLJ3hg9eZB1HvTZrUPemB1VX7Cuq7m3o+QzujdHlCtFl9mwsaLQctxCvg7yust3kHlTUXW95NaQd2NkkfaV2vvpeqVP0TVD1Hbf2AN3kvVVf1AVa9+jKpXjNxrMjQ9/zT6d1b2zZsaPhZp29cabL8c9lqDHZZOXuFaPuJFDUf0XweDemPY3xjVYMwgFgjuCauPL4A+599UoyIiez99Fpw9R29qJy/R40M64rd1mvvfL0MvFHp/B3rwy5+O4GUrsrUr8bJl7wR6Py9rFGR2/SN79d6/NdCpybpULb6WSS8TwXvt/armUd2P4HF64r1n3M8f5he0Q/2C8dZ+4fde1XWUde5XSHI49I6AqE8ZaVQU8yV6OdLYH/wb5veTw6L/Ylub99tt37eX4+U7v9RKK7rSw9+0+Lh8ePh61eJbvWpR27zh7GivWqw8j4XF3Xgx+Nv7k0SdvjXueBGbkavGLAbNG93/GOmWvb+OpNn8EutrtNbQ9or1Q5W27EEYJM2r/8lM9/AWpvqCLLf3v1+CPlzQuRgBki9I2vxISW8aFiR9CQc89weL33zz7Sde9xq13IYeYq1iUfvH5vhLVS7Hfypfvj98/prizWczCAEhV9aC8iJf9Frtey4K1YvPW9JKXtptvltKQD2AL2gnuH98zh4ImCxUdL8Of5rxkkGRV3E/fmTX/YfBKBjvfpPZuM1lS1ATukSqUYLxPaDZD/3tUqAgT7X09czvJs8DCvw/ozybFfr/55Hn5m4xUvhgeb58Eifjzu+/X/vtBCukSrLR7TIl5cmV14pvfzJOPfT5aFt9+STpOFUrBgxf+vvP7bFeTLl/sP6+XFzwpb+/qb8bffk0+nuA/333OLOajzO1akn8XRJnGmq+IuINOVOWtfwDLNvHRJq1Z4T/rJC1XPb82IHnJjH6p4n3YxzXq8WrHzvw3OQ5vxzXTnwH7AIbBzsu85M5rrKNhC/9fbX+buuIS5/j8KEaXCLALw1+Qw2ufC4NLsk1f3zoWUhxltSdlWU4jffLcKr72fIZDduHxJ1bzPxBGU7tz5TnhziqV8jz02Q4X65K/H/lpjbiOyBDcnB12NsXov+mvMuKt7/093f1dxt3Hludi5UKX+r8luqsfy511svccVaA5o3mVDH6yctX3ghGheKLw8Ps5yspNuWF25rXajGaNsrU+930e1M+8684Lbk7mqC96mxCzjWMBuPqbWlJecWrjoe33/LF389C41jHyornoo1cflnJNXFosbRRq3yvVaW2atXvtcMqpl9bg/7cqJ8f3MlL9O90es08pOamCG9BHd7aiOVOPgh3qtpQr1Se00DxOQT/8JjPfuORVzZzXB0Zb6hS+qFbiHuK
CDfX3lrz8rHaoZqXP2ev553CWyldpXzAzypdjl6rSfSvVTp8fZjPFyI5PP1derRQb/4f</diagram></mxfile>
2205.14962/main_diagram/main_diagram.pdf
ADDED
Binary file (25.7 kB). View file
2205.14962/paper_text/intro_method.md
ADDED
@@ -0,0 +1,63 @@
# Introduction

Solving the Schrödinger equation is key to assessing a molecular system's quantum mechanical (QM) properties. As analytical solutions are unavailable, one must rely on expensive approximate methods. Such methods are called *ab-initio* if they do not rely on empirical data. Recently, neural networks have succeeded in such ab-initio calculations within the variational Monte Carlo (VMC) framework (Carleo & Troyer, 2017). Although they yield accurate energies, training such neural wave functions has proved computationally intensive (Pfau et al., 2020).

To reduce the computational burden, Gao & Günnemann (2022) proposed the potential energy surface network (PESNet) to simultaneously solve many Schrödinger equations, i.e., for different spatial arrangements of the nuclei in $\mathbb{R}^3$. They use a GNN to reparametrize the wave function model based on the molecular structure. While training significantly faster, afterward one only obtains a neural wave function that generalizes over a domain of geometries, but not the associated energy surface. Obtaining the energy of a geometry remains costly, requiring a Monte Carlo integration whose complexity scales as $O(N^4)$ in the number of electrons $N$. This high inference cost rules out many applications of neural wave functions. For instance, geometry optimization, free energy calculation, potential energy surface scans, and molecular dynamics simulations typically involve hundreds of thousands of energy evaluations (Jensen, 2010; Hoja et al., 2021).



Figure 1: PlaNet framework. During training, we use the noisy energies obtained during the VMC optimization to fit our surrogate. At inference time, we only query the surrogate, avoiding costly numerical integration.

To address these inference shortcomings, we propose the **Potential learning from ab-initio Networks** (PlaNet) framework, in which we utilize intermediate results from the PESNet optimization to train a surrogate graph neural network (GNN), as illustrated in Figure 1. When optimizing PESNet, one must compute approximate energy values to evaluate the loss. We propose to use these noisy intermediate energies, which would otherwise be discarded, as training labels for the surrogate. After training, the surrogate accelerates inference by directly estimating energies, bypassing Monte Carlo integration.
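
The label-recycling idea can be sketched in a few lines of Python. Everything here is a toy stand-in (the quadratic "energy surface", the linear surrogate, the feature map, all function and class names): the sketch only illustrates the data flow in which energies a VMC loop would discard become training labels for a cheap inference-time model.

```python
import numpy as np

# Hypothetical stand-in for a VMC step that returns a noisy energy estimate.
def vmc_step(geometry, rng):
    true_energy = -np.sum(geometry**2)           # toy energy surface
    return true_energy + rng.normal(scale=0.1)   # Monte Carlo noise

class LinearSurrogate:
    """Toy surrogate: least-squares fit on simple geometry features."""
    def __init__(self, n_features):
        self.w = np.zeros(n_features)

    def features(self, geometry):
        return np.concatenate([[1.0], (geometry**2).ravel()])

    def fit(self, geometries, energies):
        X = np.stack([self.features(g) for g in geometries])
        self.w, *_ = np.linalg.lstsq(X, np.asarray(energies), rcond=None)

    def predict(self, geometry):
        return float(self.features(geometry) @ self.w)

rng = np.random.default_rng(0)
geoms, labels = [], []
for step in range(200):                  # mock VMC training loop
    g = rng.uniform(-1, 1, size=(2, 3))  # a perturbed nuclear geometry
    labels.append(vmc_step(g, rng))      # energies that would be discarded
    geoms.append(g)

surrogate = LinearSurrogate(n_features=7)
surrogate.fit(geoms, labels)             # train surrogate on noisy labels

# At inference, only the cheap surrogate is queried (no MC integration).
test_g = rng.uniform(-1, 1, size=(2, 3))
print("surrogate:", round(surrogate.predict(test_g), 3),
      "reference:", round(float(-np.sum(test_g**2)), 3))
```

In the actual framework the surrogate is a GNN over the molecular structure; the sketch only shows how noisy intermediate labels suffice to fit an accurate inference-time model.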

As a second contribution, we adapt neural wave functions to closed-shell systems, i.e., systems without unpaired electrons, by introducing restricted neural wave functions. Specifically, we use doubly occupied orbitals as an inductive bias, as in restricted Hartree-Fock theory (Szabo & Ostlund, 2012). Together with several other architectural improvements, this yields the improved PESNet++.

In our experiments, PESNet++ significantly improves energy estimates on challenging molecules such as the nitrogen dimer, where it reduces errors by 74% compared to PESNet. Analyzing PlaNet, we find that it accurately reproduces complex energy surfaces well within chemical accuracy across a range of systems while accelerating inference by 7 orders of magnitude for larger molecules such as ethanol. To summarize our contributions:

- **PlaNet**: an orders-of-magnitude faster inference method for PESNet(++), enabling the exploration of higher-dimensional energy surfaces at no loss in accuracy.
- **PESNet++**: an improved neural wave function for multiple geometries, setting new state-of-the-art results on several energy surfaces while training only a single model.

# Method

PESNet++ consists of three key ingredients: the MetaGNN, the equivariant coordinate system, and the wave function model (WFModel). The WFModel is used within the VMC framework to generate gradients and find the ground-state wave function. The MetaGNN's goal is to adapt the WFModel to the geometry at hand, and the equivariant coordinate system enforces the physical symmetries of the energy. We directly adopt the MetaGNN and the equivariant coordinate system from Gao & Günnemann (2022) and refer the reader to the original work for more information. All architectural improvements in PESNet++ affect the WFModel. Acting on an electron configuration $\mathbf{r}$, the WFModel first constructs single-electron features $\mathbf{h}_i^{(1)}$ and pair-wise features $\mathbf{g}_{ij}^{(1)}$ in a permutation-equivariant way. We adopt the same construction as in Gao & Günnemann (2022):

$$\boldsymbol{h}_{i}^{(1)} = \sum_{m=1}^{M} \operatorname{MLP}\left(\boldsymbol{W}\left[\left(\boldsymbol{r}_{i} - \boldsymbol{R}_{m}\right) \boldsymbol{E}, \|\boldsymbol{r}_{i} - \boldsymbol{R}_{m}\|\right] + \boldsymbol{z}_{m}\right), \tag{16}$$

$$\boldsymbol{g}_{ij}^{(1)} = \left(\left(\boldsymbol{r}_{i} - \boldsymbol{r}_{j}\right) \boldsymbol{E}, \|\boldsymbol{r}_{i} - \boldsymbol{r}_{j}\|\right), \tag{17}$$

where $E \in \mathbb{R}^{3 \times 3}$ is our equivariant coordinate system and the $z_m$ are nuclei embeddings output by the MetaGNN. These features are then iteratively updated with our new update rules from Equation (10) and Equation (11). After $L_{\mathrm{WF}}$ layers, we use the final electron embeddings $h_i^{(L_{\mathrm{WF}})}$ to construct the orbital matrices $\phi$ with Equation (13). Finally, we compute the amplitude with the Jastrow factor via Equation (14).
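
A minimal NumPy sketch of this feature construction (Equations (16) and (17)) may help; the toy two-layer MLP, the weight shapes, and the random initialization are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    # Toy two-layer MLP standing in for the MLP in Eq. (16).
    return np.tanh(x @ W1 + b1) @ W2 + b2

def initial_features(r, R, E, z, W):
    """Construct h_i^(1) and g_ij^(1) as in Eqs. (16)-(17).

    r: (n, 3) electron positions, R: (M, 3) nuclear positions,
    E: (3, 3) equivariant coordinate system, z: (M, d) nuclei embeddings,
    W: (4, d) input projection applied before adding z_m.
    """
    n, M = len(r), len(R)
    d = z.shape[1]
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(d, d)); b1 = np.zeros(d)
    W2 = rng.normal(size=(d, d)); b2 = np.zeros(d)

    h = np.zeros((n, d))
    for i in range(n):
        for m in range(M):                       # sum over nuclei (Eq. 16)
            diff = r[i] - R[m]
            feat = np.concatenate([diff @ E, [np.linalg.norm(diff)]])
            h[i] += mlp(feat @ W + z[m], W1, b1, W2, b2)

    g = np.zeros((n, n, 4))                      # pairwise features (Eq. 17)
    for i in range(n):
        for j in range(n):
            diff = r[i] - r[j]
            g[i, j] = np.concatenate([diff @ E, [np.linalg.norm(diff)]])
    return h, g

rng = np.random.default_rng(1)
r = rng.normal(size=(4, 3)); R = rng.normal(size=(2, 3))
E = np.eye(3); z = rng.normal(size=(2, 8)); W = rng.normal(size=(4, 8))
h, g = initial_features(r, R, E, z, W)
print(h.shape, g.shape)  # (4, 8) (4, 4, 4)
```

Here `E` is the identity for simplicity; in the actual model it is predicted equivariantly so that the features, and hence the energy, respect rotations of the molecule.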

To illustrate the impact of $\tanh$ as an activation function and of the normally distributed biases, Figure 6a plots the standard deviation of PESNet's neurons throughout training. One can see that the fraction of dead neurons grows during training to $>40\,\%$, reducing the effective network size. Our improvements reduce the fraction of dead neurons to $<10\,\%$, as seen in Figure 6b. Note that we keep one $\tanh$ activation in the embedding block to limit the magnitude of our embeddings as distances increase.

Each PlaNet optimization step consists of two parts: the first is a VMC optimization step with the additional coordinate transform, and the second is the optimization of the surrogate.

In each VMC step, we first perturb the geometry from the previous iteration, followed by the coordinate transformation from Equation (15). Next, for each geometry $c$ we sample new electron positions $\mathbf{r}_c$ via Metropolis-Hastings and calculate the local energy $E_{c,i}$ for each electron configuration $\mathbf{r}_{c,i}$ (McMillan, 1965). Given these energies, we compute the gradients and construct our updates via natural gradient descent with the conjugate gradient (CG) method (Neuscamman et al., 2012). For further details on the VMC step, we refer the reader to Pfau et al. (2020) and Gao & Günnemann (2022). Next, we fit our surrogate model over $N_{\mathrm{surr}}$ steps to the local energies and, finally, temporally average the surrogate parameters via the moving average described in Section 4. The complete optimization algorithm is given in Algorithm 1. Note that we drop the dependence on the $t+1$ step and explicitly write out the loop over all geometries; in practice, one implements these as an additional batch dimension and performs the operations in parallel. To reduce the dependence on the current batch, we smooth $\mathcal{L}_{\mathrm{surr}}^{(t)}$ and $D^{(t)}$ with EMAs when evaluating Equation (7).

```
Algorithm 1: One PlaNet optimization step
Input:  \mathbf{R}^{(t)}, \mathbf{r}^{(t)}, \Theta^{(t)}, \chi^{(t)}
Output: \mathbf{R}, \mathbf{r}, \Theta, \chi
\mathbf{R} \sim \rho(\mathbf{R} | \mathbf{R}^{(t)})
\mathbf{r}' = \lambda(\mathbf{r}^{(t)} | \mathbf{R}, \mathbf{R}^{(t)})         ▷ Equation (15)
for c \in \{1, \dots, C\} do
    \mathbf{r}_c \sim \psi^2_{\theta_c^{(t)}}(\mathbf{r}'_c)                   ▷ MCMC
    E_{c,i} = \psi_{\theta_c^{(t)}}(\mathbf{r}_{c,i})^{-1} \mathbf{H} \psi_{\theta_c^{(t)}}(\mathbf{r}_{c,i})   ▷ (Pfau et al., 2020; Hermann et al., 2020)
    \delta_{c,i} = E_{c,i} - \frac{1}{B} \sum_{i=1}^{B} E_{c,i}
end for
\nabla_{\Theta^{(t)}} \mathcal{L} = \mathbb{E}_{c,i}\left[\delta_{c,i} \nabla_{\Theta^{(t)}} \log\left|\psi_{\Theta^{(t)}}(\mathbf{r}_{c,i})\right|\right]
\Theta = \Theta^{(t)} - \eta F^{-1} \nabla_{\Theta^{(t)}} \mathcal{L}          ▷ CG method (Gao & Günnemann, 2022)
for i \in \{1, \dots, N_{\mathrm{surr}}\} do
    \chi' \leftarrow AdamW step on \mathcal{L}_{\mathrm{surr}}                 ▷ Equation (6)
end for
Compute \gamma                                                                 ▷ Equation (7)
\chi = \gamma \chi^{(t)} + (1 - \gamma) \chi'
```
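
The final temporal-averaging step of the surrogate parameters is a plain exponential moving average. A minimal sketch with dict-of-arrays parameters (names assumed):

```python
import numpy as np

def ema_update(chi_prev, chi_new, gamma):
    """Temporal average of surrogate parameters: chi = γ·chi^(t) + (1-γ)·chi'.

    chi_prev / chi_new are dicts of parameter arrays; gamma in [0, 1] is the
    (possibly loss-dependent) averaging coefficient from Equation (7).
    """
    return {k: gamma * chi_prev[k] + (1.0 - gamma) * chi_new[k]
            for k in chi_prev}

# A large gamma keeps the surrogate close to its temporal average,
# damping the noise of individual mini-batches of local energies.
chi_t = {"w": np.ones(3)}
chi_prime = {"w": np.full(3, 2.0)}
chi = ema_update(chi_t, chi_prime, gamma=0.9)
print(chi["w"])  # [1.1 1.1 1.1]
```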
2205.15544/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile><diagram id="84xzLBuhAFAF_F0s_TSg" name="Page-1"><mxGraphModel dx="496" dy="718" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="827" pageHeight="1169" math="1" shadow="0"><root><mxCell id="0"/><mxCell id="1" parent="0"/><mxCell id="2" value="<font color="#000000">CRISS<br>$$\hat{\theta}$$</font>" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#e1d5e7;strokeColor=#9673a6;" parent="1" vertex="1"><mxGeometry x="60" y="500" width="80" height="60" as="geometry"/></mxCell><mxCell id="3" value="<font color="#000000">`\theta`</font>" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#e1d5e7;strokeColor=#9673a6;" parent="1" vertex="1"><mxGeometry x="180" y="500" width="80" height="60" as="geometry"/></mxCell><mxCell id="4" value="" style="endArrow=classic;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" parent="1" source="2" target="3" edge="1"><mxGeometry width="50" height="50" relative="1" as="geometry"><mxPoint x="390" y="640" as="sourcePoint"/><mxPoint x="440" y="590" as="targetPoint"/></mxGeometry></mxCell><mxCell id="5" value="<font color="#000000">$$\theta_2^{\rightarrow L}$$</font>" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#e1d5e7;strokeColor=#9673a6;" parent="1" vertex="1"><mxGeometry x="290" y="430" width="80" height="60" as="geometry"/></mxCell><mxCell id="6" value="<font color="#000000">$$\theta_2^{\rightarrow En}$$</font>" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#e1d5e7;strokeColor=#9673a6;" parent="1" vertex="1"><mxGeometry x="290" y="580" width="80" height="60" as="geometry"/></mxCell><mxCell id="7" value="" style="endArrow=classic;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;" parent="1" source="3" target="5" edge="1"><mxGeometry width="50" height="50" relative="1" as="geometry"><mxPoint x="180" y="700" as="sourcePoint"/><mxPoint x="220" y="700" 
as="targetPoint"/></mxGeometry></mxCell><mxCell id="8" value="" style="endArrow=classic;html=1;entryX=0.5;entryY=0;entryDx=0;entryDy=0;" parent="1" target="6" edge="1"><mxGeometry width="50" height="50" relative="1" as="geometry"><mxPoint x="260" y="530" as="sourcePoint"/><mxPoint x="340" y="490" as="targetPoint"/></mxGeometry></mxCell><mxCell id="9" value="" style="endArrow=classic;html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;exitX=0.5;exitY=0;exitDx=0;exitDy=0;dashed=1;" parent="1" source="3" target="5" edge="1"><mxGeometry width="50" height="50" relative="1" as="geometry"><mxPoint x="210" y="470" as="sourcePoint"/><mxPoint x="340" y="490" as="targetPoint"/><Array as="points"><mxPoint x="250" y="460"/></Array></mxGeometry></mxCell><mxCell id="10" value="" style="endArrow=classic;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;dashed=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" parent="1" source="3" target="6" edge="1"><mxGeometry width="50" height="50" relative="1" as="geometry"><mxPoint x="210" y="660" as="sourcePoint"/><mxPoint x="280" y="610" as="targetPoint"/><Array as="points"><mxPoint x="250" y="600"/></Array></mxGeometry></mxCell></root></mxGraphModel></diagram></mxfile>
2205.15544/main_diagram/main_diagram.pdf
ADDED
Binary file (12.3 kB). View file
2205.15544/paper_text/intro_method.md
ADDED
@@ -0,0 +1,107 @@
# Introduction

Fully unsupervised machine translation (UMT), in which only unlabeled monolingual corpora are used throughout model training, has gained significant traction in recent years. Early success in UMT was built on the foundation of cross-lingual initialization and iterative back-translation [15, 17, 9, 30, 22]. While achieving outstanding performance on empirically high-resource languages, such as German and French, these UMT methods fail to produce any meaningful translation (near-zero BLEU scores) when applied to low-resource cases, such as the FLoRes Nepali and Sinhala unsupervised translation tasks [13]. Arguably, such low-resource languages need UMT the most.

Liu et al. [21] discovered that low-resource UMT models benefit significantly when they are initialized with a multilingual generative model (mBART), which is pre-trained on massive monolingual corpora from 25 languages (CC25). Not only does this multilingual pre-training set contain data for the low-resource languages of interest (e.g., Nepali), it also comprises sentences from related languages (e.g., Hindi), which presumably boosts the performance of their low-resource siblings. Tran et al. [31] and Nguyen et al. [23] later proposed ways to mine pseudo-parallel data from multiple directions to improve low-resource UMT. On the other hand, Conneau et al. [10] identified the *curse of multilinguality*: the performance of a multilingual model with fixed capacity tends to decline beyond a point as the number of languages in the training data increases. Thus, adding more languages may hinder further improvement in low-resource UMT, intuitively due to incompatibility between the linguistic structures of different languages. Furthermore, these models have to tell languages apart using only a single-token language specifier prepended to the input sequence, which is shown to be insufficient [\[1\]](#page-9-1). Our analysis also shows that the models sometimes predict words of the wrong language (see Appendix). Meanwhile, as shown later in our analyses ([§4.4\)](#page-8-0), other complementary techniques such as pseudo-parallel data mining may have reached their limits in improving low-resource UMT.

To alleviate the curse of multilinguality and work around the linguistic incompatibility issue in low-resource UMT, we propose a simple refinement strategy aimed at disentangling, or separating, irrelevant languages from a pre-trained multilingual unsupervised model, namely CRISS [\[31\]](#page-11-2), and focusing exclusively on a number of low-resource UMT directions. Briefly, our method consists of four stages. In the first stage, we use a modified back-translation technique [\[28\]](#page-11-4) to finetune an initial pre-trained multilingual UMT model on English (En) and a family of low-resource languages L, with a different set of feed-forward layers in the decoder for each En ↔ L pair. This step aims at discarding irrelevant languages and separating the languages L from each other. The second stage splits the resulting En ↔ L model into separate En → L and L → En models. This stage is motivated by the fact that English is often vastly distinct from the low-resource languages L, thus requiring a greater degree of disentanglement and less parameter sharing. The third stage boosts performance by using the second-stage models as fixed translators for the back-translation process. The final stage, which is only applied to from-English directions, separates the target low-resource languages from one another into different models. Overall, our method prioritises maximizing the individual performance of each low-resource task, though the trade-off is that it can no longer translate multiple languages in a one-for-all manner.

In our experiments, our method establishes the state of the art in fully unsupervised translation tasks from English (En) to Nepali (Ne), Sinhala (Si), Gujarati (Gu), Latvian (Lv), Estonian (Et) and Kazakh (Kk) with BLEU scores of 9.0, 9.5, 17.5, 18.5, 21.0 and 10.0 respectively, and vice versa. This is up to a 4.5 BLEU improvement over the previous state of the art [\[31\]](#page-11-2). We also show that the method outperforms other related alternatives that attempt to achieve language separation on various low-resource unsupervised tasks. Furthermore, our ablation analyses demonstrate the importance of the different stages of our method, especially the English separation stage (stage 2).

Recent advances in fully unsupervised machine translation (UMT) are often built on the foundation of iterative back-translation [\[28,](#page-11-4) [15,](#page-10-0) [3,](#page-9-2) [17\]](#page-10-1). In this technique, the model back-translates monolingual data from one language to another, and the resulting pair is used to train the model itself via standard back-propagation. For UMT to work, iterative back-translation must be accompanied by some form of cross- or multi-lingual initialization of the model, either through an unsupervised bilingual dictionary [\[16,](#page-10-5) [15\]](#page-10-0), phrase-based statistical MT [\[17\]](#page-10-1), or language model pre-training [\[9,](#page-9-0) [30,](#page-11-0) [21\]](#page-10-3). Apart from that, cross-model back-translated distillation [\[22\]](#page-11-1), where two distinct models are used to complement the target model, can also boost UMT performance. In addition, pseudo-parallel data mining, where sentences from two monolingual corpora are paired to form training samples through language-agnostic representations [\[33,](#page-11-5) [31,](#page-11-2) [23\]](#page-11-3), has been shown to significantly improve UMT performance.

Nonetheless, when it comes to fully unsupervised translation of distant and low-resource languages [\[13\]](#page-10-2), the aforementioned techniques fail to transfer their success on high-resource languages to low-resource counterparts unless "multilinguality" is involved. mBART [\[21\]](#page-10-3) is among the first multilingual models to greatly boost performance in low-resource UMT, by pre-training on the CC25 dataset, a 1.4-terabyte collection of monolingual data from 25 languages. The CC25 dataset contains not only data from English and the low-resource languages of interest, like Nepali, but also data from their related but higher-resource siblings, like Hindi.

CRISS [\[31\]](#page-11-2) later advances the state of the art in low-resource UMT by finetuning mBART with pseudo-parallel data from more than 180 directions, mined from the model's encoder representations. This type of UMT model handles translation in multiple directions through a few design principles that maximize parameter sharing. First, it builds a large shared vocabulary that covers wordpieces from all languages. Second, the encoder encodes input sentences without knowledge of their corresponding languages, to promote language-agnostic representations in its outputs. Lastly, the decoder receives the encoder outputs and decodes the translation into the target language, guided by a language-specific token prepended at the beginning of the output sequence. Training the model in such a multilingual environment helps the encoder learn language-agnostic latent representations that are shared across multiple languages, allowing the decoder to translate from any language. The vast availability of high-resource siblings (*e.g.,* Hindi for Indic languages) may significantly improve the performance of low-resource MT [\[12,](#page-10-6) [8\]](#page-9-3). Meanwhile, LAgSwAV [\[23\]](#page-11-3) improves unsupervised MT by mining pseudo-parallel data from monolingual data using cluster assignments.

Despite its outstanding success, multilinguality may conversely hinder further improvement in low-resource unsupervised translation due to the curse of multilinguality [\[1,](#page-9-1) [10\]](#page-10-4). A multilingual model with a fixed capacity may perform suboptimally on individual languages, as it is forced to simultaneously handle conflicting structural reorderings of distant languages [\[20\]](#page-10-7). The single-token language specifier is also not enough to ensure the language consistency of its predictions [\[1\]](#page-9-1). Our proposed method aims to gradually separate the languages and focus only on the target low-resource directions. One prominent feature of our method is that it prioritises separating English from the remaining low-resource languages first, as these languages are often much more distant from the common language, English, than from one another [\[32\]](#page-11-6).

There are several efforts with objectives similar to ours, albeit in different contexts and scenarios. Specifically, Sen et al. [\[27\]](#page-11-7) propose a shared-encoder, language-specific-decoder model trained with back-translation for unsupervised MT, which resembles stage 1 of our proposed four-stage refinement process, as explained in [§3.](#page-2-0) Our stages 2, 3 and 4 improve on the first stage, and thus on Sen et al. [\[27\]](#page-11-7), by refining the model in ways that are not possible in stage 1. Meanwhile, Sachan and Neubig [\[26\]](#page-11-8), Zhang et al. [\[34\]](#page-11-9) and Li and Gong [\[20\]](#page-10-7) propose methods that cause the model to implicitly "select" relevant language-specific parameters through gating mechanisms or gradient readjustments, in the context of supervised multilingual translation. However, as found in [§4,](#page-5-0) these methods struggle with unsupervised MT, as signals from back-translated synthetic training samples are often much noisier. On the implementation side, the design of our language-specific FFN layers ([§3.1\)](#page-2-1) is also largely inspired by sparsely-distributed mixtures of experts [\[29,](#page-11-10) [18,](#page-10-8) [35\]](#page-12-0).

Low-resource MT can also benefit greatly from the presence of supervised parallel data, or auxiliary data, in a related language (*e.g.,* English-Hindi for English-Nepali), which is not a fully unsupervised setup but a zero-shot scenario. Garcia et al. [\[11,](#page-10-9) [12\]](#page-10-6) propose multi-stage training procedures that combine monolingual data and auxiliary parallel data to improve the performance of low-resource translation. Meanwhile, Chen et al. [\[7,](#page-9-4) [8\]](#page-9-3) scale their experiments up to 100 languages to attain better performance. Nonetheless, while this line of work offers considerable insight into low-resource MT, it is not within the scope of this paper, as we focus on fully unsupervised machine translation where absolutely no parallel data is available for any language.

In this section, we describe our multilingual language disentanglement refinement procedure, which consists of four stages. Before training starts, we restrict the model, *e.g.,* CRISS [\[31\]](#page-11-2), to only translate English (En) to (and from) a small group $\mathcal{L} = (l_1, l_2, ..., l_N)$ of $N$ low-resource languages, chosen based on their genealogical, demographic, areal or linguistic similarities, such as Nepali (Ne), Sinhala (Si) and Gujarati (Gu) for the Indic family (a part of Indo-European). There is no guarantee, however, that such heuristics pick languages that are indeed significantly close. In the following, we first propose a language-specific feed-forward layer ([§3.1\)](#page-2-1), which enables a gradual language separation process (Figure [1\)](#page-3-0). Then, in [§3.2-](#page-4-0)[§3.5,](#page-5-1) we explain the four stages of our language separation procedure, which is also visualized in Figure [2.](#page-4-1)

While our method starts by finetuning a fully parameter-shared multilingual MT model, we wish to gradually and progressively increase the level of language separation by reducing the degree of parameter sharing in the model. Thus, for every layer $j$ of an $H$-layer Transformer decoder such that $j$ is divisible by a constant $\sigma$ (*e.g.,* 3) and $1 \leq j \leq H$, we propose to replace its vanilla feed-forward layer ($\text{FFN}_j$) with $N$ separate layers $\text{FFN}_{j,i}$, each corresponding to a language $l_i \in \mathcal{L}$.

During training, each of the separate $\text{FFN}_{j,i}$ layers is sharded and resides individually on a distinct GPU-$i$ accelerator, where its back-propagated gradients are not aggregated or averaged during data parallelism. Each GPU-$i$ device is only fed with data from its respective language $l_i$ and the common language English. Meanwhile, the remaining parameters (e.g., embedding and attention layers) are still shared, and their gradients are averaged across all GPU devices during back-propagation. Figure 1 visualizes a sharded FFN layer of our model and how its parameters are allocated on each GPU. During inference, $\text{FFN}_{j,i}$ is used to decode the translation to and from $l_i$.

<span id="page-3-0"></span>

Figure 1: Illustration of a decoder layer with language-specific sharded FFN layers, where the FFN parameters for each language pair are separate and reside on a specific GPU, and only data streams from that language pair (*e.g.*, En, Ne for GPU-1) are fed into the model replica on that GPU.
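
A single-process sketch of this routing may be useful; the class and argument names are assumed for illustration, and the actual GPU sharding is elided so that the language id simply selects one FFN copy from a dict.

```python
import numpy as np

class DecoderLayerWithLangFFN:
    """Decoder layer whose FFN is replicated once per language l_i in L.

    In the real model each FFN_{j,i} would live on GPU-i; here they all
    live in a dict, and the language id selects which copy is used.
    """
    def __init__(self, d_model, d_ff, languages, rng):
        # One FFN parameter set per language, all initialized identically
        # (mirroring θ_f^i = θ̂_f at initialization).
        W1 = rng.normal(size=(d_model, d_ff)) * 0.02
        W2 = rng.normal(size=(d_ff, d_model)) * 0.02
        self.ffn = {lang: (W1.copy(), W2.copy()) for lang in languages}

    def forward(self, x, lang):
        W1, W2 = self.ffn[lang]                    # language-specific FFN_{j,i}
        return x + np.maximum(x @ W1, 0.0) @ W2    # residual + ReLU FFN

rng = np.random.default_rng(0)
layer = DecoderLayerWithLangFFN(d_model=8, d_ff=16,
                                languages=["ne", "si", "gu"], rng=rng)
x = rng.normal(size=(5, 8))                # 5 decoder positions
out_ne = layer.forward(x, "ne")
out_si = layer.forward(x, "si")
print(np.allclose(out_ne, out_si))         # True: copies identical at init
```

Because each copy only ever receives gradients from its own language stream, the initially identical FFNs diverge over training, which is exactly the gradual separation the method aims for.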

Formally, let $\hat{\theta} = \{\hat{\theta}_m, \hat{\theta}_f\}$ be a fully parameter-shared multilingual model (e.g., CRISS) at initialization, where $\hat{\theta}_m$ is the set of parameters intended to be shared and $\hat{\theta}_f$ is the initial set of FFN parameters intended to be disentangled in our proposed model. We define our model as $\theta = \{\theta_m, \theta_f^1, \theta_f^2, ..., \theta_f^N\}$, where $\theta_m$ denotes the shared parameters while $\theta_f^i$ denotes the separate parameters of all $\text{FFN}_{j,i}$ layers ($1 \leq j \leq H$) for a language $l_i \in \mathcal{L}$. Before training begins, we initialize $\theta_m = \hat{\theta}_m$ and $\theta_f^i = \hat{\theta}_f$ for all $i \in \{1, ..., N\}$. Then, during each update step, the gradients of $\theta_f^i$ and $\theta_m$ are computed as follows:
|
| 40 |
+
|
| 41 |
+
$$\nabla_{\theta} = \{ \nabla_{\theta_m}, \nabla_{\theta_f^1}, \dots, \nabla_{\theta_f^N} \}$$
|
| 42 |
+
(1)
|
| 43 |
+
|
| 44 |
+
$$\nabla_{\theta_f^i} = \underset{\substack{x_i \sim \mathbb{X}_i \\ x_e \sim \mathbb{X}_e}}{\mathbb{E}} \nabla_{\theta_f^i} \left( \mathcal{J}_{\theta}(x_i | y_i^e) + \mathcal{J}_{\theta}(x_e | y_e^i) \right) \tag{2}$$
|
| 45 |
+
|
| 46 |
+
$$\nabla_{\theta_m} = \frac{1}{N} \sum_{l_i \in \mathcal{L}} \underset{\substack{x_i \sim \mathbb{X}_i \\ x_e \sim \mathbb{X}_e}}{\mathbb{E}} \nabla_{\theta_m} \left( \mathcal{J}_{\theta}(x_i | y_i^e) + \mathcal{J}_{\theta}(x_e | y_e^i) \right)$$
|
| 47 |
+
(3)
|
| 48 |
+
|
| 49 |
+
<span id="page-3-2"></span><span id="page-3-1"></span>where $\mathbb{X}_i$ and $\mathbb{X}_e$ are monolingual corpora (or data distributions) of language $l_i$ and English respectively, $\mathcal{J}_{\theta}(x|y) = -\log P_{\theta}(x|y)$ , and $y_i^e \sim P(\cdot|x_i, \{\theta_m, \theta_f^i\})$ is the back-translation into English from $x_i$ while $y_e^i \sim P(\cdot|x_e, \{\theta_m, \theta_f^i\})$ is the back-translation into $l_i$ from English by the model $\{\theta_m, \theta_f^i\}$ . Equations 2 and 3 show that gradients are not aggregated for the language-specific sharded FFN layers, while the remaining parameters receive gradients aggregated and averaged across all data streams. However, note that our model only separates the FFN layers of the decoder, while all encoder parameters are fully shared to ensure language-agnostic representations from the encoder [31, 23].
|
| 50 |
+
|
| 51 |
+
**Implementation aspect.** The rationale behind the design of our language-specific sharded FFNs is to take full advantage of a multi-GPU environment to maximize training efficiency in both speed and batch size. First, each sharded FFN is placed on a separate GPU and only the respective language-specific data streams are fed into that GPU, which enables us to achieve the same maximal batch size as the regular Transformer without running into out-of-memory errors. Second, FFN layers, rather than other types of layers (*e.g.*, attention), are chosen to be separated because they require minimal additional memory (only 50MB for an 8.8GB model) to achieve a noticeable performance gain (see §4.2). If we have access to only one GPU, we can allocate all FFNs to it, reduce the batch size, and increase gradient accumulation to achieve the same multi-GPU effect.
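The gradient routing of Eqs. (2)-(3) can be illustrated with a minimal NumPy sketch, simulating the per-GPU streams rather than using real distributed training; all gradient values are made-up toy numbers:

```python
import numpy as np

# Toy illustration of the gradient routing in Eqs. (2)-(3): shared-parameter
# gradients are averaged over all N language streams, while each
# language-specific FFN gradient stays local to its own stream (GPU-i).
# All gradient values below are made-up numbers.

N_LANGS = 3  # e.g., Ne, Si, Gu

# per-stream gradients, as computed on each GPU-i
grads_shared = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
grads_ffn = [np.array([0.1]), np.array([0.2]), np.array([0.3])]

# Eq. (3): average the shared-parameter gradients across all streams
g_shared = sum(grads_shared) / N_LANGS
print(g_shared)  # [3. 4.]

# Eq. (2): FFN gradients are NOT aggregated; FFN_i is updated only
# from its own language stream
g_ffn = grads_ffn
```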
|
| 52 |
+
|
| 53 |
+
<span id="page-4-1"></span>
|
| 54 |
+
|
| 55 |
+
Figure 2: Overview of the stages of our language disentanglement refinement procedure. Stage 1 finetunes our model with language-specific FFNs using multilingual back-translation. Then, stages 2 and 3 continue by separating the model into to-En and from-En models. Finally, stage 4 separates each low-resource target language from the resulting from-En model.
|
| 56 |
+
|
| 57 |
+
With the aforementioned language-specific FFN layers, we enter the first stage of refinement. The purpose of this stage is to begin differentiating the language-specific FFN layers so that each FFN can specialize in its designated language. That is, after initializing our model $\theta = \{\theta_m, \theta_f^1, ..., \theta_f^N\}$ with the pre-trained multilingual UMT model $\hat{\theta} = \{\hat{\theta}_m, \hat{\theta}_f\}$ as $\theta_m = \hat{\theta}_m$ and $\theta_f^i = \hat{\theta}_f \ \forall i \in [1, ..., N]$ , we finetune it with iterative back-translation [9], with some noteworthy modifications. Specifically, we uniformly sample monolingual sentences $x_e$ and $x_i$ from English and each low-resource language $l_i \in \mathcal{L}$ , respectively. Then, we use $\theta_i = \{\theta_m, \theta_f^i\}$ , a subset of $\theta$ , to back-translate $x_e$ and $x_i$ into $y_e^i$ and $y_i^e$ , respectively. After that, we train the model with $y_e^i \to x_e$ and $y_i^e \to x_i$ pairs. Note that for each back-translation direction, forward and backward, the appropriate FFN layers $\theta_f^i$ are used with respect to the data stream involving language $l_i$ and English on a separate GPU-i. Furthermore, similarly to Conneau and Lample [9], the back-translation model is continuously updated after each training step. The resulting model $\theta^{(1)} = \{\theta_m^{(1)}, \theta_f^{1,(1)}, \dots, \theta_f^{N,(1)}\}$ will be used to initialize the second stage of refinement.
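A minimal sketch of this stage-1 loop; `sample`, `back_translate`, and `train_step` are hypothetical stubs standing in for the actual corpus sampling, decoding, and gradient update, not the paper's implementation:

```python
import random

# Hypothetical sketch of the stage-1 loop: online iterative back-translation
# with one language-specific FFN per stream. `sample`, `back_translate`, and
# `train_step` are stand-in stubs, not the paper's actual implementation.

LANGS = ["ne", "si", "gu"]

def sample(corpus):
    return random.choice(corpus)          # draw one monolingual sentence

def back_translate(x, theta_m, theta_f_i, direction):
    return f"bt({x},{direction})"         # stub for greedy decoding

def train_step(model, src, tgt):
    pass                                  # stub for one gradient update

def stage1_step(theta_m, theta_f, mono, mono_en):
    for i, lang in enumerate(LANGS):      # stream i runs on GPU-i
        x_i, x_e = sample(mono[lang]), sample(mono_en)
        # back-translate with the continuously updated model {theta_m, theta_f^i}
        y_i_e = back_translate(x_i, theta_m, theta_f[i], "->en")
        y_e_i = back_translate(x_e, theta_m, theta_f[i], "->" + lang)
        # reconstruction training: only theta_f^i receives FFN gradients here;
        # theta_m gradients are averaged across streams afterwards
        train_step((theta_m, theta_f[i]), y_i_e, x_i)
        train_step((theta_m, theta_f[i]), y_e_i, x_e)
```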
|
| 58 |
+
|
| 59 |
+
Alleviating the curse of multilinguality may involve separation of unrelated languages. For many low-resource translation tasks, English is often significantly distant from the target low-resource languages, such as Nepali and Sinhala, which in fact share certain similarities with each other. Thus, English may not share similar structural patterns with any of the target languages [32]. Despite that, most existing UMT models are bidirectional [9, 30, 22]. This means that they have to handle translations both to English and to their target low-resource language $l_i$ , causing them to endure reordering discrepancies and perform sub-optimally in both directions.
|
| 60 |
+
|
| 61 |
+
The second stage aims to disentangle English. It builds two separate models $\theta_{\to \mathcal{L}}^{(2)}$ and $\theta_{\to En}^{(2)}$ from $\theta^{(1)}$ , which are specialised to translate to the target low-resource languages $l_i \in \mathcal{L}$ and to English, respectively. Specifically, the second stage starts by initializing both $\theta_{\to \mathcal{L}}^{(2)}$ and $\theta_{\to En}^{(2)}$ with $\theta^{(1)}$ . We then finetune them with iterative back-translation exclusively in the $En \to l_i$ and $l_i \to En$ directions, respectively. Different from the first stage, we use a fixed model $\theta^{(1)}$ to back-translate the monolingual data. In other words, to finetune $\theta_{\to \mathcal{L}}^{(2)}$ , we first initialize it with $\theta^{(1)}$ . Then we sample monolingual data $x_i$ uniformly from all $l_i \in \mathcal{L}$ and use the fixed model $\theta^{(1)}$ to back-translate it into English $y_i^e \sim P(\cdot|x_i,\{\theta_m^{(1)},\theta_f^{i,(1)}\})$ . The resulting pair $y_i^e \to x_i$ is used to train $\theta_{\to \mathcal{L}}^{(2)}$ . In the opposite direction, we finetune $\theta_{\to En}^{(2)}$ by first initializing it with $\theta^{(1)}$ , then sample English data $x_e$ and
|
| 62 |
+
|
| 63 |
+
randomly choose an $l_i \in \mathcal{L}$ to back-translate $x_e$ into $y_e^i \sim P(\cdot|x_e, \{\theta_m^{(1)}, \theta_f^{i,(1)}\})$ , and finally train $\theta_{\rightarrow En}^{(2)}$ with $y_e^i \rightarrow x_e$ pairs.
|
| 64 |
+
|
| 65 |
+
The third stage improves the performance further by continuing to finetune $\theta_{\to\mathcal{L}}^{(2)}$ and $\theta_{\to En}^{(2)}$ into $\theta_{\to\mathcal{L}}^{(3)}$ and $\theta_{\to En}^{(3)}$ , respectively. Specifically, we build $\theta_{\to\mathcal{L}}^{(3)}$ by initializing it with $\theta_{\to\mathcal{L}}^{(2)}$ , using $\theta_{\to En}^{(2)}$ as a fixed back-translation model, and training $\theta_{\to\mathcal{L}}^{(3)}$ in the same manner as in §3.3. Similarly, $\theta_{\to En}^{(3)}$ is trained in the same fashion in the opposite direction.
|
| 66 |
+
|
| 67 |
+
The last stage aims to achieve the maximal degree of language separation, not only from English but also between all languages in the target group. Specifically, we seek to train N separate models $\theta_{\rightarrow l_i}^{(4)}$ for each language $l_i \in \mathcal{L}$ . Each model $\theta_{\rightarrow l_i}^{(4)}$ is initialized with $\theta_{\rightarrow \mathcal{L}}^{(3)}$ with the corresponding language-specific FFN layers, i.e., $\{\theta_m^{(3)}, \theta_f^{i,(3)}\}_{\rightarrow \mathcal{L}}$ , and $\theta_{\rightarrow l_i}^{(4)}$ is trained by back-translation with a fixed backward model $\theta_{\rightarrow En}^{(3)}$ , exclusively in the $En \rightarrow l_i$ direction. Because only one direction is trained in this stage, we no longer use the above language-specific FFN layers for our end models. Instead, we revert to the vanilla Transformer, whose FFN layers are initialized with the appropriate language-specific layers from the previous stage. Note that we do not perform this stage of refinement for the to-English $(\rightarrow En)$ direction, because the encoder is fully shared across all languages, so there is no difference in separating $l_i \rightarrow En$ directions for the encoder, while the decoder only decodes the common English language.
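Assuming a single `finetune_bt` routine captures back-translation finetuning, the whole four-stage schedule can be sketched as follows (`fixed_bt` names the frozen back-translation model; all names are illustrative, not the paper's code):

```python
import copy

# Hypothetical sketch of the four-stage schedule. `finetune_bt` stands in
# for iterative back-translation finetuning; `fixed_bt` is the frozen model
# used to generate synthetic pairs (stages 2-4). All names are illustrative.

def finetune_bt(init, directions, fixed_bt=None):
    model = copy.deepcopy(init)
    model["history"] = init.get("history", []) + [directions]
    return model                                  # stub: a "finetuned" copy

LANGS = ["ne", "si", "gu"]
theta0 = {"name": "CRISS"}                        # pre-trained initialization

# Stage 1: joint back-translation; the model back-translates for itself
theta1 = finetune_bt(theta0, "both")

# Stage 2: split into from-En and to-En models; fixed BT model is theta1
theta2_toL = finetune_bt(theta1, "en->l", fixed_bt=theta1)
theta2_toEn = finetune_bt(theta1, "l->en", fixed_bt=theta1)

# Stage 3: continue; each side uses the other side's stage-2 model for BT
theta3_toL = finetune_bt(theta2_toL, "en->l", fixed_bt=theta2_toEn)
theta3_toEn = finetune_bt(theta2_toEn, "l->en", fixed_bt=theta2_toL)

# Stage 4: one vanilla-Transformer model per target language (En->l_i only)
theta4 = {l: finetune_bt(theta3_toL, "en->" + l, fixed_bt=theta3_toEn)
          for l in LANGS}
```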
|
| 68 |
+
|
| 69 |
+
# Method
|
| 70 |
+
|
| 71 |
+
In this section, we compare our refinement procedure against other related methods, as well as those that share a similar language disentanglement objective to ours. In particular, we reproduce the method proposed by Sen et al. [\[27\]](#page-11-7) for low-resource unsupervised MT. This method is equivalent to stage 1 of our refinement procedure, except that the entire decoder is linguistically disentangled with separate decoders for different languages, not just the FFN parameters. Meanwhile, Sachan and Neubig [\[26\]](#page-11-8) and Zhang et al. [\[34\]](#page-11-9) both share the same goal of alleviating the curse of multilinguality, but in the context of supervised multilingual translation, where the model teaches itself to implicitly "select" relevant language-specific layers by gating mechanisms or gradient readjustments. We adapt these methods to unsupervised MT by substituting supervised training data with the models' multilingual back-translated samples. Lastly, Zuo et al. [\[35\]](#page-12-0) recently proposed a mixture-of-experts (MoE) variant [\[18,](#page-10-8) [29\]](#page-11-10), where experts (FFNs) are selected randomly to perform the forward pass. In the context of language disentanglement, these separate experts act as implicit language-specific components. We also adapt this method to low-resource unsupervised tasks with multilingual back-translation and set the number of experts to the number of languages. For all methods, we use CRISS as the initial model.
|
| 72 |
+
|
| 73 |
+
The comparison results are shown in Table [4.](#page-8-1) As shown, the method proposed by Sen et al. [\[27\]](#page-11-7) performs slightly above stage 1 of our method, while it underperforms our complete procedure by up to 2.7 BLEU on the Gu-En task. This is in line with the fact that it is almost equivalent to our first stage. The approaches suggested by Sachan and Neubig [\[26\]](#page-11-8) and Zhang et al. [\[34\]](#page-11-9), on the other hand, yield considerably lower BLEU scores on various low-resource unsupervised tasks. A possible reason for this underperformance is that these methods are designed for supervised multilingual translation, where accurate parallel data plays a crucial role. When adapted to the unsupervised regime, the synthetic training samples created by back-translation may be too noisy and inaccurate. Meanwhile, the MoE candidate [\[35\]](#page-12-0) also produces lower BLEU scores than our method, which may be due to its lack of explicit language-specific components to enforce language disentanglement.
|
| 74 |
+
|
| 75 |
+
<span id="page-8-1"></span>Table 4: Comparison with related methods. \* indicates the cited method is exclusively applied to supervised MT, and it is adapted to unsupervised MT using multilingual iterative back-translation products as training samples.
|
| 76 |
+
|
| 77 |
+
| Method | <b>En-Ne</b> | <b>Ne-En</b> | <b>En-Si</b> | <b>Si-En</b> | <b>En-Gu</b> | <b>Gu-En</b> |
|
| 78 |
+
|-----------------------------|------|------|------|------|------|------|
|
| 79 |
+
| CRISS | 5.5 | 14.5 | 6.0 | 14.5 | 14.2 | 23.7 |
| <b>CRISS</b> finetuned with | | | | | | |
|
| 80 |
+
| Sen et al. [27] | 7.7 | 17.0 | 7.0 | 14.5 | 15.7 | 26.8 |
|
| 81 |
+
| Sachan and Neubig [26]* | 7.1 | 16.9 | 7.2 | 14.0 | 15.1 | 26.1 |
|
| 82 |
+
| Zhang et al. [34]* | 6.8 | 16.1 | 6.4 | 14.0 | 14.9 | 25.2 |
|
| 83 |
+
| Zuo et al. [35]* | 8.2 | 17.4 | 7.8 | 14.6 | 16.4 | 25.6 |
|
| 84 |
+
| Ours | 9.0 | 18.2 | 9.5 | 15.3 | 17.5 | 29.5 |
|
| 85 |
+
|
| 86 |
+
<span id="page-8-2"></span>Table 5: Comparison with popular UMT techniques.
|
| 87 |
+
|
| 88 |
+
| Method | <b>En-Ne</b> | <b>Ne-En</b> | <b>En-Si</b> | <b>Si-En</b> | <b>En-Gu</b> | <b>Gu-En</b> |
|
| 89 |
+
|----------------------|------|------|------|------|------|------|
|
| 90 |
+
| CRISS | 5.5 | 14.5 | 6.0 | 14.5 | 14.2 | 23.7 |
| CRISS finetuned with | | | | | | |
|
| 91 |
+
| CBD | 7.6 | 16.9 | 7.3 | 14.6 | 15.8 | 26.0 |
|
| 92 |
+
| Mined data | 4.6 | 10.5 | 4.8 | 9.8 | 11.2 | 19.2 |
|
| 93 |
+
| Mined data + BT | 6.6 | 16.1 | 6.7 | 13.1 | 14.7 | 24.5 |
|
| 94 |
+
| Ours | 9.0 | 18.2 | 9.5 | 15.3 | 17.5 | 29.5 |
|
| 95 |
+
|
| 96 |
+
Table 6: Back-translated pseudo-parallel dataset sizes.
|
| 97 |
+
|
| 98 |
+
| Mined data size | En-Ne | En-Si |
|
| 99 |
+
|-----------------|-------------|-------------|
|
| 100 |
+
| Unfiltered | ~109.8M | ~108.8M |
|
| 101 |
+
| Filtered | ~3000 | ~6000 |
|
| 102 |
+
|
| 103 |
+
Apart from iterative back-translation, other techniques have been shown to improve unsupervised machine translation. One of them is cross-model back-translated distillation (CBD) [22], where two distinct UMT teachers are used to distill the final model. Another is pseudo-parallel data mining [31, 23], where language-agnostic representations are built for sentences across different languages, which are then used to map unlabeled sentences from one language to another to create synthetic parallel data [2]. CRISS itself is also a pseudo-parallel data mining method. In this segment, we demonstrate that the aforementioned techniques do not adapt well to low-resource unsupervised tasks, which shifts our attention to other areas, like the curse of multilinguality. Specifically, we apply CBD to CRISS by finetuning two distinct models from pre-trained CRISS and using them to distill a final model. For pseudo-parallel mining, we use LASER [2] to mine pseudo-parallel data in only the low-resource directions of interest, using the encoder outputs from CRISS itself.
|
| 104 |
+
|
| 105 |
+
The results are reported in Table 5. As shown, CBD performs only comparably to stage 1 of our method. This indicates that CBD improves only due to back-translation, and not its own distillation effect. Admittedly, CBD is at a disadvantage here: the two teachers are not considerably distinct because both are finetuned from the same pre-trained CRISS model, a setup the original paper advises against. This is unavoidable, however, as there is no well-performing pre-trained model other than CRISS.
|
| 106 |
+
|
| 107 |
+
Meanwhile, the mined pseudo-parallel data contributes little to the performance improvement because the amount and quality of mined data is too low for low-resource pairs: fewer than 5000 pairs for En-Ne and En-Si. This forces the model to be trained on such small mined datasets with a progressively decreasing loss weight to prevent overfitting. While many may attribute this to the model's failure to mine more high-quality pseudo-parallel data, we empirically show that, more likely, there is in fact not enough real parallel data in low-resource corpora to be mined. Specifically, we use our best-performing model to back-translate the entire English monolingual corpus into Nepali. For each resulting back-translated Ne sentence, we search for its closest real Ne sentence by token-based Levenshtein distance [19] in the Ne monolingual corpus, and then filter out pairs whose distance is more than 20% of the sentence length. We repeat the process for En-Si as well. As shown in Table 6, out of >100M possible unfiltered pairs, only <6000 samples satisfy the filtering criterion. We provide an in-depth analysis of this issue in the Appendix.
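The filtering criterion can be sketched as follows; the nearest-neighbour search over the corpus is omitted, and we assume the 20% bound is taken against the longer of the two token sequences:

```python
# Sketch of the filtering check described above: token-based Levenshtein
# distance between a back-translated sentence and a candidate real sentence,
# keeping a pair only if the distance is at most 20% of the sentence length.
# Assumption: the bound uses the longer sequence's length; the paper's exact
# convention is not specified here.

def token_levenshtein(a, b):
    """Edit distance between two token sequences via dynamic programming."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost)
        prev = cur
    return prev[n]

def keep_pair(bt_sent, real_sent, max_ratio=0.2):
    a, b = bt_sent.split(), real_sent.split()
    return token_levenshtein(a, b) <= max_ratio * max(len(a), len(b))
```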
|
2208.01838/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2208.01838/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,73 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Recently, there has been a surge of interest in localizing activation maps with weak supervision, for example with only image-level annotations, across different downstream applications. Weakly supervised learning has received a lot of attention since it minimizes the demand for fine-grained annotations and thus reduces human labeling effort. This work attempts to localize objects with only image-category supervision, also termed weakly supervised object localization (WSOL).
|
| 4 |
+
|
| 5 |
+
The dramatic progress of deep neural networks (DNNs) has also brought impressive gains to the task of WSOL. The cornerstone work is Class Activation Mapping (CAM), which enables classification networks to highlight the part of interest by computing a weighted sum of the last convolutional feature maps. However, CAM focuses only on the most discriminative features instead of the entire object area due to its locality characteristic. To alleviate this issue, many efforts have been made, such as adversarial erasing, multi-task joint training, novel networks, and divergent activation. Most of these CAM-based approaches first compute local features and then acquire adjacent object pieces to cover the whole target. However, the underlying limitations of CAM still exist, even though these approaches do encourage a better activation region. Another popular alternative is to leverage the long-range dependencies of vision transformers to find the object location. However, such approaches often suffer from irrelevant background, because canonical transformer-based methods generate patch tokens by splitting an image into a series of ordered patches and compute global relations in each layer. Therefore, Gao et al. propose TS-CAM, which integrates both the transformer and CAM to mitigate those drawbacks. As illustrated in Figure , CAM accentuates the most discriminative local region while the transformer distracts attention from the target to the background area. The results in the penultimate column show that the current ensemble approach cannot resolve these issues despite its promising performance.
|
| 6 |
+
|
| 7 |
+
In this paper, we propose a re-attention strategy based on a token refinement transformer (TRT) to grasp objects of interest more precisely. Specifically, TRT introduces a novel module named the token priority scoring module (TPSM) to suppress the effects of background noise in the transformer. We then incorporate the class activation map as semantically aware guidance to restrain the attention map to the target object. TPSM re-scores the important regions of strong response in the transformer attention map to minimize the impact of cluttered background as much as possible. In addition, we also reveal that adaptive thresholding based on sampling over the cumulative distribution function is superior to regular thresholding or top-K strategies. The contributions are summarized as follows:
|
| 8 |
+
|
| 9 |
+
- We propose a re-attention mechanism, termed token refinement transformer (TRT), which highlights the precise object of interest.
|
| 10 |
+
- We propose an adaptive thresholding strategy based on sampling over cumulative importance, which improves performance significantly on the WSOL task.
|
| 11 |
+
- Experimental results show convincing qualitative and quantitative performance compared to existing approaches on ILSVRC and CUB-200-2011.
|
| 12 |
+
|
| 13 |
+
# Method
|
| 14 |
+
|
| 15 |
+
In this section, we first revisit the preliminaries of vision transformers. Then we discuss the details of the proposed token refinement transformer (TRT) for WSOL.
|
| 16 |
+
|
| 17 |
+
Let us consider $I \in \mathbb{R}^{W\times H \times 3}$ as an input image. We split the image $I$ based on the patch size $P$, resulting in $N$ ( $N = [\frac{W}{P}] \times [\frac{H}{P}]$) non-overlapping flattened patch blocks $\mathbf{x}_p \in \mathbb{R}^{N\times (3\times P^2)}$. Each patch block $\mathbf{x}^n_p$ ($n\in \{1,...,N\}$) is linearly projected into a $D$-dimensional patch embedding before being fed into the transformer blocks. As part of the embeddings, an extra learnable class token $\mathbf{z}_0^{cls}\in\mathbb{R}^{1\times D}$ is introduced. In addition, we incorporate a position embedding $\mathbf{E}_{pos} \in \mathbb{R}^{(N+1)\times D}$ to form the whole patch embedding for the transformer as follows:
|
| 18 |
+
|
| 19 |
+
\mathbf{Z}_0 = [\mathbf{z}_0^{cls};E(\mathbf{x}_p^1);E(\mathbf{x}_p^2);...;E(\mathbf{x}_p^N)] + \mathbf{E}_{pos}
|
| 20 |
+
|
| 21 |
+
where $E(\cdot)$ indicates the patch embedding projection and $\mathbf{Z}_0\in\mathbb{R}^{(N+1)\times D}$ is the input to the first transformer block.
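A minimal NumPy sketch of this embedding step, with random weights standing in for the learned projection $E(\cdot)$, class token, and position embeddings:

```python
import numpy as np

# Minimal NumPy sketch of the patch-embedding step: split the image into
# non-overlapping P x P patches, project each to D dimensions, prepend a
# class token, and add position embeddings. Random weights stand in for
# the learned E(.), z_0^cls, and E_pos.

rng = np.random.default_rng(0)
H = W = 8
P = 4
D = 16
N = (W // P) * (H // P)                      # number of patches (here 4)

img = rng.normal(size=(H, W, 3))
# flatten non-overlapping patches into shape (N, 3 * P^2)
patches = img.reshape(H // P, P, W // P, P, 3).transpose(0, 2, 1, 3, 4)
patches = patches.reshape(N, 3 * P * P)

E = rng.normal(size=(3 * P * P, D))          # patch projection E(.)
z_cls = rng.normal(size=(1, D))              # learnable class token z_0^cls
E_pos = rng.normal(size=(N + 1, D))          # position embedding E_pos

Z0 = np.concatenate([z_cls, patches @ E], axis=0) + E_pos
print(Z0.shape)  # (5, 16)
```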
|
| 22 |
+
|
| 23 |
+
We define $\mathbf{Z}_{l}\in\mathbb{R}^{(N+1)\times D}$ ($l \in \{1, ..., L\}$) as the output feature embedding of the $l$-th transformer block. As illustrated in Figure , the output of the penultimate transformer block $\mathbf{Z}_{L-1}$ is fed into two branches: one is the Token Priority Scoring Module (TPSM), which re-attends to patch tokens with the proposed adaptive thresholding strategy, while the other branch computes a standard class activation map.
|
| 24 |
+
|
| 25 |
+
In the training stage, we evaluate the inconsistency between the output and the ground truth in both branches with the commonly used cross-entropy loss. Let $\mathbf{z}^{c}_{L-1} \in \mathbb{R}^{1\times D}$ and $\mathbf{z}^{p}_{L-1} \in \mathbb{R}^{N \times D}$ be the output class token and patch tokens at the penultimate layer, respectively. For the CAM branch, we reshape $\mathbf{z}^{p}_{L-1}$ to $\mathbf{z'}^{p}_{L-1} \in \mathbb{R}^{\sqrt{N}\times\sqrt{N} \times D}$, which serves as the input for the subsequent convolution layer. The output feature is then globally average-pooled and followed by a softmax layer to obtain the classification prediction $\mathbf{p}^c$.
|
| 26 |
+
|
| 27 |
+
For the TPSM branch, $\mathbf{Z}_{L-1}(=[\mathbf{z}^{c}_{L-1};\mathbf{z}^{p}_{L-1}])$ is subsequently processed with the re-attention module to obtain the classification probability $\mathbf{p}^t$. The loss function is thus defined as:
|
| 28 |
+
|
| 29 |
+
L_{ce} = - \sum_{k=1}^{K} y_k \left( \log{p^c_k} + \log{p^t_k} \right)
|
| 30 |
+
|
| 31 |
+
where $L_{ce}$ denotes the cross-entropy loss. $K$ is the number of categories. $y_k$ is the binary indicator (0 or 1) of whether class label $k$ is the correct classification for the observation. $p^c_k$ and $p^t_k$ denote the output probabilities for class $k$ in the CAM and TPSM branches, respectively.
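A toy NumPy illustration of this two-branch loss on a single observation; the probability vectors are made-up values:

```python
import numpy as np

# Toy illustration of the two-branch objective: the cross-entropy of each
# branch's prediction against the same one-hot label, summed.
# Probability vectors below are made-up values.

def cross_entropy(p, y):
    return -np.sum(y * np.log(p))

K = 4
y = np.eye(K)[2]                            # ground truth: class k = 2
p_cam = np.array([0.1, 0.1, 0.7, 0.1])      # CAM-branch output p^c
p_tpsm = np.array([0.05, 0.05, 0.8, 0.1])   # TPSM-branch output p^t

loss = cross_entropy(p_cam, y) + cross_entropy(p_tpsm, y)
```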
|
| 32 |
+
|
| 33 |
+
During testing, we first forward $\mathbf{z}^{p}_{L-1}$ into TPSM to obtain the context-aware feature map $\mathbf{M}_T \in \mathbb{R}^{\sqrt{N} \times \sqrt{N}}$ by performing a re-attention operation on patch tokens. More details are discussed in Section . In the other branch, we apply standard CAM to generate class-specific activation maps $\mathbf{M}_C \in \mathbb{R}^{K \times \sqrt{N} \times \sqrt{N}}$. Consequently, the attention maps can be obtained by:
|
| 34 |
+
|
| 35 |
+
\mathbf{M} = \mathbf{M}_T \odot\mathbf{M}_C
|
| 36 |
+
|
| 37 |
+
where $\odot$ denotes an element-wise multiplication operation. The proposed context-aware feature map $\mathbf{M}_T \in \mathbb{R}^{\sqrt{N} \times \sqrt{N}}$ is class-agnostic, so we incorporate the class-aware activation map $\mathbf{M}_C \in \mathbb{R}^{K \times \sqrt{N} \times \sqrt{N}}$ as semantic guidance to restrain the attention map to the target class. We further resize the maps $\mathbf{M}$ to the size of the original images by bilinear interpolation. Specifically, we separate the foreground from the background using a defined threshold as described in . Then we look for the tight bounding box that encloses the largest connected region of foreground pixels. Finally, with a grid-search approach, the thresholds for obtaining bounding boxes are tuned to their optimal values.
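The localization step can be sketched as follows; the map sizes, values, and threshold are illustrative, and a simple 4-connected BFS stands in for whatever connected-component routine is actually used:

```python
import numpy as np
from collections import deque

# Sketch of the localization step: fuse the class-agnostic map M_T with a
# class-specific map M_C element-wise, threshold into a foreground mask, and
# take the bounding box of the largest connected foreground region.

def largest_region_bbox(mask):
    best = None
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or seen[sy, sx]:
                continue
            q, comp = deque([(sy, sx)]), []
            seen[sy, sx] = True
            while q:                         # BFS over 4-connected pixels
                y, x = q.popleft()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        q.append((ny, nx))
            if best is None or len(comp) > best[0]:
                ys, xs = zip(*comp)
                best = (len(comp), (min(xs), min(ys), max(xs), max(ys)))
    return best[1] if best else None

M_T = np.zeros((6, 6)); M_T[1:4, 1:4] = 1.0   # context-aware map (class-agnostic)
M_C = np.zeros((6, 6)); M_C[0:5, 0:5] = 1.0   # CAM for the predicted class
M = M_T * M_C                                  # element-wise fusion
bbox = largest_region_bbox(M > 0.5)            # (x0, y0, x1, y1)
print(bbox)  # (1, 1, 3, 3)
```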
|
| 38 |
+
|
| 39 |
+
Our token priority scoring module (TPSM) consists of three components, as indicated in Figure . First, we generate a preliminary attention map by exploiting the long-range dependencies between the class token and patch tokens over the transformer blocks. Then, an adaptive thresholding strategy is introduced to screen out patch tokens with high response in the preliminary attention map. Finally, we perform a re-attention operation on the selected tokens to capture more effective global relationships. We detail these components in the following three subsections.
|
| 40 |
+
|
| 41 |
+
Multi-head self-attention (MHSA) is widely used in transformers to model long-range dependencies. We first compute the preliminary self-attention for the $l$-th transformer block as:
|
| 42 |
+
|
| 43 |
+
\mathbf{A}_l = softmax(\frac{\mathbf{Q}_l \mathbf{K}_l^{\mathbf{T}}}{\sqrt{D}})
|
| 44 |
+
|
| 45 |
+
where $\mathbf{Q}_l$ and $\mathbf{K}_l$ are the query and key representations projected from the previous output $\mathbf{Z}_{l-1}$, $D$ indicates the dimension of the patch embeddings, and $\mathbf{T}$ denotes the transpose operator.
|
| 46 |
+
|
| 47 |
+
We then investigate the characteristics of $\mathbf{A}_l \in\mathbb{R}^{(1+N)\times (1+N)}$. We find that the attention vector in the first row, which records the dependency of the class token on the patch tokens, is driven to highlight object regions when Eq. is optimized. We aggregate these attention vectors over the transformer blocks as $\mathbf{m} = \sum_{l=1}^{L-1}\mathbf{A}_l[0,1:]$. $\mathbf{m} \in \mathbb{R}^{1\times N}$ is then reshaped back into $\mathbf{M}_m \in \mathbb{R}^{\sqrt{N} \times \sqrt{N}}$ as a preliminary attention map, as illustrated in Figure .
|
| 48 |
+
|
| 49 |
+
The token preliminary attention captures the cumulative dependency between patch tokens and the class token based on the attention maps of multi-head self-attention. As shown in Figure , we need to suppress the irrelevant background response to highlight the object regions. Intuitively, we could select the top $k$ largest responses or the responses that exceed a fixed threshold $\tau$. However, we experimentally find that these two basic strategies do not work well in practice. Thus, we propose an adaptive thresholding strategy based on sampling over cumulative importance, which boosts performance significantly.
|
| 50 |
+
|
| 51 |
+
We first calculate the cumulative distribution function $F$ of $\mathbf{m}$ and define the strictly monotone transformation $\mathbb{T}: U \sim [0,1] \mapsto \mathbb{R}$ as its inverse, thus
|
| 52 |
+
|
| 53 |
+
F(x) = \mathbf{P}_r(\mathbf{m} < x) = \mathbf{P}_r( \mathbb{T}(U) <x) = \mathbf{P}_r( U < \mathbb{T}^{-1}(x)) = \mathbb{T}^{-1}(x)
|
| 54 |
+
|
| 55 |
+
$\mathbf{P}_r$ is the probability function. $F$ is the inverse function of $\mathbb{T}$, i.e., $ \mathbb{T}(u) = F^{-1}(u), u\sim [0,1]$. Specifically, we first sort the values in $\mathbf{m}$ from high to low and calculate the cumulative attention. Then the adaptive threshold $\tau'$ is obtained by inverse transform sampling, given the contribution $u$ over the cumulative attention. More theoretical details about adaptive threshold generation can be found under "Inverse Transform Sampling." $u$ controls the proportion of selected token attention in the token preliminary attention. During the training and inference stages, $\tau'$ varies adaptively for different images and achieves good results for objects of different scales. Therefore, we are capable of generating the adaptive threshold $\tau'$ from $F^{-1}(u)$. We denote by $\mathbf{b} = [\mathbf{m} > \tau'] $ the binary mask for the selected patch tokens.
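A minimal NumPy sketch of this adaptive thresholding; we use a non-strict inequality at the boundary, an implementation detail not specified above, and the attention values are toy numbers:

```python
import numpy as np

# Sketch of the adaptive thresholding: sort the preliminary attention m in
# descending order, accumulate, and keep the smallest set of tokens whose
# attention mass reaches a fraction u of the total (inverse-CDF sampling).
# Note: ">=" at the boundary is an assumption; the text writes a strict ">".

def adaptive_mask(m, u):
    order = np.argsort(m)[::-1]
    csum = np.cumsum(m[order]) / m.sum()      # cumulative attention mass
    k = np.searchsorted(csum, u) + 1          # tokens needed to reach u
    tau = m[order[k - 1]]                     # adaptive threshold tau'
    return (m >= tau).astype(int), tau

m = np.array([0.4, 0.3, 0.1, 0.1, 0.05, 0.05])
b, tau = adaptive_mask(m, u=0.7)
print(b)  # [1 1 0 0 0 0]
```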
|
| 56 |
+
|
| 57 |
+
Let $\mathbf{I}_N$ be the identity matrix of size $N$. To draw more attention to class-specific objectness instead of background, we generate the selection matrix $\mathbf{B} \in \mathbb{R}^{N \times N}$ for token re-attention as:
|
| 58 |
+
|
| 59 |
+
\mathbf{B} = \mathbf{J} \otimes \mathbf{b} + \mathbf{J} \otimes (\mathbf{J}^\mathbf{T} - \mathbf{b} ) \odot \mathbf{I}_N
|
| 60 |
+
|
| 61 |
+
$\mathbf{J} \in \mathbb{R}^{N \times 1}$ is an all-ones matrix and $\otimes$ denotes the tensor product. $\mathbf{B}$ is a binary matrix where each entry $\mathbf{B}_{i,j}$ indicates whether the $j$-th token contributes to the update of the $i$-th token. We replace the self-attention modules in the transformer block with masked self-attention modules as follows:
|
| 62 |
+
|
| 63 |
+
\mathbf{S} = \frac{\mathbf{Q}_{L-1} \mathbf{K}_{L-1}^{\mathbf{T}}}{\sqrt{D}}
|
| 64 |
+
|
| 65 |
+
\mathbf{A}^{r}_{ij} = \frac{\exp(\mathbf{S}_{ij}) * \mathbf{B}_{ij}}{\sum_{k=1}^N \exp(\mathbf{S}_{ik}) * \mathbf{B}_{ik}}
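A NumPy sketch of this masked re-attention, constructing $\mathbf{B}$ from a toy binary mask $\mathbf{b}$ and applying the masked softmax; the uniform scores $\mathbf{S}$ are purely for illustration:

```python
import numpy as np

# Sketch of the masked re-attention: build the selection matrix B from a
# binary token mask b (per the formula for B: row i keeps the selected
# tokens, and a pruned token additionally keeps itself), then apply the
# masked softmax A^r. Scores S are uniform here purely for illustration.

def masked_attention(S, b):
    n = len(b)
    J = np.ones((n, 1))
    B = J @ b[None, :] + (J @ (1 - b)[None, :]) * np.eye(n)
    A = np.exp(S) * B                    # zero out masked-off entries
    return A / A.sum(axis=1, keepdims=True)

b = np.array([1.0, 1.0, 0.0])   # tokens 0 and 1 selected, token 2 pruned
S = np.zeros((3, 3))            # uniform pre-softmax scores
A = masked_attention(S, b)
print(A[0])  # [0.5 0.5 0. ]
```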
|
| 66 |
+
|
| 67 |
+
Patch tokens $\mathbf{z}^{p}_{L-1}$ are fed into the masked transformer block, followed by a fully connected layer and a masked softmax layer, resulting in importance weights $\mathbf{\lambda}$. In the training stage, the fusion embedding is generated by computing a weighted sum of the importance weights $\mathbf{\lambda}$ with the patch tokens $\mathbf{z}^{p}_{L-1}$. We further concatenate the class embedding and the fusion embedding and feed them into the final transformer block to compute the classification loss. In the inference stage, we retrieve the weights from the original relation $\mathbf{m}$ for the pruned tokens. Hence, the re-attention vector is defined as:
|
| 68 |
+
|
| 69 |
+
r = \frac{\sum_{k=1}^N{\mathbf{m}_k * \mathbf{b}_k}}{\sum_{k=1}^N{\mathbf{\lambda}_{k}}}
|
| 70 |
+
|
| 71 |
+
\mathbf{m'} = \mathbf{m} \odot (\mathbf{J}^{\mathbf{T}} - \mathbf{b}) + \mathbf{\lambda} * r
|
| 72 |
+
|
| 73 |
+
We further reshape $\mathbf{m'} \in \mathbb{R}^{N}$ to yield context-aware feature map $\mathbf{M}_{\mathbf{T}} \in \mathbb{R}^{\sqrt{N} \times \sqrt{N}}$.
|
2208.09170/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-08-05T08:09:51.594Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36" etag="P-dioAaipk8JJQG9l_Xe" version="20.2.2" type="github"><diagram id="8ZOcP4FsGA_ngyqKJd6L" name="Page-2">7Vpdd6o4FP01PraLEAjwWGt7uzrturOm0zvtvEWIQIuECaHq/fWTSBCRaPWWaq/LFyU7HyRnn304CfTg5Xj6jeEsuqcBSXqmEUx7cNAzTc+F4lcCsxKwPQWELA5KyKiBh/gnKUFQoUUckFxhJcQpTXicNUGfpinxeQPDjNFJs9mIJkEDyHBIGtOQwIOPE9Jq9k8c8KhEXdOp8RsSh1F1Z4C8smaMq8Zq4DzCAZ0sQfCqBy8Zpby8Gk8vSSJt17TL9ZraxcQYSfk2Hfjz5RN9vf3r0XkLfVoMB3/8eDlzy1HecFKoBavJ8lllAc5inIay1J9EMScPGfZl1UTwLbCIjxNRAuKSUY55TFNRPAOuIZBRnCSXNKFsPhQMDJ8QsfB+zhl9JUs1NrKBI2+h5kMYJ9O1CwUL8wm3I3RMOJuJJlUHw1LLqnzONc7tEpnUHAIDlVi0xJ+neMXKbcLF6LVlxYUyrt7QBPWj/G/Sj4r752TyyGbUHJ2B9w0tHCSTl34x3MLYQ1qkAQnuhgsA+68hk+j3gidxShQeYPb6XQwTc2kO49ywm6A5R4GGLZu4gaVjyzWHECHZI4mzG3UfuriHbXRDpOuBBo8IwhaLpoZEYBifxKJ5YnFnFj3D/GIswhOLu7MI4Bdj0TqxuDuLpvXFWLRPLO7OIkRfjEV0YnF3Fi3ncCxqNwNOizQSiM2QKlLGIxrSFCdXNdpnJVHSytJKizZ3lGbKdC+E85na2eGC0ybTI5pyVQlQmyZkOch1dDRBYEG7JrDancEFO3Lym7kRa6UF86vF6jxb2YRjFhK+wXZQzzUjidgSvTXn0bn8nJP8dpef3dwkHl5+6CS/lme7H5SV6vonjcWd158QIHNl31/qXXWreb1gDM+WmmWyQb7pRmDlUa3Ow6637qBmVjtWOYfazRZm+fX4oTsFQgmXum84JPqvkKdVc6c5y+decyEaAJRN57RX9eIqlP8DkgmnqIZiHxorY3SIh3Eio4AaUax1WFW3tCMiA2+6Ok7iUJ5N+cILifDivowfsY+TC1UxjoOgVBURE8LD+VBSV4pmMa7d79kDOZYQUq5k01JHSmV8bEipglbU1kEcc92VlB6ZrTi2iFnLgczuII7dXLssfIpGP6fx20txm9N/zdvqkORwcczuMCppI7XZTgq0lljDXOdJwaZJ/rKo7TVCvKNCMmM8PQ4V2t2oEBhoJZsHVkuGrtNWIfwsFdqHUKGwFps9qf7zwvM8AbSr4mC6XDmYqdJe1Qu3VO++UvpNkzzuFzPI3eNrGa2ZdYeIXQTJx9QXlsIiYM12zH/WjRgQnwYiYJ6ibiPqNt8QOUCX/IC2Q1WO2HnY/RJnKF2Fz/VJzRbhE+0pfG6apF7XpWrufzyIFsHyFmWjqJosvRNx39lWj7yRI4fSbKsR9GDQkTyA6zXkYXueRh5mWx7os7IS9+jlsW12cVB56LILnTzqJ1cJFu3H2v40gwMH+zrNDC1kG1ZXmln56MAxoPbjEWuPqvGOXjXW76CajcmiUg1N6dE/VTyzqZAqOz/YUwWAll2PTSBoS4EAQ0/efhSiew+8opAKkNb70IHwdZGTJYmV4zXvsVF5Yjlxlm+xe8Z5Vn5ZOoqnc3mt7mQOs5mGsLmZdgz33Gw/qezFt4/Nw98K7V6MxvtOILkLjj1OWmCVIH0m0U2kFMX6W+LyxVD9QTa8+h8=</
diagram></mxfile>
|
2208.09170/main_diagram/main_diagram.pdf
ADDED
|
Binary file (11.9 kB). View file
2208.09170/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,92 @@
# Introduction

Depth estimation is a fundamental task in 3D computer vision, with versatile applications ranging from virtual/augmented reality [@luo2020consistent] to autonomous driving [@AndreasGeiger2012AreWR]. Although 3D sensors (e.g., LiDAR, structured light) can generate accurate depth information, it is more attractive to infer depth from a single RGB image in a self-supervised way [@RaviGarg2016UnsupervisedCF; @ClmentGodard2016UnsupervisedMD; @ClmentGodard2018DiggingIS], which eliminates the need for expensive 3D hardware and multi-sensor calibration. However, the accuracy of these monocular methods is not yet on par with 3D sensors due to their inherent ambiguity in geometric modeling.

To improve monocular depth accuracy, recent multi-frame methods [^1] [@JamieWatson2021TheTO; @FelixWimbauer2020MonoRecSD; @RuiWang2019RecurrentNN; @HaokuiZhang2019ExploitingTC; @ZiyueFeng2022DisentanglingOM; @patil2020dont; @TaiWang2022Monocular3O] leverage temporal and spatial associations in multi-frame video sequences, which are available in real-world scenarios (e.g., smart devices [@HyowonHa2016HighQualityDF] or moving vehicles [@MoritzMenze2015ObjectSF]). Among these approaches, cost-volume-based methods [@ZiyueFeng2022DisentanglingOM; @JamieWatson2021TheTO; @FelixWimbauer2020MonoRecSD] achieve state-of-the-art depth accuracy as they take advantage of successful Multi-View Stereo (MVS) techniques. However, MVS is still challenged by unsatisfactory reconstructions in real-world scenes with non-Lambertian surfaces, textureless areas, and moving objects [@DBLP:journals/tog/KnapitschPZK17; @DBLP:conf/cvpr/SchopsSGSSPG17]. To tackle these problems, teacher-student training architectures [@JamieWatson2021TheTO; @ZiyueFeng2022DisentanglingOM; @DBLP:journals/corr/abs-2205-15034] have been proposed to enforce consistency between monocular depth and MVS depth. However, this consistency pushes MVS depth to mimic monocular depth, which underuses the multi-view geometry; the performance of these methods is thus limited.

To improve multi-view depth accuracy, learning-based MVS methods [@DBLP:conf/eccv/YaoLLFQ18; @DBLP:conf/cvpr/0008LLSFQ19] densely sample depth candidates in a large range. However, this dense sampling strategy causes matching ambiguity in real-world video frames without known camera pose and depth supervision (see Fig. [3](#fig:kitti_vis){reference-type="ref" reference="fig:kitti_vis"}). To mitigate the problem, we explore an efficient approach to enhance geometric cues for improving self-supervised multi-frame depth learning. Our intuition is that the monocular depth serves as a geometric prior for the scene, and the multi-frame matching ambiguity can be significantly reduced by sampling depth candidates near this monocular prior. Apart from matching ambiguity, multi-view geometry is also challenged by an insufficient *Triangulation Prior* [@JohannesLSchonberger2016PixelwiseVS], especially in static/slow video sequences where nearby frames share little stereo baseline. To address this problem, the predicted camera velocity is leveraged to adaptively adjust the depth range. Specifically, frames captured at higher velocity exhibit larger viewpoint changes, which benefit multi-view geometry, so the depth range is enlarged to infer more accurate depth. In contrast, static frames carry little information for depth inference, so the depth range is shrunk toward the more reliable monocular prior. Besides, we fuse monocular depth and MVS depth by learning uncertainty in the cost volume, resulting in depth estimation that is robust against artifacts in multi-view geometry (e.g., moving objects, textureless areas).

Owing to the monocular depth prior and camera velocity guidance, MOVEDepth achieves state-of-the-art performance: compared with competitive monocular baselines [@ClmentGodard2018DiggingIS; @VitorGuizilini20193DPF], our method relatively improves depth accuracy by $\sim$20% on the KITTI benchmark. MOVEDepth also generalizes to the more challenging DDAD benchmark, relatively outperforming ManyDepth [@JamieWatson2021TheTO] by 7.2%. Besides, qualitative analysis demonstrates that our method is more robust against challenging artifacts where multi-view geometry fails.

The main contributions are three-fold:

\- We propose a novel self-supervised multi-frame depth learning framework, named MOVEDepth. It leverages monocular depth cues as a geometric prior, and multi-frame matching ambiguity is mitigated by sampling depth candidates near this monocular prior.

\- Velocity-guided depth sampling is proposed to address failure cases caused by slow/static camera motion. An adaptive fusion layer is also introduced to learn uncertainty in the cost volume, mitigating artifacts caused by textureless areas and moving objects.

\- We conduct extensive experiments on KITTI and DDAD, and the results show that our method achieves superior depth accuracy in complex real-world scenes with fewer depth candidates.

<figure id="fig:m2d" data-latex-placement="ht">
<img src="m2d" style="width:100.0%" />
<figcaption>The main network architecture of MOVEDepth. <strong>(a)</strong> PoseNet is utilized to estimate camera ego-motion and velocity between frame <span class="math inline"><em>T</em></span> and frame <span class="math inline"><em>T</em> − 1</span>. <strong>(b)</strong> The monocular depth is predicted by the DepthNet and serves as a geometric prior for constructing the MVS cost volume. <strong>(c)</strong> We conduct homography warping between the encoded frame features using the predicted camera ego-motion and the monocular depth prior. The resulting cost volume is decoded into a depth map and an uncertainty map. <strong>(d)</strong> The depth sampling range of homography warping is adaptively adjusted under the guidance of the predicted camera velocity.</figcaption>
</figure>

# Method

A detailed description of MOVEDepth is given in this section, and the main network architecture is illustrated in Fig. [1](#fig:m2d){reference-type="ref" reference="fig:m2d"}. Given a video sequence, (a) we first utilize PoseNet to estimate camera ego-motion and velocity between frame $T$ and frame $T-1$. (b) Then the monocular depth is predicted using the DepthNet. (c) Subsequently, we conduct homography warping between the encoded frame features using the predicted camera ego-motion and the monocular depth prior. The resulting cost volume is decoded into a depth map and an uncertainty map, which serves as guidance for fusing the monocular depth and MVS depth. (d) Notably, the depth sampling range of homography warping is adaptively adjusted under the guidance of the predicted camera velocity, which mitigates problems caused by slow/static camera motion.

The following parts start with preliminaries on self-supervised monocular depth learning from a video sequence; we then introduce our innovations for improving multi-frame depth learning by fusing monocular cues.

The self-supervised pipeline is conducted by jointly training a DepthNet $\theta_{\text{d}}$ (see Fig. [1](#fig:m2d){reference-type="ref" reference="fig:m2d"}(b)) and a PoseNet $\theta_{\text{p}}$ (see Fig. [1](#fig:m2d){reference-type="ref" reference="fig:m2d"}(a)) [@TinghuiZhou2017UnsupervisedLO], which are trained only on video frames $\{\mathbf{I}_t\}_{t=1}^N$. Specifically, we estimate the monocular depth $D_\text{Mono}=\theta_{\text{d}}(\mathbf{I}_t)$ of the current frame $\mathbf{I}_t$, and predict the relative camera pose $\left[\mathbf{R} \mid \mathbf{T}\right]_{t\rightarrow t+k}=\theta_{\text{p}}(\mathbf{I}_t, \mathbf{I}_{t+k})$ between frame $\mathbf{I}_t$ and frame $\mathbf{I}_{t+k}$ ($k\in\{-1,1\}$). Then, we can synthesize $\mathbf{I}_t$ from viewpoint $\mathbf{I}_{t+k}$ by the following operation: $$\begin{equation}
\mathbf{I}_{t+k \rightarrow t}(D_{\text{Mono}})=\mathbf{I}_{t+k}\left\langle\operatorname{proj}\left(D_\text{Mono}, \left[\mathbf{R} \mid \mathbf{T}\right]_{t \rightarrow t+k}, \mathbf{K}\right)\right\rangle,
\end{equation}$$ where $\mathbf{K}$ is the camera intrinsics, $\operatorname{proj}(\cdot)$ is the projection function that returns the 2D pixel coordinates of the projected $D_\text{Mono}$, and $\left\langle\cdot\right\rangle$ is the pixel sampling operator. Following the optimization convention [@ClmentGodard2018DiggingIS], the training pipeline is optimized by a reprojection loss: $$\begin{equation}
\mathcal{L}_\text{r}(D_{\text{Mono}})=\min_{k} pe\left(\mathbf{I}_{t}, \mathbf{I}_{t+k \rightarrow t}(D_{\text{Mono}})\right),
\label{eq:reprj}
\end{equation}$$ where the $\min$ operation selects the best-matching frame per pixel to avoid ambiguity brought by occlusions, and $pe(\cdot)$ is a weighted combination of $\mathcal{L}_1$ loss and structural similarity (SSIM) loss. The reprojection loss is calculated over multi-scale depth outputs; more implementation details can be found in [@ClmentGodard2018DiggingIS].

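As a concrete illustration of the reprojection loss, the sketch below implements a per-pixel photometric error $pe$ as a weighted SSIM/$\mathcal{L}_1$ combination, followed by the per-pixel $\min$ over warped source frames. The 0.85/0.15 weighting, the 3×3 SSIM window, and single-channel images are illustrative assumptions, not necessarily the paper's exact settings:

```python
import numpy as np

def box_filter(x, r=1):
    # Local mean over a (2r+1)x(2r+1) window via shifted sums on an edge-padded image.
    p = np.pad(x, r, mode="edge")
    out = np.zeros_like(x, dtype=np.float64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy:r + dy + x.shape[0], r + dx:r + dx + x.shape[1]]
    return out / (2 * r + 1) ** 2

def ssim(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    # Per-pixel SSIM with windowed local statistics (3x3 window assumed).
    mu_a, mu_b = box_filter(a), box_filter(b)
    var_a = box_filter(a * a) - mu_a ** 2
    var_b = box_filter(b * b) - mu_b ** 2
    cov = box_filter(a * b) - mu_a * mu_b
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def photometric_error(target, warped, alpha=0.85):
    # pe(.): weighted combination of (1 - SSIM)/2 and L1 (alpha assumed 0.85).
    l1 = np.abs(target - warped)
    return alpha * (1 - ssim(target, warped)) / 2 + (1 - alpha) * l1

def reprojection_loss(target, warped_list):
    # Per-pixel min over source frames (the occlusion-robust min in L_r), then mean.
    errs = np.stack([photometric_error(target, w) for w in warped_list])
    return errs.min(axis=0).mean()
```

The per-pixel `min` is what makes the loss robust to occlusions: each pixel is scored against whichever source frame reconstructs it best.
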
Multi-view approaches warp source images into the reference camera frustum to form a cost volume, and estimate depth as the highest-activated value in the cost volume [@DBLP:conf/eccv/YaoLLFQ18]. Although multi-view methods reduce geometric ambiguity and generate more accurate depth, they are still challenged by texture-less regions, non-Lambertian surfaces, and moving objects, especially in real-world video frames without known camera motion. Monocular methods, on the other hand, are more robust against weakly textured regions and moving objects, but their overall depth accuracy is limited. Therefore, we exploit monocular cues to complement the limitations of MVS (see Fig. [1](#fig:m2d){reference-type="ref" reference="fig:m2d"}(c)), as elaborated in the following.

Given a current frame $\mathbf{I}_{t}\in \mathbb{R}^{H \times W \times 3}$ and its nearby frame $\mathbf{I}_{t-1} \in \mathbb{R}^{H \times W \times 3}$ (future frames are not used, to enable online depth prediction), we first leverage an encoder $\theta_{\text{enc}}$ to extract 2D features of these frames, where the images are downscaled to lower-resolution deep features $\mathbf{F}_{i}\ (i\in\{t, t-1\})\in \mathbb{R}^{H/4 \times W/4 \times C}$. Following previous learning-based MVS [@DBLP:conf/eccv/YaoLLFQ18; @DBLP:conf/cvpr/GuFZDTT20], plane-sweep stereo [@RobertTCollins1996ASA] is utilized to establish multiple fronto-parallel planes in the current frame. Specifically, equipped with the camera intrinsics $\mathbf{K}$ and the extrinsics $\left[\mathbf{R} \mid \mathbf{T}\right]$ estimated by the PoseNet $\theta_{\text{p}}$, the previous frame features can be warped into the current camera frustum:

$$\begin{equation}
\label{eq:homo}
\mathbf{p}_{t-1,j}=\mathbf{K} \cdot\left(\mathbf{R} \cdot\left({\mathbf{K}}^{-1} \cdot \mathbf{p}_{t} \cdot d_{j}\right)+\mathbf{T}\right),
\end{equation}$$ where $d_j$ is the $j$-th hypothesized depth candidate of pixel $\mathbf{p}_t$ in the current frame feature $\mathbf{F}_t$, and $\mathbf{p}_{t-1,j}$ denotes the corresponding pixel in the previous frame feature $\mathbf{F}_{t-1}$. After the warping operation, the volume feature $\mathbf{V}_{t-1}\in\mathbb{R}^{H/4 \times W/4 \times C \times D}$ is constructed, where $D$ is the number of depth candidates. Notably, to reduce the depth search space, we specify the depth range $\mathcal{R}$ using the monocular depth prior $D_\text{Mono}$: $$\begin{equation}
\mathcal{R} = \{d \mid d_\text{min}\le d\le d_\text{max}\},
\end{equation}$$ where $(d_\text{min}+d_\text{max})/2=D_\text{Mono}$, and $d_\text{min}, d_\text{max}$ are adaptively adjusted under the guidance of camera velocity, as elaborated in the next subsection.

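A minimal sketch of the per-pixel projection above: back-project a pixel with each depth candidate $d_j$, transform by $[\mathbf{R}\mid\mathbf{T}]$, and reproject with $\mathbf{K}$. The single-pixel interface and function name are illustrative; real implementations warp entire feature maps in batch on the GPU:

```python
import numpy as np

def project_to_source(p_t, depths, K, R, T):
    """Project pixel p_t = (u, v) of the current frame into the previous
    frame for each hypothesized depth d_j (plane-sweep warping)."""
    uv1 = np.array([p_t[0], p_t[1], 1.0])
    ray = np.linalg.inv(K) @ uv1                       # back-project to a unit-depth ray
    pts = ray[None, :] * np.asarray(depths)[:, None]   # 3D points at each candidate d_j
    cam = (R @ pts.T).T + T                            # transform into the previous camera
    pix = (K @ cam.T).T                                # reproject with intrinsics
    return pix[:, :2] / pix[:, 2:3]                    # perspective divide -> (D, 2) pixels
```

With identity rotation and zero translation the projection is the identity mapping for every depth candidate, which is a handy sanity check.
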
Given the previous frame volume $\mathbf{V}_{t-1}$, we then use *group correlation* [@DBLP:conf/cvpr/WangGVSP21; @QingshanXu2019LearningID] to construct the cost volume, which measures the visual similarity between the current frame and the previous frame:

$$\begin{equation}
\mathbf{s}_i^{g}=\frac{1}{G}\left\langle \mathbf{v}_i^g,\mathbf{f}_i^g\right\rangle,
\end{equation}$$ where $\mathbf{v}_i^g\in \mathbb{R}^{\frac{C}{G}\times D}$ is the $g$-th group feature of $\mathbf{v}_i$ ($\mathbf{v}_i\in \mathbb{R}^{C\times D}$ is the $i$-th pixel feature of $\mathbf{V}_{t-1}$), $\mathbf{f}_i^g\in \mathbb{R}^{\frac{C}{G}\times 1}$ is the $g$-th group feature of $\mathbf{f}_i$ ($\mathbf{f}_i$ is the $i$-th pixel feature of $\mathbf{F}_t$), and $\left\langle \cdot,\cdot\right\rangle$ is the inner product. Then $\left\{\mathbf{s}_i^g\right\}_{g=0}^{G-1}$ are channel-wise stacked to generate $\mathbf{s}_i\in \mathbb{R}^{G \times D}$, the $i$-th pixel feature of the final cost volume $\mathbf{S}_t\in\mathbb{R}^{H/4\times W/4\times G \times D}$.

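For a single pixel, group correlation reduces to splitting the channel dimension into $G$ groups and taking a scaled inner product per group; a minimal sketch (vectorized implementations do this for all pixels at once):

```python
import numpy as np

def group_correlation(v, f, G):
    """Group-wise correlation between a warped volume feature v (C, D)
    and the reference pixel feature f (C,), yielding s (G, D)."""
    C, D = v.shape
    vg = v.reshape(G, C // G, D)        # g-th group of the volume feature: (C/G, D)
    fg = f.reshape(G, C // G, 1)        # g-th group of the reference feature
    return (vg * fg).sum(axis=1) / G    # inner product per group, scaled by 1/G
```
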
The calculated cost volume is subsequently decoded by a lightweight decoder $\theta_\text{dec}$ to obtain the depth probability $\mathbf{P}\in\mathbb{R}^{H/4\times W/4 \times D}$, and the MVS depth is generated by *localmax* [@DBLP:journals/corr/abs-2112-05126]: $$\begin{equation}
D_\text{MVS}(\mathbf{p})=\left(\frac{1}{\sum_{j=\mathbf{X}(\mathbf{p})-r}^{\mathbf{X}(\mathbf{p})+r} \mathbf{p}_j} \sum_{j=\mathbf{X}(\mathbf{p})-r}^{\mathbf{X}(\mathbf{p})+r} \frac{1}{d_{j}} \cdot \mathbf{p}_j\right)^{-1},
\label{eq:localmax}
\end{equation}$$ where $\mathbf{p} \in \mathbb{R}^{D}$ is the pixel value of $\mathbf{P}$, $\mathbf{X}(\mathbf{p})=\text{argmax}_j \mathbf{p}_j$ is the index of the highest value of $\mathbf{p}$, and $r$ is a radius parameter (typically set to 1). Finally, *convex interpolation* [@ZacharyTeed2022RAFTRA] is leveraged to upsample the MVS depth to the original resolution.

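The *localmax* regression can be sketched for a single pixel as follows: take the probability mass in a window of radius $r$ around the argmax, and compute the probability-weighted average of inverse depth. Window clamping at the boundaries is an implementation assumption:

```python
import numpy as np

def localmax_depth(prob, depths, r=1):
    """Regress depth from a probability vector over D candidates via localmax:
    inverse-depth weighted average in a window of radius r around the argmax."""
    D = prob.shape[0]
    x = int(np.argmax(prob))
    lo, hi = max(0, x - r), min(D - 1, x + r)    # clamp the window at the edges
    w = prob[lo:hi + 1]
    inv = (w / depths[lo:hi + 1]).sum() / w.sum()  # weighted mean of 1/d
    return 1.0 / inv
```

Averaging in inverse depth (rather than depth) keeps the estimate consistent with the inverse-depth spacing of the candidates.
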
In the previous subsection, the monocular depth is leveraged as the geometric center for depth sampling, but the depth range remains to be addressed. Typically, learning-based MVS [@DBLP:conf/eccv/YaoLLFQ18; @DBLP:conf/cvpr/0008LLSFQ19] samples depth candidates in a fixed range, which is either calculated by COLMAP [@JohannesLSchonberger2016PixelwiseVS] or learned by networks [@JamieWatson2021TheTO]. However, such a range describes the entire scene, and densely searching in so wide a range is computationally expensive and cannot produce accurate depth [@DBLP:conf/cvpr/GuFZDTT20]. Recent methods reduce the depth range by coarse-to-fine sampling [@DBLP:conf/cvpr/GuFZDTT20; @DBLP:conf/cvpr/WangGVSP21] or confidence-based sampling [@Bae2022; @DBLP:conf/cvpr/ChengXZLLRS20]. However, we empirically find these sampling strategies are limited in self-supervised multi-frame depth learning (see Tab. [\[tab:abla_vel\]](#tab:abla_vel){reference-type="ref" reference="tab:abla_vel"}), as they overlook the *Triangulation Prior* [@JohannesLSchonberger2016PixelwiseVS] of nearby frames.

To mitigate this problem, we propose velocity-guided depth sampling. The key idea is to associate the *Triangulation Prior* with the camera motion velocity $v$: the viewpoint changes noticeably when the camera moves at high velocity, providing a sufficient *Triangulation Prior* for multi-view geometry, whereas slow/static video frames share a similar viewpoint, so the *Triangulation Prior* is limited (a theoretical analysis is given in the supplement). For video frames with a sufficient *Triangulation Prior*, we expand the depth range to infer more accurate depth; for frames with an insufficient *Triangulation Prior*, the depth range is shrunk toward the more reliable monocular prior. The depth sampling range is specified as follows:

$$\begin{equation}
\begin{aligned}
d_\text{min}&=D_\text{Mono}(1-\beta\mathcal{T}(v))\\
d_\text{max}&= D_\text{Mono}(1+\beta\mathcal{T}(v)),
\end{aligned}
\end{equation}$$ where the camera motion velocity $v=\alpha \|\mathbf{T}\|_2$ is a byproduct of the PoseNet $\theta_\text{p}$ ($\mathbf{T}$ is the camera translation estimated by $\theta_\text{p}$, and $\alpha$ is the camera frame rate). $\beta$ is a hyper-parameter, and $\mathcal{T}(\cdot)$ is a scale function that transforms $v$ to real-world scale, which can be calculated by median-scaling [@ClmentGodard2018DiggingIS] or camera-height-scaling [@DBLP:conf/iccv/YinWDC17]. To ensure training stability, $\beta\mathcal{T}(v)$ is clamped to the range $(0, 1)$.

Notably, the depth sampling strategy resembles *Gaussian sampling* with mean $D_\text{Mono}$ and variance $\beta\mathcal{T}(v)$, and the MVS depth range shrinks toward the more reliable monocular depth when the camera is static. Unlike Gaussian sampling, however, the depth candidates are not drawn according to a probability distribution but by a deterministic inverse-depth sampling strategy: $$\begin{equation}
d_{j}=\left(\left(\frac{1}{d_{\min }}-\frac{1}{d_{\max }}\right) \frac{j}{D-1}+\frac{1}{d_{\max }}\right)^{-1},
\end{equation}$$ where $j=0 \ldots D-1$. Compared with linear sampling [@DBLP:conf/eccv/YaoLLFQ18] or probabilistic sampling [@Bae2022], inverse-depth sampling results in uniformly distributed depth candidates at the pixel level, which is beneficial for large-scale multi-frame matching [@QingshanXu2019LearningID].

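Putting the two equations above together, the candidate generator first derives the velocity-guided range around the monocular prior and then applies inverse-depth sampling. The clamp bounds are illustrative (the paper only states that $\beta\mathcal{T}(v)$ is clamped to $(0,1)$):

```python
import numpy as np

def depth_candidates(d_mono, v_scaled, beta, D):
    """Velocity-guided depth range around the monocular prior D_Mono,
    followed by inverse-depth sampling of D candidates.
    v_scaled plays the role of T(v); clamp bounds are illustrative."""
    s = np.clip(beta * v_scaled, 1e-3, 1 - 1e-3)     # beta * T(v), clamped to (0, 1)
    d_min, d_max = d_mono * (1 - s), d_mono * (1 + s)
    j = np.arange(D)
    return 1.0 / ((1 / d_min - 1 / d_max) * j / (D - 1) + 1 / d_max)
```

Note that $j=0$ yields $d_\text{max}$ and $j=D-1$ yields $d_\text{min}$, so the candidates sweep from far to near, uniformly spaced in inverse depth.
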
<figure id="fig:fuse" data-latex-placement="t">
<img src="fuse" />
<figcaption>MOVEDepth learns uncertainty in the depth probability to fuse monocular depth and MVS depth. The upper branch decodes the depth probability into depth via <em>localmax</em> (Eq. <a href="#eq:localmax" data-reference-type="ref" data-reference="eq:localmax">[eq:localmax]</a>), and the lower branch decodes the depth probability into an uncertainty map, which serves as guidance for fusing MVS depth and monocular depth.</figcaption>
</figure>

The calculated $D_\text{MVS}$ is still challenged by texture-less regions, non-Lambertian surfaces, and moving objects, which are inherent problems of multi-view geometry. To alleviate this, an uncertainty-based fusion method is introduced to replace unsatisfactory $D_\text{MVS}$ with the more reliable $D_\text{Mono}$. As shown in Fig. [2](#fig:fuse){reference-type="ref" reference="fig:fuse"}, we leverage an *Uncertainty Decoder* $\theta_\text{u}$ to learn an uncertainty map $\mathbf{U}$ from the entropy of the depth probability $\mathbf{p}$: $$\begin{equation}
\mathbf{U}(\mathbf{p}) = \theta_\text{u}\left(\sum_{j=0}^{D-1}-\mathbf{p}_{j} \log \mathbf{p}_{j}\right),
\end{equation}$$

where $\theta_\text{u}$ comprises 2D convolutional neural network (CNN) blocks and a *Sigmoid* function. The reason for adopting the entropy is that the randomness of the depth probability distribution is positively related to the MVS depth uncertainty [@DBLP:conf/bmvc/ZhangYLLF20]. Subsequently, the uncertainty map is leveraged to calculate the fused depth $D_\text{Fuse}$: $$\begin{equation}
D_\text{Fuse} = \mathbf{U}\odot D_\text{Mono} + (\mathbf{1}-\mathbf{U}) \odot D_\text{MVS},
\end{equation}$$ where $\odot$ denotes the element-wise product.

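A minimal sketch of the fusion step: the paper's *Uncertainty Decoder* is a learned CNN with a Sigmoid; here, purely for illustration, we stand in for it with the entropy normalized by its maximum $\log D$, which already maps to $[0, 1]$:

```python
import numpy as np

def fuse_depth(prob, d_mono, d_mvs):
    """Fuse monocular and MVS depth maps. prob has shape (H, W, D).
    The learned Uncertainty Decoder (CNN + Sigmoid) is replaced by
    normalized entropy as an illustrative stand-in."""
    eps = 1e-8
    entropy = -(prob * np.log(prob + eps)).sum(axis=-1)  # (H, W) entropy of depth prob.
    U = entropy / np.log(prob.shape[-1])                 # normalize to roughly [0, 1]
    return U * d_mono + (1 - U) * d_mvs                  # high uncertainty -> trust mono
```

A uniform (maximally uncertain) distribution yields $\mathbf{U}\approx 1$ and falls back to the monocular depth; a one-hot (confident) distribution yields $\mathbf{U}\approx 0$ and keeps the MVS depth.
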
MOVEDepth is trained end-to-end in a self-supervised manner, and the loss consists of three parts: $$\begin{equation}
\mathcal{L}_{\text{MOVEDepth}} = \lambda_1 \mathcal{L}(D_\text{Mono}) +
\lambda_2 \mathcal{L}(D_\text{MVS}) + \lambda_3 \mathcal{L}(D_\text{Fuse}),
\end{equation}$$ where $\lambda_1,\lambda_2,\lambda_3$ are the loss weights, and $\mathcal{L}(\cdot)$ is a weighted combination of the reprojection loss $\mathcal{L}_\text{r}$ (Eq. ([\[eq:reprj\]](#eq:reprj){reference-type="ref" reference="eq:reprj"})) and the depth smoothness loss $\mathcal{L}_\text{s}$ [@ClmentGodard2016UnsupervisedMD]: $$\begin{equation}
\mathcal{L}(D)=\mathcal{L}_\text{r}(D)+\gamma \mathcal{L}_\text{s}(D),
\end{equation}$$ where $\gamma$ denotes the loss weight.

2209.01814/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
2209.01814/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,71 @@
# Introduction

Driven by improvements in storage, sensors and networking technology, humanity is amassing vast archives of image and video data. A significant fraction of this media is *human-centric*: content focused on humans and their actions. The task of *human-object interaction (HOI) detection* [4] aims to provide a step towards fine-grained parsing of such content by detecting all possible triplets of the form <*human, relation, object*> present in visual data. Robust HOI detection has myriad uses for image/video data analysis and represents essential functionality for visual and language applications such as image/video captioning [66, 51], image retrieval [29], image synthesis [28] and video action understanding [26, 69].

Given that sustained progress in object detection has yielded increasingly robust systems for detecting people and objects [56, 17, 10], a key remaining challenge for HOI detection is to develop methods capable of generalising to the many possible pairs of interactions between these entities when provided with non-exhaustive training data. To tackle this challenge, we draw inspiration from recent developments demonstrating that contrastive language-image pre-training can induce remarkable generalisation for zero-shot classification tasks [55, 27]. These methods perform classification by casting it as a *retrieval problem*, ensuring that the downstream task *aligns closely* with the pre-training objective. Recent work by Alayrac et al. [2] hypothesises that it is this close alignment between the downstream and pre-training objectives that explains why contrastive methods have proven so effective for zero-shot classification. In light of this hypothesis, in this work, we explore whether it is possible to achieve a similarly close alignment between the HOI detection task and its pre-training strategy.

<sup>∗</sup>Equal contribution. This work was done when Hangjie Yuan was an intern at DAMO Academy, Alibaba Group, supported by the Alibaba Research Intern Program.

<sup>†</sup>Corresponding author.

While HOI detection has been widely studied [16, 52, 65, 42, 31, 33, 73, 75, 32, 61, 70, 9], the topic of designing pre-training to reflect the final task objective remains under-explored. Indeed, a widely adopted strategy [70, 72, 61, 7, 75] has been to employ object detection pre-training to initialise the parameters of the model responsible for both entity detection and relation inference. However, while suitable for entity detection, such pre-training may be suboptimal for detecting *relations between entities*, which often requires the model to account for groups of entities with greater spatial context rather than individual entities in isolation.

To address this shortcoming of HOI detection, we propose Relational Language-Image Pre-training (RLIP), which tasks the model with establishing correspondences from both entities and relations to free-form text descriptions. By doing so, RLIP endows the model with the ability to perform zero-shot HOI detection. Moreover, in contrast to previous pre-training schemes that are limited to predefined finite category lists, RLIP benefits from the rich descriptive nature of natural language supervision.

We encountered three barriers to a naive implementation of RLIP for existing methods: (1) Recent end-to-end HOI detection architectures [61, 72, 75, 70] typically employ joint representations of (some subset of) *subject*, *object* and *relation* triplets. As a consequence, it is difficult to leverage the text descriptions for separate humans, objects and relations provided by existing datasets such as VG [34]. (2) Contrastive pre-training requires negative samples to train effectively, but it is unclear *a priori* how such negatives should be constructed. (3) Free-form text descriptions exhibit label noise and semantic ambiguity (since there can be many ways to describe the same concept in the absence of a canonical list of categories), rendering optimisation challenging.

To overcome these barriers, we make several technical contributions in addition to the RLIP framework. First, to allow end-to-end contrastive pre-training with distinct descriptions of subjects, objects and relations, we propose the Parallel entity detection and Sequential relation inference (ParSe) architecture. ParSe employs a DETR [3]-like design that allocates separate learnable query groups for subject and object representations, together with an additional set of conditional queries that encode relations. While ParSe enables (and works best with) RLIP, we also find that it yields gains for traditional object detection pre-training schemes. To address the second barrier, we synthesise label sequences by extending in-batch labels with out-of-batch sampling to ensure a plentiful supply of negatives; we term this Label Sequence Extension (LSE). For the third barrier, we exploit cross-modal cues to resolve label noise and relation ambiguity. In particular, to mitigate label noise we use the quality of the visual entity detection phase [39] to assign quality scores to relation-text correspondences, an approach we term Relational Quality Labels (RQL). To mitigate relation ambiguity, we leverage similarities between labels to propagate relations via a pseudo-labeling scheme, which we term Relational Pseudo-Labels (RPL).

We demonstrate through experiments that relational pre-training outperforms traditional object detection pre-training schemes on comparable data. We further find that zero-shot application of our combined approach, RLIP-ParSe, surpasses several existing fine-tuned methods.

# Method

In this section, we first present our triplet detection architecture, ParSe. Second, we describe how ParSe is used to perform Relational Language-Image Pre-training (RLIP). Finally, we introduce techniques to synthesise contrastive negatives and to mitigate noise and ambiguity among labels. The overall RLIP-ParSe framework is illustrated in Fig. 1.

**Structure overview.** The core idea underpinning the ParSe architecture is to allocate distinct representations for subjects, objects and relations in a holistically optimised model (rather than representing their combination, as commonly pursued in prior work [61]). The motivation for doing so is two-fold: (i) distinct representations enable the direct use of contrastive RLIP, since these representations can be put in correspondence with separate entity and relation annotations; (ii) the separation of responsibilities allows more fine-grained control over the context available for each decision (a theme that has proven important for detection tasks [59]). In particular, note that when detecting subjects and objects, local context is typically most useful. However, when it comes to relations, detection benefits not only from informative local cues, but also from neighbouring context [68] (for instance, it is useful to be aware of *water* and *hoses* when inferring the relation in the triplet <*human*, *wash*, *car*>). To instantiate this idea we follow [72] and implement triplet detection in a two-stage end-to-end manner. Our probabilistic model factorises as follows:

$$\mathbb{P}(G|Q_s, Q_o, C; \theta_{Par}, \theta_{Se}) = \mathbb{P}(B_s, B_o|Q_s, Q_o, C; \theta_{Par}) \cdot \mathbb{P}(R|B_s, B_o, C; \theta_{Se}) \tag{1}$$

where $Q_s, Q_o \in \mathbb{R}^{N_Q \times D}$ define two sets of independent queries for $N_Q$ subjects and $N_Q$ objects; $C$ denotes features from the detection encoder; $B_s, B_o, R$ denote sets of detected subject boxes, object boxes and relations, respectively (these collectively comprise the detection results $G$); $\theta_{Par}$ and $\theta_{Se}$ represent learnable parameters of the entity detection decoder and the relation inference decoder, respectively. To construct ParSe, we design two components, *Parallel Entity Detection* and *Sequential Relation Inference*, to implement the two factors on the right-hand side of Eq. (1), respectively. These are described next.

**Parallel Entity Detection.** Following the DETR family of architectures [3, 74, 45], we first extract visual features using an image encoder, add positional encodings and then pass the result through a customized Transformer encoder according to the detector we adopt (we explore both DETR [3] and DDETR [74] variants) to obtain detection features $C$. Then, two sets of queries $Q_s$ and $Q_o$ are fed into the entity decoder to perform self-attention [62], cross-attention and feed-forward network (FFN) inference, obtaining $\tilde{Q}_s, \tilde{Q}_o \in \mathbb{R}^{N_Q \times D}$, which are used to predict box locations and classes.

**Sequential Relation Inference.** To encode relations, we perform *Sequential Relation Inference* as a sequential step after entity detection (similarly to [72]). In the first stage, subjects and objects are detected via *Parallel Entity Detection*. In the second stage, we adopt a simple parameter-free matching scheme between subjects and objects to generate relation queries: matching by their indices. Using this pairing scheme, we obtain relation queries via a conditional query generation function:

$$Q_r = F_{so}(\tilde{Q}_s, \tilde{Q}_o) \tag{2}$$

where, for simplicity, we adopt addition as the query generation function. Since we match by indices, $Q_r \in \mathbb{R}^{N_Q \times D}$ contains $N_Q$ relation queries. $Q_r$ is then fed into the second decoder to perform Sequential Relation Inference via self-attention, cross-attention and FFN inference, obtaining the corresponding relation features $\tilde{Q}_r \in \mathbb{R}^{N_Q \times D}$, which are then used for relation classification.

For each iteration of pre-training, we construct a minibatch of images and their annotated relation triplets, comprising all entities' locations, $N_E$ unique entity text labels and $N_R$ unique relation text labels. We describe how these are used for contrastive pre-training next.

**Formation of target label sequences.** We construct targets from in-batch labels (free-form text descriptions forming subject, object, relation triplets). In more detail, we first aggregate all entity labels within the batch and append a *no objects* label to this sequence. Next, we similarly aggregate all in-batch relation labels. All entity and relation labels are then fed into a text encoder (RoBERTa [46] in our implementation) to extract label features, denoted $L_E$ and $L_R$ respectively. Note that a free-form text label can comprise multiple tokens after tokenization; we use only the feature derived from the [CLS] token to represent the label. We concatenate the label feature sequence with features from the image encoder as shown in Fig. 1. To fuse the concatenated features, we adopt a simple approach: applying a Transformer encoder [36, 30, 1] to obtain fused label features $\tilde{L}_E \in \mathbb{R}^{N_E \times D}$ and $\tilde{L}_R \in \mathbb{R}^{N_R \times D}$.
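The label-sequence construction can be sketched as follows, with a deterministic stub standing in for the RoBERTa text encoder (the `encode` callable and the 256-d feature size are illustrative assumptions):

```python
import numpy as np

def build_label_features(entity_labels, relation_labels, encode):
    """Aggregate unique in-batch labels (plus a *no objects* entity label)
    and keep only the [CLS]-token feature per label."""
    entity_seq = list(dict.fromkeys(entity_labels)) + ["no objects"]
    relation_seq = list(dict.fromkeys(relation_labels))
    L_E = np.stack([encode(t)[0] for t in entity_seq])    # token 0 = [CLS]
    L_R = np.stack([encode(t)[0] for t in relation_seq])
    return entity_seq, relation_seq, L_E, L_R

# stub "encoder": one 256-d feature per token, with a [CLS] token prepended
fake_encode = lambda text: np.full((len(text.split()) + 2, 256),
                                   float(len(text)))
ents, rels, L_E, L_R = build_label_features(
    ["person", "bicycle", "person"], ["ride", "hold"], fake_encode)
```

Duplicates within the batch are collapsed, so the feature matrices have one row per unique label.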
|
| 43 |
+
|
| 44 |
+
**Cross-modal alignment through classification.** To implement RLIP, we task the model with establishing correspondences between entities/relations and their text descriptions using a classification objective, following [38]. In particular, we align the *i*th relation $\tilde{Q}_r(i) \in \mathbb{R}^D$ with its relation text via a Focal loss [44]:
|
| 45 |
+
|
| 46 |
+
$$P_r(i) = \tilde{Q}_r(i)\tilde{L}_R^T + \tilde{Q}_r(i)W_b^T + W_c; \quad \mathcal{L}_r(j) = \text{Focal}(\text{sigmoid}(P_r(i,j))) \tag{3}$$
|
| 48 |
+
|
| 49 |
+
where $\tilde{Q}_r(i) W_b^T + W_c$ is the learnable bias term introduced in [44]; $W_b \in \mathbb{R}^{N_R \times D}$ is a learnable linear projection and $W_c \in \mathbb{R}^{N_R}$ is a constant vector filled with $-\log((1-\pi)/\pi)$ with $\pi=0.01$. The Focal loss is defined via $\operatorname{Focal}(p) = -(1-p)^{\gamma} \log(p)$, where $\gamma$ is set as a hyperparameter. In the argument to this loss in Eq. (3), $j$ indexes along $P_r(i) \in \mathbb{R}^{N_R}$. To encourage matching of subjects and objects with their corresponding entity descriptions, an analogous objective to Eq. (3) is used, except that a softmax and a CE loss are applied and $W_c$ is omitted (note that entities are uni-label and relations multi-label, as defined by the downstream task). The central benefit of the RLIP objectives defined above is that they bring the pre-training and downstream HOI detection losses into close alignment, since classifying entities and relations downstream reflects the same matching task used in pre-training. As a result, RLIP produces models that can perform HOI detection under the zero-shot no-fine-tuning (NF) evaluation protocol.
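A minimal NumPy sketch of the relation-alignment logits and Focal loss of Eq. (3); the sizes, the $\gamma = 2$ default and the random inputs are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def focal_loss(p, target, gamma=2.0):
    """Focal(p) = -(1 - p)^gamma * log(p), applied per label."""
    pt = np.where(target == 1, p, 1.0 - p)   # prob. assigned to the target
    return -((1.0 - pt) ** gamma) * np.log(np.clip(pt, 1e-8, 1.0))

def relation_logits(q_r_i, L_R, W_b, pi=0.01):
    """Eq. (3): similarity logits plus the learnable bias term of [44]."""
    W_c = np.full(L_R.shape[0], -np.log((1.0 - pi) / pi))  # prior-based constant
    return q_r_i @ L_R.T + q_r_i @ W_b.T + W_c

rng = np.random.default_rng(0)
D, N_R = 16, 10                      # illustrative sizes
logits = relation_logits(rng.standard_normal(D),
                         rng.standard_normal((N_R, D)),
                         rng.standard_normal((N_R, D)))
target = np.zeros(N_R)
target[3] = 1.0                      # one annotated relation label
loss = focal_loss(sigmoid(logits), target).mean()
```

The sigmoid (rather than a softmax) reflects the multi-label nature of relations noted above.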
|
| 50 |
+
|
| 51 |
+
**Label Sequence Extension (LSE).** Within a given batch, the number of negative samples available for matching is limited. However, provision of plentiful negatives has been widely shown to improve contrastive learning [8, 11, 25, 63]. To this end, we propose Label Sequence Extension as a mechanism to leverage out-of-batch text descriptions. Concretely, we sample additional text descriptions with a ratio of two thirds entity labels to one third relation labels. To ensure computational tractability in the presence of the quadratic complexity of the Transformer, we limit the label sequence to a predefined length $N_L$. We experiment with two sampling strategies: (i) *Uniform sampling*, which draws among candidate labels with equal probability; (ii) *Frequency-based sampling*, which samples according to label frequency in the training set.
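Both sampling strategies can be sketched as follows (a simplified illustration under assumed data structures, not the released implementation):

```python
import numpy as np
from collections import Counter

def extend_label_sequence(in_batch, candidates, counts, n_extra,
                          strategy="uniform", seed=0):
    """Label Sequence Extension: sample out-of-batch text labels to enlarge
    the pool of negatives, either uniformly or by training-set frequency."""
    rng = np.random.default_rng(seed)
    pool = [c for c in candidates if c not in set(in_batch)]
    if strategy == "uniform":
        p = None                                  # equal probability
    else:                                         # frequency-based sampling
        freq = np.array([counts[c] for c in pool], dtype=float)
        p = freq / freq.sum()
    extra = rng.choice(pool, size=min(n_extra, len(pool)), replace=False, p=p)
    return list(in_batch) + [str(x) for x in extra]

counts = Counter({"ride": 50, "hold": 30, "wash": 5, "kick": 1})
seq = extend_label_sequence(["ride"], list(counts), counts, n_extra=2,
                            strategy="frequency")
```

In practice the same routine would be applied separately to entity and relation labels, respecting the 2:1 ratio and the overall budget $N_L$.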
|
| 52 |
+
|
| 53 |
+
Datasets with crowd-sourced language annotations [34, 35] exhibit significant label noise and ambiguity. First, the descriptions themselves may be noisy (inaccurate), particularly when the underlying image is challenging to interpret. A second challenge for traditional training schemes is that similar relations can be described differently owing to synonyms: for example, the *stand near* relation may be annotated "stand near", "stand next to", "stand by", *etc.* These forms of *semantic ambiguity* make supervised cross-modal pre-training (which relies on access to consistent labels) challenging. To mitigate this issue, we focus on two aspects of the pre-training input data: (i) the quality of the relation text labels; (ii) the presence of semantically-similar labels in sampled label sequences.
|
| 54 |
+
|
| 55 |
+
**Relational Quality Labels (RQL).** To tackle the first challenge, we propose a label smoothing [48] approach. The key idea is that we expect the difficulty of subject and object detection for a particular instance to correlate with the confidence of the annotated relation. We therefore propose to estimate annotation quality from the quality of the entity detection stage. Drawing inspiration from the generalized focal loss [39], we instantiate this idea by assessing the quality of the *i*th subject and object detection after bipartite matching [61] as
|
| 56 |
+
|
| 57 |
+
$$e(i) = \min(\text{GIoU}_{0-1}(B_s(i), \hat{B}_s(i)), \text{GIoU}_{0-1}(B_o(i), \hat{B}_o(i))) \tag{4}$$
|
| 59 |
+
|
| 60 |
+
where $\mathrm{GIoU_{0-1}}$ denotes the generalized IoU from [58] composed with a linear scaling function that maps the $\mathrm{GIoU}$ value to the range $[0, 1]$, and $\hat{\cdot}$ denotes a ground-truth annotation. The resulting value $e(i)$ is then employed to calibrate the relation label confidence via multiplication: $\tilde{R}(i) = e(i)\hat{R}(i)$.

**Relational Pseudo-Labels (RPL).** To address the second issue, we propose a pseudo-labelling strategy [67] to account for synonyms in the extended sequence. We exploit the fact that text embeddings with high semantic similarity lie close together, as measured by an appropriate distance function $M(\cdot,\cdot)$. We define the distance between the $i$th annotated relation label $\hat{R}(i) \in \{0,1\}^{N_R}$ and the $j$th relation text feature $\tilde{L}_R(j) \in \mathbb{R}^D$ from the extended sequence as
|
| 61 |
+
|
| 62 |
+
$$M(\hat{\boldsymbol{R}}(i), \tilde{\boldsymbol{L}}_R(j)) = \sum_{k=1}^{N_R} \hat{\boldsymbol{R}}(i,k) \cdot m(\tilde{\boldsymbol{L}}_R(k), \tilde{\boldsymbol{L}}_R(j)) \tag{5}$$
|
| 64 |
+
|
| 65 |
+
where $m(\cdot,\cdot)$ denotes Euclidean distance. Given the $i$th relation label, we apply a scaling function to $M(i,j)$ via $\bar{M}(i,j) = \frac{\max_k(M(i,k)) - M(i,j)}{\max_k(M(i,k))}$, where we abbreviate $M(\hat{R}(i), \tilde{L}_R(j))$ as $M(i,j)$ for clarity. Next, we use a global threshold $\eta$ to select label texts with high similarity: we set the $j$th entry of the $i$th relation label vector to $\bar{M}(i,j)$ if $\bar{M}(i,j) > \eta$. Note that when applying either RQL or RPL, the ground-truth labels are continuous (rather than discrete). We therefore employ the Quality Focal Loss [39] (rather than the standard Focal Loss [44]) as our objective function.
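RQL and RPL can be sketched jointly as follows (a simplified NumPy illustration under assumed inputs; in this sketch, entries at or below the threshold stay zero, while the annotated label itself recovers $\bar{M} = 1$ and is therefore preserved):

```python
import numpy as np

def rql_confidence(b_s, b_o, b_s_gt, b_o_gt, giou01):
    """RQL, Eq. (4): down-weight a relation label by the worse of its
    subject/object detection qualities (GIoU linearly mapped to [0, 1])."""
    return min(giou01(b_s, b_s_gt), giou01(b_o, b_o_gt))

def rpl_targets(R_hat, L_R, eta=0.5):
    """RPL: promote semantically close labels in the extended sequence to
    soft positives via Eq. (5) with Euclidean distance."""
    dist = np.linalg.norm(L_R[:, None, :] - L_R[None, :, :], axis=-1)  # m(.,.)
    M = R_hat @ dist                                     # Eq. (5)
    M_max = M.max(axis=1, keepdims=True)
    M_bar = (M_max - M) / M_max                          # scale to [0, 1]
    return np.where(M_bar > eta, M_bar, 0.0)

# three relation text features: labels 0 and 1 are near-synonyms, label 2 is far
soft = rpl_targets(np.array([[1.0, 0.0, 0.0]]),
                   np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 0.0]]))
```

Here the annotated label 0 keeps a target of 1, its near-synonym label 1 receives a soft target close to 1, and the distant label 2 stays at 0.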
|
| 66 |
+
|
| 67 |
+
By design, our pre-training (RLIP) and fine-tuning phases follow a similar process. For a given batch of images with corresponding annotations, we aggregate the results from *Parallel Entity Detection* and *Sequential Relation Inference* to form $N_Q$ triplets per image. During pre-training and fine-tuning, we employ bipartite matching similarly to prior work [61, 75, 6, 72, 70], following in particular the matching cost proposed in [61]. The overall loss is then constructed as follows:
|
| 68 |
+
|
| 69 |
+
$$\mathcal{L} = \lambda_1 \mathcal{L}_{l1} + \lambda_2 \mathcal{L}_{GIoU} + \lambda_3 (\mathcal{L}_s + \mathcal{L}_o) + \lambda_4 \mathcal{L}_r \tag{6}$$
|
| 70 |
+
|
| 71 |
+
where $\mathcal{L}_{l1}, \mathcal{L}_{GIoU}, \mathcal{L}_s, \mathcal{L}_o, \mathcal{L}_r$ denote the $\ell_1$ loss for box regression, the GIoU loss [58], the CE losses for subject and object classes, and the Focal loss for relations (or the Quality Focal loss [39] when applying RQL or RPL), respectively. The $\lambda$ terms are fixed weights to balance multi-task training following [61], with $\lambda_1 = 2.5, \lambda_2 = 1, \lambda_3 = 1, \lambda_4 = 1$. During pre-training, the label sequence is constructed from both in-batch and out-of-batch labels. During fine-tuning, we use all text labels contained in the dataset to form the label sequence (unlike pre-training, these labels fall within a pre-defined category list of limited size). Note that we follow [61, 72] and exclude $\mathcal{L}_s$ during fine-tuning, since HOI detection only considers humans as subjects. During inference, the confidence score for an object is simply the top-1 score from the softmax distribution over objects, and the relation score is obtained by multiplying the original sigmoid score by the object score. We rank relation scores and retain the top-K among the correctly localized triplets (IoU > 0.5) for evaluation. K is set to 100 by default following [61, 72, 42, 70].
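The overall objective of Eq. (6) with the stated weights can be sketched as a small helper (the `finetune` flag reflects dropping the subject CE term downstream; argument names are illustrative):

```python
def total_loss(l1, giou, ce_subj, ce_obj, rel,
               lambdas=(2.5, 1.0, 1.0, 1.0), finetune=False):
    """Eq. (6) with the weights of [61]; fine-tuning drops the subject CE
    term because HOI subjects are always human."""
    l1_w, giou_w, ent_w, rel_w = lambdas
    ent = ce_obj if finetune else ce_subj + ce_obj
    return l1_w * l1 + giou_w * giou + ent_w * ent + rel_w * rel
```

With unit-valued component losses, pre-training yields 2.5 + 1 + (1 + 1) + 1 = 6.5, and fine-tuning 5.5.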
|
2210.14128/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-10-16T06:44:48.896Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36" etag="cRLPHMvlODbOzTS4Tr3M" version="15.1.2" type="google"><diagram id="6kcwfe2pvgkRhR7fa2ml" name="Page-1">7VxRd6I4FP41njPzUE9IEPHRau3M2e20W93ZM097qKTKKRIHsOr++k0gKGCoYBEE8aGFS7hJ7vdxk3sDaaHBYnNva8v5A9Gx2YJA37TQsAVhV1XoXybY+gJJUpAvmdmGzmV7wdj4D3Mh4NKVoWMnUtAlxHSNZVQ4JZaFp25Eptk2WUeLvRIzWutSm/EawV4wnmomPij2j6G7c1+qdkKlv2FjNg9qlgC/stCCwlzgzDWdrEMidNdCA5sQ1z9abAbYZMYL7OLfN0q4umuYjS03zQ2Po4Xx8PjzD9xZu2/vfYCd399voK/lXTNXvMMtqJhU3+1cYm12t9wQyu8Va+jtK7HcG8eDqU8L0AZs9hfp0Yz/N4PCuWl51aZRJRNjQZkBwQ+8pn+fyUKzwko+W62zZPrA1NQc50CLpxvkUd0XeqAtlvTIenGWviqxyGvVKrhvuDW9/pq+stEq1P50+iKitcZM+UJs1mfDEtX4YNBHzCGudnqtX0O3+vbNanMpJ4hjLfuwVVTsczCtmD07gRBGmgldvPGeLndhUoHk93lqWDN61tmfTQht1/AGIQbN3HDxeOnTf03dLJWRd2y/mp47mRu6ji0qs8nK0jF72gE900xjRqXDKXUP2KYCeodrUMfW5xcW9D7WKs9k3PFCsGtz2LFwX8M04E1IxB3NPSYL7NpbWoRf7XG/Enj9rtTu+JL13onKgY+fhxyoIrc7Xe6+ueue7dTvvRs94A4ug7OTZIG3i+ETBiZw2MwouubMT7At1bFkmhebGRsf2/6QBNurycq2+uyE6WZmktoyA9sr8A1rrCoJtnusIfqGmnqotFVWQDdsOs4ZhFXjkBUz5q3j2uQND4hJaGOGFrE8WA3TDEQtiHreLx94JaRG8ZV7Anw7Anilc0GrXjG0o5FCf/lAC1X5RGiDeU7+2PauGNvBANBfPtgipFwatrtOnDoBlWQ6wHuDb9Gzx8Saxy5e0uugWvMEE7+6KWcJdBzNaRiJ81E5ZKMkGETOxkbRBCEzG0uiYhIPpYaHx8Y8eGk87NSQh7DhYea5V9k8VGrIQ9Tw8Og8sXNZPJRkUZ4yhgy2dD4zD6bSIZDiVvKn3kFGF35kN6wHGeIEqx2ZOwcyG5uaa7xH88oiU/EanojhPR4cFBVEMDkwNg0q7CnmN4XTwDE9HemIIlezZ9g9UOQBt+v1Z7BEOWNZNeTkvJCLKzo7cinyZzVDrh31hLB7OnbHVJ0dPdGk8qrQQyA39A5UnR090VTsutBD+aEXV3V29FKkp2uFngSjYxVSTgQvrqjw6UpgEEEMxFcrj4cjSmk5yqSa/VjoWBwUXQw9Hgkdy2dzUdrQRhQ4RYOk8GOhBOe8iYIXJDJHQmpszJbQbnUyvJBZ6GpIEHl9lpAlsfGg2hR58krzEJyBh8FMvDwSigLyCpMwRZK8IWE8gCmdhKJMQoVJCBsSZiUhAqWTMHnNsJIkTJEeb0gYDydLJ2FXQMLPxJfVy493e/EgXwFtEPrFNKbOHgCB4qiqs4egV5c9UOHHFk+dPYgpKj57IHp3rcKjQ99/h63eQ0QemQMp9k7JRaQOlJxyWZfCxvGk/zypORnzmK9I8NLyB0rNkli/vt/9OWyYmJ2JpScRlJplshomnsbE8jMJSs0yCePJ41NDxOxELD2boIjizQoT8Umj4Gus31PN0g1dc3HNeZlLBBP7iDM
hgpF6hXKzK8p0VZibX/zvuCPUqyMjc/GUcUYKwpjAdxXExpqld0Js5HsBJH4o35A0JUkFEY7ULZSlas3SPocsZTtWeNfjbG3Y+zn2iqKiotlbs1RRMnvBboeVrw1Vs1NVEDdBVCxVr+3TFjmv13zjigpfqFNrlnyZUC/Ceu1Mid1Eu8e9iZxuvU4tNNhVk78DryQpm1d9UxARxTxh6Wt1avJX4NVkYRs2PMzOw9JX6tSapf68r9IaHmbkYfnrdGrNlkdAu9vwMDsPC1ymw9b2qf/zUb5R18/bf4dObwjeb1LEmrXdFW80ym8zSzqyRYAtdDPLX1h6ho9AA+u/u29jy/3r4R4LkS1lx5yg2pdA8CO8oP8icBNR73HUTaTnno1pY7UX/jij2yXLBHiG79y2OkOma+UShzuADB4ogS4CUiUzCKAog4I8UIg/6EyuQUig5LCxIdAlEkgONh48P4Ho6X7LeT97tt+4H939Dw==</diagram></mxfile>
|
2210.14128/main_diagram/main_diagram.pdf
ADDED
|
Binary file (19.4 kB). View file
|
2210.14128/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,123 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Pre-trained language models (LMs), such as BERT (Devlin et al., 2018) and GPT-3 (Brown et al., 2020), have revolutionized NLP over the last several years and advanced state-of-the-art results on a wide range of downstream NLP tasks. Recent studies show that a considerable amount of linguistic (Hewitt and Manning, 2019; Clark et al., 2019) and relational knowledge (Petroni et al., 2019; Talmor et al., 2019; Jiang et al., 2020; Petroni et al., 2020) is captured by pre-trained LMs via pre-training on large-scale textual corpora. These approaches often design "fill-in-the-blank" questions based on pre-defined relations. For example, a question "Bob Dylan was born in
|
| 4 |
+
|
| 5 |
+
\_" is manually created for the LMs to answer the "birthplace" relation of "Bob Dylan".
|
| 6 |
+
|
| 7 |
+
Most existing approaches that evaluate what pre-trained LMs have learned are based on benchmarks with pre-defined relation categories. Yet, such benchmarks present two limitations. First, most benchmarks cover only a limited number of pre-defined relations, so it is unclear whether pre-trained LMs have stored general open relation information. For example, the Google-RE subset of the LAMA benchmark (Petroni et al., 2019) includes only three relations (i.e., "birthplace", "birthdate", and "deathplace"), while hundreds of relations exist in real-world scenarios. Second, a majority of benchmarks evaluate LMs in a closed manner, meaning the gold relation is given to the model; for example, "was born in" is provided as part of the model's input. Moreover, existing benchmarks often provide a single gold relation for each input sentence, whereas an input sentence may express multiple relations, e.g., containing both "birthplace" and "birthdate" information about an argument or entity. We are thus curious: beyond this limited relational information, can we systematically benchmark the general information stored in pre-trained LMs?
|
| 8 |
+
|
| 9 |
+
In this work, we set up a new open information extraction (OIE) benchmark, called IELM, towards testing the general and open relational information stored in pre-trained LMs. We refer to OIE because it is a task designed to extract open relations from massive corpora without requiring a pre-defined relation category. As shown in Figure 1, we successfully convert pre-trained LMs into zero-shot OIE systems. We apply them to two standard OIE datasets, CaRB (Bhardwaj et al., 2019) and Re-OIE2016 (Stanovsky and Dagan, 2016; Zhan and Zhao, 2020), as well as two new large-scale factual OIE datasets in our IELM benchmark. We show that the zero-shot pre-trained LMs outperform the fully supervised state of the art on fac-
|
| 10 |
+
|
| 11 |
+
<sup>1</sup>Our code and datasets are available at [https://github.](https://github.com/cgraywang/IELM) [com/cgraywang/IELM](https://github.com/cgraywang/IELM).
|
| 12 |
+
|
| 13 |
+

|
| 14 |
+
|
| 15 |
+
Figure 1: Summary of our approach. The zero-shot open information extraction system takes a noun phrase (NP) chunked sentence as input, and outputs a set of triples. The approach first conducts argument extraction by encoding NPs as argument pairs, then performs predicate extraction via decoding using the parameters (i.e., attention scores) of the pre-trained language models. The output extractions are then evaluated on our IELM benchmark.
|
| 16 |
+
|
| 17 |
+
tual OIE datasets. Standard OIE datasets rely on human annotations and often consist of thousands of gold triples and sentences. Unlike those datasets, we create two large-scale OIE datasets, namely TAC KBP-OIE and Wikidata-OIE, via distant supervision from knowledge graphs. For example, Wikidata-OIE is constructed via aligning English Wikipedia to Wikidata triples, resulting in millions of triples and documents. The design of zero-shot LMs for OIE is important: by encoding the noun chunks as arguments in the input, we only make use of the parameters of pre-trained LMs to decode the predicates (or relations) between the arguments. To the best of our knowledge, this is the first attempt to systematically evaluate pre-trained LMs in a zero-shot OIE setting. To summarize, our key contributions are the following.
|
| 18 |
+
|
| 19 |
+
- 1. We benchmark the general relational information in pre-trained LMs on our IELM benchmark. Besides two standard OIE datasets (CaRB and Re-OIE2016), we also create two large-scale factual OIE datasets for the benchmark. The two new OIE datasets, called TAC KBP-OIE and Wikidata-OIE, are constructed via distant supervision from two knowledge graphs (TAC KBP and Wikidata). Ours is a general OIE benchmark, aiding the development of future OIE systems.
|
| 20 |
+
- 2. We enable the zero-shot capabilities of pre-trained LMs for OIE by encoding the arguments
|
| 21 |
+
|
| 22 |
+
  in the input and decoding predicates using the parameters of pre-trained LMs. The pre-trained LMs are particularly good at recovering factual arguments and predicates.
|
| 23 |
+
- 3. We test the OIE performance of 6 pre-trained LMs (the BERT and GPT-2 (Radford et al., 2019) families) and 14 OIE systems on the IELM benchmark. The zero-shot LMs achieve state-of-the-art OIE performance on TAC KBP-OIE and Wikidata-OIE, even outperforming fully supervised OIE systems.
|
| 24 |
+
|
| 25 |
+
For open information extraction (OIE), the input is an NP-chunked sentence and the output is a set of triples. Below is an example.
|
| 26 |
+
|
| 27 |
+
> Input Dylan<sub>NP</sub> was born in Minnesota<sub>NP</sub>, and was awarded Nobel Prize<sub>NP</sub>.
|
| 28 |
+
|
| 29 |
+
> Output (Dylan; born in; Minnesota), (Dylan; awarded; Nobel Prize).
|
| 30 |
+
|
| 31 |
+
NP denotes the noun phrase.
|
| 32 |
+
|
| 33 |
+
Following traditional linguistic OIE systems such as Stanford OpenIE (Angeli et al., 2015) and OpenIE5 (Saha et al., 2017, 2018), we treat each NP pair as an argument pair (e.g., "Dylan" and "Minnesota"). We then utilize the parameters of LMs to extract the predicates (e.g., "born in") between the pair in the input, as described below.
|
| 34 |
+
|
| 35 |
+
The predicate extraction problem is formulated as extracting a set of sequences in the input that are associated with an argument pair. In particular, we use the attention scores in a pre-trained LM to measure the relevance between a sequence and the argument pair. We frame the process as a search problem: given an argument pair, we aim to find the sequences with the largest attention scores connecting the pair. Computing a score for every possible sequence is computationally expensive, especially when the sequence length is large, so exhaustive search is intractable. We instead adapt beam search as an approximate strategy to efficiently explore the search space. Beam search maintains the k-best candidates, which means its time cost does not depend on the sequence length, but on the size of the beam and
|
| 36 |
+
|
| 37 |
+
(a) Search steps over the NP-chunked input "Dylan<sub>NP</sub> was born in Minnesota<sub>NP</sub>":
|
| 42 |
+
|
| 43 |
+
| Step | Action | Partial candidate | Total score |
|
| 44 |
+
|------|--------|-----------------------------|-------------|
|
| 45 |
+
| 0 | START | (Dylan; | 0 |
|
| 46 |
+
| 1 | YIELD | (Dylan; born | 0.2 |
|
| 47 |
+
| 2 | YIELD | (Dylan; born in; | 0.5 |
|
| 48 |
+
| 3 | STOP | (Dylan; born in; Minnesota) | 0.7 |
|
| 49 |
+
|
| 50 |
+

|
| 51 |
+
|
| 52 |
+
| Query \ Key | Dylan | was  | born | in  | Minnesota |
|-------------|-------|------|------|-----|-----------|
| Dylan       | X     | X    | X    | X   | X         |
| was         | 0.1   | X    | X    | X   | X         |
| born        | 0.2   | 0.1  | X    | X   | X         |
| in          | 0.05  | 0.05 | 0.3  | X   | X         |
| Minnesota   | 0.1   | 0.06 | 0.1  | 0.2 | X         |
|
| 59 |
+
|
| 60 |
+
(b) Attention matrix.
|
| 61 |
+
|
| 62 |
+
Figure 2: Illustration of predicate extraction with a pre-trained language model (LM). The upper part of (a) represents the general search steps of producing the triple (Dylan; born in; Minnesota) from the input "DylanNP was born in MinnesotaNP" encoded with argument noun phrases (NP). The lower portion shows the corresponding step-by-step process. (b) shows the attention scores generated through the forward pass of the LM over the corresponding input.
|
| 63 |
+
|
| 64 |
+
the average length of the candidates. In general, the beam search starts with the first argument (e.g., "Dylan"). At each step, beam search simply selects top-k next tokens with the largest attention scores, and just keeps k partial candidates with the highest scores, where k is the beam size. When a candidate produces the second argument (e.g., "Minnesota"), the candidate is complete.
|
| 65 |
+
|
| 66 |
+
We show a running example as follows. Let's first consider the search from left to right with beam size equal to 1. An example search process is shown in Figure 2. Given an argument pair "Dylan" and "Minnesota", at each step, the search performs one of the following actions:
|
| 67 |
+
|
| 68 |
+
- START the search from the first argument. The first argument is added as an initial candidate to the beam. In Figure 2(a), at step 0, "Dylan" is added to the beam. The total attention score is initialized to 0.
|
| 69 |
+
- YIELD a new partial candidate in the beam if the current candidate has not reached the second argument. This action does the following: the next most-attended token is appended to the end of the current candidate to yield the new candidate, and the total score is increased by the associated attention score. At step 1 of Figure 2(a), "born" is appended to the current candidate to yield the partial candidate, since "born" has the largest attention score (0.2, as highlighted in Figure 2(b)) with "Dylan" in the attention matrix. The total score becomes 0.2. Note that we only consider single-head attention in this example for simplicity. "x" in Figure 2(b) marks the tokens (prior to the current token) that are not considered in the search to prevent searching
|
| 70 |
+
|
| 71 |
+
backward. Step 2 takes the same action, and the score becomes 0.5.
|
| 72 |
+
|
| 73 |
+
- STOP the search if the candidate has reached the second argument, and add the candidate as a valid triple to the beam. Since the beam size equals 1, (Dylan; born in; Minnesota) is returned for the given pair. The final score of the triple is 0.7.
|
| 74 |
+
|
| 75 |
+
We also notice that triples often appear in reverse order in the sentence; we therefore enable bidirectionality by running the algorithm in both directions (left to right and right to left). We merge subwords into words and only consider word-level attention. The beam search is implemented via breadth-first search, which is efficient: the time complexity is O(k · d), where d is the maximum depth of the search tree.
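The START/YIELD/STOP procedure can be sketched as a small search over a word-level attention matrix; the matrix below reproduces the toy scores of Figure 2(b) (a simplified illustration, not the released implementation):

```python
def extract_predicate(attn, tokens, start, end, beam_size=1):
    """Beam search over a word-level attention matrix (START/YIELD/STOP):
    begin at the first argument, repeatedly append a most-attended later
    token, and stop when the second argument is reached."""
    beams = [([start], 0.0)]          # START: (token indices, total score)
    complete = []
    while beams:
        new_beams = []
        for path, score in beams:
            cur = path[-1]
            if cur == end:            # STOP: candidate reached the 2nd argument
                complete.append((path, score))
                continue
            # only consider later tokens, mirroring the "x" entries that
            # prevent the search from moving backward
            cands = sorted(((attn[j][cur], j)
                            for j in range(cur + 1, len(tokens))), reverse=True)
            for a, j in cands[:beam_size]:          # YIELD: extend the candidate
                new_beams.append((path + [j], score + a))
        beams = sorted(new_beams, key=lambda b: -b[1])[:beam_size]
    path, score = max(complete, key=lambda b: b[1])
    return [tokens[i] for i in path], round(score, 2)

# toy attention scores from Figure 2(b): attn[query][key]
attn = [
    [0.0,  0.0,  0.0, 0.0, 0.0],    # Dylan
    [0.1,  0.0,  0.0, 0.0, 0.0],    # was
    [0.2,  0.1,  0.0, 0.0, 0.0],    # born
    [0.05, 0.05, 0.3, 0.0, 0.0],    # in
    [0.1,  0.06, 0.1, 0.2, 0.0],    # Minnesota
]
tokens = ["Dylan", "was", "born", "in", "Minnesota"]
words, score = extract_predicate(attn, tokens, start=0, end=4)
```

With beam size 1 this recovers the path Dylan → born → in → Minnesota with total score 0.7, matching the triple (Dylan; born in; Minnesota) of Figure 2.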
|
| 76 |
+
|
| 77 |
+
We adopt two standard OIE datasets below.
|
| 78 |
+
|
| 79 |
+
CaRB CaRB (Bhardwaj et al., 2019) is a crowdsourced OIE dataset, where the input sentences are from the OIE2016 (Stanovsky and Dagan, 2016).
|
| 80 |
+
|
| 81 |
+
Re-OIE2016 Re-OIE2016 (Zhan and Zhao, 2020) is also generated based on the input sentences in the OIE2016, and is further enhanced by human annotations.
|
| 82 |
+
|
| 83 |
+
In addition, we introduce two large-scale factual OIE datasets based on knowledge graphs (KG).
|
| 84 |
+
|
| 85 |
+
| Method | AIDA dev | AIDA test |
|---------------------------|------|------|
| Spitkovsky and Chang 2012 | 26.0 | 28.2 |
| Kolitsas et al. 2018* | - | 82.4 |
| Ours | 63.8 | 64.5 |
|
| 91 |
+
|
| 92 |
+
Table 1: Evaluation of unsupervised entity linking of Wikidata-OIE on AIDA benchmark. An asterisk (\*) indicates a supervised method.
|
| 93 |
+
|
| 94 |
+
TAC KBP-OIE TAC Knowledge Base Population (KBP) Slot Filling is a task of searching a document collection to fill in target entities for pre-defined relations (slots) of a given entity in a reference KG. We adapt the dataset as an OIE dataset. In particular, we use a document sub-collection of the TAC KBP 2013 task (Surdeanu, 2013) as the input, and use the official human annotations of the documents as gold extractions.
|
| 95 |
+
|
| 96 |
+
Wikidata-OIE Besides TAC KBP-OIE, we create a larger factual OIE dataset based on the English Wikipedia. Different from TAC KBP, there are no gold triple annotations for Wikipedia. Since a large number of Wikidata triples originate from English Wikipedia, we create the dataset using distant supervision (Zhang et al., 2017) by aligning Wikidata triples to Wikipedia text. For scalability, we employ an unsupervised entity linker based on a pre-built mention-to-entity dictionary (Spitkovsky and Chang, 2012) to extract potential gold arguments. The entity linker links an arbitrary entity mention in a sentence to a Wikipedia anchor, which is further linked to a Wikidata entity. For each sentence in Wikipedia articles containing two linked arguments, if there is a Wikidata triple describing a relation holding between the two arguments, we take that Wikidata triple as a gold extraction.
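The alignment step can be sketched as follows; the entity identifiers are illustrative Wikidata-style ids and the function name is an assumption:

```python
def distant_supervision_gold(sentences, kg_triples):
    """Mark a KG triple as a gold extraction for a sentence when both of the
    triple's arguments are entity-linked in that sentence."""
    indexed = {}
    for h, r, t in kg_triples:
        indexed.setdefault((h, t), []).append((h, r, t))
    gold = []
    for sent_id, linked in sentences:
        for h in linked:
            for t in linked:
                if h != t:
                    for triple in indexed.get((h, t), []):
                        gold.append((sent_id, triple))
    return gold

# illustrative Wikidata-style ids and a toy corpus of linked sentences
kg = [("Q392", "birthplace", "Q5384"), ("Q392", "award", "Q80061")]
sents = [(0, {"Q392", "Q5384"}),   # both arguments of the first triple linked
         (1, {"Q392", "Q99"})]     # no matching triple
gold = distant_supervision_gold(sents, kg)
```

Only sentence 0 yields a gold extraction, since it is the only one containing both arguments of a KG triple.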
|
| 97 |
+
|
| 98 |
+
Unlike TAC KBP-OIE, which is built from human annotations, Wikidata-OIE is derived from automatic annotations. We therefore evaluate our unsupervised entity linker on the standard AIDA benchmark (Hoffart et al., 2011) consisting of Wikipedia entities. Table 1 shows that it significantly improves over the unsupervised baseline (Spitkovsky and Chang, 2012) and is competitive with a supervised method (Kolitsas et al., 2018). Given the scale of Wikidata-OIE, we trade an acceptable amount of effectiveness for efficiency.
|
| 99 |
+
|
| 100 |
+
The statistics of the datasets are shown in Table 2. For CaRB and Re-OIE2016, we report the statistics of the corresponding test sets. We include a dataset
|
| 101 |
+
|
| 102 |
+
| Dataset | # of triples | # of arguments | # of predicates | # of documents |
|
| 103 |
+
|--------------|--------------|----------------|-----------------|----------------|
|
| 104 |
+
| Re-OIE2016 | 1,508 | 3,328 | 1,506 | 595 |
|
| 105 |
+
| CaRB | 2,715 | 6,226 | 2,715 | 641 |
|
| 106 |
+
| TAC KBP-OIE | 27,655 | 39,661 | 41 | 3,877,207 |
|
| 107 |
+
| Wikidata-OIE | 27,368,562 | 6,047,494 | 1,156 | 6,047,494 |
|
| 108 |
+
|
| 109 |
+
Table 2: Dataset statistics of the IELM benchmark.
|
| 110 |
+
|
| 111 |
+
comparison in Appendix A.3.
|
| 112 |
+
|
| 113 |
+
Unidirectional Language Models Given an input sequence $\mathbf{x} = \{x_1, x_2, \dots, x_N\}$ , unidirectional LMs assign a joint probability to the sequence by factorizing it as $p(\mathbf{x}) = \prod_t p(x_t|x_{t-1}, \dots, x_1)$ , where $p(x_t|x_{t-1}, \dots, x_1) = \sigma(\mathbf{W}\mathbf{h}_t + \mathbf{b})$ . $\mathbf{h}_t$ is the output vector of a neural network at position t.
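The factorization can be checked numerically on a toy conditional distribution (`cond` below stands in for the model's next-token softmax; it is not GPT-2 itself):

```python
import math

def sequence_log_prob(tokens, cond):
    """log p(x) = sum_t log p(x_t | x_1, ..., x_{t-1}) for a unidirectional LM."""
    return sum(math.log(cond(tokens[:t], tokens[t]))
               for t in range(len(tokens)))

# toy conditional distribution: uniform over a 4-word vocabulary
vocab = ["Dylan", "was", "born", "<eos>"]
uniform = lambda prefix, nxt: 1.0 / len(vocab)

lp = sequence_log_prob(["Dylan", "was", "born"], uniform)
```

Under the uniform toy distribution, a 3-token sequence has log-probability 3 · log(1/4).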
|
| 114 |
+
|
| 115 |
+
We consider GPT-2 (Radford et al., 2019), where $\mathbf{h}_t$ is produced by Transformer decoders (Vaswani et al., 2017). GPT-2 is pre-trained on WebText containing 40GB of text. We explore all four pre-trained GPT-2s with different model sizes: GPT-2 (117M), GPT-2<sub>MEDIUM</sub> (345M), GPT-2<sub>LARGE</sub> (774M), and GPT-2<sub>XL</sub> (1558M).
|
| 116 |
+
|
| 117 |
+
**Bidirectional Language Models** Different from unidirectional LMs that predict the next word given the previous words, bidirectional LMs take both left and right context of the target word into consideration, formally, $p(x_t) = p(x_t|x_1, \dots, x_{t-1}, x_{t+1}, \dots, x_N)$ .
|
| 118 |
+
|
| 119 |
+
We use BERT (Devlin et al., 2018), which enables bidirectional context modeling via a masked LM objective and the Transformer architecture. BERT is pre-trained on BooksCorpus and English Wikipedia. We use both of its pre-trained configurations: BERT<sub>BASE</sub> (109M) and BERT<sub>LARGE</sub> (335M).
|
| 120 |
+
|
| 121 |
+
# Method
|
| 122 |
+
|
| 123 |
+
We compare our method with a wide set of OIE systems including both neural and traditional linguistic OIE systems. Most OIE systems are based on supervised learning, which are indicated with asterisks (\*) in Table 3. We provide details of the comparison systems in Appendix A.5.
|
2210.16613/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-02-16T06:51:10.197Z" agent="5.0 (X11)" etag="tnjMqg8-4yi34zTcceaT" version="16.5.6" type="device"><diagram id="KfjMNfiW9GZi6FOiHfb1" name="Page-1">7V1rc6M2G/01ntm+M81wsbl8XDuXbZvtZpN0uv20I4Nss4uRI3AS99e/EggbJIFvgNzE7nQDQtx0zvPo0dGFnjmav95gsJh9Rj4Me4bmv/bMy55h6Lo9IH9oyipLsQZmljDFgc8ybRIegn8hS9RY6jLwYVzKmCAUJsGinOihKIJeUkoDGKOXcrYJCst3XYApu6O2SXjwQAiFbH8HfjLLUp1BIfcnGExn+Z11jR2Zgzwzu0Q8Az56KdzLvOqZI4xQkm3NX0cwpIWXl0t2oeuKo+sHwzBKdjkhGN3N/n2KhuYfw++3d5+SePSEf7UYPM8gXLI3Zk+brPIiwGgZ+ZBeRe+Zw5dZkMCHBfDo0RcCOkmbJfOQHZ6gKLkG8yCkeH+C4TNMAg+wAwxe3WD7IxQinN7E1NIfTQ/CME+PUETyDwH22JkDsie+NyuKZ4gT+FpIYuVwA9EcJnhFsrCjdg4fI6We779sIDYdljYrwGtaLBEwWk3X196UPNlghb8HEOZAKHfoEyayXYSTGZqiCIRXm9ThBhlacps8twgtGB4/YJKsWOmBZYLKaJEixKtv7Px05x+6c2EM8v3L1+LRyxXbq4W5AOv1tQRWkg4MTUtvEicY/YSFI6ORpllWHc4xWmIP1pQlo3AC8BQmdfkYmLSga2mDYQiS4LnsEmQUSE/9iDFYFTIsUBAlceHKdzRhw0bD5dhoc4a8X36ykT3BhozrVzmcn4bET1ghKd2hHzyTzSnd/PDw9bZnjB7ha/JLil2AY+p0MZrTYn4N4iSIprQ2AAmIIS2U7BrkkQqXkV0ZXkwvKPYLUh3gX/IcY8yfw1+JsynihBd001uFATEdbG73aOPMyG7H6wTg/ZympvdlmZDLQJYeM+82EOk+GdD/RPPIfjIzsNJfM+7O4fhiit5ONyTezmnN2dmn5Owa93WHey5zR8/Vb8VxCZ7GLhPH4Gu/7DnZSRwnGvA55i4+5+sSpq/6gvDPEAGfbKIJLcMZJP9G8IW5mzFxOGe3cYTbWEfI6tyGo9Rt/KpdaJpZ9h0WeaZ675Hu3UEckDKA+L8QPvX/a+HToFy9OVZt+GS6bl3+dsKnQV2z6pTqs12aaDDyP9KmdW/dQiMp1wEtkfQGPohn6/YiifqSb4XtYgOD7G6eke7kj9g+eXW9afIe5dpyVaTCtbFiVkaZEqKt80dKGdV8adzZHceX/pkv9Xy5UO5hjNNiTL3AdGaMdmErp4x5WpSxzpSppYyumi+N6wBH8cVyj26f/VjOF3n+adqkPoRQDKpCmLk9hGiSQF4I4jjwShzSxVacpjkgPVVoxdHf9XVeHtnNrZKpbKxj/Wo1pnI0FTmGlbl5efXwbTy6+/rj59Pt7/Pb1W/3+DETjBtXpNw+p2Uau0lSezcqLbN0H7dekucfy21WkZcWsH58j5FobRzfjjI+u2h89a5yzWu9t1cVsIPNCbbluprmumujEJjdpsPuD5qwin3JbBpOiZ19y2mUnnVFIhFvx7z4Sj0q7SYCrIytpyXtFB+OSLEHEJNDf1IZN0/OT3+4ur0aPfZo3/8ySsuHPKk2+vLXn48f/kf7n67vv3ym6AXRNL3Kzf2Xv+7I3+E/xZPWAnD2GIIuPN6qCi8w8mBMwNiqCddLwBKhzwfQmXgyGlueA8eTOt++u9JrmJzUK+shMiVSb2vd4Xlzuwv6+DBC8yAiBoiiSg55M4RCnkPlM2v6Ft4vtfoaRy1NQi2rU2rp3VGLOZnv5IEqiBUiL6VPXOGf0lPPzNqFWZZyZhndMSvnTRWvPALUHPCk2px1ptQOlD
JlXZ7dUqq6D7xxSmX1GAiDpDKUmqE44TlVOu1Mq11o5Sin1elE53M4H2+Nzs+EqiVUv6+cUOI44mtEUYXAm0ljbAyTJY56+UCdcgYNRH7Ogh4b0JOF4HH6lsksPREkwpkc5GW94+CxyxzIA+j4fRnIjjE2mxp+M7BOLnK2BJBvqGiRY7g2Wu23ywKGGMN4gSI/G/pZwtQLkgBSSINow5X1Zd4wlupjVVvA8mGGXjYobEJFBiQbULdk7prhx6JMchDDt4zYCYSCjoDYpxSwOYhWLDajpuSj1KIYjKXoTJsBqjFek63bII3kyoeLQDM7fcOIqo/C3CobZHViMJnA9I1zp5g5y2p7zIKp4pj7lAVvF0T1kY95fKfnUX3mhbk6u/QEHjvy1AKaNpnIYKa9Klqxp5zrAWprBFfumLePV2Xhi+LxqlZFF2ZV36Kj1eZvabpPtf5eGCd/D6lLeobUHVXM/aHD7unhNJB4xJD6rCs/oO93SaogEHmwVzEGX7hdbzD6QP7v2XSLzq8k7Sl7+NSzL7/jrFFRTH/N0mn+S/Lv8XOGjm42VtpedXtyQn6OIzM30zJd02+nPalLwlNTNtS/35ZXNUSRvtSe5MQJb10uGxkid2GCMkH+gjmFLL3CHMQ/1yn1EoOk2arkOWStYzUPUm6JK3mGLmIbEoZZnlTV8W13XD/na49ZySfX4DfEDo1Sg18N6dTaXI2aoebBitLJ2QKbtUDlMo0h9v+UZRqVxiBrgap5ng7lJzVMVC8/GWKXUUl+UgN8WexS8wyCqKbeJs/1QLPWp1wqNMT+tUqpUD39TqFK6FwAVUPNTgVQ+UB/TSjfEx6HXq9CHjoV6ZBx6Os5Hj2ZKnmgFsokztZnZxznzMR+5NyOCzrcZ2LsqXRIJcXdVwqSi4S5GHiVpRNjS8jWPLuFfZlJhGehcF/vIxMKpd6nPaFQ7MduTCjcNkWgYT1w2+2alv2yeRFx9f2aU/e2vVoX9TPU/QG0ZRbiWrYJGuqgdE9PxBPHDTQm4m2l7DEXj8AcbjeGlvS4rcbRlOz2jk1DvbpWOQCjA2BbajFtJW6HWpkaXqnXyvKpcm1oZVvxbU4S28rgRpWvvezl6Bu9Q3/bqYoln7J+/LqfrUgFFasqrI+d+OxzU7ImwwlNP9c1bsTTYMuIJ8Me1OVvZ8STKY44aUVhLQzkPLHAoHPFVI0bPIEho+KwComQdQMjiEGSSmCXxDgxXb82FcOa0sKezprXAfxRr3n1JYMCVhHxF7So/Ox9QRhmX+IACeitpRx/zaNUATW0pyVkg8onqWrGr5dcgfZ7XgdZ2ryQDY7M16VtPoyqX5dNeRi19yI+ibhC8lGx1abvZ98B6M2sBjTYNR4zVcRjfHhlWnb74VVf7MAeBhFIX3eUAjxJJxN3Xr3wy7vVVTeeJ+/gbbK6MbhguS/5No0t8TZmA95GvqqfZL2ULK4oDcHPoggOvrSWKeFTLju20qLsmz9hMI2o8ZOSpE5hSEuQIBZ+ZAfmge+njk1GiLKzO3ppvyZg5ZdYkvQLyCoRs60gwpIEoRtYSUETXGm/6PdNd2mCQRCdod4Gtc5BLfveSrdQSxYR4aH++neKtT2MzwDvC7DsyxjdAixZzmOLLcersyXvDbQs9O8UaFusi3UBP3KdYBFXlfJB3+rjAiF3YlOc2g2ETH6epKTd3e+y2W2LNabxVgvf4r8xJwlX2ip8aZtXrMLY5FPi3E7TgVUUuQSYahPYIZKQKU9N+B8pCmI9Q0fppUMv3ioGNo+BpA7oFAPJSktBehYdzUiVvGWcjYUZfrx/fLu46JpRBsYVcXG7xGWnQa0EKlJ4GT45TuNcB/EKOsiBcvzbgZdTXft6d/DKa3/R7iTf4Hkbtb+hcwph374YqK3/ZXHuKWnee36QQcGEgep5ADso1VmXpfKRA02vfVJXJkWlWlaT7tclfqCUfJQPdXWu9CQ+VBa68N/UaM6HivGj5Lt3b8OHuvxXl1W3oA
yR1+9Tx2+0SdYfiBVjW5KQHNZacfcM646ukqtobNFYu0W1VtHlprRJ5rOd0a5D27G3++Zu4RYbF5VwP53BrQXXcMqNl77TmocmuxjRQZab8JOUyuwz8iHN8X8=</diagram></mxfile>
2210.16613/main_diagram/main_diagram.pdf
ADDED
Binary file (59.6 kB).
2210.16613/paper_text/intro_method.md
ADDED
@@ -0,0 +1,57 @@
# Introduction
Natural language interfaces to databases (NLIDB) that translate text queries into executable SQL are a challenging problem in the field of semantic parsing [@geoquery-original; @zettlemoyerlearning; @berant2013semantic]. In addition to understanding the natural language and generating an executable output, Text-to-SQL also requires the ability to reason over the schema structure of relational databases. Recently, datasets such as Spider [@spider2018yu], comprising parallel (Text, SQL) pairs over hundreds of schemas, have been released and used to train state-of-the-art neural Text-to-SQL models [@ratsql2020wang; @duorat2021scholak; @smbop2021rubin; @picardScholak2021; @dtfixup2021xu]. However, several studies have independently shown that such Text-to-SQL models fail catastrophically when evaluated on unseen schemas from real-world databases [@suhr2020exploring; @kaggledbqa2021lee; @wildtext2sql2021hazoom]. Adapting existing parsers to new schemas is challenging due to the lack of parallel data for fine-tuning the parser.
Synthesizing parallel data that is representative of natural, human-generated queries [@overnight2015; @herzig2019don] is a long-standing problem in semantic parsing. Several methods have been proposed for supplementing training with synthetic data, ranging from grammar-based canonical queries to full-fledged conditional text generation models [@overnight2015; @herzig2019don; @grounded-2020-zhang; @yang2021hierarchical; @hierarchical-zhang-2021; @victorialin2021]. For Text-to-SQL, data-augmentation methods are primarily based on training an SQL-to-Text model on labeled data from pre-existing schemas and using it to generate data in the new schemas. We show that the text generated by these methods, while more natural than canonical queries, lacks the rich diversity of natural multi-user queries. Fine-tuning with such data often deteriorates model performance, since the lack of diversity leads to a biased model.
We propose a framework called [ReFill]{.smallcaps} (§ [2](#sec:our_method){reference-type="ref" reference="sec:our_method"}) for generating diverse text queries for a given SQL workload, which is often readily available [@BaikJ019]. [ReFill]{.smallcaps} leverages parallel datasets from several existing schemas, such as Spider [@spider2018yu], to first retrieve a diverse set of text paired with SQLs that are structurally similar to a given SQL $\vq$ (§ [2.1](#sec:retrieve_queries){reference-type="ref" reference="sec:retrieve_queries"}). Then, it trains a novel *schema translator* model for converting the text of the training schema to the target schema of $\vq$. The schema translator is decomposed into a `mask` and `fill` step to facilitate training without direct parallel examples of schema translation. Our design of the `mask` module and our method of creating labeled data for the `fill` module entail non-trivial details that we explain in this paper (§ [2.2](#sec:schema_translation){reference-type="ref" reference="sec:schema_translation"}). [ReFill]{.smallcaps} also incorporates a method of filtering out inconsistent (Text, SQL) pairs using an independent binary classifier (§ [2.3](#sec:filter_desc){reference-type="ref" reference="sec:filter_desc"}), which provides more useful quality scores than cycle-consistency based filtering [@grounded-2020-zhang]. Our approach is related to retrieve-and-edit models that have been used for semantic parsing [@hashimoto2018retrieve], dialogue generation [@chi2021neural], translation [@cai-2021-nmt], and question answering [@karpukhin2020dense]. However, our method of casting the "edit" step as a two-step mask-and-fill schema translation model differs from the prior work.
We summarize our contributions as follows: (i) We propose the idea of retrieving and editing natural text from several existing schemas for transferring it to a target schema, obtaining higher text diversity compared to the standard SQL-to-Text generators. (ii) We design strategies for masking schema-specific words in the retrieved text and training the [ReFill]{.smallcaps} model to fill in the masked positions with words relevant to the target schema. (iii) We filter high-quality parallel data using a binary classifier and show that it is more efficient than existing methods based on cycle-consistency filtering. (iv) We compare [ReFill]{.smallcaps} with prior data-augmentation methods across multiple schemas and consistently observe that fine-tuning Text-to-SQL parsers on data generated by [ReFill]{.smallcaps} leads to more accurate adaptation.
Our goal is to generate synthetic parallel data to adapt an existing Text-to-SQL model to a target schema unseen during training. A Text-to-SQL model ${\mathcal{M}}\!:\!{\mathcal{X}},{\mathcal{S}}\!\mapsto\!{\mathcal{Q}}$ maps a natural language question $\vx \in {\mathcal{X}}$ for a database schema $\vs \in {\mathcal{S}}$ to an SQL query $\hat{\vq} \in {\mathcal{Q}}$. We assume a Text-to-SQL model ${\mathcal{M}}$ trained on a dataset ${\mathcal{D}_\text{train}}= \{(\vx_i, \vs_i, \vq_i)\}_{i=1}^{N}$ consisting of text queries $\vx_i$ for a database schema $\vs_i$, and the corresponding gold SQLs $\vq_i$. The train set ${\mathcal{D}_\text{train}}$ typically consists of examples from a wide range of schemas $\vs_i \in {{\mathcal{S}}_\text{train}}$. For example, the Spider dataset [@spider2018yu] contains roughly 140 schemas in the train set. We focus on adapting the model ${\mathcal{M}}$ to perform well on a target schema $\vs$ different from the training schemas in ${{\mathcal{S}}_\text{train}}$. To achieve this, we present a method of generating synthetic data $\mathcal{D}_\text{syn}$ of Text-SQL pairs containing diverse text queries for the target schema $\vs$. We fine-tune the model ${\mathcal{M}}$ on $\mathcal{D}_\text{syn}$ to adapt it to the schema $\vs$. Our method is agnostic to the exact model used for Text-to-SQL parsing. We assume that on the new schema $\vs$ we have a workload ${\mathcal{QW}_s}$ of SQL queries. Often, in existing databases, a substantial SQL workload is already available in the query logs by the time a DB manager decides to incorporate natural-language querying capabilities [@BaikJ019]. The workload is assumed to be representative but not exhaustive. In the absence of a real workload, a grammar-based SQL generator may be used [@grounded-2020-zhang; @victorialin2021].
<figure id="fig:main_figure" data-latex-placement="h!">
<embed src="figures/system.pdf" />
<figcaption>Diverse parallel data synthesis by retrieving-and-editing existing examples using <span><span class="smallcaps">ReFill</span></span>. Given an SQL <span class="math inline">$\vq$</span> from a new schema, <span><span class="smallcaps">ReFill</span></span> <strong>(1)</strong> <u>Re</u>trieves SQL-Text pairs from an existing dataset (§ <a href="#sec:retrieve_queries" data-reference-type="ref" data-reference="sec:retrieve_queries">2.1</a>) where the SQLs are structurally similar to <span class="math inline">$\vq$</span> (indicated by dashed lines). <strong>(2)</strong> Since the retrieved text comes from a different schema, we mask-out the schema-specific words (§ <a href="#sec:schema_translation" data-reference-type="ref" data-reference="sec:schema_translation">2.2</a>). <strong>(3)</strong> The masked text and the SQL <span class="math inline">$\vq$</span> are then translated into the target schema via an Edit and <u>Fill</u> step that uses a conditional text generation model like BART (§ <a href="#sec:schema_translation" data-reference-type="ref" data-reference="sec:schema_translation">2.2</a>). In this way, we transfer the text from multiple existing schemas to generate diverse text for the new schemas. <strong>(4)</strong> Finally, we use a binary classifier as a filtering model to retain only the consistent Text-SQL pairs in the output dataset (§ <a href="#sec:filter_desc" data-reference-type="ref" data-reference="sec:filter_desc">2.3</a>). </figcaption>
</figure>
::: algorithm
**input:** ${\mathcal{QW}_s}$, ${\mathcal{M}}$, ${\mathcal{D}_\text{train}}$\
$\mathcal{D}_\text{syn}\leftarrow \phi$\
**for** each $\vq \in {\mathcal{QW}_s}$ **do**\
$\quad\{\vq_r, \vx_r\} \leftarrow \textbf{RetrieveRelatedPairs}(\vq, {\mathcal{D}_\text{train}})$\
$\quad\{\vx^\text{masked}_r\} \leftarrow \textbf{MaskSchemaTokens}(\{\vq_r, \vx_r\})$\
$\quad\{\vx_r^q\} \leftarrow \textbf{EditAndFill}(\{\vq, \vx^\text{masked}_r\})$\
$\quad\mathcal{D}_\text{syn}\leftarrow \mathcal{D}_\text{syn}\cup \textbf{Filter}(\vq, \{\vx_r^q\})$\
**end for**\
${\mathcal{M}}_\text{new} \leftarrow \text{fine-tune}({\mathcal{M}}, \mathcal{D}_\text{syn})$
:::
Figure [1](#fig:main_figure){reference-type="ref" reference="fig:main_figure"} and Algorithm [\[alg:highlevel\]](#alg:highlevel){reference-type="ref" reference="alg:highlevel"} summarize our method for converting a workload ${\mathcal{QW}_s}$ of SQL queries into a synthetic dataset $\mathcal{D}_\text{syn}$ of Text-SQL pairs containing diverse text queries. Given an SQL query $\vq \in {\mathcal{QW}_s}$ for the target schema $\vs$, our method first retrieves related SQL-Text pairs $\{\vq_r, \vx_r\}_{r=1}^{R}$ from ${\mathcal{D}_\text{train}}$ on the basis of a tree-edit-distance measure such that the SQLs $\{\vq_r\}_{r=1}^R$ in the retrieved pairs are structurally similar to the SQL $\vq$ (§ [2.1](#sec:retrieve_queries){reference-type="ref" reference="sec:retrieve_queries"}). We then translate each retrieved text query $\vx_r$ so that its target SQL changes from $\vq_r$ to $\vq$ on schema $\vs$ (§ [2.2](#sec:schema_translation){reference-type="ref" reference="sec:schema_translation"}). We decompose this task into two steps: masking out schema-specific tokens in $\vx_r$, and filling the masked text to make it consistent with $\vq$ using a conditional text generation model $\mathcal{B}$ like BART [@lewis2020bart]. The translated text may be noisy since we do not have direct supervision to train such a model. Thus, to improve the overall quality of the synthesized data, we filter out the inconsistent SQL-Text pairs using an independent binary classifier (§ [2.3](#sec:filter_desc){reference-type="ref" reference="sec:filter_desc"}). Finally, we adapt the Text-to-SQL model ${\mathcal{M}}$ to the target schema $\vs$ by fine-tuning it on the diverse, high-quality filtered data $\mathcal{D}_\text{syn}$ synthesized by [ReFill]{.smallcaps}.
Given an SQL $\vq \in {\mathcal{QW}_s}$ sampled from SQL workload, we extract SQL-Text pairs $\{\vq_r, \vx_r\} \in {\mathcal{D}_\text{train}}$, from the train set such that the retrieved SQLs $\{\vq_r\}$ are structurally similar to the SQL $\vq$. We utilize tree-edit-distance [@pawlik2015efficient; @pawlik2016tree] between the relational algebra trees of SQLs $\vq$ and $\vq_r$ --- smaller distance implies higher structural similarity. Since the retrieved SQLs come from different schemas, we modify the tree-edit-distance algorithm to ignore the schema names and the database values. The tree-edit-distance is further normalized by the size of the larger tree. We only consider the $\{\vq_r, \vx_r\}$ pairs where the SQLs $\{\vq_r\}$ have a distance of less than $0.1$ w.r.t. the SQL $\vq$. Within datasets like Spider that span hundreds of schemas, it is often possible to find several SQLs structurally similar to a given SQL $\vq$. For example, in Spider we found that 76% of the train SQLs contain at least three zero-distance (structurally identical) neighbours in other schemas. In Figure [2](#fig:teds){reference-type="ref" reference="fig:teds"}, we present more detailed statistics.
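As a rough illustration of this retrieval step, the sketch below substitutes a token-level edit distance over anonymized SQL skeletons for the paper's tree-edit distance over relational-algebra trees; the function names, the keyword list, and the simplistic `skeleton` normalization are ours, not the authors':

```python
import re

# Minimal keyword list used to decide which tokens are structural; schema
# names and literals are anonymized to "_" so that, as in the paper, the
# distance ignores schema names and database values.
KEYWORDS = {"select", "from", "where", "group", "by", "order", "limit",
            "count", "avg", "min", "max", "sum", "and", "or", "asc", "desc"}

def skeleton(sql):
    """Tokenize an SQL string and anonymize non-structural tokens."""
    toks = re.findall(r"\w+|[^\s\w]", sql.lower())
    return [t if t in KEYWORDS or not t[0].isalnum() else "_" for t in toks]

def normalized_distance(q1, q2):
    """Levenshtein distance over skeleton tokens, normalized by the
    longer sequence (stand-in for the normalized tree-edit distance)."""
    a, b = skeleton(q1), skeleton(q2)
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        cur = [i]
        for j, tb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ta != tb)))
        prev = cur
    return prev[-1] / max(len(a), len(b))

def retrieve_related_pairs(q, train_pairs, threshold=0.1):
    """Keep (q_r, x_r) pairs whose SQL is structurally close to q."""
    return [(qr, xr) for qr, xr in train_pairs
            if normalized_distance(q, qr) < threshold]
```

With this stand-in metric, two queries that differ only in table/column names (e.g., `SELECT name FROM singer` vs. `SELECT title FROM album`) have distance 0 and are retrieved; a structurally different query falls above the 0.1 threshold.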
<figure id="fig:teds" data-latex-placement="t">
<embed src="figures/ted_plot.pdf" style="width:45.0%;height:1.6in" />
<figcaption>Frequency distribution of average tree-edit-distance between SQLs and their three nearest neighbours from other schemas within Spider’s train set. </figcaption>
</figure>
Our next goal is to translate the retrieved $\vx_r$ from being a text for SQL $\vq_r$ to a text $\hat{\vx}$ for SQL $\vq$, where $\vq\approx \vq_r$ structurally. However, we do not have a readily labeled dataset to learn a model that translates $\vx_r$ to $\hat{\vx}$ while being consistent with $\vq$. We therefore decompose this task into two steps: 1) a simpler task of masking schema-specific tokens in $\vx_r$ to get a template $\vx^\text{masked}_r$, and 2) a conditional text generation model that maps $(\vx^\text{masked}_r,\vq)$ to the text $\hat{\vx}$ consistent with $\vq$, by filling the masked positions in $\vx^\text{masked}_r$ as per $\vq$. We re-purpose ${\mathcal{D}_\text{train}}$ to get indirect supervision for training the text generation model. We now present each step in detail.
Converting the retrieved text queries $\{\vx_r\}$ to masked templates $\{\vx^\text{masked}_r\}$ is a critical component of [ReFill]{.smallcaps}'s pipeline, since irrelevant tokens, such as references to schema elements of the original database, can misguide the text generation module. Our initial approach was to mask tokens that match schema names or the manually refined schema-to-text link annotations of @lei-etal-2020-examining. However, this approach failed to mask all schema-related terms, since their occurrences in natural text often differ significantly from the schema names in the database. Table [\[tab:mask_anecdotes\]](#tab:mask_anecdotes){reference-type="ref" reference="tab:mask_anecdotes"} shows some examples. Consequently, we designed a simple frequency-based masking method that is significantly more effective for our goal of using the masked text merely to guide diversity. For each word that appears in the text queries of the train set, we count the number of distinct databases where that word gets mentioned at least once. For example, common words like `{‘show’, ‘what’, ‘list’, ‘order’}` get mentioned in more than 90% of the schemas, while domain-specific words like `{‘countries’, ‘government’}` occur only in the text queries of a few schemas. We mask out all words that appear in less than 50% of the schemas. The words to be masked are replaced by a special token `MASK`, and consecutive occurrences of `MASK` are collapsed into a single `MASK` token. Thus we obtain masked templates $\{\vx^\text{masked}_r\}$ retaining minimal information about their original schema.
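A minimal sketch of this frequency-based masking, assuming train examples are available as (text, schema-id) pairs; the helper names and the whitespace tokenization are illustrative simplifications:

```python
from collections import defaultdict

def build_keep_vocab(train_examples, min_schema_frac=0.5):
    """For each word, compute the fraction of distinct schemas whose text
    queries mention it at least once, and keep only schema-neutral words
    mentioned in at least `min_schema_frac` of the schemas."""
    schemas_per_word = defaultdict(set)
    all_schemas = set()
    for text, schema_id in train_examples:
        all_schemas.add(schema_id)
        for word in text.lower().split():
            schemas_per_word[word].add(schema_id)
    return {w for w, s in schemas_per_word.items()
            if len(s) / len(all_schemas) >= min_schema_frac}

def mask_text(text, keep_vocab):
    """Replace rare (schema-specific) words with MASK, collapsing runs
    of consecutive MASK tokens into one."""
    out = []
    for word in text.lower().split():
        tok = word if word in keep_vocab else "MASK"
        if tok == "MASK" and out and out[-1] == "MASK":
            continue  # collapse consecutive MASK tokens
        out.append(tok)
    return " ".join(out)
```

For example, with a toy corpus where "show", "the", "name", "of" occur across most schemas but "singer" or "airports" occur in only one, `mask_text` yields templates such as `"show the name of each MASK"`.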
Given a masked template $\vx^\text{masked}_r$, and an SQL query $\vq$, we wish to edit and fill the masked portions in $\vx^\text{masked}_r$ to make it consistent with the SQL $\vq$. We utilize a conditional text generation model $\mathcal{B}$ like [BART]{.smallcaps} [@lewis2020bart] for this purpose. We first convert $\vq$ into a pseudo-English representation $\vq^\text{Eng}$ similar to @shu2021logic, to make it easier for $\mathcal{B}$ to encode $\vq$. In addition, we wrap the table, column, or value tokens in $\vq^\text{Eng}$ with special tokens to provide explicit signals to the text generation model $\mathcal{B}$ that such tokens are likely to appear in the output text $\hat{\vx}$. Next, we concatenate the tokens in $\vx^\text{masked}_r$ and $\vq^\text{Eng}$ for jointly encoding them as an input to $\mathcal{B}$. The output of $\mathcal{B}$'s decoder is text $\hat{\vx}$, which is expected to be consistent with the SQL $\vq$.
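The encoder input construction might look roughly as follows; the `<table>`/`<column>`/`<value>`/`<sep>` marker strings are placeholders we invent for illustration, not the actual special tokens used by the authors:

```python
def linearize_input(masked_text, q_eng_tokens):
    """Build the joint encoder input [x_masked | q_Eng] for the fill model.
    `q_eng_tokens` is a list of (token, kind) pairs, with kind in
    {'word', 'table', 'column', 'value'}; schema items are wrapped in
    (assumed) special tokens to signal they may appear in the output."""
    wrapped = []
    for tok, kind in q_eng_tokens:
        if kind == "word":
            wrapped.append(tok)
        else:
            wrapped.append(f"<{kind}> {tok} </{kind}>")
    return masked_text + " <sep> " + " ".join(wrapped)
```

The resulting string would be fed to the encoder of a seq2seq model such as BART, whose decoder is trained to emit the filled text $\hat{\vx}$.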
Since we do not have direct supervision to fine-tune $\mathcal{B}$ for this task, we present a method of repurposing ${\mathcal{D}_\text{train}}$ for fine-tuning $\mathcal{B}$. ${\mathcal{D}_\text{train}}$ contains SQL-Text pairs $(\vq_i, \vx_i)$ from various schemas $\vs_i$. A *Naïve* way to train $\mathcal{B}$ is to provide $[\vx^\text{masked}_i|\vq^\text{Eng}_i]$, the concatenation of $\vx^\text{masked}_i$ and $\vq^\text{Eng}_i$, as an input to the encoder and maximize the likelihood of $\vx_i$ in the decoder's output. This way, the decoder of $\mathcal{B}$ learns to refill the masked tokens in $\vx^\text{masked}_i$ by attending to $\vq^\text{Eng}_i$ to recover $\vx_i$ in the output. While useful for learning to refill the masked positions, this *Naïve* method of training $\mathcal{B}$ is mismatched with its use during inference in two ways: (i) For a given SQL $\vq$, [ReFill]{.smallcaps} might fail to retrieve a structurally similar neighbour of $\vq$ from ${\mathcal{D}_\text{train}}$. In such cases, $\mathcal{B}$ should be capable of falling back to a pure SQL-to-Text generation mode to directly translate $\vq$ into $\hat{\vx}$. (ii) During inference, $\vx^\text{masked}_r$ and $\vq$ come from different schemas. However, during *Naïve* training, the masked text $\vx^\text{masked}_i$ and the SQL $\vq_i$ are derived from the same example $(\vq_i, \vx_i)$. To address these two limitations, we train $\mathcal{B}$ in a more *Robust* manner as follows: (a) For a random one-third of the train steps, we train $\mathcal{B}$ in the *Naïve* way, allowing $\mathcal{B}$ to learn to fill the masked tokens using $\vq^\text{Eng}_i$. (b) For another one-third, we pass only $\vq^\text{Eng}_i$ as input and maximize the likelihood of $\vx_i$. This ensures that the model is capable of generating the text from $\vq^\text{Eng}_i$ alone, if the templates $\vx^\text{masked}_i$ are unavailable or noisy.
(c) For the remaining one-third, we first retrieve an SQL-Text pair $(\vq_j, \vx_j)$, from a different schema such that the SQL $\vq_j$ is structurally similar to $\vq_i$ (§ [2.1](#sec:retrieve_queries){reference-type="ref" reference="sec:retrieve_queries"}), and the word edit distance between the masked templates $\vx^\text{masked}_i$ and $\vx^\text{masked}_j$ is also small. We can then replace $\vx^\text{masked}_i$ with $\vx^\text{masked}_j$ and encode $[\vx^\text{masked}_j|\vq^\text{Eng}_i]$ as an input to $\mathcal{B}$ and maximize the likelihood of $\vx_i$ in the decoder's output. This step makes the training more consistent with the inference, as $\vx^\text{masked}_j$ and $\vq^\text{Eng}_i$ now come from different schemas. In § [5.4](#sec:designchoice){reference-type="ref" reference="sec:designchoice"}, we justify training *Robustly* compared to *Naïve* training.
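The three-way training mix can be sketched as below; the example fields, separator string, and helper names are stand-ins for the paper's actual data pipeline, not its API:

```python
import random

def make_training_input(example, retrieve_neighbor, mode=None):
    """Choose one of the three Robust-training input modes.

    `example` provides the masked template `x_masked`, the pseudo-English
    SQL `q_eng`, and the gold text `x`; `retrieve_neighbor` returns a
    structurally similar example from a different schema (or None).
    Returns (encoder input, decoder target)."""
    mode = mode or random.choice(["naive", "sql_only", "cross_schema"])
    if mode == "naive":
        # (a) fill the example's own masked template
        src = example["x_masked"] + " <sep> " + example["q_eng"]
    elif mode == "sql_only":
        # (b) pure SQL-to-Text fallback: no template at all
        src = example["q_eng"]
    else:
        # (c) pair the SQL with a masked template from another schema,
        # matching the cross-schema setting seen at inference time
        neighbor = retrieve_neighbor(example)
        template = neighbor["x_masked"] if neighbor else example["x_masked"]
        src = template + " <sep> " + example["q_eng"]
    return src, example["x"]
```

In all three modes the decoder target is the original text $\vx_i$, so the model is consistently supervised while the encoder input varies.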
Since the data synthesized using [ReFill]{.smallcaps} is used to fine-tune a downstream Text-to-SQL parser, we learn a filtering model ${\mathcal{F}}: ({\mathcal{X}}, {\mathcal{Q}}) \mapsto \mathbb{R}$ to discard inconsistent examples from the generated dataset. ${\mathcal{F}}$ assigns lower scores to inconsistent Text-SQL pairs. For each SQL $\vq \in {\mathcal{QW}_s}$, we select the top-5 sentences generated by [ReFill]{.smallcaps} and discard those that the filtering model scores below a fixed threshold. Existing work depends on a trained Text-to-SQL parser ${\mathcal{M}}$ to assign cycle-consistency scores [@grounded-2020-zhang]. However, we show that cycle-consistency filtering favors text on which ${\mathcal{M}}$ already performs well, and hence does not result in a useful dataset for fine-tuning ${\mathcal{M}}$.
We instead train a filtering model ${\mathcal{F}}$ as a binary classifier, independent of ${\mathcal{M}}$. The Text-SQL pairs $\{(\vx_i, \vq_i)\}$ in the training set ${\mathcal{D}_\text{train}}$, serve as positive (consistent) examples and we synthetically generate the negative (inconsistent) examples as follows: (i) Replace DB values in the SQL $\vq_i$ with arbitrary values sampled from the same column of the database. (ii) Replace SQL-specific tokens in $\vq_i$ with their corresponding alternates e.g. replace `ASC` with `DESC`, or '$>$' with '$<$'. (iii) Cascade previous two perturbations. (iv) Replace the entire SQL $\vq_i$ with a randomly chosen SQL $\vq_j$ from the same schema. (v) Randomly drop tokens in the text query $\vx_i$ with a fixed probability of 0.3. (vi) Shuffle a span of tokens in the text query $\vx_i$, with span length set to 30% of the length of $\vx_i$. Thus, for a given Text-SQL pair $(\vx_i, \vq_i)$ we obtain six corresponding negative pairs $\{(\vx^n_j, \vq^n_j)\}_{j=1}^{6}$. Let $s_i$ be the score provided by the filtering model for the original pair $(\vx_i, \vq_i)$ and $\{s_j\}_{j=1}^6$ be the scores assigned to the corresponding negative pairs $\{(\vx^n_j, \vq^n_j)\}_{j=1}^{6}$. We supervise the scores from the filtering model using a binary-cross-entropy loss over the Sigmoid activations of scores as in Equation [\[eq:bce\]](#eq:bce){reference-type="ref" reference="eq:bce"}. $$\begin{equation}
\mathcal{L}_\text{bce} = -\log\sigma(s_i) - \sum_{j=1}^{6}\log\left(1-\sigma(s_j)\right)
\label{eq:bce}
\end{equation}$$ To explicitly contrast an original pair with its corresponding negative pairs we further add another Softmax-Cross-Entropy loss term. $$\begin{equation}
\mathcal{L}_\text{xent} = - \log\frac{\exp(s_i)}{\exp(s_i)+\sum_{j=1}^{6}\exp(s_j)}
\label{eq:xentbc}
\end{equation}$$
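In plain Python, using the standard binary cross-entropy form $1-\sigma(s_j)$ for the negative term, the two losses for one positive pair and its six negatives can be computed as:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def filtering_losses(s_pos, s_negs):
    """Training losses for the filtering model (sketch of the two
    equations above): s_pos is the score of the consistent (Text, SQL)
    pair, s_negs the scores of its six synthetic negative pairs."""
    # Binary cross-entropy: push the positive score up, negatives down.
    l_bce = -math.log(sigmoid(s_pos)) - sum(
        math.log(1.0 - sigmoid(s)) for s in s_negs)
    # Softmax cross-entropy: contrast the positive against its negatives.
    z = math.exp(s_pos) + sum(math.exp(s) for s in s_negs)
    l_xent = -math.log(math.exp(s_pos) / z)
    return l_bce, l_xent
```

Both terms shrink as the positive score separates from the negative scores, so a well-separating classifier minimizes their sum.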
# Method
Our method is related to the retrieve-and-edit framework, which has been previously applied to various NLP tasks. In semantic parsing, question and logical-form pairs relevant to the test-input question are retrieved from the training data and edited to generate the output logical forms in different ways [@relationatt2018; @cbr2021das; @googlecbr2021; @gupta2021retronlu]. In machine translation, memory-augmentation methods retrieve-and-edit examples from a translation memory to guide the decoder's output [@hossain2020simple; @cai-2021-nmt]. Our editing step --- masking followed by refilling --- is similar to style-transfer methods that minimally modify the input sentence with the help of retrieved examples corresponding to a target attribute [@li2018delete]. In contrast to learning a retriever, we find the simple tree-edit distance to be an effective metric for retrieving the relevant examples for our task.
|
2211.12254/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2022-11-07T14:37:31.989Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36" etag="z_lgAd6qyMWrC2uGdG3-" version="20.5.1" type="device"><diagram id="RW22P3QpMzehJcJ9KnT8" name="Page-1">7V1Zd9o4G/4tc8E5MxfkeF8u02SyNTntTDvN9LuZY2wBTozl2iJAf/0nYQlsSWyJF2h8BZaEjJ/n1btpcU+/mMyvUy8ZP8AARD1NCeY9/bKnaZpiWfiDlCzyElXV3LxklIYBLVsXfAl/Alqo0NJpGICs1BBBGKEwKRf6MI6Bj0plXprCWbnZEEbluybeiN5RWRd88b0ICM0ewwCN81LHLLS+AeFozO6sKrRm4rHGtIts7AVwVriX/mdPv0ghRPm3yfwCRAQ9hkve0dWG2tUfS0GM9vnBzQBZXx19od7+7af+/dXgXjX6dt7LixdN6QPTP4sWDIEUTuMAkE6Unv5hNg4R+JJ4PqmdYdJx2RhNInyl4q+Bl42XbclFhlL4vIKNlAzDKLqAEUzxdQxjsGrECnuabvkOGAxxjfiA9JlfQIrAvFBEH/gawAlA6QI3obW66+Q/oeKn25Sd2ZpL1aDPPC7waBpUhKj4jFZdrxHGXyjIcsDvrMXYfHh8vrx6fLzQ7/2n2Tetr5kC4p9hFqIQxgLy+CFRGd4yWBRBCaheFI5ifOlj6AAu/0AgC7FUn9OKSRgE5DZSPsuMl1kkJUMYIzpSVb0aogyrxJOhmwJPhiLSpCt10WQJNF2GKVYwp8qTVg1PmJkSUabiCkRZEqJUuwKirj7Ofr789X1iTS/sj9qn2Q9r3Ff30GA7dJaXJbnlGIZzgmYJN4MjDiuo4XCo+b5MdQXWwDKtapA2lTLShq2eqboAtiYBW6tiVEjBVn9RsHWmSJidcEU70SzS2ntBWmKR60L6/E75atv/XWfm4+iH/c85Qq7dV43dSIMA+4T0EqZoDEcw9qI/16WcKl63uYcwoXw8AYQWFH9vimCZrY3QkntvBTYFkYfCl7K/KoOJ/vQzDPEt1hpdKxNi8DY1g9PUB/RXHNirv/F6/Nntm4UbzEP0b+H7d9LVmUmvLue05+XFgl3E+En/LV4UfkUu1z9bXrHftUet1S61qujrvquhpSstDy2nG1p1UWu0TO0entgvPbSclvHXu6FVE7VG2w7JO7daRsteg/7OVZutleFn8rgDfqEfd0c/yEtHAFVB40PqxvrXj6Nvf/14+vZ48+m7+r9pX8aiFSEah/ZIxp4FmNaPKUmCY8h131cU4jWti6xR/onDNGOef7CO8P/K+2JtjjIvVwi7ta2K9YBsKUetQrOVhRharylbKuXa7IxhTcpAdbjEuGacvVIfaMrOrmpWCVYnJnW5wypns9ks66FCwqfnhI5qFhFZNr9kNdbCw0wEqehnS/rPcQPVTea54ZCakJ55kYWjifcrWBJyXUjTKoquW1WlaZm3wNK0jjj10KiJcV7lTlBINshC9isIQR1kWy2T7Z6KoWhP4Zt8kGa+1i8weJVvcrRuUPnnaeotCs0S0iA74C+bCicoeY/VRiF1hCFBpzdyj9Lkp6iddvWG2mXlassKORzXjvY6dSOsa+A7qjsvIZvvfasn4XcaISeTW29nWEZjGkG6jsJ4twohH49boGHKsvZZaL26FIZp7+qqOu0hBa3LdNUmJryBca3XionOmxixq5rFxJXYGE5uDlpWLUT+5aVbgQecoXTpVpWLqbkJBelaakWi3FW1Lu3OOi7hTAwxWXmSG+Q3oX4MC+QMVYRZtkCOj9qqA1mvWpjbgJXPfu+77lBV3DPHqQtacar5EsRZiMgff7j/LO
Ccjb2EfEUYVPATks4/JCAN8d8h3uC6/PO6cDcdc8A23nC6ZrlMNIXIWy57p17jwaono50R0xOsFtHrl1idE+SriUR3sytdll6fahI3EOTYdMQeFFDw2lDfb78Bcyaq5/VkcpM83RvZKEYK0k1ibI9F7TkGnmyX2zVSndcmf86T4fa13v3eO7d2R4/UJuaUNCo8bxu/somkOl1zEziBIVOjjjaobrpO2CykCWrSNmpyGqX75/bYljXCOCc9YRpT11336uogXFbbf70B61zZipepcjkJWebaEvGqbb/hzqTk7nlvbeu8t37iGcrlBkZxi6MoI9vFcfeIKsYDWoMSIFt1WqkE9MwLhF3HU5/0bkwQ2HJMQTE0KhZiGFGxWKidPBwkD6o4o9GoQGh7ODAtWlbd4FL0hjiADLVBvHYuAHjzALLf9whyNojO8ZhWrTOtxyUIrFZtV5XKEtudbW1TIMTkXrOKYo95uxZtq6ZxUb4lsa1NnpKzc1Hum6NW7X2PIHeD6ByPbZWNmM62tigIR2JbpVPznXFtUyLEFGez3pZMIg40rq8/ee5wY2txKWLJkVCNpogZXF2OuJ4RtZLP4zW2Wu0RS2dtD5OE40gSs4VunbE9FoFoO0u8zyEabc6/ljccGhLfpFHbqtYexrxz27r/iobWdGjtAUtnWw+ThCOxrfVPIHUCcVq2VQzE7uEoRDUwFYEhehtPjZzerhtuyZ7rlnh8u2yVbn0UiYHSamF9RxK9dtomSYxa6BL5o9R4zbwHwTHKnrErKru6SJIv35VYP46eU35BiMHDLZ58oro1LVeR4r3H4ZMVn/vezOZBg1uhrNuiX6dKBLuKDTpSoGUL62oGupGl4ALQbNtmW0C/7uQm8QSWPfxsc6ufPe9p+AmU4NTdbbMif6DsDhiOmIMxHInmq2AzpFROxBTM19QL42VK6fScNq2i0azyR3NJ1KbWpIGSTOxeLfWfpmC+4mwI00nH19rJlmhfNvKa4UviwS215W2cTPPzuo5Q3zVCls0fEyFyxQ4caYYqMbGQU/Vpit47V6rNT8eLZDUbGO2ci93zHKk3+zVZ7tesTzYlV6d+IlVVXo7Knz4kkRtDYcfDlN44VZejs88bp07y3V586GEookvZaOghOV/j91siphMQhB4Cf9SqW0/VZXElrOmN2sGq30rbzkE+Lr8nWz+THNQvO3TGsCs5cwbZT6Pk08L75/b57ml8ezf4Nr3bsh8qS7xYao6oISOmKB0NfldyC8M+/sit2NJmDb1JGC3ypjcgegFE8gv1BZtmUJtGK/KbkpoYBxBeVKh78dLQw594/HhompK3QW9t53vJpiYzCjKpNJR8lkqJAMIqoY8f3w/jkfhLmCZjHNjkFfmgU4h26IdYAmPam8IeZ1mDVpEQ7W2pGHLil6+kLnQ1g2lQvvmqL/x/B88h7o70mQtrn4qOeM8A+DBdHujSR+PQf45BRm8UxiEK2dPwbQvIb21XEIJSu2EEPcQ/ZhBmSeQtWPMoxBWa8ls4SWCKvBhJvZmHaYTCPuYXRtP8Pbsr3yUXTea75MWDtBPWTlgPElbRf+40XydM7Wu+Gy8b41bXKbbcm5Se3D3d/3CaZVggBGsbnBqJ67M5i8Jt+FZkGS+Zk1OB+yj1cMQsypdkDFLigxdtB4M+ncA49DcGAEeAsK5ws5S2OCnMDvx5I8L4MoVENld11/iRxw8wAKTF/wE=</diagram></mxfile>
2211.12254/main_diagram/main_diagram.pdf
ADDED
Binary file (20 kB).
2211.12254/paper_text/intro_method.md
ADDED
@@ -0,0 +1,82 @@
# Introduction
Neural rendering methods, especially Neural Radiance Fields (NeRFs) [\[35\]](#page-9-0), have recently emerged as a new modality for representing and reconstructing scenes [\[50\]](#page-9-1), achieving impressive results for novel view synthesis. Substantial research effort continues to focus on formulating more efficient NeRFs (e.g., [\[6,](#page-8-0) [20,](#page-8-1) [43\]](#page-9-2)), to make NeRFs more accessible in use-cases with more limited computational<span id="page-1-0"></span> resources. As NeRFs become more widely accessible, the need for editing and manipulating the scenes represented by NeRFs will continue to grow. One notable editing application is removing objects and inpainting the 3D scene, analogous to the well-studied 2D image inpainting task [\[23\]](#page-8-2). However, several obstacles impede progress on this task, not only for the 3D inpainting process itself, but also in obtaining the input segmentation masks. First, NeRF scenes are implicitly encoded within the neural mapping weights, resulting in an entangled and uninterpretable representation that is non-trivial to manipulate (compared to, say, the explicit discretized form of 2D image arrays or meshes in 3D). Moreover, any attempt to inpaint a 3D scene must not only generate a perceptually realistic appearance in a single given view, but also preserve fundamental 3D properties, such as appearance consistency across views and geometric plausibility. Finally, to obtain masks for the target object, it is more intuitive for most end users to interact with 2D images, rather than 3D interfaces; however, requiring annotations of multiple images (and maintaining view-consistent segments) is burdensome to users. An appealing alternative is to expect only a minimal set of annotations for a single view. This motivates a method capable of obtaining a view-consistent 3D segmentation mask of the object (for use in inpainting) from single-view sparse annotations.
In this paper, we address these challenges with an integrated method that takes in multiview images of a scene, extracts a 3D mask with minimal user input, and fits a NeRF to the masked images, such that the target object is replaced with plausible 3D appearance and geometry. Existing interactive 2D segmentation methods do not consider the 3D aspects of the problem (e.g., [\[42\]](#page-9-3)), while current NeRF-based approaches either cannot perform well from sparse annotations [\[76\]](#page-10-0) or do not attain sufficient accuracy [\[44\]](#page-9-4). Similarly, while some current NeRF manipulation algorithms allow object removal, they do not attempt to provide perceptually realistic inpaintings of newly unveiled parts of space (e.g., [\[64\]](#page-10-1)). To our knowledge, ours is the first approach that handles both interactive multiview segmentation and full 3D inpainting in a single framework.
Our technique leverages off-the-shelf, 3D-unaware models for segmentation and inpainting, and transfers their outputs to 3D space in a view-consistent manner. Building on the (2D) interactive segmentation [\[8,](#page-8-3) [15,](#page-8-4) [33\]](#page-8-5) literature, our framework starts from a small number of user-defined image points on a target object (and a few negative samples outside it). From these, our algorithm initializes masks with a video-based model [\[4\]](#page-8-6), and lifts them into a coherent 3D segmentation via fitting a semantic NeRF [\[36,](#page-9-5) [76,](#page-10-0) [77\]](#page-10-2). Then, after applying a pretrained 2D inpainter [\[48\]](#page-9-6) to the multiview image set, a customized NeRF fitting process is used to reconstruct the 3D inpainted scene, utilizing perceptual losses [\[72\]](#page-10-3) to account for inconsistencies in the 2D inpainted images, as well as inpainted depth images to regularize the geometry of the masked region. Overall, we provide a complete method, from object selection to novel view synthesis of the inpainted scenes, in a unified framework with minimal burden on the user, illustrated in Figure [1.](#page-0-0)
We demonstrate the effectiveness of our approach through extensive qualitative and quantitative evaluations. In addition, we address the lack of a benchmark for comparing scene inpainting methods, and introduce a new dataset where the "ground-truth inpaintings" (i.e., real images of the scene without the object) are available as well.
In summary, our contributions are as follows: (i) a complete process for 3D scene manipulation, starting from object selection with minimal user interaction and ending with a 3D inpainted NeRF scene; (ii) to perform such selection, an extension of 2D segmentation models to the multiview case, capable of recovering 3D-consistent masks from sparse annotations; (iii) to ensure view-consistency and perceptual plausibility, a novel optimization-based formulation of 3D inpainting in NeRFs, which leverages 2D inpainters; and (iv) a new dataset for 3D object removal evaluation that includes corresponding object-free ground-truth.
# Method
Given a set of RGB images, $\mathcal{I} = \{I_i\}_{i=1}^n$ , with corresponding 3D poses, $\mathcal{G} = \{G_i\}_{i=1}^n$ , and camera intrinsic matrix, K, our model expects one additional "source" view with sparse user annotations (i.e., a few points identifying the unwanted object). From these inputs, we produce a NeRF model of the scene, capable of synthesizing an *inpainted* image from any novel view. We begin by obtaining
<span id="page-2-2"></span>
Figure 2. Overview of our multiview segmentation architecture. As input, this network takes in a 3D coordinate, x, and a view direction, d, and returns view-independent density, $\sigma(x)$ , objectness logit, s(x), and view-dependent color, c(x, d).
an initial 3D mask from the single-view annotated source (§ 4.1.1), followed by fitting a semantic NeRF, to improve the consistency and quality of the mask (§ 4.1.2). Finally, in § 4.2 we describe our view-consistent inpainting method, which takes the views and recovered masks as inputs. Our approach leverages the outputs of 2D inpainters [48] as appearance and geometry priors to supervise the fitting of a new NeRF. Figure 1 illustrates our entire approach, including the inputs and outputs. Additional details are in our supplementary material.
We first describe how we initialize a rough 3D mask from single-view annotations. Denote the annotated source view as $I_1$. The sparse annotations and the source view are given to an interactive segmentation model [15] to estimate the initial source object mask, $\widehat{M}_1$. The training views are then treated as a video sequence and, along with $\widehat{M}_1$, given to a video instance segmentation model, V [4, 57], to compute $V(\{I_i\}_{i=1}^n, \widehat{M}_1) = \{\widehat{M}_i\}_{i=1}^n$, where $\widehat{M}_i$ is the initial guess for the object mask for $I_i$. The initial masks, $\{\widehat{M}_i\}_{i=1}^n$, are typically inaccurate around the boundaries, since the training views are not actually adjacent video frames, and video segmentation models are usually 3D-unaware. Hence, we use a semantic NeRF model [36, 76, 77] to resolve the inconsistencies and improve the masks (§ 4.1.2), thus obtaining the masks for each input view, $\{M_i\}_{i=1}^n$, to use for inpainting (§ 4.2).
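This initialization step can be sketched as a simple composition of the two pretrained models; here `seg2d` and `vseg` are hypothetical callables standing in for the interactive 2D segmenter and the video instance segmenter, respectively:

```python
def init_masks(images, click_points, seg2d, vseg):
    """Initialize rough per-view object masks from sparse user clicks.

    images: the n training views; images[0] is the annotated source view.
    click_points: sparse positive/negative user points on the source view.
    seg2d: interactive 2D segmentation model (stand-in for [15]).
    vseg: video instance segmentation model (stand-in for [4, 57]),
          applied to the views as if they were video frames.
    Returns the initial mask guesses, one per view.
    """
    m1 = seg2d(images[0], click_points)  # initial source mask
    return vseg(images, m1)              # propagate to all n views
```

In practice these masks are only rough guesses; the semantic-NeRF fitting is what makes them 3D-consistent.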
Our multiview segmentation module takes the input RGB images, $\{I_i\}_{i=1}^n$ , the corresponding camera intrinsic and extrinsic parameters, and the initial masks, $\{\widehat{M}_i\}_{i=1}^n$ , and trains a semantic NeRF [76]. Figure 2 depicts the network used in the semantic NeRF; for a point, x, and a view direction, d, in addition to a density, $\sigma(x)$ , and color, c(x,d), it returns a pre-sigmoid "objectness" logit, s(x). The objectness probability is then acquired as $p(x) = \operatorname{Sigmoid}(s(x))$ .
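As a schematic of this field (not the actual learned network; the random linear maps below are toy stand-ins for the MLP/hash-grid parameters), the only addition over a standard NeRF is one view-independent logit head:

```python
import numpy as np

class SemanticField:
    """Toy stand-in for the field in Figure 2: maps a 3D point x and a
    view direction d to a view-independent density sigma(x) and
    objectness logit s(x), and a view-dependent color c(x, d).
    Random linear maps replace the real learned parameters."""

    def __init__(self, seed=0):
        g = np.random.default_rng(seed)
        self.w_sigma = g.normal(size=3)
        self.w_logit = g.normal(size=3)
        self.w_color = g.normal(size=(3, 6))

    def __call__(self, x, d):
        sigma = np.exp(self.w_sigma @ x)                        # sigma(x) >= 0
        s = self.w_logit @ x                                    # logit s(x)
        c = 1.0 / (1.0 + np.exp(-self.w_color @ np.concatenate([x, d])))
        return sigma, s, c
```

The objectness probability of a point is then `1 / (1 + exp(-s))`, exactly as for the per-ray case.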
<span id="page-3-5"></span>We use Instant-NGP [3, 37, 38] as our NeRF architecture due to its fast convergence. The expected objectness logit, $\widehat{S}(r)$ , associated with a ray, r, is obtained by rendering the logits of the points on r instead of their colors, with respect to the densities, in Eq. 1 [36]:
$$\widehat{S}(r) = \sum_{i=1}^{N} T_i (1 - \exp(-\sigma_i \delta_i)) s_i, \tag{3}$$
where for simplicity, $s(r(t_i))$ is denoted by $s_i$ . The objectness probability of a ray, $\widehat{P}(r) = \operatorname{Sigmoid}(\widehat{S}(r))$ , is then supervised using the classification loss:
$$\mathcal{L}_{\text{clf}} = \frac{1}{|\mathcal{R}|} \sum_{r \in \mathcal{R}} \text{BCE}(\mathbb{1}_{r \in \mathcal{R}_{\text{masked}}}, \widehat{P}(r)), \tag{4}$$
where 1 is the indicator function, BCE stands for the binary cross entropy loss, and $\mathcal{R}_{\text{masked}}$ is the set of rays passing through pixels that are masked in $\{\widehat{M}_i\}_{i=1}^n$ . During the calculation of the classification loss, $\mathcal{L}_{\text{clf}}$ , the weights of the colors in the rendering equation (Eq. 1) are detached to limit the supervised updates to the logits; this prevents changes to the existing geometry, due to gradient updates altering the $\sigma$ field. The geometry is supervised using a reconstruction loss, $\mathcal{L}_{\text{rec}}$ , as in NeRF [35], via the given RGB images. The overall loss, used to supervise the NeRF-based multiview segmentation model, is then given by:
$$\mathcal{L}_{\text{mv}} = \mathcal{L}_{\text{rec}} + \lambda_{\text{clf}} \mathcal{L}_{\text{clf}}, \tag{5}$$
where the classification weight, $\lambda_{\rm clf}$, is a hyperparameter. After optimization, 3D-consistent masks, $\{M_i\}_{i=1}^n$, are obtained by thresholding the objectness probabilities at 0.5. Finally, we further improve the masks with a two-stage optimization: after obtaining the initial 3D mask, the masks are rendered from the training views and used as the initial guesses (in place of the video segmentation outputs) to supervise a secondary multiview segmentation model.
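The logit rendering of Eq. 3 and the final thresholding can be sketched in NumPy as follows (per-ray sample values are hypothetical; the real model renders logits with the same transmittance weights used for color):

```python
import numpy as np

def render_logit(sigmas, deltas, logits):
    """Volume-render per-sample objectness logits along a ray (Eq. 3).

    sigmas, deltas, logits: 1D arrays of per-sample densities, segment
    lengths, and pre-sigmoid objectness logits s_i.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)          # per-sample opacity
    # T_i: transmittance up to (but excluding) sample i
    trans = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))
    return np.sum(trans * alphas * logits)           # expected logit S(r)

def ray_mask(sigmas, deltas, logits):
    """Objectness probability P(r) = Sigmoid(S(r)) and its 0.5-threshold."""
    p = 1.0 / (1.0 + np.exp(-render_logit(sigmas, deltas, logits)))
    return p, p > 0.5
```

A ray whose high-density samples carry positive logits ends up in the mask; negative logits keep it out.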
Figure 3 shows an overview of our view-consistent inpainting method. As the paucity of data precludes directly training a 3D inpainter, our method leverages existing 2D inpainters to obtain depth and appearance priors, which then supervise the fitting of a NeRF to the completed scene. This inpainted NeRF is trained using the following loss:
<span id="page-3-4"></span>
$$\mathcal{L}_{inp} = \mathcal{L}'_{rec} + \lambda_{LPIPS} \mathcal{L}_{LPIPS} + \lambda_{depth} \mathcal{L}_{depth}, \tag{6}$$
where $\mathcal{L}'_{rec}$ is the reconstruction loss for the unmasked pixels, and $\mathcal{L}_{LPIPS}$ and $\mathcal{L}_{depth}$ define the perceptual and depth losses (see below), with weights $\lambda_{LPIPS}$ and $\lambda_{depth}$ .
<span id="page-3-1"></span>
Figure 3. Overview of our inpainting method. Using posed input images and their corresponding masks (upper- and lower-left insets), we obtain (i) an initial NeRF with the target object present and (ii) the set of inpainted input RGB images with the target object removed (but with view inconsistencies). The initial NeRF (i) is used to compute depth, which we inpaint to obtain depth images as geometric priors (upper-right inset). The inpainted RGB images (ii), which act as appearance priors, are used with the depth priors, to fit a 3D consistent NeRF to the inpainted scene.<sup>1</sup>
Our proposed view-consistent inpainting approach uses RGB inputs, $\{I_i\}_{i=1}^n$ , the camera intrinsic and extrinsic parameters, and corresponding object masks, $\{M_i\}_{i=1}^n$ , to fit a NeRF to the scene without the undesired object. To begin with, each image and mask pair, $(I_i, M_i)$ , is given to an image inpainter, INP, to obtain the inpainted RGB images, $\{\widetilde{I}_i\}_{i=1}^n$ , where $\widetilde{I}_i = \text{INP}(I_i, M_i)$ [48]. Since each view is inpainted independently, directly supervising a NeRF using the inpainted views leads to blurry results due to the 3D inconsistencies between each $\widetilde{I}_i$ (see Figure 7). In this paper, instead of using mean squared error (MSE) to optimize the masked area, we propose the use of a perceptual loss, LPIPS [72], to optimize the masked parts of the images, while still using MSE for the unmasked parts, where no inpainting is needed. This loss is calculated as follows:
<span id="page-3-3"></span>
$$\mathcal{L}_{\text{LPIPS}} = \frac{1}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \text{LPIPS}(\widehat{I}_i, \widetilde{I}_i), \tag{7}$$
where $\mathcal{B}$ is a batch of indices between 1 and n, and $\widehat{I}_i$ is the i-th view rendered using NeRF. Our multiview inpainting model uses the same architecture as the segmentation model (see Figure 2), except that it omits the additional objectness logit output, s.
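This split supervision, MSE outside the mask against the original view and a perceptual distance against the 2D inpainting, can be sketched as follows (grayscale images for brevity; any image-distance callable stands in for the LPIPS network):

```python
import numpy as np

def inpainting_losses(rendered, inpainted, original, mask, perceptual_fn):
    """Supervision split of Eqs. 6-7.

    rendered: HxW view rendered by the inpainted NeRF.
    inpainted: HxW 2D-inpainted target for the same view.
    original: HxW original captured view.
    mask: HxW boolean object mask.
    perceptual_fn: image-distance callable standing in for LPIPS [72].
    Returns (reconstruction loss on unmasked pixels, perceptual loss).
    """
    l_rec = float(np.mean((rendered[~mask] - original[~mask]) ** 2))
    l_lpips = float(perceptual_fn(rendered, inpainted))
    return l_rec, l_lpips
```

Using MSE inside the mask instead of `perceptual_fn` would average the inconsistent 2D inpaintings and blur the result, which is exactly what the perceptual loss avoids.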
<span id="page-3-2"></span><sup>1</sup>IBRNet images in Figures 3, 5, 6, and 7 by Wang et al., available in IBRNet [56] under an Apache License 2.0.

Even with the perceptual loss, the discrepancies between the inpainted views can incorrectly guide the model towards converging to degenerate geometries (e.g., 'foggy' geometry may form near the cameras, to explain the disparate per-view information). Thus, we use inpainted depth maps as additional guidance for the NeRF model, and detach the weights when calculating the <span id="page-4-1"></span>perceptual loss, so that it only fits the colors of the scene. For this purpose, we use a NeRF optimized on images that include the unwanted object, and render the depth maps, $\{D_i\}_{i=1}^n$, corresponding to the training views. Depth maps are calculated by substituting the distance to the camera for the color of each point in Eq. 1:
$$D(r) = \sum_{i=1}^{N} T_i (1 - \exp(-\sigma_i \delta_i)) t_i. \tag{8}$$
The rendered depths are then given to an inpainter to obtain inpainted depth maps, $\{\widetilde{D}_i\}_{i=1}^n$, where $\widetilde{D}_i = \text{INP}(D_i, M_i)$. We found that using LaMa [48] for depth inpainting, as in the RGB case, gives sufficiently high-quality results. Note that this is all computed as a preprocessing step, with a NeRF optimized on the original scene; this NeRF can be the same model used for multiview segmentation. If the masks come from another source, such as human annotations, a new NeRF is fitted to the scene. These depth maps are then used to supervise the inpainted NeRF's geometry, via the $\ell_2$ distance of its rendered depths, $\widehat{D}_i$, to the inpainted depths, $\widetilde{D}_i$:
$$\mathcal{L}_{\text{depth}} = \frac{1}{|\mathcal{R}|} \sum_{r \in \mathcal{R}} \left| \widehat{D}(r) - \widetilde{D}(r) \right|^2, \tag{9}$$
where $\widehat{D}(r)$ and $\widetilde{D}(r)$ are the depth values for a ray, r.
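The depth supervision above can be sketched as follows (NumPy; the per-ray sample values are hypothetical, and the rendering mirrors Eq. 8):

```python
import numpy as np

def render_depth(sigmas, deltas, ts):
    """Render depth along a ray (Eq. 8): the volume-rendering sum of
    Eq. 1, with the sample distance t_i substituted for the color."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))
    return float(np.sum(trans * alphas * ts))

def depth_loss(rendered_depths, inpainted_depths):
    """Eq. 9: mean squared error between the inpainted NeRF's rendered
    depths and the inpainted depth maps from the original scene."""
    r = np.asarray(rendered_depths)
    d = np.asarray(inpainted_depths)
    return float(np.mean((r - d) ** 2))
```

A ray terminating on an opaque surface simply reports that surface's distance, so the loss pulls the inpainted geometry towards the inpainted depth prior.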
Calculating the perceptual loss, Eq. 7, requires full input views to be rendered during optimization. Since rendering each pixel necessitates multiple forward passes through the MLP, this is an expensive process for high-resolution images, with two consequences: (i) the batch size, $|\mathcal{B}|$, must be small for the rendered images and their corresponding computation graphs to fit in memory, and (ii) optimization is slow, even with batch sizes as small as $|\mathcal{B}| = 1$. A straightforward solution is to render a downsized image and compare it to a downsized version of the inpainted image; however, this loses information when the downsizing factor is large. Following image-based works (e.g., SinGAN [46] and DPNN [14]) and 3D works (e.g., ARF [71]), we perform the computations on a patch basis; instead of rendering complete views, we render batches of smaller patches and compare them with their counterparts in the inpainted images via the perceptual loss. Only patches inside the bounding box of the object mask are used. For fitting the unmasked areas, recall that $\mathcal{L}'_{rec}$ (Eq. 6) simply alters $\mathcal{L}_{rec}$ (Eq. 2) to sample rays only from unmasked pixels. By separating the perceptual and reconstruction losses, we prevent inconsistency within the mask, while avoiding unnecessary changes to the rest of the scene.
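The patch-based evaluation can be sketched as below (NumPy; the patch size and count are hypothetical hyperparameters, a simple MSE stands in for the LPIPS network, and the mask bounding box is assumed to leave room for full patches):

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_bbox(mask):
    """Bounding box (r0, r1, c0, c1), inclusive, of a boolean mask."""
    rows, cols = np.nonzero(mask)
    return rows.min(), rows.max(), cols.min(), cols.max()

def patch_perceptual_loss(rendered, inpainted, mask, patch=8, n_patches=4):
    """Average a perceptual distance over random patches whose top-left
    corners fall inside the object-mask bounding box."""
    r0, r1, c0, c1 = mask_bbox(mask)
    r1 = min(r1, rendered.shape[0] - patch)   # keep patches inside the image
    c1 = min(c1, rendered.shape[1] - patch)
    total = 0.0
    for _ in range(n_patches):
        r = rng.integers(r0, r1 + 1)
        c = rng.integers(c0, c1 + 1)
        a = rendered[r:r + patch, c:c + patch]
        b = inpainted[r:r + patch, c:c + patch]
        total += float(np.mean((a - b) ** 2))  # placeholder for LPIPS
    return total / n_patches
```

Only these small patches need to be rendered per step, which keeps both memory use and wall-clock time manageable compared with rendering full views.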
<span id="page-4-0"></span>
Figure 4. Scenes from our dataset. Columns: input view (left; 60 input views with camera poses per scene), corresponding human-annotated target object mask (middle), and a ground-truth view without the target object, from a different camera pose (right; 40 ground-truth views with camera poses per scene). Rows: different scenes; see supplement for examples of all scenes.
Here, we consider further leveraging the multiview data to guide the image inpainter. In particular, parts of the training images that would otherwise be hallucinated by the 2D image inpainter may be visible in other views; in such cases, there is no need to invent those details, since they can be retrieved from the other views. To prevent such unnecessary inpainting, we propose a mask refinement approach: for each source image, depth, and mask tuple, $(I_s, D_s, M_s)$, we substitute masked pixels in $I_s$ and $D_s$ with their values from other views in which they are visible, thereby shrinking the source mask, $M_s$. After this refinement step, only the parts of $I_s$ and $D_s$ that are occluded by the undesired object in all of the training views remain masked. As a result, the image inpainter has to fill in a smaller area, yielding improved inpainting results. Please see our supplementary material for details.
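A simplified sketch of this refinement (pinhole reprojection in NumPy; it skips occlusion testing and sub-pixel sampling, and all names are illustrative rather than the actual implementation):

```python
import numpy as np

def refine_mask(img_s, depth_s, mask_s, K, G_s, others):
    """Shrink the source mask by copying over pixels visible in other views.

    img_s, depth_s: HxW source image and depth map.
    mask_s: HxW boolean source object mask.
    K: 3x3 camera intrinsics; G_s: 4x4 world-to-camera pose of the source.
    others: list of (img, mask, G) tuples for the other training views.
    Returns the substituted image and the shrunken mask.
    """
    H, W = mask_s.shape
    K_inv = np.linalg.inv(K)
    refined = mask_s.copy()
    out = img_s.copy()
    ys, xs = np.nonzero(mask_s)
    for y, x in zip(ys, xs):
        # back-project the pixel to a world-space point using source depth
        cam_pt = (K_inv @ np.array([x, y, 1.0])) * depth_s[y, x]
        world = np.linalg.inv(G_s) @ np.append(cam_pt, 1.0)
        for img_o, mask_o, G_o in others:
            p = K @ (G_o @ world)[:3]
            if p[2] <= 0:
                continue                         # behind the other camera
            u, v = int(round(p[0] / p[2])), int(round(p[1] / p[2]))
            if 0 <= v < H and 0 <= u < W and not mask_o[v, u]:
                out[y, x] = img_o[v, u]          # pixel recovered from view
                refined[y, x] = False            # no inpainting needed here
                break
    return out, refined
```

Pixels that stay masked after this loop are occluded by the object in every view and are the only ones the 2D inpainter must hallucinate.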