diff --git "a/title_31K_G/test_title_long_2405.03251v1.json" "b/title_31K_G/test_title_long_2405.03251v1.json" new file mode 100644--- /dev/null +++ "b/title_31K_G/test_title_long_2405.03251v1.json" @@ -0,0 +1,441 @@ +{ + "url": "http://arxiv.org/abs/2405.03251v1", + "title": "Exploring the Frontiers of Softmax: Provable Optimization, Applications in Diffusion Model, and Beyond", + "abstract": "The softmax activation function plays a crucial role in the success of large\nlanguage models (LLMs), particularly in the self-attention mechanism of the\nwidely adopted Transformer architecture. However, the underlying learning\ndynamics that contribute to the effectiveness of softmax remain largely\nunexplored. As a step towards better understanding, this paper provides a\ntheoretical study of the optimization and generalization properties of\ntwo-layer softmax neural networks, providing theoretical insights into their\nsuperior performance as other activation functions, such as ReLU and\nexponential. Leveraging the Neural Tangent Kernel (NTK) framework, our analysis\nreveals that the normalization effect of the softmax function leads to a good\nperturbation property of the induced NTK matrix, resulting in a good convex\nregion of the loss landscape. Consequently, softmax neural networks can learn\nthe target function in the over-parametrization regime. To demonstrate the\nbroad applicability of our theoretical findings, we apply them to the task of\nlearning score estimation functions in diffusion models, a promising approach\nfor generative modeling. Our analysis shows that gradient-based algorithms can\nlearn the score function with a provable accuracy. Our work provides a deeper\nunderstanding of the effectiveness of softmax neural networks and their\npotential in various domains, paving the way for further advancements in\nnatural language processing and beyond.", + "authors": "Jiuxiang Gu, Chenyang Li, Yingyu Liang, Zhenmei Shi, Zhao Song", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Exploring the Frontiers of Softmax: Provable Optimization, Applications in Diffusion Model, and Beyond", + "main_content": "Introduction 3 2 Related Work 4 3 Preliminary 5 3.1 Neural Tangent Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 4 Main Results 7 5 Proof Sketch 8 6 Application in Di\ufb00usion 9 6.1 Preliminary of Di\ufb00usion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 6.2 Main Result of Di\ufb00usion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 7 Discussion and Future Work 11 8 Conclusion 12 A De\ufb01nition 13 B Basic Concentration 14 B.1 Some Concentration Basic Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 B.2 Kernel Perturbation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 C Induction 19 C.1 Main Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 C.2 Induction Part 1. For Weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 C.3 Induction Part 2. For Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 C.4 Induction Part 3. For Gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 D Induction Part 1: For Weights 22 D.1 Bounding the Gradient at any Time . . . . . . . . . . . . . . . . . . . . . . . . . . . 
22 D.2 Bounding the Initialization Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 E Induction Part 2: For Loss 23 E.1 Decomposition for \u2225vec(F(\u03c4 + 1) \u2212Y )\u22252 2 . . . . . . . . . . . . . . . . . . . . . . . . . 24 E.2 Choice of Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 E.3 Bounding C0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 E.4 Bounding C1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 E.5 Bounding C2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 E.6 Bounding \u2225F(\u03c4 + 1) \u2212F(\u03c4)\u22252 F . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 F NTK Regression 35 F.1 Equivalence between Trained Net and Kernel Regression . . . . . . . . . . . . . . . . 35 1 \fG Di\ufb00usion 39 G.1 Main Result of Di\ufb00usion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 G.2 Tools From Previous Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 2 \f1 Introduction Large Language Models (LLMs) like GPT4 [AAA+23] from OpenAI and Claude 3 [Ant24] from Anthropic have widely and profoundly changed the world. Some researchers believe they split human history into two parts, the Pre-LLM Era and the LLM Era. The LLMs have been widely used in human activities, such as education [KSK+23], law [Sun23], \ufb01nance [LWDC23], bio-informatics [TTE+23], coding [HZL+24], and even top AI conference reviews such as ICML, ICLR, and NeurIPS [LIZ+24]. To make LLMs successful, one of the cores of LLMs is the Transformer model architecture [VSP+17], which has many advantages, including faster-parallelized inference rather than sequential inference like RNN [HS97]; being easy to scale up the model capacity to support the scaling laws in neural language models [KMH+20], i.e. since the input and output dimension of each Transformer blocks is the same, we can stack an arbitrary number of layers as we want. The kernel design of the Transformer block is self-attention layers, where each block has many attention heads and each head has its three important private parameter matrices for key, query, and value operation. Many papers believe that the self-attention operation is the critical reason for emergent ability [WTB+22], including in-context learning [OEN+22, Red24] and compositional ability to solve complex task [DLS+24, LPC+24]. The Transformer is so successful and has been widely certi\ufb01ed that this architecture can be adopted in many other modalities such as tabular data, image/video generation, e.g. the video di\ufb00usion model SORA [Ope24] from OpenAI using Transformer [PX23] as its backbone. When we delve into the self-attention mechanism, we \ufb01nd the softmax function plays a crucial role [VSP+17]. It enables the model to focus on the most related information among input sequences by giving higher attention scores to the positions that are more relevant for the current position\u2019s representation and to capture dependencies between positions. [CLJ20] \ufb01nd that softmax attention is more expressive and performs better than any convolutional layer. [DSZ23] exhibits softmax attention outperforms linear attention in most scenarios. Although the softmax function code has been executed every second on thousands of servers, there is a limited understanding of the following question: (\u2217) What is the learning mechanism that makes softmax so powerful? 
To demystify the black box, in this paper, we analyze the Gradient Descent (GD) training dynamics for two-layer Neural Networks (NN) with softmax activation function for multi-dimensional regression, i.e., F(W, x, a) \u2208Rd and F(W, x, a)\u2113:= m\u27e8a\u2113, exp(W \u22a4x)\u27e9\u00b7 \u27e8exp(W \u22a4x), 1m\u27e9\u22121 \u2200\u2113\u2208{1, . . . , d}, where m is number of hidden neurons, exp(\u00b7) is element-wise exponential function, a\u2113, W are the \ufb01rst and second layer weights respectively and x is the input data. Note that, the self-attention could be written as F(W KX, W QX, W V X) \u2208Rd\u00d7n\u2032, where W K, W Q, W V \u2208Rd\u00d7d denotes key, query, and value matrix and X \u2208Rd\u00d7n\u2032 is a sequence of n\u2032 tokens. Thus, studying the two-layer softmax network is the prerequisite to understanding self-attention. See more discussion in Section 7. There is a rich line of work studying two-layer NN learning trajectory under ReLU activation function ([LL18, DZPS19, AZLS19a, ADH+19a, SY19, MMM19, SYZ21, BPSW21, MOSW22, CB20, ZGJ21, LLWA21, CCBG22] and many more) or exponential activation function from the latest work [GMS23]. As far as we know, our work is the \ufb01rst to theoretically study the optimization and generalization of the two-layer softmax network and it is a \ufb01rst step on understanding the power of softmax. 3 \fReLU ([MOSW22]) exp ([GMS23]) Softmax (ours) m \u2126(\u03bb\u22122n2 log(n)) \u2126(\u03bb\u22122n2+o(1) log2(n)) \u2126(\u03bb\u22122n2+o(1) log2(n)) b T \u2126(\u03bb\u22122n2 log(n/\u01eb)) \u2126(\u03bb\u22122n2+o(1) log(n/\u01eb)) \u2126(\u03bb\u22122n2+o(1) log(n/\u01eb)) Table 1: Comparing hidden neuron number m in two-layer neural networks and training steps b T are required under di\ufb00erent activation functions to guarantee that, for any \u01eb > 0, with probability at least 0.99, the training loss is smaller or equal to \u01eb. Here, n is the number of training samples and \u03bb is the smallest eigenvalue for the matrix of neural tangent kernel, where n > 1 and \u03bb < 1. We can see that the two-layer NN with softmax activation function requires almost the same number of neurons and training steps to converge as that with ReLU or exponential activation functions. More details: Theorem 3.6 in [MOSW22] for ReLU; Theorem 1.1 in [GMS23] for exp; Corollary 4.3 in our paper for softmax. One popular analysis method for studying over-parameterized NN is Neural Tangent Kernel (NTK) [JGH18], where overparameterized networks are approximately linear models around their initialization, so the network training is almost convex. To answer our (\u2217) question above, we adopt the powerful NTK analysis paradigm in this work. Our analysis shows that, because of the normalization e\ufb00ect of the denominator, the Neural Tangent Kernel induced by the softmax has a good perturbation property (Lemma 5.1), which means the loss landscape of softmax version has a large convex region. Thus, the softmax NN requires almost the same number of neurons and training steps to \ufb01t the data and converge as ReLU or exponential NN, which is illustrated in Table 1 clearly (Theorem 4.2). 
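As a concrete illustration of the map analyzed above, the following is a minimal NumPy sketch (ours, not taken from the paper; the function name softmax_net and the toy dimensions are illustrative) of one forward pass of the two-layer softmax network F(W, x, a).

import numpy as np

# Minimal sketch of F(W, x, a)_l = m * <a_l, exp(W^T x)> / <exp(W^T x), 1_m>.
def softmax_net(W, a, x):
    # W: (d, m) hidden-layer weights; a: (d, m), row l is a_l in {-1, +1}^m; x: (d,).
    m = W.shape[1]
    e = np.exp(W.T @ x)       # exp(W^T x), element-wise, shape (m,)
    S = e / e.sum()           # softmax vector S(W^T x), shape (m,)
    return m * (a @ S)        # entry l equals m * <a_l, S>, output in R^d

d, m = 4, 8
rng = np.random.default_rng(0)
W = rng.normal(size=(d, m))
a = rng.choice([-1.0, 1.0], size=(d, m))
x = rng.normal(size=d)
x /= np.linalg.norm(x)        # keep ||x||_2 <= 1, matching the data assumption below
print(softmax_net(W, a, x))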
To demonstrate the broad applicability of our theoretical \ufb01ndings, we apply our analysis in a practical case study to show the generalization ability of softmax NN, where the task is learning score estimation functions in di\ufb00usion models with noisy labels, a promising approach for generative modeling, as we can smartly transfer it to a multi-dimensional regression task (Theorem 6.5). Thus, we show that gradient-based algorithms can learn the score function with a provable accuracy. Our paper\u2019s contributions are summarized as follows: \u2022 Softmax NTK: We build up the \ufb01rst NTK analysis framework for two-layer NN with softmax activation function. Furthermore, our multi-dimensional regression setting is more general than previous work [MOSW22, GMS23] (ReLU and exp) and can be degenerated to the linear regression setting. \u2022 Di\ufb00usion Models Case Study: We apply our results in learning score estimation functions in di\ufb00usion models with noisy labels to verify our analysis e\ufb00ectiveness. 2 Related Work Softmax and Attention in LLMs. Recently, signi\ufb01cant advances have been achieved in language modeling, particularly with the introduction of Transformer architectures and attention mechanisms [VSP+17]. Self-attention to capture long-range dependencies in text, revolutionizing the \ufb01eld of NLP, e.g., BERT [DCLT19], PaLM [CND+22], LLaMA [TLI+23], LLaMA 2 [TMS+23], ChatGPT [Ope22], GPT4 [AAA+23], Claude 3 [Ant24] and so on. Many works demonstrate the softmax is beyond other activation functions such as ReLU attention or linear attention in di\ufb00erent aspects, e.g, approximation power [DSZ23, SHT24, NLL+24, GLL+24a], prompt tuning [ORST23], in-context learning ability [GSX23, SWXL23, CPM+24, CSWY24], compositional ability[XSL24]. Many works study to generalize the softmax into high order attention [AS24b] or 4 \fto accelerate softmax computation [WLK+20, CLD+20, SZZ+21, QSD+21, AS23, BSZ24, AS24a, HJK+24, HLSL24, DSY24, SYZ24, GSY23, GSYZ23, KMZ23, GLL+24b]. Another line of work analyzes a one-layer softmax network trained on the linear regression task [LSX+23, DLMS23, DLS23, CSY24, GSWY23, SCWZ24], while our work studies a two-layer softmax setting. Neural Tangent Kernel. Recently many studies show that the analysis of optimization and generalization for deep learning should be interwoven together. One line of work uses the \ufb01rst-order Tyler expansion to study su\ufb03ciently over-parameterized neural networks around its initialization like NTK, e.g. [MRH+18, ZCZG18, JGH18, LL18, AZLS19b, ZG19, OS19, LXS+19, NXL+19, Yan19, SY19, DLL+19, AZLS19a, COB19, OFLS19, ADH+19a, CG19, JT19, AZLL19, OS20, CFW+20, ZCZG20, GSJW20, BPSW21, MZ22, MOSW22, GMS23, QSS23, QMS+23, QSY23, SY23, GQSW24, SZZ24] and more. Thus, the neural network optimization can be a convex problem. The NTK method has been widely used in di\ufb00erent scenarios, such as preprocessing analysis [SYZ21, HSWZ22, ALS+23, SCL+23, SSLL23, SSL24, GQSW24], federated learning [LSY23], LoRA adaptation [HWAZ+21, XSW+24, SMF+23] of LLMs [MWY+23], and learning score estimation functions in di\ufb00usion models [HRX24]. Di\ufb00usion Model. Score-based generative di\ufb00usion models can generate high-quality image samples comparable to GANs which requires adversarial optimization [HJA20, SSDK+21, KLL+24]. Based on the U-Net [RFB15], stable di\ufb00usion can successfully generate business-used images. 
Based on the softmax-based self-attention [PX23], OpenAI released a video di\ufb00usion model, SORA [Ope24], with a surprising performance. Another line of work studying how to train the di\ufb00usion models to have a better theoretical guarantee [SE19, SE20, SK21, SGSE20, SDME21, LLT22, KFL22, SDCS23, LKB+23, CLL23, CDD23, CHZW23, SCK23, YFZ+23, BDD23, GKL24, CCL+24, GLB+24, WCL+24, CKS24]. In this work, we adapt our analysis in di\ufb00usion models. 3 Preliminary We \ufb01rst introduce some notations. Then, we will introduce our problem setup. Notations. We use N(\u00b5, \u03a3) to denote the Gaussian distribution with \u00b5 and covariance \u03a3. For any positive integer n, we use [n] to denote set {1, 2, \u00b7 \u00b7 \u00b7 , n}. Let a vector z \u2208Rn. We denote the \u21132 norm as \u2225z\u22252 := (Pn i=1 z2 i )1/2, the \u21131 norm as \u2225z\u22251 := Pn i=1 |zi|, \u2225z\u22250 as the number of non-zero entries in z, \u2225z\u2225\u221eas maxi\u2208[n] |zi|. We use z\u22a4to denote the transpose of a z. We use \u27e8\u00b7, \u00b7\u27e9to denote the inner product. Let A \u2208Rn\u00d7d, we use vec(A) to denote a length nd vector. We denote the Frobenius norm as \u2225A\u2225F := (P i\u2208[n],j\u2208[d] A2 i,j)1/2. For a function f(x), we say f is L-Lipschitz if \u2225f(x)\u2212f(y)\u22252 \u2264L\u00b7\u2225x\u2212y\u22252. Let D denote a distribution. We use x \u223cD to denote that we sample a random variable x from distribution D. We use E[] to denote expectation and Pr[] to denote probability. We use p.s.d. to denote the positive-semide\ufb01nite matrix. As we have multiple index, to avoid confusion, we usually use i, j \u2208[n] to index the training data, \u2113\u2208[d] to index the output dimension, r \u2208[m] to index neuron number. Models. We consider a two-layer softmax neural network. The hidden layer has m neurons, and we use the softmax function as the activation function, F(W, \u00b7, a) : Rd1 \u2192Rd2 and F(W, x, a)\u2113:= m\u27e8a\u2113, exp(W \u22a4x)\u27e9\u00b7 \u27e8exp(W \u22a4x), 1m\u27e9\u22121 \u2200\u2113\u2208[d2], (1) where exp(\u00b7) is element-wise exponential function. We use m as a normalization factor. Note that we can reduce the d2 to 1 for the linear regression setting. To simplify the proof we let d1 = d2. 5 \fNote that our proof can generalize to di\ufb00erent d1, d2 easily. We only optimizing W and not both W and a simultaneously as many previous works to simplify optimization, e.g., [DZPS19, SY19, MOSW22], where x \u2208Rd represents the input, w1, \u00b7 \u00b7 \u00b7 , wm \u2208Rd are weight vectors in the \ufb01rst layer, i.e., W = [w1, \u00b7 \u00b7 \u00b7 , wm] \u2208Rd\u00d7m, and a1, \u00b7 \u00b7 \u00b7 , ad \u2208Rm are weights in the second layer. We can simplify the notation as F(W, x) when the context is clear. Data. We have n training data points Dn = {(xi, yi)}n i=1, where x \u2208Rd and y \u2208Rd.1 We denote X = [x1, . . . , xn] \u2208Rd\u00d7n and Y = [y1, . . . , yn] \u2208Rd\u00d7n. We assume that \u2225xi\u22252 \u22641 and \u2225yi\u22252 \u22641, \u2200i \u2208[n]. We have the softmax function S \u2208Rm\u00d7n, where Si \u2208Rm denotes \u27e8exp(W \u22a4xi), 1m\u27e9\u22121 \u00b7 exp(W \u22a4xi) and Si,r \u2208R denotes \u27e8exp(W \u22a4xi), 1m\u27e9\u22121 \u00b7exp(w\u22a4 r xi), \u2200r \u2208[m], \u2200i \u2208[n]. For simplicity, we denote \u03b1i as \u27e81m, exp(W \u22a4xi)\u27e9, expi as exp(W \u22a4xi) and expi,r as exp(w\u22a4 r xi), \u2200r \u2208[m], \u2200i \u2208[n], when the context is clear. 
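To make this notation concrete, the snippet below (ours; the names and toy sizes are illustrative, not part of the paper) builds the softmax matrix S in R^{m x n} and the normalizers alpha_i for a small synthetic dataset and verifies that <1_m, S_i> = 1 for every column.

import numpy as np

# Sketch of the softmax matrix S in R^{m x n}: column i is
#   S_i = <exp(W^T x_i), 1_m>^{-1} * exp(W^T x_i),   alpha_i = <1_m, exp(W^T x_i)>.
def softmax_matrix(W, X):
    # W: (d, m) weights; X: (d, n) inputs with ||x_i||_2 <= 1.
    E = np.exp(W.T @ X)          # E[r, i] = exp(<w_r, x_i>), shape (m, n)
    alpha = E.sum(axis=0)        # alpha_i, shape (n,)
    S = E / alpha                # column i is S_i, entry (r, i) is S_{i,r}
    return S, alpha

d, m, n = 4, 8, 5
rng = np.random.default_rng(1)
W = rng.normal(size=(d, m))
X = rng.normal(size=(d, n))
X /= np.linalg.norm(X, axis=0)   # enforce ||x_i||_2 <= 1
S, alpha = softmax_matrix(W, X)
print(S.sum(axis=0))             # each column sums to 1, i.e. <1_m, S_i> = 1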
Gradient Descent. We use er to denote a vector where the r-th coordinate is 1 and everywhere else is 0. \u2200r \u2208[m], \u2200\u2113\u2208[d], we have \u2202F (W,x,a)\u2113 \u2202wr \u2208Rd can be written as \u2202F(W, x, a)\u2113 \u2202wr = + m\u27e8a\u2113\u25e6er, exp(W \u22a4x)\u27e9\u00b7 \u27e8exp(W \u22a4x), 1m\u27e9\u22121x \u2212m\u27e8a\u2113, exp(W \u22a4x)\u27e9\u00b7 \u27e8exp(W \u22a4x), 1m\u27e9\u22122 \u00b7 \u27e8exp(W \u22a4x), er \u25e61m\u27e9x = + m\u27e8a\u2113\u25e6er, S\u27e9\u00b7 x \u2212m\u27e8a\u2113, S\u27e9\u00b7 \u27e8S, er \u25e61m\u27e9x. (2) We use W(\u03c4) to denote the weights of the \ufb01rst layer on the timestamp \u03c4 and similar for S(\u03c4) and F(\u03c4) when the context is clear. Now, we introduce some necessary de\ufb01nition used. De\ufb01nition 3.1 (F(\u03c4), dynamic prediction). We de\ufb01ne Fi(\u03c4) \u2208Rd, for any timestamp \u03c4, as F\u2113,i(\u03c4) := m\u27e8a\u2113, exp(W(\u03c4)\u22a4xi)\u27e9\u00b7 \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9\u22121. Here xi \u2208Rd. It can be rewritten as F\u2113,i(\u03c4) = m\u27e8a\u2113, Si(\u03c4)\u27e9. We consider d-dimensional MSE loss. De\ufb01nition 3.2 (Loss function over time). We de\ufb01ne the objective function L as below: L(W(\u03c4)) := 1 2 X i\u2208[n] X \u2113\u2208[d] (F\u2113,i(\u03c4) \u2212y\u2113,i)2. Thus, we de\ufb01ne the gradient of w. De\ufb01nition 3.3 (\u2206wr(\u03c4)). For any r \u2208[m], we de\ufb01ne \u2206wr(\u03c4) \u2208Rd as below: \u2206wr(\u03c4) := m n X i=1 d X \u2113=1 (F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 \u0010 \u27e8a\u2113\u25e6er, Si(\u03c4)\u27e9\u2212\u27e8a\u2113, Si(\u03c4)\u27e9\u00b7 \u27e8Si(\u03c4), er \u25e61m\u27e9 \u0011 \u00b7 xi where Si(\u03c4) = \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9\u22121 \u00b7 exp(W(\u03c4)\u22a4xi) \u2208Rm. Note that we can simplify the gradient calculation by the fact 1 = \u27e81m, Si(\u03c4)\u27e9. Thus, we have the following claim. Claim 3.4. \u2206wr(\u03c4) := m Pn i=1 Pd \u2113=1(F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 \u0010 (\u27e8a\u2113,r \u00b7 1m \u2212a\u2113, Si(\u03c4)\u27e9) \u00b7 Si,r(\u03c4) \u0011 \u00b7 xi. 1Our analysis can extend to xi \u2208Rd1 and yi \u2208Rd2 easily. 6 \fWe use the gradient descent (GD) algorithm with the learning rate \u03b7 to train the network. As we only train the hidden layer W and \ufb01x a, we have the following gradient update rule. De\ufb01nition 3.5 (Gradient descent). The gradient descent algorithm for optimizing the weight matrix W is de\ufb01ned as: W(\u03c4 + 1) = W(\u03c4) \u2212\u03b7\u2206W(\u03c4). where \u2206W(\u03c4) \u2208Rd\u00d7m and \u2206wr(\u03c4) \u2208Rd is the r-th column of \u2206W(\u03c4) de\ufb01ned in De\ufb01nition 3.3. 3.1 Neural Tangent Kernel Now, we are ready to introduce our key tools, Neural Tangent Kernel induced by the softmax. We de\ufb01ne the kernel with respect to timestamp \u03c4. De\ufb01nition 3.6 (Kernel function). For simplicity, we denote S(W \u22a4xi) as Si \u2208Rm \u22650 and v\u2113,r = a\u2113,r \u00b7 1m \u2212a\u2113\u2208Rm. We de\ufb01ne the function (Gram matrix) H : Rd\u00d7m \u2192Rnd\u00d7nd as following H(W) := \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 H1,1 H1,2 \u00b7 \u00b7 \u00b7 H1,d H2,1 H2,2 \u00b7 \u00b7 \u00b7 H2,d . . . . . . ... . . . 
Hd,1 Hd,2 \u00b7 \u00b7 \u00b7 Hd,d \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb, and for each \u21131, \u21132 \u2208[d], we have H\u21131,\u21132 \u2208Rn\u00d7n is de\ufb01ned as [H\u21131,\u21132]i,j(W) := 1 mx\u22a4 i xj m X r=1 \u27e8v\u21131,r, Si\u27e9\u00b7 mSi,r \u00b7 \u27e8v\u21132,r, Sj\u27e9\u00b7 mSj,r. For any timestamp \u03c4, for simplicity, we denote H(\u03c4) := H(W(\u03c4)) and denote H(0) as H\u2217. Note that H\u2217is a positive semi-de\ufb01nite matrix, and we denote its minimum eigenvalue as \u03bb := \u03bbmin(H\u2217). Initialization. We use symmetric initialization, which is widely used in previous works [DM20, DLS22, MOSW22, SWL22, SWL24]. De\ufb01nition 3.7 (Symmetric initialization). For each r \u2208[m/2], we initialize weights as below \u2022 We draw w2r\u22121 from N(0, \u03c32Id) and uniformly draw a2r\u22121 from {\u22121, +1}d. \u2022 We assign a2r = \u2212a2r\u22121 and w2r\u22121 = w2r. Due to symmetric initialization, we can easily see that F(W(0), x) = 0, \u2200x \u2208Rd. 4 Main Results We \ufb01rst de\ufb01ne a constant we used. De\ufb01nition 4.1. Let C > 10 denote a su\ufb03ciently large constant. We de\ufb01ne parameter B as follows B := max{C\u03c3 p log(nd/\u03b4), 1}. Now, we are ready to present our main result, whose complete proof is in Appendix C.1. 7 \fTheorem 4.2 (Main result). Let \u03bb = \u03bbmin(H\u2217) > 0, m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)), \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) and b T = \u2126((m\u03b7\u03bb)\u22121 log(nd/\u01eb)) = \u2126(\u03bb\u22122n2d2 exp(16B) \u00b7 log(nd/\u01eb)). For any \u01eb, \u03b4 \u2208(0, 0.1), after b T iterations, with probability at least 1 \u2212\u03b4, we have \u2225F( b T) \u2212Y \u22252 F \u2264\u01eb. If we \ufb01x \u03b4 and \u03c3 in B de\ufb01ned in the De\ufb01nition 4.1, since exp(\u0398(B)) = (nd)o(1), we can simplify the m = \u2126(\u03bb\u22122(nd)2+o(1)) and b T = \u2126(\u03bb\u22122(nd)2+o(1)). The Theorem 4.2 means that as we have poly(nd) number of neurons and training steps, the softmax NN can \ufb01t any training datasets with n number of d-dim training samples on d-dim regression task. Corollary 4.3. Consider the 1-dimension linear regression setting, i.e., d1 = d and d2 = 1. Let \u03bb = \u03bbmin(H\u2217) > 0, m = \u2126(\u03bb\u22122n2 exp(18B) log2(n/\u03b4)), \u03b7 = 0.1\u03bb/(mn2 exp(16B)) and b T = \u2126((m\u03b7\u03bb)\u22121 log(n/\u01eb)) = \u2126(\u03bb\u22122n2 exp(16B) \u00b7 log(n/\u01eb)). For any \u01eb, \u03b4 \u2208(0, 0.1), after b T iterations, with probability at least 1 \u2212\u03b4, we have \u2225F( b T ) \u2212Y \u22252 2 \u2264\u01eb. Proof. Directly follow Theorem 4.2. As shown in Table 1, our two-layer softmax network needs the same number of training steps b T and number of neurons m as two-layer ReLU networks or two-layer exponential networks. 5 Proof Sketch We \ufb01rst show a key Lemma below, showing that the weight w perturbation will not change the Neural Tangent Kernel too much. Lemma 5.1 (Weight value perturbation \u21d2kernel value perturbation). Let R \u2208(0, 0.01). If the following conditions hold \u2022 Let f W = [ e w1, \u00b7 \u00b7 \u00b7 , e wm] \u2208Rd\u00d7m, where e w1, \u00b7 \u00b7 \u00b7 , e wm are i.i.d. draw from N(0, \u03c32Id). \u2022 Let W = [w1, \u00b7 \u00b7 \u00b7 , wm] \u2208Rd\u00d7m and satisfy \u2225e wr \u2212wr\u22252 \u2264R for any r \u2208[m]. Then with probability at least 1 \u2212\u03b4, we have \u2225H(W) \u2212H(f W)\u2225F \u2264Rnd exp(10B). 
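To make Definition 3.6 and Lemma 5.1 concrete before turning to the proof, here is a small NumPy sketch (ours; function names and toy sizes are illustrative) that forms the Gram matrix H(W) under the symmetric initialization of Definition 3.7. Its smallest eigenvalue at initialization is the lambda = lambda_min(H*) appearing in Theorem 4.2, and re-evaluating the same routine at a perturbed W gives a direct way to inspect the Frobenius gap ||H(W) - H(W~)||_F that Lemma 5.1 controls.

import numpy as np

# Sketch (ours) of the Gram matrix H(W) in R^{nd x nd} from Definition 3.6,
# under the symmetric initialization of Definition 3.7.
def symmetric_init(d, m, sigma, rng):
    W = np.zeros((d, m)); a = np.zeros((d, m))
    for r in range(m // 2):
        w = rng.normal(0.0, sigma, size=d)
        s = rng.choice([-1.0, 1.0], size=d)
        W[:, 2 * r] = w;  W[:, 2 * r + 1] = w      # paired neurons share the weight vector
        a[:, 2 * r] = s;  a[:, 2 * r + 1] = -s     # and get opposite top-layer signs
    return W, a

def gram_matrix(W, a, X):
    d, m = W.shape
    n = X.shape[1]
    E = np.exp(W.T @ X)                            # exp(<w_r, x_i>), shape (m, n)
    S = E / E.sum(axis=0)                          # column i is the softmax vector S_i
    # v[l, r, :] = a_{l,r} * 1_m - a_l
    v = a[:, :, None] * np.ones(m) - a[:, None, :]                 # (d, m, m)
    # g[l, r, i] = <v_{l,r}, S_i> * m * S_{i,r}
    g = np.einsum('lrk,ki->lri', v, S) * m * S[None, :, :]         # (d, m, n)
    XtX = X.T @ X                                  # x_i^T x_j
    H = np.zeros((n * d, n * d))
    for l1 in range(d):
        for l2 in range(d):
            block = (XtX / m) * np.einsum('ri,rj->ij', g[l1], g[l2])
            H[l1 * n:(l1 + 1) * n, l2 * n:(l2 + 1) * n] = block
    return H

d, m, n = 3, 20, 4
rng = np.random.default_rng(2)
W, a = symmetric_init(d, m, sigma=1.0, rng=rng)
X = rng.normal(size=(d, n)); X /= np.linalg.norm(X, axis=0)
H = gram_matrix(W, a, X)
print(np.linalg.eigvalsh(H).min())   # lambda_min(H*) at initialization, Theorem 4.2's lambda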
Please see Appendix B.2 for the proof of Lemma 5.1. We can see that the kernel matrix has a small perturbation when the weights w perturb. Note that in Lemma 4.2 [MOSW22], they have \u2225H(W)\u2212H(f W )\u2225F \u22642Rn for the ReLU activation function and in Lemma 6.7 [GMS23], they have \u2225H(W)\u2212H(f W)\u2225F \u22643Rn1+o(1) for the exp activation function. When we consider the 1-dimension linear regression task, we have \u2225H(W) \u2212H(f W)\u2225F \u2264Rn1+o(1), which is almost the same as the other two cases. Remark 5.2. In the proof of Lemma B.2, we do not use concentration bound as previous work [SY19, MOSW22, GMS23]. The reason is that we consider the worst case. In general, E[H(W)\u2212H(f W)] \u0338= 0nd\u00d7nd. Thus, using the concentration bound may not gain any bene\ufb01ts. 8 \fBased on Lemma 5.1, we can use math induction to \ufb01nish the proof of our main Theorem. We show the induction statement below. Lemma 5.3 (Induction). Let \u03c4 be a \ufb01xed integer. Assume the same condition as Theorem 4.2. Let D be de\ufb01ned as De\ufb01nition A.2 and D < R. If the following conditions hold \u2022 Weights Property. \u2225wr(i) \u2212wr(0)\u22252 \u2264R, \u2200i \u2208[\u03c4] \u2022 Loss Property. \u2225F(i) \u2212Y \u22252 F \u2264\u2225F(0) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2)i, \u2200i \u2208[\u03c4] \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01 for all r \u2208[m], \u2200i \u2208[\u03c4] Then, for \u03c4 + 1 and \u2200r \u2208[m], we have \u2022 Weights Induction. \u2225wr(\u03c4 + 1) \u2212wr(0)\u22252 \u2264D. \u2022 Loss Induction. \u2225F(\u03c4 + 1) \u2212Y \u22252 F \u2264(1 \u2212m\u03b7\u03bb/4)\u03c4+1 \u00b7 \u2225F(0) \u2212Y \u22252 F . \u2022 Gradient Induction. \u03b7\u2225\u2206wr(\u03c4 + 1)\u22252 \u22640.01, \u2200r \u2208[m]. Please refer to Appendix C.2, Appendix C.3 and Appendix C.4 for the proof of weights, loss, gradient induction in Lemma 5.3 respectively. Lemma 5.3 means that, at a \ufb01xed timestamp \u03c4, if the weights w(\u03c4) is close to its initialization, the loss is decreasing and the gradient is also small, then we can conclude at timestamp \u03c4 + 1, these conditions still hold as local convexity proved by Lemma 5.1. Thus, after checking the initial condition, we can conclude Theorem 4.2. 6 Application in Di\ufb00usion Now, we apply our results in learning score estimation functions in di\ufb00usion models with noisy labels. We introduce problem setup in Section 6.1 and show our results in Section 6.2. 6.1 Preliminary of Di\ufb00usion In this section, we brie\ufb02y introduce the di\ufb00usion model proposed in [SSDK+21]. Forward Process. During the forward process, we progressively inject the noise into the original data distribution, which can be characterized by the following Stochastic Di\ufb00erential Equation (SDE) [SE20, HJA20]: dx(t) = \u22121 2g(t)x(t) dt + p g(t)dBt, x(0) \u223cp0, (3) where x(t) is the data at the di\ufb00usion process time t, g(t) > 0 is a deterministic weighting function; and (Bt)t\u22650 is a standard d-dimensional Brownian motion/noise. The p0 represents the original/target data distribution that we learn, and we only have few number of accesses to it, i.e., n times. We denote pt as the distribution of x(t) at di\ufb00usion process time t. Then, we can write the explicit solution to Eq. (3) as x(t) = e\u2212 R t 0 1 2g(s)dsx(0) + e\u2212 R t 0 1 2g(s)ds Z t 0 e R s 0 1 2g(u)dup g(s)dBs. 9 \fBackward Process. 
We denote y(t) = x(T \u2212t) to reverse the forward process in time [HP86, F\u00a8 ol05, CCGL21] that transforms noise into samples from the target distribution. We have a backward process associated to Eq. (3) as: dy(t) = (1 2g(T \u2212t)y(t) + g(T \u2212t)\u2207log pT\u2212t(y(t)))dt + p g(T \u2212t)d \u00af Bt, y(0) \u223cq0. (4) where ( \u00af Bt)t\u22650 is another d-dim Brownian motion/noise. Following the literature, we call \u2207log pt(\u00b7) as \u201cscore function\u201d [SSDK+21]. We have q0 is the initial distribution of the backward process and the score function \u2207log pt(\u00b7) as the gradient of log density of x(t). However, In practice, Eq.(4) cannot be directly used as both the score function and the distribution pT are unknown. To solve the problem, we (1) randomly select a noise distribution as the initial distribution of the backward process pT ; (2) replace the ground-truth score function \u2207log pt(x(t)) by an estimator s\u03b8(x(t), t). The parameterized estimator s\u03b8 is learned by a neural network such as U-Net [HJA20, RBL+22] and Transformer [PX23]. Thus, we obtain a practically implementable approximation of the backward SDE: dy(t) = (1 2g(T \u2212t)y(t) + g(T \u2212t)s\u03b8(y(t), t))dt + p g(T \u2212t)d \u00af Bt, y(0) \u223cN(0, Id), which can be used for sampling/data generation [SE20, CHZW23, CCL+23] Score Matching. When estimate the score function, usually we use L2 loss between the estimated and actual score: min \u03b8 1 T Z T 0 \u03bb(t)E[\u2225s\u03b8(x(t), t) \u2212\u2207log pt(x(t))\u22252 2]dt, (5) where \u03bb(t) is the weighting function that captures time inhomogeneity. As the hardness of estimate \u2207log pt term in Eq. (5), equivalently, we minimize the following denoising score matching [Vin11]: min \u03b8 1 T \u2212T0 Z T T0 \u03bb(t)E[\u2225s\u03b8(x(t), t) \u2212\u2207log pt|0(x(t) | x(0))\u22252 2]dt. (6) In practice, the estimator of the score function is parameterized by a neural network and we have the following sampling procedure for any i \u2208[n], x(0)i \u223cp0, ti \u223cUnif(0, T), x(ti)i \u223cpti|0(\u00b7|x(0)i), and we get the training dataset {x(0)i, (ti, x(ti)i)}n i=1, where x(0)i \u2208Rd and (ti, x(ti)i) \u2208Rd+1. We denote x(0) as the noisy label and E[x(0)|x(t)] as the true label. For simplicity, we denote x(0)i as yi \u2208Rd and (ti, x(ti)i) as xi \u2208Rd+1 and the training dataset as Dn = {(xi, yi)}n i=1. Here, y denotes the image from a dataset and x denotes the noised image with its di\ufb00usion process time t. Neural Network Parameterization. Recall that we consider a two-layer network with softmax activation function as the di\ufb00usion model in Eq. (1), satisfying \u2200\u2113\u2208[d], F(W, x, a)\u2113= m\u27e8a\u2113, exp(W \u22a4x)\u27e9\u00b7 \u27e8exp(W \u22a4x), 1m\u27e9\u22121. Note that, we do not train the top-layer weights a, so we can denote it as Fnn(W, x). Then, similar as [HJA20, HRX24], our loss function Eq. (6) can be rewrite as min W L(W) := 1 2 N X j=1 \u2225Fnn(W, xj) \u2212yj\u22252 2. We denote the target function as F\u2217(t, x(t)) := E[y | (t, x(t))]. Let H be the reproducing Hilbert space (RKHS) induced by the NTK [CDVTU10, JGH18] and let FH in the RKHS H such that \u2225FH\u22252 H \u2264RH. 10 \f6.2 Main Result of Di\ufb00usion We \ufb01rst introduce some natural assumptions we used. Assumption 6.1. Based on normalization, we assume \u2225yi\u22252 \u22641, \u2225xi\u22252 \u22641, \u2200i \u2208[n]. Assumption 6.2. Assume \u03bb = \u03bbmin(H\u2217) > 0. Assumption 6.3. 
The function g is almost everywhere continuous and bounded on [0, \u221e). Assumption 6.4. For all (t, x(t)) \u2208(0, \u221e) \u00d7 Rd, the function F\u2217(t, x(t)) is \u03b2x-Lipschitz in x, i.e., \u2225F\u2217(t, x(t)) \u2212F\u2217(t, x\u2032(t))\u22252 \u2264\u03b2x\u2225x(t) \u2212x\u2032(t)\u22252. We denote A(RH) := c1\u039b( \u221aRH \u039b )\u22122 d log( \u221aRH \u039b ) and \u039b = O( \u221a d) and \u0393\u03b4 := 2d2A(RH) \u03bb log3/2(e(dn)3/2A(RH) \u03bb ) + 1 \u221an !2 + d2A2(RH) \u03bb2 (log(1/\u03b4) + log(log n)). Now, we are ready to present our main Theorem for di\ufb00usion. Theorem 6.5 (Main results of score estimation and generalization). Suppose Assumptions 6.1, 6.2, 6.3, 6.4 hold and we set m = \u2126(\u03bb\u22122n3d3 exp(18B) log2(nd/\u03b4)) and \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)). Moreover, suppose b T satis\ufb01es Assumption G.3 with corresponding \u01eb(n, b T). Then for large enough RH, with probability at least 1 \u2212\u03b4, it holds that 1 T Z T 0 \u03bb(t)E[\u2225sW ( b T)(t, x(t)) \u2212\u2207log pt(Xt)\u22252 2]dt \u2264O \u0012 1 \u03bb\u221an + \u01eb(n, b T) + dA2(RH) + dA(RH) + p dA(RH)\u0393\u03b4 + \u0393\u03b4 \u0013 . Please refer to Appendix G.1 for the complete proof. Here we provide a proof sketch. Proof sketch of Theorem 6.5. In Theorem F.2, we show the \u201cequivalence\u201d between softmax NN learning and corresponding neural tangent kernel regression, i.e., the gap between them is always small. Then, we can borrow the generalization ability of kernel regression to the generalization ability of two-layer softmax NN. On the other hand, by Claim G.1, we can decompose the loss into a coupling gap, a label mismatch gap, an early stopping gap, and an approximation gap. By using our Theorem 4.2, Theorem F.2 with some tools from [HRX24], we \ufb01nish the proof. From Theorem 6.5, we know that, under some natural assumptions, the GD algorithm trained two-layer softmax NN can learn a provable accuracy on the score estimation functions in the di\ufb00usion model with noisy labels. We use this practical case study to demonstrate the broad applicability of our theoretical \ufb01ndings. 7 Discussion and Future Work Self-attention Learning. The self-attention can be written as F(W KX, W QX, W V X) \u2208Rd\u00d7n\u2032, (7) where W K, W Q, W V \u2208Rd\u00d7d denotes key, query, and value matrix respectively and X \u2208Rd\u00d7n\u2032 is a sequence of n\u2032 tokens. As our work is a \ufb01rst step to understanding softmax, it is natural to consider 11 \fhow to extend our results to self-attention. It is well-known that using two reformulation tricks: tensor-trick and SVM-trick [GSWY23, GSX23, AS24a], any analysis for softmax function can be naturally generalized to attention function F(W KX, W QX, W V X). Therefore, we conjecture that we can borrow the idea from [GSWY23, GSX23, AS24a] to decouple Eq (7) into the value term and the softmax term. And, we can alternatively optimize the weights for the softmax term (W k, W Q) and the value term (W V ). We leave this valuable direction as a future work. Feature Learning. Recently, there is a line of work showing that feature learning may be beyond NTK on sample complexity or time complexity, e.g., [AZL19, WLLM19, HN19, AZLL19, DM20, CBL+20, YH20, HY20, LMZ20, GMMM20, RGKZ21, MKAS21, LXMZ21, DLS22, SWL22, SWL24] and many more. It is worth studying the feature learning ability of two-layer softmax NN to \ufb01gure out what feature pattern the softmax prefers to learn and how it happens. 
We leave this valuable direction as a future work. 8 Conclusion This paper provides a theoretical analysis of the optimization and generalization properties of twolayer neural networks with softmax activation function. We apply our results in learning score estimation functions in di\ufb00usion models with noisy labels to verify our analysis e\ufb00ectiveness. Our \ufb01ndings contribute to a deeper understanding of the power of softmax neural networks and their potential to self-attention, advance LLMs, and generative modeling. Acknowledgement Research is partially supported by the National Science Foundation (NSF) Grants 2023239-DMS, CCF-2046710, and Air Force Grant FA9550-18-1-0166. The authors would like to thank Yufa Zhou for his helpful suggestions and feedback. 12 \fAppendix Roadmap. In Section A, we introduce some de\ufb01nitions that will be used in the proof. In Section B, we provide the basic concentration. In Section C, we provide the proof of our inductions. In Section D, we establish a bound for the weight of induction Part 1. In Section E, we establish a bound for the loss of induction Part 2. In Section F, we introduce the NTK regression. In Section G, we introduce the di\ufb00usion. A De\ufb01nition Claim A.1 (Restatement of Claim 3.4). We have \u2206wr(\u03c4) := m n X i=1 d X \u2113=1 (F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 \u0010 (\u27e8a\u2113,r \u00b7 1m \u2212a\u2113, Si(\u03c4)\u27e9) \u00b7 Si,r(\u03c4) \u0011 \u00b7 xi Proof of Claim 3.4. We can show that \u2206wr(\u03c4)/m = n X i=1 d X \u2113=1 (F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 (\u27e8a\u2113\u25e6er \u2212a\u2113\u00b7 Si,r(\u03c4), Si(\u03c4)\u27e9)xi = n X i=1 d X \u2113=1 (F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 \u0010 (a\u2113,r \u2212\u27e8a\u2113, Si(\u03c4)\u27e9) \u00b7 Si,r(\u03c4) \u0011 \u00b7 xi = n X i=1 d X \u2113=1 (F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 \u0010 \u27e8a\u2113,r \u00b7 1m \u2212a\u2113 | {z } m\u00d71 , Si(\u03c4) | {z } m\u00d71 \u27e9\u00b7 Si,r(\u03c4) \u0011 \u00b7 xi, where the \ufb01rst step follows from the de\ufb01nition of \u2206wr(\u03c4), the second step follows from \u27e8a\u2113\u25e6er, x\u27e9= a\u2113,rxr, and the last step is due to the Fact A.4. We present the following de\ufb01nition to simplify the notation. De\ufb01nition A.2. We de\ufb01ne D D := 4m\u22121\u03bb\u22121 exp(3B) \u221a nd \u00b7 \u2225F(0) \u2212Y \u2225F Fact A.3. For any vectors u, v \u2208Rn, the squared Euclidean distance between u and v can be expressed as: \u2225u \u2212v\u22252 2 = \u2225u\u22252 2 \u22122u\u22a4v + \u2225v\u22252 2. Fact A.4. Let 1m be a vector of dimension m consisting of all ones, and Si(\u03c4) \u2208Rm \u22650 be the indicator of some function \u03c4 at position i. We have: 1 = \u27e81m, Si(\u03c4)\u27e9 Fact A.5. For any real number |x| \u22640.1, the following inequality holds: (1 \u2212x)1/2 \u22641 \u22120.5x 13 \fFact A.6. For any real number |x| \u22640.1, we have | exp(x) \u22121| \u22642|x| Fact A.7. For any x \u2208(0, 0.1), we have \u221e X i=0 xi \u2264 1 1 \u2212x Fact A.8. For any |x| \u22640.01, we have exp(x) = 1 + x + \u0398(1)x2 We state the standard Hoe\ufb00ding inequality, Lemma A.9 (Hoe\ufb00ding inequality [Hoe63]). If the below conditions are true \u2022 Let x1, \u00b7 \u00b7 \u00b7 , xn denote n independent variables \u2022 xi \u2208[\u03b1i, \u03b2i], for all i \u2208[n] \u2022 Let x = Pn i=1 xi. Then we have Pr[|x \u2212E[x]| \u2265t] \u22642 exp \u2212 2t2 P i\u2208[n](\u03b2i \u2212\u03b1i)2 ! . 
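As a quick numerical sanity check (ours, not part of the paper), the snippet below estimates the left-hand side of the Hoeffding bound by Monte Carlo for bounded uniform variables and compares it with the right-hand side; the variable names and parameter choices are illustrative only.

import numpy as np

# Monte Carlo check of Lemma A.9 for x_i ~ Uniform[0, 1], so alpha_i = 0 and beta_i = 1.
rng = np.random.default_rng(3)
n, trials, t = 50, 100_000, 5.0
samples = rng.uniform(0.0, 1.0, size=(trials, n)).sum(axis=1)   # x = sum_i x_i
lhs = np.mean(np.abs(samples - n * 0.5) >= t)                   # estimate of Pr[|x - E[x]| >= t]
rhs = 2.0 * np.exp(-2.0 * t**2 / n)                             # 2 exp(-2 t^2 / sum_i (beta_i - alpha_i)^2)
print(lhs, rhs)                                                 # empirically lhs <= rhs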
Lemma A.10 (Hanson-Wright inequality [HW71, RV13]). Let x \u2208Rn denote a random vector with independent entries xi with E[xi] = 0 and |xi| \u2264K. Let A be an n \u00d7 n matrix. Then, for every t \u22650, Pr[|x\u22a4Ax \u2212E[x\u22a4Ax]| > t] \u22642 \u00b7 exp(\u2212c min{t2/(K4\u2225A\u22252 F ), t/(K2\u2225A\u2225)}). B Basic Concentration In Section B.1, we introduce some concentration basic tools. In Section B.2, given w perturbation within a small ball, we bound the changes of H. B.1 Some Concentration Basic Tools The goal of this section is to prove Lemma B.1. Lemma B.1. If the following conditions hold \u2022 Let B > 1 denote a parameter be de\ufb01ned as De\ufb01nition 4.1. \u2022 Let W = [w1, \u00b7 \u00b7 \u00b7 , wm] and wr be random Gaussian vectors from N(0, \u03c32Id). \u2022 Let V = [v1, \u00b7 \u00b7 \u00b7 , vm] and vr denote the vector where \u2225vr \u2212wr\u22252 \u2264R, \u2200r \u2208[m]. \u2022 Let xi \u2208Rd and \u2225xi\u22252 \u22641, \u2200i \u2208[n]. \u2022 Let R \u2208(0, 0.01). 14 \f\u2022 Let Si and e Si be the softmax function corresponding to W and V respectively. \u2022 Let \u03b1i = \u27e81m, exp(W \u22a4xi)\u27e9and e \u03b1i = \u27e81m, exp(V \u22a4xi)\u27e9, \u2200i \u2208[n]. Then, with probability at least 1 \u2212\u03b4/ poly(nd), we have \u2022 Standard inner product \u2013 Part 1. |\u27e8wr, xi\u27e9| \u2264B, \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 2. |\u27e8vr, xi\u27e9| \u2264B + R, \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 3. |\u27e8wr \u2212vr, xi + xj\u27e9| \u22642R, \u2200i, j \u2208[n], \u2200r \u2208[m] \u2022 exp function \u2013 Part 4. exp(\u2212B) \u2264exp(\u27e8wr, xi\u27e9) \u2264exp(B), \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 5. exp(\u2212B \u2212R) \u2264exp(\u27e8vr, xi\u27e9) \u2264exp(B + R), \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 6. | exp(\u27e8wr \u2212vr, xi + xj\u27e9) \u22121| \u22644R, \u2200i, j \u2208[n], \u2200r \u2208[m] \u2013 Part 7. | exp(\u27e8wr, xi\u27e9) \u2212exp(\u27e8vr, xi\u27e9)| \u2264R exp(B + R), \u2200i \u2208[n], \u2200r \u2208[m] \u2022 softmax S function \u2013 Part 8. |\u03b1i \u2212e \u03b1i| \u2264mR exp(B + R), \u2200i \u2208[n] \u2013 Part 9. |\u03b1\u22121 i \u2212e \u03b1\u22121 i | \u2264R m exp(3B + 2R), \u2200i \u2208[n] \u2013 Part 10. |Si,r| \u2264exp(2B)/m, \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 11. | e Si,r| \u2264exp(2B + 2R)/m, \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 12. |Si,r \u2212e Si,r| \u2264R m exp(4B + 3R), \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 13. for any z \u2208Rm and \u2225z\u2225\u221e\u22641, we have |\u27e8z, Si\u27e9\u2212\u27e8z, e Si\u27e9| \u2264R exp(4B+3R), \u2200i \u2208 [n] Proof. As eventually we choose m = poly(nd), we use B > 0 de\ufb01ned in De\ufb01nition 4.1. Proof of Part 1, 2, 4 and 5. We can get the proof by Gaussian tail bound. Proof of Part 3 and 6. Due to \u2225xi\u22252 \u22641 and \u2225xj\u22252 \u22641 and \u2225\u2206wr\u22252 \u2264R, we can have |\u27e8\u2206wr, (xi + xj)\u27e9| \u22642R \u22640.1. (8) Then, we have | exp(\u27e8\u2206wr, (xi + xj)\u27e9) \u22121| \u22642|\u27e8\u2206wr, (xi + xj)\u27e9| \u22644R where the \ufb01rst step follows from the Fact A.6, and the last step follows from Eq. (8). Proof of Part 7. Because \u2225xi\u22252 \u22641 and \u2225\u2206wr\u22252 \u2264R, we can have |\u27e8\u2206wr, xi\u27e9| \u2264R \u22640.1. 
(9) By convex increasing property of exp function, we have | exp(\u27e8wr, xi\u27e9) \u2212exp(\u27e8vr, xi\u27e9)| \u2264max{exp\u2032(\u27e8wr, xi\u27e9), exp\u2032(\u27e8vr, xi\u27e9} \u00b7 |\u27e8\u2206wr, xi\u27e9| 15 \f\u2264exp(B + R) \u00b7 |\u27e8\u2206wr, xi\u27e9| \u2264exp(B + R)R. where the \ufb01rst step follows from Taylor expansion and exp\u2032 denote the derivative of exp, the second step follows from Part 4 and Part 5 and the last step follows from Eq. (9). Proof of Part 8. |\u03b1i \u2212e \u03b1i| = | X r\u2208[m] expi,r \u2212g X r\u2208[m]expi,r| \u2264 X r\u2208[m] |expi,r \u2212g expi,r| \u2264mR exp(B + R), where the third step is due to Part 7. Proof of Part 9. Similarly, we have |\u03b1\u22121 i \u2212e \u03b1\u22121 i | = | e \u03b1i \u2212\u03b1i \u03b1ie \u03b1i | \u2264mR exp(B + R) |\u03b1ie \u03b1i| \u2264 mR exp(B + R) |m exp(\u2212B)m exp(\u2212B \u2212R)| = R m exp(3B + 2R). where the \ufb01rst step is due to simple algebra, the second step is from Part 8, the third step follows Part 4, 5, and the last step is because of simple algebra. Proof of Part 10 and 11. Trivially follows Part 4 and Part 5. Proof of Part 12. |Si,r \u2212e Si,r| = |\u03b1\u22121 i expi,r \u2212e \u03b1\u22121 i g expi,r| \u2264|\u03b1\u22121 i expi,r \u2212\u03b1\u22121 i g expi,r| + |\u03b1\u22121 i g expi,r \u2212e \u03b1\u22121 i g expi,r| For the \ufb01rst part, we have |\u03b1\u22121 i expi,r \u2212\u03b1\u22121 i g expi,r| = \u03b1\u22121 i | expi,r \u2212g expi,r| \u2264\u03b1\u22121 i exp(B + R)R \u2264exp(B + R)R m exp(\u2212B) = R m exp(2B + R), where the second step follows Part 7 and the third step follows Part 4. 16 \fFor the second part, we have |\u03b1\u22121 i g expi,r \u2212e \u03b1\u22121 i g expi,r| = g expi,r|\u03b1\u22121 i \u2212e \u03b1\u22121 i | \u2264g expi,r R m exp(3B + 2R) \u2264exp(B + R) R m exp(3B + 2R) = R m exp(4B + 3R), where the second step follows Part 9, and the third step follows Part 5. Thus, we have |Si,r \u2212e Si,r| \u2264R m exp(4B + 3R). Proof of Part 13. Note that \u2225z\u2225\u221e\u22641. We have |\u27e8z, Si\u27e9\u2212\u27e8z, e Si\u27e9| = |\u27e8z, Si \u2212e Si\u27e9| \u2264m\u2225Si \u2212e Si\u2225\u221e \u2264R exp(4B + 3R) where the \ufb01rst step follows from simple algebra, the second step follows from |\u27e8a, b\u27e9| \u2264m \u00b7 maxi\u2208[m] |aibi|, and the last step is due to Part 12. B.2 Kernel Perturbation The purpose of this section is to prove Lemma B.2. In the proof, we do not use concentration inequality. Please see Remark 5.2 for more details. Lemma B.2 (Restatement of Lemma 5.1). If the following conditions hold \u2022 Let B \u22651 denote a parameter be de\ufb01ned as De\ufb01nition 4.1. \u2022 Let R \u2208(0, 0.01). \u2022 Let xi \u2208Rd and \u2225xi\u22252 \u22641 for all i \u2208[n]. \u2022 Let f W = [ e w1, \u00b7 \u00b7 \u00b7 , e wm] \u2208Rd\u00d7m, where e w1, \u00b7 \u00b7 \u00b7 , e wm are are i.i.d. draw from N(0, \u03c32Id). \u2022 Let W = [w1, \u00b7 \u00b7 \u00b7 , wm] \u2208Rd\u00d7m and satisfy \u2225e wr \u2212wr\u22252 \u2264R for any r \u2208[m]. \u2022 Let v\u2113,r = a\u2113,r \u00b7 1m \u2212a\u2113\u2208Rm, for any \u2113\u2208[d] and for any r \u2208[m]. Note that a\u2113,r is the r-th in a\u2113. \u2022 Let \u03b1i = \u27e81m, exp(W \u22a4xi)\u27e9and e \u03b1i = \u27e81m, exp(V \u22a4xi)\u27e9, \u2200i \u2208[n]. \u2022 Let H be de\ufb01ned as De\ufb01nition 3.6. Then, we have \u2022 Part 1. 
Then with probability at least 1 \u2212\u03b4/ poly(nd), |[H\u21131,\u21132]i,j(W) \u2212[H\u21131,\u21132]i,j(f W)| \u2264R \u00b7 exp(10B). 17 \f\u2022 Part 2. Then with probability at least 1 \u2212\u03b4, we have \u2225H(W) \u2212H(f W)\u2225F \u2264Rnd \u00b7 exp(10B). Proof of Lemma 5.1. We de\ufb01ne \ufb01ve real numbers B1, B2, B3, B4, B5 \u2208R as follows, B1 := \u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, Sj\u27e9expi,r expj,r \u2212\u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, Sj\u27e9g expi,rg expj,r B2 := \u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, Sj\u27e9g expi,rg expj,r \u2212\u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r B3 := \u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r \u2212\u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r B4 := \u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r \u2212\u03b1\u22121 i e \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r B5 := \u03b1\u22121 i e \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r \u2212e \u03b1\u22121 i e \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r Thus, we have |[H\u21131,\u21132]i,j(W) \u2212[H\u21131,\u21132]i,j(f W)|/m2 \u2264|B1| + |B2| + |B3| + |B4| + |B5|. To bound B1 We rewrite B1 as B1 = \u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, Sj\u27e9(exp(w\u22a4 r (xi + xj)) \u2212exp( e w\u22a4 r (xi + xj))). Recall that \u2225v\u21131,r\u2225\u221e\u22642 and \u2225Si\u22251 \u22641. Thus, |\u27e8v\u21131,r, Si\u27e9| \u22642. By Fact A.4, we know that |\u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, Sj\u27e9| \u22642 \u00b7 2 = 4. By Part 4 of Lemma B.1, with probability 1 \u2212\u03b4/ poly(nd), we know that |\u03b1\u22121 i | \u22641 m exp(B). We will condition on the above event is holding in the rest of the proof. By Part 7 of Lemma B.1, | exp( e w\u22a4 r (xi + xj)) \u2212exp(w\u22a4 r (xi + xj))| \u22642R exp(2B + 2R). Finally, we know that |B1| \u22648R m2 exp(5B). To bound B2 and B3 We can rewrite B2 as follows |B2| = |\u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9g expi,rg expj,r(\u27e8v\u21132,r, Sj\u27e9\u2212\u27e8v\u21132,r, e Sj\u27e9)| \u2264\u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 |\u27e8v\u21131,r, Si\u27e9|g expi,rg expj,r|(\u27e8v\u21132,r, Sj\u27e9\u2212\u27e8v\u21132,r, e Sj\u27e9)|. 18 \fFollowing the similar strategy as B1, by Part 13 of Lemma B.1, we know that |B2| \u22641 m exp(B) \u00b7 1 m exp(B) \u00b7 2 \u00b7 exp(B + R) \u00b7 exp(B + R) \u00b7 4R exp(4B + 3R) \u22648R m2 exp(9B). Similarly, we have |B3| \u22648R m2 exp(9B). To bound B4 and B5 For the term B4, we can rewrite |B4| = |(\u03b1\u22121 j \u2212e \u03b1\u22121 j ) \u00b7 \u03b1\u22121 i 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r| \u2264|\u03b1\u22121 j \u2212e \u03b1\u22121 j | \u00b7 \u03b1\u22121 i 1 m m X r=1 |\u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9|g expi,rg expj,r. 
Thus, by Part 9 of Lemma B.1, using similar proof strategy as B1 as know |B4| \u2264R m exp(3B + 2R) \u00b7 1 m exp(B) \u00b7 2 \u00b7 2 \u00b7 exp(B + R) \u00b7 exp(B + R) \u22644R m2 exp(7B). Similarly, we have |B5| \u22644R m2 exp(7B). C Induction In Section C.1, we provide the proof of our main result. In Section C.2, we provide an induction lemma for weights part. In Section C.3, we provide an induction lemma for loss part. In Section C.4, we provide an induction lemma for gradient part. C.1 Main Result Our main result is presented as follows. Theorem C.1 (Main result. Restatement of Theorem 4.2). For any \u01eb, \u03b4 \u2208(0, 0.1), if the following conditions hold \u2022 Let \u03bb = \u03bbmin(H\u2217) > 0 \u2022 Let m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)) \u2022 Let \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) 19 \f\u2022 Let b T = \u2126((m\u03b7\u03bb)\u22121 log(nd/\u01eb)) = \u2126(\u03bb\u22122n2d2 exp(16B) \u00b7 log(nd/\u01eb)) Then, after b T iterations, we have \u2225F( b T) \u2212Y \u22252 F \u2264\u01eb. Proof of Theorem 4.2. Let \u03c3 = 1. We have \u2225F(0) \u2212Y \u22252 F \u2264nd by Lemma D.3. Using the choice of b T, it follows directly from the alternative application of Lemma C.3 and Lemma C.2. Since exp(\u0398(B)) = (nd)o(1), we can simplify the nd exp(\u0398(B)) = (nd)1+o(1). C.2 Induction Part 1. For Weights We provide an induction lemma for weights part. Lemma C.2 (Induction Part 1. For Weights). Let \u03c4 be a \ufb01xed integer. If the below conditions are true \u2022 General Property 1. Let \u03bb = \u03bbmin(H\u2217) > 0 \u2022 General Property 2. \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) \u2022 General Property 3. Let D be de\ufb01ned as De\ufb01nition A.2 \u2022 General Property 4. D < R = \u03bb/(2nd exp(10B)) \u2022 General Property 5. m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)) \u2022 Weights Property. \u2225wr(i) \u2212wr(0)\u22252 \u2264R for all i \u2208[\u03c4] \u2022 Loss Property. \u2225F(i) \u2212Y \u22252 F \u2264\u2225F(0) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2)i, \u2200i \u2208[\u03c4] \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01, \u2200r \u2208[m], \u2200i \u2208[\u03c4] Then, for \u03c4 + 1 and \u2200r \u2208[m], we have \u2225wr(\u03c4 + 1) \u2212wr(0)\u22252 \u2264D. Proof. We have \u03b7 \u221e X i=0 (1 \u2212m\u03b7\u03bb/2)i/2 \u2264\u03b7 \u221e X i=0 (1 \u2212m\u03b7\u03bb/4)i \u2264\u03b7 1 m\u03b7\u03bb/4 \u2264 4 m\u03bb (10) where the \ufb01rst step is due to the Fact A.5, the second stepis due to the Fact A.7, the last step is because of simple algebra. 20 \fWe use the gradient\u2019s norm to measure the weights di\ufb00erence: \u2225wr(0) \u2212wr(\u03c4 + 1)\u22252 \u2264\u03b7 \u03c4 X i=0 \u2225\u2206wr(i)\u22252 \u2264\u03b7 \u03c4 X i=0 exp(3B) \u221a nd \u00b7 \u2225F(i) \u2212Y \u2225F \u2264\u03b7 exp(3B) \u221a nd \u03c4 X i=0 (1 \u2212m\u03b7\u03bb/2)i/2 \u00b7 \u2225F(0) \u2212Y \u2225F \u22644m\u22121\u03bb\u22121 exp(3B) \u221a nd \u00b7 \u2225F(0) \u2212Y \u2225F = D where the \ufb01rst step follows from wr(i + 1) \u2212wr(i) = \u03b7 \u00b7 \u2206wr(i), the second step follows from Lemma D.1 for \u03c4 times, the third step follows from Loss Property in Lemma statement, the fourth step follows from Eq. (10), the last step is from General Property 3 in Lemma statement. C.3 Induction Part 2. For Loss We provide an induction lemma for loss part. Lemma C.3 (Induction Part 2. For Loss). Let \u03c4 be a \ufb01xed integer. 
If the following conditions hold \u2022 General Property 1. Let \u03bb = \u03bbmin(H\u2217) > 0 \u2022 General Property 2. \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) \u2022 General Property 3. Let D be de\ufb01ned as De\ufb01nition A.2 \u2022 General Property 4. D < R = \u03bb/(2nd exp(10B)) \u2022 General Property 5. m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)) \u2022 Weights Property. \u2225wr(\u03c4) \u2212wr(0)\u22252 \u2264D < R, \u2200r \u2208[m] \u2022 Loss Property. \u2225F(i) \u2212Y \u22252 F \u2264\u2225F(0) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2)i, \u2200i \u2208[\u03c4] \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01 \u2200r \u2208[m], \u2200i \u2208[\u03c4] Then we have \u2225F(\u03c4 + 1) \u2212Y \u22252 F \u2264(1 \u2212m\u03b7\u03bb/4)\u03c4+1 \u00b7 \u2225F(0) \u2212Y \u22252 F . Proof. We have \u2225F(\u03c4) \u2212Y \u22252 F \u2264\u2225F(\u03c4 \u22121) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2) which follows Lemma E.2. Thus, we complete the proof by induction. 21 \fC.4 Induction Part 3. For Gradient We provide an induction lemma for gradient part. Lemma C.4 (Induction Part 3. For Gradient). Let \u03c4 be a \ufb01xed integer. If the following conditions hold \u2022 General Property 1. Let \u03bb = \u03bbmin(H\u2217) > 0 \u2022 General Property 2. \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) \u2022 General Property 3. Let D be de\ufb01ned as De\ufb01nition A.2 \u2022 General Property 4. D < R = \u03bb/(2nd exp(10B)) \u2022 General Property 5. m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)) \u2022 Weights Property. \u2225wr(\u03c4) \u2212wr(0)\u22252 \u2264D < R, \u2200r \u2208[m] \u2022 Loss Property. \u2225F(i) \u2212Y \u22252 F \u2264\u2225F(0) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2)i, \u2200i \u2208[\u03c4] \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01 \u2200r \u2208[m], \u2200i \u2208[\u03c4] Then we have \u03b7\u2225\u2206wr(\u03c4 + 1)\u22252 \u22640.01, \u2200r \u2208[m] Proof. This is trivially follows from Lemma D.1 and Lemma D.2. D Induction Part 1: For Weights In Section D.1, we propose the lemma for bounding gradient and its corresponding proof. In Section D.2, we propose the bounding initialization loss and its corresponding proof. D.1 Bounding the Gradient at any Time In this section, we bound the gradient. Lemma D.1. If the following condition hold, \u2022 Let B > 1 denote a parameter be de\ufb01ned as De\ufb01nition 4.1 \u2022 Let R \u2208(0, 0.01) \u2022 \u2225wr(\u03c4) \u2212wr(0)\u22252 \u2264R \u2022 Let v\u2113,r = a\u2113,r \u00b7 1m \u2212a\u2113\u2208Rm, for any \u2113\u2208[d] and for any r \u2208[m] For any timestamp \u03c4, we have \u2225\u2206wr(\u03c4)\u22252 \u2264exp(3B) \u221a nd \u00b7 \u2225F(\u03c4) \u2212Y \u2225F . 22 \fProof. We have \u2225\u2206wr(\u03c4)\u22252 = \r \r \r \r \rm n X i=1 d X \u2113=1 (y\u2113,i \u2212F\u2113,i) \u00b7 xi \u00b7 \u27e8v\u2113,r, Si(\u03c4)\u27e9\u00b7 Si,r(\u03c4) \r \r \r \r \r 2 \u2264exp(3B) n X i=1 d X \u2113=1 |y\u2113,i \u2212F\u2113,i(\u03c4)| \u2264exp(3B) \u221a nd \u00b7 \u2225F(\u03c4) \u2212Y \u2225F where the \ufb01rst step follows from Claim 3.4 and De\ufb01nition 3.3, the second step follows from |\u27e8v\u2113,r, Si\u27e9| \u22642 and |Si,r| \u2264exp(2B + 2R)/m by Part 11 of Lemma B.1, the last step follows from Cauchy-Schwartz inequality. Lemma D.2. 
If the following conditions hold, \u2022 \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) \u2022 \u2225wr(\u03c4) \u2212wr(0)\u22252 \u2264R Then, for any timestamp \u03c4, we have \u03b7\u2225\u2206wr(\u03c4)\u22252 \u22640.01 Proof. This trivially follows from Lemma D.1 and choice of \u03b7. D.2 Bounding the Initialization Loss In this section, we bound the initialization loss. Lemma D.3. We have \u2225F(0) \u2212Y \u2225F \u2264O( \u221a nd). Proof. This trivially follows from \u2225yi\u2225\u22641, \u2200i \u2208[n] and symmetric initialization from De\ufb01nition 3.7. E Induction Part 2: For Loss In Section E.1, we decompose the loss \u2225F(k + 1) \u2212Y \u22252 F into four parts, namely C0, C1, C2, and C3. In Section E.2, we show our choices of m and \u03b7. In Section E.3, we establish bounds for C0. In Section E.4, we establish bounds for C1. In Section E.5, we establish bounds for C2. In Section E.6, we establish bounds for C3. 23 \fE.1 Decomposition for \u2225vec(F(\u03c4 + 1) \u2212Y )\u22252 2 Here, we decompose the loss \u2225vec(F(\u03c4 + 1) \u2212Y )\u22252 2 into four parts C0, C1, C2 and C3. Lemma E.1. Assuming the following condition is met: \u2022 Let \u03bb = \u03bbmin(H\u2217) \u2022 Let \u03b1i(\u03c4) := \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9. \u2022 Let scalar v0,\u2113,i \u2208R be de\ufb01ned as follows v0,\u2113,i := m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) \u2022 Let scalar v1,\u2113,i \u2208R be de\ufb01ned as follows v1,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 (\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9) \u2022 Let scalar v2,\u2113,i \u2208R be de\ufb01ned as follows v2,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 \u03b72 \u00b7 \u0398(1) \u00b7 \u27e8\u2206wr(\u03c4), xi\u27e92 \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01, \u2200r \u2208[m], \u2200i \u2208[\u03c4] \u2022 C0 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v0)\u27e9 \u2022 C1 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v1)\u27e9 \u2022 C2 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v2)\u27e9 \u2022 C3 = \u2225F(\u03c4 + 1) \u2212F(\u03c4)\u22252 F then \u2225F(\u03c4 + 1) \u2212Y \u22252 F = \u2225F(t) \u2212Y \u22252 F + C0 + C1 + C2 + C3. Proof. The expression \u2225Y \u2212F(\u03c4 + 1)\u22252 F = \u2225vec(Y \u2212F(\u03c4 + 1))\u22252 2 can be rewritten in the following: \u2225vec(Y \u2212F(\u03c4 + 1))\u22252 2 = \u2225vec(Y \u2212F(\u03c4) \u2212(F(\u03c4 + 1) \u2212F(\u03c4)))\u22252 2 = \u2225vec(Y \u2212F(\u03c4))\u22252 2 \u22122 vec(Y \u2212F(\u03c4))\u22a4vec(F(\u03c4 + 1) \u2212F(\u03c4)) + \u2225vec(F(\u03c4 + 1) \u2212F(\u03c4))\u22252 2. (11) where the \ufb01rst step follows from simple algebra, the last step follows from Fact A.3. 
Recall the update rule (De\ufb01nition 3.5), wr(\u03c4 + 1) = wr(\u03c4) \u2212\u03b7 \u00b7 \u2206wr(\u03c4) In the following manner, \u2200\u2113\u2208[d], we can express F\u2113(\u03c4 + 1) \u2212F\u2113(\u03c4) \u2208Rn: 24 \fF\u2113,i(\u03c4 + 1) \u2212F\u2113,i(\u03c4) = m X r\u2208[m] a\u2113,r \u00b7 (\u03b1i(\u03c4 + 1)\u22121 exp(\u27e8wr(\u03c4 + 1), xi\u27e9) \u2212\u03b1i(\u03c4)\u22121 exp(\u27e8wr(\u03c4), xi\u27e9)) = + m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) + m X r\u2208[m] a\u2113,r\u03b1i(\u03c4)\u22121 \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9) \u2212exp(\u27e8wr(\u03c4), xi\u27e9)) = + m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) + m X r\u2208[m] a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 (exp(\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9) \u22121) = + m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) + m X r\u2208[m] a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((wr(\u03c4)\u22a4xi) \u00b7 (\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9+ \u0398(1)\u03b72\u27e8\u2206wr(\u03c4), xi\u27e92) = v0,\u2113,i + v1,\u2113,i + v2,\u2113,i where the \ufb01rst step is due to the de\ufb01nition of F\u2113,i(\u03c4), the second step is from the simple algebra, the third step is due to |\u03b7\u2206wr(\u03c4)\u22a4xi| \u22640.01 (due to Gradient Property and \u2225xi\u22252 \u22641), the fourth step follows from the Fact A.8, the last step follows from v0,\u2113,i := m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) v1,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 (\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9) v2,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 \u03b72 \u00b7 \u0398(1) \u00b7 \u27e8\u2206wr(\u03c4), xi\u27e92 Here v0,\u2113,i and v1,\u2113,i are linear in \u03b7 and v2,\u2113,i is quadratic in \u03b7. Thus, v0,\u2113,i and v1,\u2113,i are the \ufb01rst order term, and v2,\u2113,i is the second order term. We can rewrite the second term in the Eq. (11) above as below: \u27e8vec(Y \u2212F(\u03c4)), vec(F(\u03c4 + 1) \u2212F(\u03c4))\u27e9 = \u27e8vec(Y \u2212F(\u03c4)), vec(v0 + v1 + v2)\u27e9 = \u27e8vec(Y \u2212F(\u03c4)), vec(v0)\u27e9+ \u27e8vec(Y \u2212F(\u03c4)), vec(v1)\u27e9+ \u27e8vec(Y \u2212F(\u03c4)), vec(v2)\u27e9 Therefore, we can conclude that \u2225F(\u03c4 + 1) \u2212Y \u22252 F = \u2225F(\u03c4) \u2212Y \u22252 F + C0 + C1 + C2 + C3. 25 \fE.2 Choice of Parameters Here, we show our choice of parameters m, \u03b7, R, B. Lemma E.2. If the below conditions are true \u2022 Condition 1. Let \u03bb = \u03bbmin(H\u2217) > 0 \u2022 Condition 2. m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)) \u2022 Condition 3. \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) \u2022 Condition 4. R = \u03bb/(2nd exp(10B)) \u2013 Required by Claim E.5 \u2022 Condition 5. B = max{C\u03c3 p log(nd/\u03b4), 1} \u2022 Condition 6. D = 4m\u22121\u03bb\u22121 exp(3B) \u221a nd \u00b7 \u2225F(0) \u2212Y \u2225F \u2022 Condition 7. D < R \u2022 Condition 8. 
\u03b7\u2225\u2206wr(\u03c4)\u22252 \u22640.01, \u2200r \u2208[m] \u2013 Required by Lemma E.1, Claim E.3 and Claim E.7 Then it holds that \u2225F(\u03c4 + 1) \u2212Y \u22252 F \u2264\u2225F(\u03c4) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2) holds with probability at least 1 \u2212\u03b4. Proof. We can show \u2225F(\u03c4 + 1) \u2212Y \u22252 F = \u2225F(\u03c4) \u2212Y \u22252 F + C0 + C1 + C2 + C3 \u2264(1 \u22120.8m\u03b7\u03bb + 0.1m\u03b7\u03bb + 2m\u03b72n2d2 exp(9B) + \u03b72m2 \u00b7 n2d2 \u00b7 exp(16B)) \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F \u2264(1 \u22120.7m\u03b7\u03bb + 2\u03b72m2 \u00b7 n2d2 \u00b7 exp(16B)) \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F . where the \ufb01rst step follows from Lemma E.1, the second step follows from Lemma E.3 for C0, Lemma E.4, Claim E.5 for C1, Claim E.6 for C2 and Claim E.7 for C3, the last step follows from the simple algebra. Choice of \u03b7. Next, we want to choose \u03b7 such that (1 \u22120.7m\u03b7\u03bb + 2\u03b72m2 \u00b7 n2d2 \u00b7 exp(16B)) \u2264(1 \u2212m\u03b7\u03bb/2). (12) Using the choice of \u03b7 in Condition 3 2\u03b72m2 \u00b7 n2d2 \u00b7 exp(16B) \u22640.2m\u03b7\u03bb This indicates: \u2225F(\u03c4 + 1) \u2212Y \u22252 F \u2264(1 \u2212m\u03b7\u03bb/2) \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F . (13) 26 \fLower bound for m, over-parametrization size. We require the following conditions \u2022 m \u2265\u2126(\u03bb\u22122n2d exp(18B) log2(nd/\u03b4)) (required by Lemma E.3) \u2022 m \u2265\u2126(\u03bb\u22122n2d exp(12B) log2(nd/\u03b4)) (required by Lemma E.4) \u2022 D = 4m\u22121\u03bb\u22121 exp(3B) \u221a nd \u00b7 \u2225F(0) \u2212Y \u2225F < R = \u03bb/(2nd exp(10B))} (required by Condition 7.) Therefore, by \u2225Y \u2212F(0)\u2225F = O( \u221a nd) from Lemma D.3, it su\ufb03ces to choose: m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)). E.3 Bounding C0 Here, we explain about how to bound C0. Lemma E.3. If the following conditions hold \u2022 Let scalar v0,\u2113,i \u2208R be de\ufb01ned as follows v0,\u2113,i := m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) \u2022 Let \u03b1i(\u03c4) := \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9. \u2022 Let m \u2265\u2126(\u03bb\u22122n2d exp(18B) log2(nd/\u03b4)) \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01, \u2200r \u2208[m], \u2200i \u2208[\u03c4] \u2022 We de\ufb01ne C0 as follows C0 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v0)\u27e9 Here vec(v0) \u2208Rnd is the vectorization of v0 \u2208Rn\u00d7d and vec(F(\u03c4) \u2212Y ) \u2208Rnd is the vectorization of F(\u03c4) \u2212Y \u2208Rn\u00d7d. Then we have |C0| \u22640.1m\u03b7\u03bb \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F Proof. 
We can rewrite v0,\u2113,i as follows: v0,\u2113,i = m m X r=1 a\u2113,r((\u03b1i(\u03c4 + 1))\u22121 \u2212\u03b1i(\u03c4)\u22121) exp(\u27e8wr(\u03c4 + 1), xi\u27e9) = m m X r=1 a\u2113,r\u03b1i(\u03c4 + 1)\u22121\u03b1i(\u03c4)\u22121 \u00b7 (\u27e81m, exp(W(\u03c4 + 1)xi) \u2212exp(W(\u03c4)xi)\u27e9) exp(\u27e8wr(\u03c4 + 1), xi\u27e9) = m m X r=1 a\u2113,r\u03b1i(\u03c4 + 1)\u22121\u03b1i(\u03c4)\u22121( m X r2=1 exp(wr2(\u03c4 + 1)xi) \u2212exp(wr2(\u03c4)xi)) exp(\u27e8wr(\u03c4 + 1), xi\u27e9) 27 \f= m m X r=1 a\u2113,r\u03b1i(\u03c4 + 1)\u22121\u03b1i(\u03c4)\u22121 m X r2=1 \u2212\u03b7\u27e8\u2206wr2(\u03c4), xi\u27e9exp(wr2(\u03c4)xi) exp(\u27e8wr(\u03c4 + 1), xi\u27e9) = m( m X r=1 a\u2113,r m X r2=1 \u2212\u03b7\u27e8\u2206wr2(\u03c4), xi\u27e9Si,r2(\u03c4) \u00b7 Si,r(\u03c4 + 1) | {z } \ufb01rst order term + \u03b72\u22062 | {z } second order term ) (14) where the \ufb01rst step follows from lemma statement, the second step follows from a\u22121 \u2212b\u22121 = b\u2212a ab , the third step follows from simple algebra, the fourth step follows from simple algebra, and the last step follows from |\u03b7\u2206wr(\u03c4)\u22a4xi| \u22640.01 (due to Gradient Property and \u2225xi\u22252 \u22641). The second order term \u03b72\u22062 in Eq. (14) can be bounded in a similar way as the proof of Claim E.6. Further, we can rewrite the \ufb01rst-order term in Eq. (14) m m X r=1 a\u2113,r m X r2=1 \u2212\u03b7\u27e8\u2206wr2(\u03c4), xi\u27e9Si,r2(\u03c4) \u00b7 Si,r(\u03c4 + 1) = m2(Q1,i,\u2113+ Q2,i,\u2113) (15) where Q1,i,\u2113:= 1 m m X r=1 a\u2113,r(\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9)Si,r(\u03c4) \u00b7 Si,r(\u03c4 + 1) Q2,i,\u2113:= 1 m m X r=1 a\u2113,r X r2\u0338=r (\u2212\u03b7\u27e8\u2206wr2(\u03c4), xi\u27e9)Si,r2(\u03c4) \u00b7 Si,r(\u03c4 + 1) Let us consider how to handle the \ufb01rst term in Eq. (14), Q1,i,\u2113= 1 m m X r=1 a\u2113,r(\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9)Si,r(\u03c4) \u00b7 Si,r(\u03c4 + 1) = m X r=1 a\u2113,rSi,r \u00b7 Si,r(\u03c4 + 1)(\u2212\u03b7 n X j=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u0010 (\u27e8a\u21132,r \u00b7 1m \u2212a\u21132, Sj\u27e9) \u00b7 Sj,r \u0011 \u00b7 x\u22a4 j )xi where the second step follows from computing \u2206wr(\u03c4) explicitly (see Claim 3.4). Similarly as proof of Lemma E.4, we can use concentration to bound n X i=1 d X \u2113=1 Q1,i,\u2113(F\u2113,i \u2212y\u2113,i) Note that 0 < Sj,r < exp(3B) m by Part 11 of Lemma B.1. The above small term is equivalent to \u2212\u03b7exp(9B) m3 \u00b7 n X i=1 n X j=1 m X r=1 d X \u2113=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u03c3i,j,r,\u2113,\u21132 \u00b7 Ci,j,r,\u2113,\u21132 \u00b7 (F\u2113,i(\u03c4) \u2212y\u2113,i), where \u03c3i,\u2113,\u21132,j,r \u223c[\u22121, +1] and |Ci,\u2113,\u21132,j,r| \u226410. We de\ufb01ne P1,r,\u2113,\u21132 := (F\u21132,j \u2212y\u21132,j)\u03c3i,j,r,\u2113,\u21132Ci,j,r,\u2113,\u21132(F\u2113,i \u2212y\u2113,i) 28 \fSimilarly as Lemma E.4, for each \ufb01xed i, j \u2208[n], using Hanson-Wright inequality (Lemma A.10), we can show Pr[| m X r=1 d X \u2113=1 d X \u21132=1 P1,r,\u2113,\u21132| \u2264100\u2225Fj \u2212yj\u22252\u2225Fi \u2212yi\u22252 \u00b7 \u221a md log(nd/\u03b4)] \u22651 \u2212\u03b4/ poly(nd). By mean inequality, we have n X i=1 n X j=1 \u2225Fj \u2212yj\u22252 \u00b7 \u2225Fi \u2212yi\u22252 \u2264n\u2225F \u2212y\u22252 F . 
Thus, we have the \ufb01rst term with probability at least 1 \u2212poly(nd), such that | n X i=1 d X \u2113=1 Q1,i,\u2113(F\u2113,i \u2212y\u2113,i)| \u2264\u03b7n exp(9B) m3 \u2225F \u2212y\u22252 F \u221a md log(nd/\u03b4) Similarly, we can compute n X i=1 d X \u2113=1 Q2,i,\u2113(F\u2113,i \u2212y\u2113,i) Using Hanson-Wright inequality (Lemma A.10), we have the second term with probability at least 1 \u2212poly(nd), such that | n X i=1 d X \u2113=1 Q2,i,\u2113(F\u2113,i \u2212y\u2113,i)| \u2264\u03b7n exp(9B) m2 \u2225F \u2212y\u22252 F \u221a md log(nd/\u03b4) Thus, we can complete the proof by the Lemma statement m \u2265\u2126(\u03bb\u22122n2d exp(18B) log2(nd/\u03b4)). E.4 Bounding C1 Here, we give the bound of the \ufb01rst order term C1. Note that this term is making progress. Lemma E.4. Assuming the following condition is met: \u2022 Let \u03bb = \u03bbmin(H\u2217) \u2022 Let \u03b1i(\u03c4) := \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9 \u2022 Let m \u2265\u2126(\u03bb\u22122n2d exp(12B) log2(nd/\u03b4)) \u2022 Let scalar v1,\u2113,i \u2208R be de\ufb01ned as follows v1,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 (\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9) \u2022 C1 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v1)\u27e9 29 \fthen C1 \u2264\u22121.6m\u03b7 vec(F(\u03c4) \u2212Y )\u22a4H(\u03c4) vec(F(\u03c4) \u2212Y ). Proof. To simplify the notation, we omit writing (\u03c4) in Si,r(\u03c4). Then, we can express v1,\u2113,i \u2208R as follows: v1,\u2113,i = m X r\u2208[m] a\u2113,r \u00b7 Si,r \u00b7 (\u2212\u03b7\u27e8xi, \u2206wr(\u03c4)\u27e9) = m2 X r\u2208[m] a\u2113,r \u00b7 Si,r \u00b7 (\u2212\u03b7 n X j=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u0010 (\u27e8a\u21132,r \u00b7 1m \u2212a\u21132, Sj\u27e9) \u00b7 Sj,r \u0011 \u00b7 x\u22a4 j )xi = m2(Q1,\u2113,i + Q2,\u2113,i) (16) where the second step using equation for \u2206wr(\u03c4) (see Claim 3.4). Note that \u27e8a\u2113,r \u00b7 1m, Si\u27e9= a\u2113,r, so in the above equation, Q1,\u2113,i := X r\u2208[m] \u27e8a\u2113,r \u00b7 1m \u2212a\u2113, Si\u27e9\u00b7 Si,r \u00b7 (\u2212\u03b7 n X j=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u0010 (\u27e8a\u21132,r \u00b7 1m \u2212a\u21132, Sj\u27e9) \u00b7 Sj,r \u0011 \u00b7 x\u22a4 j )xi Q2,\u2113,i := X r\u2208[m] \u27e8a\u2113, Si\u27e9\u00b7 Si,r \u00b7 (\u2212\u03b7 n X j=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u0010 (\u27e8a\u21132,r \u00b7 1m \u2212a\u21132, Sj\u27e9) \u00b7 Sj,r \u0011 \u00b7 x\u22a4 j )xi The quantity P i\u2208[n] P \u2113\u2208[d] Q1,\u2113,i(F\u2113,i \u2212Y\u2113,i) is corresponding to \ufb01rst term (Q1,\u2113,i) in Eq. (16). It is X i\u2208[n] X \u2113\u2208[d] Q1,\u2113,i(F\u2113,i \u2212Y\u2113,i) = \u22121 m\u03b7 vec(F(\u03c4) \u2212Y )\u22a4H(\u03c4)\u22a4vec(F(\u03c4) \u2212Y ) (17) The quantity P i\u2208[n] P \u2113\u2208[d] Q2,\u2113,i(F\u2113,i \u2212Y\u2113,i) is corresponding to second term (Q2,\u2113,i) in Eq. (16). Note that 0 < Sj,r < exp(3B) m by Part 11 of Lemma B.1. The quantity, X i\u2208[n] X \u2113\u2208[d] Q2,\u2113,i(F\u2113,i \u2212Y\u2113,i) (18) is equivalent to \u2212\u03b7exp(6B) m2 \u00b7 n X i=1 n X j=1 m X r=1 d X \u2113=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u03c3i,j,r,\u2113,\u21132 \u00b7 Ci,j,r,\u2113,\u21132 \u00b7 (F\u2113,i(\u03c4) \u2212y\u2113,i), where \u03c3i,j,r,\u2113,\u21132 \u2208{\u22121, +1} and |Ci,j,r,\u2113,\u21132| \u226410. 
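Before splitting this sum into cases, it may help to visualize the cancellation that the Hoeffding and Hanson-Wright steps below quantify: a sum of m·d^2 sign-weighted bounded terms is typically far smaller than its triangle-inequality bound. The toy simulation below uses hypothetical sizes and stand-in residual vectors; it is an illustration, not a proof.

```python
import numpy as np

rng = np.random.default_rng(1)
m_hidden, d_out, trials = 4096, 8, 200
u = rng.normal(size=d_out)     # stand-in for the residual F_i - y_i
v = rng.normal(size=d_out)     # stand-in for the residual F_j - y_j

vals = []
for _ in range(trials):
    signs = rng.choice([-1.0, 1.0], size=(m_hidden, d_out, d_out))  # sigma_{i,j,r,l,l2}
    C = rng.uniform(-10.0, 10.0, size=(m_hidden, d_out, d_out))     # |C_{i,j,r,l,l2}| <= 10
    # sum over r, l, l2 of (F_j - y_j)_{l2} * sigma * C * (F_i - y_i)_l
    vals.append(np.einsum('b,rab,rab,a->', v, signs, C, u))
vals = np.abs(np.array(vals))

naive = 10.0 * m_hidden * np.abs(u).sum() * np.abs(v).sum()          # triangle inequality
print(f"typical |sum| ~ {np.median(vals):.1f}   worst-case bound ~ {naive:.1f}")
# The typical size is on the order of sqrt(m) ||u||_2 ||v||_2, i.e. smaller than the
# worst case by roughly a factor of sqrt(m d^2); this is the gain the concentration
# inequalities capture.
```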
Note that there are four cases \u2022 i = j, \u2113= \u21132, this is a p.s.d. case that always makes progress, thus we can drop it. \u2022 i \u0338= j, \u2113= \u21132 we will use random variable P1 to handle \u2022 i = j, \u2113\u0338= \u21132 we will use random variable P2 to handle \u2022 i \u0338= j, \u2113\u0338= \u21132 we will use random variable P2 to handle 30 \fFor each \ufb01xed i, j \u2208[n]. We de\ufb01ne P1,r,\u2113:= (F\u2113,j \u2212y\u2113,j)\u03c3i,j,r,\u2113Ci,j,r,\u2113(F\u2113,i \u2212y\u2113,i) P2,r,\u2113,\u21132 := (F\u21132,j \u2212y\u21132,j)\u03c3i,j,r,\u2113,\u21132Ci,j,r,\u2113,\u21132(F\u2113,i \u2212y\u2113,i) The random variables related to P1,r,\u2113are the following m X r=1 d X \u2113=1 P1,r,\u2113 The random variables related to P2,r,\u2113,\u21132 are the following m X r=1 d X \u2113=1 d X \u21132=1 P2,r,\u2113,\u21132 For each i \u0338= j \u2208[n] and \u2113= \u21132, using Hoe\ufb00ding inequality (see Lemma A.9), we can show Pr[| m X r=1 d X \u2113=1 P1,r,\u2113| \u2264100\u2225Fj \u2212yj\u22252\u2225Fi \u2212yi\u22252 \u00b7 p md log(nd/\u03b4)] \u22651 \u2212\u03b4/ poly(nd). Similarly, we consider i = j and \u2113\u0338= \u21132 by Hanson-Wright inequality (Lemma A.10), we have Pr[| m X r=1 d X \u2113=1 d X \u21132=1 P2,r,\u2113,\u21132| \u2264100\u2225Fj \u2212yj\u22252\u2225Fi \u2212yi\u22252 \u00b7 \u221a md log(nd/\u03b4)] \u22651 \u2212\u03b4/ poly(nd). By mean inequality, we have n X i=1 n X j=1 \u2225Fj \u2212yj\u22252 \u00b7 \u2225Fi \u2212yi\u22252 \u2264n\u2225F \u2212y\u22252 F . Note that by Lemma condition, we have 1 m\u03bb \u2273n exp(6B) m2 \u00b7 \u221a md log(nd/\u03b4) \u21d0 \u21d2m \u2273\u03bb\u22122, the equation (Eq. (17) and the bound for Eq. (18)) above indicates that \u27e8vec(Y \u2212F(\u03c4)), vec(v1)\u27e9 can be expressed as vec(v1)\u22a4vec(Y \u2212F(\u03c4)) \u22650.8m\u03b7 \u00b7 vec(F(\u03c4) \u2212Y )\u22a4 | {z } 1\u00d7nd H(\u03c4)\u22a4 | {z } nd\u00d7nd vec(F(\u03c4) \u2212Y ). (19) We \ufb01nish the proof. Claim E.5. If the below conditions are true \u2022 Let B \u22651 be de\ufb01ned as De\ufb01nition 4.1 \u2022 Let \u03bb = \u03bbmin(H\u2217) > 0 31 \f\u2022 C1 = \u2212m\u03b7 vec(F(\u03c4) \u2212Y )\u22a4H(\u03c4) vec(F(\u03c4) \u2212Y ). \u2022 R = \u03bb/(2nd exp(10B)) Then, we have C1 \u2264\u22121 2m\u03b7\u03bb \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F and \u03bbmin(H(\u03c4)) \u2265\u03bb/2. holds with probability at least 1 \u2212\u03b4. Proof. By Lemma 5.1, with probability at least 1 \u2212\u03b4, we have \u2225H\u2217\u2212H(\u03c4)\u2225F \u2264Rnd \u00b7 exp(10B) \u2264\u03bb/2 (20) where the \ufb01rst step follows from the de\ufb01nition of H(\u03c4), the last step comes from choice of \u03bb (see Claim Statement). Given that \u03bb = \u03bbmin(H\u2217), by eigenvalue perturbation theory \u03bbmin(H(\u03c4)) \u2265\u03bbmin(H\u2217) \u2212\u2225H\u2217\u2212H(\u03c4)\u2225 \u2265\u03bbmin(H\u2217) \u2212\u2225H\u2217\u2212H(\u03c4)\u2225F \u2265\u03bbmin(H\u2217) \u2212\u03bb/2 \u2265\u03bb/2. where the \ufb01rst step comes from triangle inequality, the second step is due to Frobenius norm, the third step is due to Eq.(20), the last step follows from \u03bbmin(H\u2217) = \u03bb. Finally, we have vec(F(\u03c4) \u2212Y )\u22a4H(\u03c4) vec(F(\u03c4) \u2212Y ) \u2265\u03bb/2 \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F . Thus, we complete the proof. E.5 Bounding C2 Here, we give the bound of the second order term C2. Claim E.6. 
If the below conditions are true \u2022 Let \u03bb = \u03bbmin(H\u2217) \u2022 Let \u03b1i(\u03c4) := \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9 \u2022 Let scalar v2,\u2113,i \u2208R be de\ufb01ned as follows v2,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 \u03b72 \u00b7 \u0398(1) \u00b7 \u27e8\u2206wr(\u03c4), xi\u27e92 32 \f\u2022 C2 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v2)\u27e9 Then we can conclude that C2 \u22642m\u03b72n2d2 exp(9B)\u2225F(\u03c4) \u2212Y \u22252 F . with probability at least 1 \u2212n \u00b7 exp(\u2212mR). Proof. Let pi,r \u2208[\u22121, 1]. We have |v2,\u2113,i| = m X r\u2208[m] a\u2113,r \u00b7 Si,r \u00b7 (\u03b72pi,r\u27e8xi, \u2206wr(\u03c4)\u27e92) \u2264m\u03b72nd exp(9B)\u2225F(\u03c4) \u2212Y \u22252 F , where the last step follows Lemma D.1 and Part 11 of Lemma B.1. Thus, C2 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v2)\u27e9 \u22642\u2225F(\u03c4) \u2212Y \u2225F \u2225v2\u2225F \u22642m\u03b72n2d2 exp(9B)\u2225F(\u03c4) \u2212Y \u22252 F , where the \ufb01rst step follows Cauchy-Schwartz inequality, and the second step follows \u2225F(\u03c4)\u2212Y \u2225F \u2264 O( \u221a nd) by induction statement (See Lemma C.3). E.6 Bounding \u2225F(\u03c4 + 1) \u2212F(\u03c4)\u22252 F Here, we give the bound of the third order term C3. Claim E.7. If the below conditions are true \u2022 Let B \u22651 be de\ufb01ned as De\ufb01nition 4.1 \u2022 C3 = \u2225F(\u03c4 + 1) \u2212F(\u03c4)\u22252 F . \u2022 R \u2208(0, 0.01) \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01, \u2200r \u2208[m], \u2200i \u2208[\u03c4] Then with probability at least 1 \u2212\u03b4, we have C3 \u2264\u03b72m2 \u00b7 n2d2 \u00b7 exp(16B) \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F . Proof. Note that we denote \u03b1i as \u27e81m, exp(W \u22a4xi)\u27e9. According to de\ufb01nition of F\u2113,i(\u03c4), we have F\u2113,i(\u03c4 + 1) \u2212F\u2113,i(\u03c4) = ma\u22a4 \u2113( + \u03b1i(\u03c4 + 1)\u22121 exp((W(\u03c4 + 1)\u22a4xi) \u2212\u03b1i(\u03c4)\u22121 exp((W(\u03c4 + 1)\u22a4xi) + \u03b1i(\u03c4)\u22121 exp((W(\u03c4 + 1)\u22a4xi) \u2212\u03b1i(\u03c4)\u22121 exp((W(\u03c4)\u22a4xi) ) 33 \fThen we have |F\u2113,i(\u03c4 + 1) \u2212F\u2113,i(\u03c4)| (21) \u2264m m X r=1 |\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121| exp(wr(\u03c4 + 1)\u22a4xi) + m m X r=1 \u03b1i(\u03c4)\u22121 exp(wr(\u03c4)\u22a4xi) \u00b7 | exp(\u2212\u03b7\u2206wr(\u03c4)\u22a4xi) \u22121| where it follows from triangle inequality. For the second term in Eq. (21), we have m m X r=1 \u03b1i(\u03c4)\u22121 exp(wr(\u03c4)\u22a4xi) \u00b7 | exp(\u2212\u03b7\u2206wr(\u03c4)\u22a4xi) \u22121| \u2264exp(B + R) exp(B + R) m X r=1 | exp(\u2212\u03b7\u2206wr(\u03c4)\u22a4xi) \u22121| \u2264exp(2B + 2R) m X r=1 2\u03b7\u2225\u2206wr(\u03c4)\u22252 = 2\u03b7 exp(2B + 2R) m X r=1 \u2225\u2206wr(\u03c4)\u22252 \u22642\u03b7 exp(2B + 2R) \u00b7 m \u00b7 exp(3B) \u221a nd\u2225F(\u03c4) \u2212Y \u2225F \u2264\u03b7m exp(6B) \u221a nd\u2225F(\u03c4) \u2212Y \u2225F where the \ufb01rst step comes from Lemma B.1, the second step is due to \u03b7\u2225\u2206wr(\u03c4)\u22252 \u22640.01 (this is stated in Claim assumption) and Fact A.8, the third step is from simple algebra, the fourth step is due to Lemma D.1, the last step follows from simple algebra. Similarly, for the \ufb01rst term in Eq. 
(21) we have m m X r=1 |\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121| exp(wr(\u03c4 + 1)\u22a4xi) \u2264m2 exp(B + R)|\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121| \u2264m exp(B + R)|\u03b7\u2206wr(\u03c4)\u22a4xi| exp(3B + 2R) \u2264\u03b7m exp(4B + 3R)\u2225\u2206wr(\u03c4)\u22252 \u2264\u03b7m exp(7B + 3R) \u221a nd\u2225F(\u03c4) \u2212Y \u2225F where the \ufb01rst step follows from Part 5 of Lemma B.1, the second step follows from Part 9 of Lemma B.1 where R = |\u03b7\u2206wr(\u03c4)\u22a4xi|, the third step follows from simple algebra, and the last step follows from Lemma D.1. Thus we have |F\u2113,i(\u03c4 + 1) \u2212F\u2113,i(\u03c4)| \u2264\u03b7m exp(8B) \u221a nd\u2225F(\u03c4) \u2212Y \u2225F . (22) Finally, we get \u2225F(\u03c4 + 1) \u2212F(\u03c4)\u22252 F \u2264nd \u00b7 (\u03b7m exp(8B) \u221a nd\u2225F(\u03c4) \u2212Y \u2225F )2 \u2264\u03b72m2 \u00b7 n2d2 \u00b7 exp(16B) \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F where the \ufb01rst step is because of Eq. (22), the last step comes from simple algebra. 34 \fF NTK Regression In this section, we introduce the NTK regression, as we will show that the neural network is \u201cequivalent\u201d to this regression so that we can give a \ufb01nal guarantee on the test data. To clarify the function, we use Fnn to denote F as a neural network function. We use xte \u2208Rd to denote the test data. We would like to control the error between the neural network Fnn and the function Fntk. For convenience, we call this error \u201ccoupling error\u201d, which is the di\ufb00erence between the trained neural network and its corresponding NTK regression. Recall that, by De\ufb01nition 3.6, we have the H\u2217= H(W(0)). Recall [H\u2217]i,j \u2208Rd\u00d7d is the kernel between xi and xj. Similarly, \u2200\u21131, \u21132 \u2208[d], for test data, we can de\ufb01ne the NTK induced feature map as [K\u2217 \u21131,\u21132]te,j := 1 mx\u22a4 texj m X r=1 \u27e8v\u21131,r, Ste(0)\u27e9\u00b7 mSte,r(0) \u00b7 \u27e8v\u21132,r, Sj(0)\u27e9\u00b7 mSj,r(0) [K(\u03c4)\u21131,\u21132]te,j := 1 mx\u22a4 texj m X r=1 \u27e8v\u21131,r, Ste(\u03c4)\u27e9\u00b7 mSte,r(\u03c4) \u00b7 \u27e8v\u21132,r, Sj(\u03c4)\u27e9\u00b7 mSj,r(\u03c4), where K\u2217 te, Kte(\u03c4) \u2208Rd\u00d7nd. Similarly, we have K\u2217 i = [H\u2217]i \u2208Rd\u00d7nd, Ki(\u03c4) = [H(\u03c4)]i \u2208Rd\u00d7nd for training data xi. Then, we de\ufb01ne the kernel regression predictor. De\ufb01nition F.1 (NTK regression predictor). We de\ufb01ne NTK regression predictor as Fntk(\u03b3(\u03c4), xte) :=mK\u2217 te\u03b3(\u03c4), (23) where \u03b3(\u03c4) \u2208Rnd is the parameter at timestamp \u03c4. Recall that we have a training dataset Dn = {(xi, yi)}n i=1. Then, we denote the corresponding objective function for Fntk as Lntk(\u03b3(\u03c4)) = 1 2 n X i=1 \u2225Fntk(\u03b3(\u03c4), xi) \u2212yi\u22252 2. (24) Thus, based on Eq. (24), the gradient desent (GD) updating rule of \u03b3(\u03c4) is given by \u03b3(\u03c4 + 1) | {z } nd\u00d71 = \u03b3(\u03c4) |{z} nd\u00d71 \u2212\u03b7 \u00b7 (m H\u2217 |{z} nd\u00d7nd \u03b3(\u03c4) |{z} nd\u00d71 \u2212vec(Y ) | {z } nd\u00d71 ), \u03b3(0) = 0nd, (25) where the Eq. (25) is according to \u03b3(\u03c4 + 1) = \u03b3(\u03c4) \u2212\u03b7\u2207\u03b3Lntk(\u03b3(\u03c4)). F.1 Equivalence between Trained Net and Kernel Regression We provide a stronger bound between Fntk and Fnn result compared to Lemma F.1 in [ADH+19b]. 
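Before turning to the coupling statement, the gradient-descent recursion in Eq. (25) can be sketched numerically. The sketch below builds H* directly from Definition 3.6 at a random initialization and runs γ(τ+1) = γ(τ) − η(mH*γ(τ) − vec(Y)); all sizes and the step size are hypothetical, and the sketch is only meant to illustrate the geometric decay of the kernel-regression residual.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, sigma = 6, 3, 256, 1.0

X = rng.normal(size=(d, n)); X /= np.linalg.norm(X, axis=0)
Y = rng.normal(size=(d, n)); Y /= np.linalg.norm(Y, axis=0)
W0 = rng.normal(scale=sigma, size=(d, m))
A = rng.choice([-1.0, 1.0], size=(d, m))

E = np.exp(W0.T @ X)
S = E / E.sum(axis=0, keepdims=True)               # S[:, i] = S_i(0)

# c_{l,i,r} = <a_{l,r} 1_m - a_l, S_i> * m * S_{i,r}, the per-neuron factor in Definition 3.6.
C = np.empty((d, n, m))
for l in range(d):
    for i in range(n):
        C[l, i] = (A[l] - A[l] @ S[:, i]) * m * S[:, i]

# [H*_{l1,l2}]_{i,j} = (1/m) x_i^T x_j sum_r c_{l1,i,r} c_{l2,j,r}; rows/cols ordered (l, i).
H = np.einsum('ij,air,bjr->aibj', X.T @ X, C, C).reshape(d * n, d * n) / m
vecY = Y.reshape(-1)                               # same (l outer, i inner) ordering

evals = np.linalg.eigvalsh(H)
print("lambda_min(H*) =", evals.min())             # assumed strictly positive in the paper

eta = 1.0 / (m * evals.max())                      # hypothetical step size
gamma = np.zeros(d * n)                            # gamma(0) = 0_{nd}
res0 = np.linalg.norm(m * H @ gamma - vecY)
for _ in range(3000):
    gamma -= eta * (m * H @ gamma - vecY)          # Eq. (25)
resT = np.linalg.norm(m * H @ gamma - vecY)        # ||vec(F_ntk) - vec(Y)|| on training data
# The residual contracts by a factor (1 - eta * m * lambda_min(H*)) per step.
print("residual at start / end:", res0, resT)
```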
Our following statement is stronger in the two following senses: their result only holds when t \u2192\u221e, and our result holds for all t \u2208[0, \u221e); also their result only works for 1 dimension output space, our result holds arbitrary d dimensional output space. Theorem F.2 (Kernel value perturbation \u21d2prediction perturbation). Fix \u01ebH \u22641 2\u03bb. If for all \u03c4 \u22650, \u2225K\u2217 \u2113,te \u2212K\u2113,te(\u03c4)\u2225F \u2264\u01eb\u2113,test and \u2225H\u2217\u2212H(\u03c4)\u2225F \u2264\u01ebH, then for any xte \u2208Rd, \u2113\u2208[d] and \u03c4 \u22650, we have |Fntk(\u03b3(\u03c4), xte)\u2113\u2212Fnn(W(\u03c4), xte)\u2113| \u2264O \u221a nd \u03bb \u01eb\u2113,test + \u221a nd \u03bb2 log2 \u0012 nd \u01ebHm\u03bb \u0013 \u01ebH ! . 35 \fProof of Theorem F.2. Our proof relies on a careful analysis of the trajectories induced by gradient \ufb02ow for optimizing the neural network predictor Fnn and the NTK predictor Fntk. Then, we can have a similar argument to gradient descent at any timestamp \u03c4. Recall that for any xte, xi \u2208Rd, we have K\u2217 te, K\u2217 i \u2208Rd\u00d7nd be the feature map induced by NTK. For any x \u2208Rd, we de\ufb01ne \u03c6(x) \u2208Rd\u00d7d as following, for any \u2113\u2208[d], \u03c6(x)\u2113= 1 \u221amx m X r=1 \u27e8v\u2113,r, S(0)\u27e9\u00b7 mSr(0). We denote \u03c6(X) \u2208Rd\u00d7nd as the stack of feature map of X \u2208Rd\u00d7n. Note the optimal solution in Eq. (23) can be rewritten as min \u03b3 \u2225\u03b3\u22252 such that mK\u2217 i \u03b3 = yi for i = 1, . . . , n. We have the optimal solution for kernel regression is \u03b3\u2217:= m\u22121(H\u2217)\u22121 vec(Y ) and its corresponding prediction for xte will be Fntk(\u03b3(\u03c4), xte) = K\u2217 te(H\u2217)\u22121 vec(Y ). The solution to this program can be rewritten as applying gradient \ufb02ow on the min \u03b2 n X i=1 \u2225\u221am\u03c6(xi)\u22a4\u03b2 \u2212yi\u22252 2 with initialization \u03b2(0) = 0d. We use \u03b2(\u03c4) to denote this parameter at timestamp \u03c4 trained by gradient \ufb02ow. We denote Fntk2(\u03b2(\u03c4), xte) := \u221am\u03c6(xte)\u22a4\u03b2(\u03c4) where Fntk2(\u03b2(\u03c4), xte) be the predictor for xte at time \u03c4. Then we have Fntk2(\u03b2(\u03c4), xte) = \u221am \u03c6(xte)\u22a4 | {z } Rd\u00d7d \u03b2(\u03c4) |{z} Rd = \u221am \u03c6(xte)\u22a4 | {z } Rd\u00d7d (\u221am \u03c6(X) | {z } Rd\u00d7nd ) \u03b3(\u03c4) |{z} Rnd = m K\u2217 te |{z} Rd\u00d7nd \u03b3(\u03c4) = Fntk(\u03b3(\u03c4), xte) where the second step follows \u03b2(\u03c4) = \u221am\u03c6(X)\u03b3(\u03c4) the third step follows K\u2217 te = \u03c6(xte)\u22a4\u03c6(X). With these notations, as \u03c4 goes to in\ufb01nity, we denote, for any \u2113\u2208[d], Fntk2(xte)\u2113= Z \u221e \u03c4=0 dFntk2(\u03b2(\u03c4), xte)\u2113 d\u03c4 d\u03c4 where we have used the fact that the initial prediction is 0 as \u03b2(0) = 0d. Similarly for Fnn(xte)\u2113. Let Fntk2,i(\u03c4) = Fntk2(\u03b2(\u03c4), xi) and Fntk2(\u03c4) \u2208Rd\u00d7n. Similarly, for the NN predictor Fnn. 
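Before taking time derivatives along the two trajectories, it is easy to check numerically that the kernel of Definition 3.6 is the Gram matrix of finite-dimensional features, which is exactly the structure the rewriting F_ntk2(β(τ), x_te) = m K*_te γ(τ) = F_ntk(γ(τ), x_te) above relies on. In the sketch below the sizes are hypothetical, and the explicit arrangement ψ_{(ℓ,i)} = (1/√m) vec(x_i c_{ℓ,i}^⊤), with c_{ℓ,i,r} = ⟨v_{ℓ,r}, S_i(0)⟩ · m S_{i,r}(0), is our assumed layout of the feature map φ for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m, sigma = 5, 3, 128, 1.0
X = rng.normal(size=(d, n)); X /= np.linalg.norm(X, axis=0)
W0 = rng.normal(scale=sigma, size=(d, m))
A = rng.choice([-1.0, 1.0], size=(d, m))

E = np.exp(W0.T @ X)
S = E / E.sum(axis=0, keepdims=True)

def coeff(l, i):
    # c_{l,i,r} = <a_{l,r} 1_m - a_l, S_i(0)> * m * S_{i,r}(0)
    return (A[l] - A[l] @ S[:, i]) * m * S[:, i]

# Assumed explicit features: psi_{(l,i)} = (1/sqrt(m)) vec(x_i c_{l,i}^T), rows ordered (l, i).
Psi = np.stack([np.outer(X[:, i], coeff(l, i)).ravel() / np.sqrt(m)
                for l in range(d) for i in range(n)])            # shape (nd, dm)

# Kernel entries computed directly from Definition 3.6.
H_direct = np.empty((d * n, d * n))
for l1 in range(d):
    for i in range(n):
        for l2 in range(d):
            for j in range(n):
                H_direct[l1 * n + i, l2 * n + j] = (
                    (X[:, i] @ X[:, j]) * (coeff(l1, i) @ coeff(l2, j)) / m)

print("||Psi Psi^T - H*||_F =", np.linalg.norm(Psi @ Psi.T - H_direct))  # ~ 0
print("lambda_min(H*)       =", np.linalg.eigvalsh(H_direct).min())      # >= 0 up to fp error
```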
Now we take a closer look at the time derivative: dFntk2(\u03b2(\u03c4), xte)\u2113 d\u03c4 = \u001c\u2202Fntk2(\u03b2(\u03c4), xte)\u2113 \u2202\u03b2(\u03c4) , d\u03b2(\u03c4) d\u03c4 \u001d 36 \f= \u001c\u2202Fntk2(\u03b2(\u03c4), xte)\u2113 \u2202\u03b2(\u03c4) , \u2212\u2202L(\u03b2(\u03c4), {xi}n i=1) \u2202\u03b2(\u03c4) \u001d = \u2212 * \u2202Fntk2(\u03b2(\u03c4), xte)\u2113 \u2202\u03b2(\u03c4) , n X i=1 d X \u21132=1 (Fntk2,i,\u21132(\u03c4) \u2212yi,\u21132) \u2202Fntk2(\u03b2(\u03c4), xi)\u21132 \u2202\u03b2(\u03c4) + = \u2212m * \u03c6(xte)\u2113, n X i=1 d X \u21132=1 (Fntk2,i,\u21132(\u03c4) \u2212yi,\u21132)\u03c6(xi)\u21132 + = \u2212m vec(K\u2217 \u2113,te)\u22a4vec(Fntk2(\u03c4) \u2212Y ) (26) where the \ufb01rst step follows from simple algebra, the second step follows from ODE formulation (we remark that this is a very standard step in all the NTK literature), the third step follows from Eq. (24), the fourth step follows from the de\ufb01nition of \u03c6(xte)\u2113, the last step follows from simple algebra. We can obtain a time derivative of the same form for Fnn. dFnn(W(\u03c4), xte)\u2113 d\u03c4 = \u001c\u2202Fnn(W(\u03c4), xte)\u2113 \u2202W(\u03c4) , dW(\u03c4) d\u03c4 \u001d = \u001c\u2202Fnn(W(\u03c4), xte)\u2113 \u2202W(\u03c4) , \u2212\u2202L(W(\u03c4), {xi}n i=1) \u2202W(\u03c4) \u001d = \u2212 * \u2202Fnn(W(\u03c4), xte)\u2113 \u2202W(\u03c4) , n X i=1 d X \u21132=1 (Fnn,i,\u21132(\u03c4) \u2212yi,\u21132)\u2202Fnn(W(\u03c4), xi)\u21132 \u2202W(\u03c4) + = \u2212m vec(K\u2113,te(\u03c4))\u22a4vec(Fnn(\u03c4) \u2212Y ) (27) where the \ufb01rst step follows from simple algebra, the second step is standard in NTK literature, the third step follows from Eq. (24), the last step follows from simple algebra. Thus we analyze the di\ufb00erence between the NN predictor and NTK predictor via this integral form |Fnn(xte)\u2113\u2212Fntk2(xte)\u2113| = \f \f \f \fFnn(W(0), xte)\u2113+ Z \u221e \u03c4=0 \u0012dFnn(W(\u03c4), xte)\u2113 d\u03c4 \u2212dFntk2(\u03b2(\u03c4), xte)\u2113 d\u03c4 \u0013 d\u03c4 \f \f \f \f = |Fnn(W(0), xte)\u2113| + \f \f \f \f\u2212m Z \u221e \u03c4=0 \u0010 vec(K\u2113,te(\u03c4))\u22a4vec(Fnn(\u03c4) \u2212Y ) \u2212vec(K\u2217 \u2113,te)\u22a4vec(Fntk2(\u03c4) \u2212Y ) \u0011 d\u03c4 \f \f \f \f = \f \f \f \f\u2212m Z \u221e \u03c4=0 \u0010 vec(K\u2113,te(\u03c4))\u22a4vec(Fnn(\u03c4) \u2212Y ) \u2212vec(K\u2217 \u2113,te)\u22a4vec(Fntk2(\u03c4) \u2212Y ) \u0011 d\u03c4 \f \f \f \f \u2264m \f \f \f \f Z \u221e \u03c4=0 vec(K\u2113,te(\u03c4) \u2212K\u2217 \u2113,te)\u22a4vec(Fnn(\u03c4) \u2212Y )d\u03c4 \f \f \f \f + m \f \f \f \f Z \u221e \u03c4=0 vec(K\u2217 \u2113,te)\u22a4vec(Fnn(\u03c4) \u2212Fntk2(\u03c4))d\u03c4 \f \f \f \f \u2264m max 0\u2264t\u2264\u221e\u2225K\u2113,te(\u03c4) \u2212K\u2217 \u2113,te\u2225F Z \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Y \u2225F d\u03c4 + m max 0\u2264t\u2264\u221e\u2225K\u2217 \u2113,te\u2225F Z \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4 \u2264m\u01eb\u2113,test Z \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Y \u2225F d\u03c4 + m max 0\u2264t\u2264\u221e\u2225K\u2217 \u2113,te\u2225F Z \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4, where the \ufb01rst step follows from the di\ufb00erence between the NN predictor and NTK predictor, the second step follows from Eq. (26) and Eq. 
(27), the third step follows |Fnn(W(0), xte)\u2113| = 0 by 37 \fsymmetric initialization from De\ufb01nition 3.7, the fourth step follows from simple algebra, the \ufb01fth step follows from Frobenius norm, the last step follows from simple algebra. For the \ufb01rst term, recall \u2225H\u2217\u2212H(\u03c4)\u2225F \u2264\u01ebH and, by Claim E.5, we have \u03bbmin(H(\u03c4)) \u22651 2\u03bb. Using this fact we know \u2225Fnn(\u03c4) \u2212Y \u2225F \u2264exp(\u2212m 2 \u03bb\u03c4)\u2225Fnn(0) \u2212Y \u2225F (The reason to obtain this is due to solve ODE). Therefore, by Lemma D.3, we can bound Z \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Y \u2225F d\u03c4 = Z \u221e \u03c4=0 exp \u0010 \u2212m 2 \u03bb\u03c4 \u0011 \u2225Fnn(0) \u2212Y \u2225F d\u03c4 = O( \u221a nd m\u03bb ). To bound R \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4, we observe that Fnn(\u03c4) \u2192y and Fntk2(\u03c4) \u2192y with linear convergence rate. Therefore, we can choose some \u03c40 = C m\u03bb log \u0010 nd \u01ebH\u00b7m\u03bb \u0011 so that Z \u221e \u03c40 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4 \u2264 Z \u221e \u03c40 \u2225Fnn(\u03c4) \u2212Y \u2225F d\u03c4 + Z \u221e \u03c40 \u2225Fntk2(\u03c4) \u2212Y \u2225F d\u03c4 \u2264O \u0012 1 m\u03bb(\u2225Fnn(\u03c40) \u2212Y \u2225F + \u2225Fntk2(\u03c40) \u2212Y \u2225F ) \u0013 \u2264O \u221a nd m\u03bb exp (\u2212m\u03bb\u03c40) ! \u2264O(\u01ebH). where the \ufb01rst step follows from simple algebra, the second step follows from integral range is \u03c40, the third step follows from Lemma D.3, the last step follows from choice of \u03c40. Thus it su\ufb03ces to bound R \u03c40 \u03c4=0 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4 \u2264\u03c40 max0\u2264t\u2264\u03c40 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F . First observe that \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F \u2264\u2225Fnn(0)\u2225F + Z \u03c4 s=0 \r \r \r \r d(Fnn(s) \u2212Fntk2(s)) ds \r \r \r \r F ds = Z \u03c4 s=0 \r \r \r \r d(Fnn(s) \u2212Fntk2(s)) ds \r \r \r \r F ds, where the last step follows symmetric initialization from De\ufb01nition 3.7. Note d(Fnn(\u03c4) \u2212Fntk2(\u03c4)) d\u03c4 = \u2212mH(\u03c4) vec(Fnn(\u03c4) \u2212Y ) + mH\u2217vec(Fntk2(\u03c4) \u2212Y ) = \u2212mH\u2217vec(Fnn(\u03c4) \u2212Fntk2(\u03c4)) + m(H\u2217\u2212H(\u03c4)) vec(Fnn(\u03c4) \u2212Y ) where the \ufb01rst step follows from de\ufb01nition of Fnn and Fntk2. Since H\u2217is positive semide\ufb01nite, \u2212H\u2217vec(Fnn(\u03c4) \u2212Fntk2(\u03c4)) term only makes \u2225Fnn(\u03c4) \u2212 Fntk2(\u03c4)\u2225F smaller. Therefore, we have \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F \u2264m Z \u03c4 s=0 \u2225Fnn(s) \u2212Y \u2225F \u2225H(\u03c4) \u2212H\u2217\u2225F ds 38 \f\u2264m\u03c4\u2225Fnn(0) \u2212Y \u2225F \u01ebH \u2264O \u0010 m\u03c4 \u221a nd\u01ebH \u0011 , where the last step is by Lemma D.3. Therefore, we have Z \u03c40 \u03c4=0 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4 \u2264O \u0010 m\u03c4 2 0 \u221a nd\u01ebH \u0011 = O \u221a nd m\u03bb2 log2 \u0012 nd \u01ebHm\u03bb \u0013 \u01ebH ! . where the \ufb01rst step follows from integral range is \u03c40, the second step follows from the choice of \u03c40. Lastly, as Fntk2(xte)\u2113= Fntk(xte)\u2113, we put things together and get |Fntk(xte)\u2113\u2212Fnn(xte)\u2113| \u2264O \u221a nd \u03bb \u01eb\u2113,test + \u221a nd \u03bb2 log2 \u0012 nd \u01ebHm\u03bb \u0013 \u01ebH ! . 
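The bound above completes the argument; as a purely numerical illustration of the coupling it expresses, one can train the softmax network by gradient descent and compare its test prediction with the limiting kernel-regression prediction K*_te(H*)^{-1} vec(Y). All sizes, the step size, and the iteration count below are hypothetical, and the width is far too small for the quantitative constants of the theorem to apply; the sketch only shows how such a comparison is carried out.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, sigma, steps = 4, 2, 2048, 1.0, 2000

X = rng.normal(size=(d, n)); X /= np.linalg.norm(X, axis=0)
Y = rng.normal(size=(d, n)); Y /= np.linalg.norm(Y, axis=0)
x_te = rng.normal(size=(d, 1)); x_te /= np.linalg.norm(x_te)

Wh = rng.normal(scale=sigma, size=(d, m // 2))
W0 = np.empty((d, m)); W0[:, 0::2] = Wh; W0[:, 1::2] = Wh        # symmetric initialization
Ah = rng.choice([-1.0, 1.0], size=(d, m // 2))
A = np.empty((d, m)); A[:, 0::2] = Ah; A[:, 1::2] = -Ah

def softmax_feats(W, Z):
    E = np.exp(W.T @ Z)
    return E / E.sum(axis=0, keepdims=True)                      # S, shape (m, #cols)

def predict(W, Z):
    return m * (A @ softmax_feats(W, Z))                         # F, shape (d, #cols)

def ntk_features(Z):
    # psi_{(l,i)} = (1/sqrt(m)) vec(x_i c_{l,i}^T), evaluated at the initialization W0
    S = softmax_feats(W0, Z)
    rows = []
    for l in range(d):
        for i in range(Z.shape[1]):
            c = (A[l] - A[l] @ S[:, i]) * m * S[:, i]
            rows.append(np.outer(Z[:, i], c).ravel() / np.sqrt(m))
    return np.stack(rows)

# Kernel-regression prediction in the limit: F_ntk(x_te) = K*_te (H*)^{-1} vec(Y).
Phi = ntk_features(X)                                            # (nd, dm), rows ordered (l, i)
H = Phi @ Phi.T
K_te = ntk_features(x_te) @ Phi.T                                # (d, nd)
F_ntk_te = K_te @ np.linalg.solve(H, Y.reshape(-1))

# Train the network itself by GD (Definition 3.5, gradient from Claim 3.4) and compare.
eta = 0.1 / (m * n * d)                                          # hypothetical step size
W = W0.copy()
for _ in range(steps):
    S = softmax_feats(W, X)
    R = m * (A @ S) - Y
    coef = m * S * (A.T @ R - np.sum(R * (A @ S), axis=0))
    W -= eta * (X @ coef.T)
F_nn_te = predict(W, x_te).ravel()

print("train loss:", 0.5 * np.sum((predict(W, X) - Y) ** 2))
print("||F_nn(x_te) - F_ntk(x_te)||_2 =", np.linalg.norm(F_nn_te - F_ntk_te))
# The gap is expected to shrink as the width m grows, in the spirit of the bound above.
```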
From the above, after we change the integration from (0, \u221e) to (0, \u03c4), the statement still holds. Then, based on the gradient \ufb02ow version, we can have a gradient descent version with a constant error factor by replacing integral with geometric summarization (for example P\u221e i=0 ai < 2, when a \u2208(0, 0.5) ). G Di\ufb00usion In Section G.1, we provide the proof of our main result of di\ufb00usion. In Section G.2, we provide some tools from previous works. We \ufb01rst de\ufb01ne an auxiliary function e Fntk of the same functional form as Fntk, but trained on a pseudo dataset e S := {e yi, xi}n i=1 with e yi := FH(xi) + \u01ebi and \u01ebi := yi \u2212F\u2217(xi). Then, we have the following claim. Claim G.1 (Loss decomposition). We can decompose our target function as the following 1 T Z T 0 E[\u2225Fnn(W(\u03c4), (t, x(t))) \u2212F\u2217(t, x(t))\u22252 2]dt \u2264Z1 + Z2 + Z3 + Z4, where Z1 = 1 T Z T 0 E[\u2225Fnn(W(\u03c4), (t, x(t))) \u2212Fntk(\u03b3(\u03c4), (t, x(t)))\u22252 2]dt (coupling) Z2 = 1 T Z T 0 E[\u2225Fntk(\u03b3(\u03c4), (t, x(t))) \u2212e Fntk(\u03b3(\u03c4), (t, x(t)))\u22252 2]dt (label mismatch) Z3 = 1 T Z T 0 E[\u2225e Fntk(\u03b3(\u03c4), (t, x(t))) \u2212FH(t, x(t))\u22252 2]dt (early stopping) Z4 = 1 T Z T 0 E[\u2225FH(t, x(t)) \u2212F\u2217(t, x(t))\u22252 2]dt. (approximation). The coupling error term is the gap between neural networks Fnn and a kernel function Fntk. The approximation error term is the gap between the target function F\u2217and its corresponding RKHS function FH. These two terms transfer the problem of neural networks training into the problem of kernel regression. 39 \fG.1 Main Result of Di\ufb00usion In this section, we prove the main result of di\ufb00usion. Theorem G.2 (Restatement of Theorem 6.5). Suppose Assumptions 6.1, 6.2, 6.3, 6.4 hold and we set m = \u2126(\u03bb\u22122n3d3 exp(18B) log2(nd/\u03b4)) and \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)). Moreover, suppose b T satis\ufb01es Assumption G.3 with corresponding \u01eb(n, b T). Then for large enough RH, with probability at least 1 \u2212\u03b4, it holds that 1 T Z T 0 \u03bb(t)E[\u2225sW ( b T)(t, x(t)) \u2212\u2207log pt(Xt)\u22252 2]dt \u2264O \u0012 1 \u03bb\u221an + \u01eb(n, b T) + dA2(RH) + dA(RH) + p dA(RH)\u0393\u03b4 + \u0393\u03b4 \u0013 . Proof of Theorem 6.5. Note that the m and \u03b7 satisfy the conditions in Theorem 4.2. The reason about a di\ufb00erent m is that we choose a di\ufb00erent R and apply Lemma E.2 one more time. Recall the \u01eb\u2113,test and \u01ebH are de\ufb01ned in Theorem F.2. Note that H\u2217= H(0). By Lemma 5.1, Part 2, let R = \u03bb/(2n2d2 exp(10B)), we have with probability at least 1 \u2212\u03b4 such that \u2225H\u2217 |{z} nd\u00d7nd \u2212H(\u03c4) | {z } nd\u00d7nd \u2225F \u2264\u01ebH = \u03bb 2nd. Note that K\u2217 \u2113,te and K\u2113,te share the same weight perturbation as H\u2217and H(\u03c4). Thus, by using the same proof as Lemma 5.1, Part 1, we have \u2225K\u2217 \u2113,te |{z} n\u00d7d \u2212K\u2113,te |{z} n\u00d7d \u2225F \u2264\u01eb\u2113,test = \u03bb 2n1.5d1.5 . 
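Before these perturbation bounds are plugged into Theorem F.2, the qualitative behaviour behind them is easy to observe numerically: moving every column of W(0) by at most R in ℓ2 norm moves the kernel of Definition 3.6 by an amount that grows roughly linearly in R, which is the content of Lemma 5.1 (the constant nd·exp(10B) is not verified here). Sizes and perturbation radii in the sketch below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, m, sigma = 6, 3, 512, 1.0
X = rng.normal(size=(d, n)); X /= np.linalg.norm(X, axis=0)
A = rng.choice([-1.0, 1.0], size=(d, m))
W0 = rng.normal(scale=sigma, size=(d, m))

def kernel(W):
    # Gram matrix H(W) from Definition 3.6, rows/cols ordered (l, i).
    E = np.exp(W.T @ X)
    S = E / E.sum(axis=0, keepdims=True)
    C = np.empty((d, n, m))
    for l in range(d):
        for i in range(n):
            C[l, i] = (A[l] - A[l] @ S[:, i]) * m * S[:, i]
    return np.einsum('ij,air,bjr->aibj', X.T @ X, C, C).reshape(d * n, d * n) / m

H0 = kernel(W0)
for R in (1e-3, 1e-2, 1e-1):
    U = rng.normal(size=(d, m)); U /= np.linalg.norm(U, axis=0)   # unit direction per neuron
    HR = kernel(W0 + R * U)                                        # ||w_r - w_r(0)||_2 = R
    print(f"R = {R:.0e}:  ||H(W) - H(W(0))||_F = {np.linalg.norm(HR - H0):.3e}")
# The Frobenius gap scales roughly linearly in R, consistent with Lemma 5.1.
```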
We have \u2225Fntk(\u03b3(\u03c4), xte) \u2212Fnn(W(\u03c4), xte)\u22252 \u2264 \u221a d max \u2113\u2208d |Fntk(\u03b3(\u03c4)\u2113, xte) \u2212Fnn(W(\u03c4), xte)\u2113| \u2264O \u0012\u221and \u03bb max \u2113\u2208[d] \u01eb\u2113,test + \u221and \u03bb2 log2 \u0012 nd \u01ebHm\u03bb \u0013 \u01ebH \u0013 \u2264O \u0012\u221and \u03bb \u03bb n1.5d1.5 + \u221and \u03bb2 log2 \u0012 nd m\u03bb \u0013 \u03bb nd \u0013 \u2264O \u0012 1 \u03bb\u221an log2 \u0012 nd m\u03bb \u0013\u0013 \u2264O \u0012 1 \u03bb\u221an \u0013 where the \ufb01rst step follows from simple algebra, the second step is by Theorem F.2. Thus, we \ufb01nish the proof by Claim G.1, where coupling is from above, label mismatch is from Theorem G.5, early stopping is from Assumption G.3 and approximation is from Theorem G.4. 40 \fG.2 Tools From Previous Works We have the following assumption and statements from previous works [HRX24]. Assumption G.3 (Assumption 3.11 in [HRX24]). Fix any FH \u2208H with \u2225FH\u22252 H \u2264RH and assume labels are generated as e yj = FH(xj) + \u01ebj. Suppose e Fntk(\u03b3( b T), \u00b7) is obtained by GD-trained kernel regression with the number of iterations b T. We assume there exists \u01eb such that 1 T Z T 0 E[ e Fntk(\u03b3( b T ), (t, x(t))) \u2212FH(t, x(t))\u22252 2]dt \u2264\u01eb(n, b T), and \u01eb(n, b T) \u21920 as n \u2192\u221e. Theorem G.4 (Theorem 3.6 in [HRX24], universal approximation of score function). Suppose Assumptions 6.1, 6.3 and 6.4 hold. Let RH be larger than a constant c1, i.e., C(d + 1, 0) in Proposition 6 of [Bac17], which depends only on d. There exists a function FH \u2208H such that \u2225FH\u22252 H \u2264dRH and 1 T Z T 0 E[\u2225FH(t, x(t)) \u2212F\u2217(t, x(t))\u22252 2]dt \u2264dA2(RH). Theorem G.5 (Theorem 3.10 in [HRX24], label mismatch). Suppose Assumptions 6.1 and 6.2 hold. If we initialize both Fntk and e Fntk properly, then with probability at least 1 \u2212\u03b4 it holds simultaneously for all \u03c4 that 1 T Z T 0 E[\u2225Fntk(\u03b3(\u03c4), (t, x(t))) \u2212e Fntk(\u03b3(\u03c4), (t, x(t)))\u22252 2]dt \u2264dA(RH) + C0( p dA(RH)\u0393\u03b4 + \u0393\u03b4) where C0 is a constant de\ufb01ned in Theorem 1 of [RK20].", + "additional_graph_info": { + "graph": [ + [ + "Jiuxiang Gu", + "Yingyu Liang" + ], + [ + "Jiuxiang Gu", + "Zhenmei Shi" + ], + [ + "Yingyu Liang", + "Mengdi Wang" + ], + [ + "Zhenmei Shi", + "Yingyu Liang" + ], + [ + "Zhenmei Shi", + "Yifei Ming" + ], + [ + "Zhenmei Shi", + "Ying Fan" + ], + [ + "Zhenmei Shi", + "Frederic Sala" + ] + ], + "node_feat": { + "Jiuxiang Gu": [ + { + "url": "http://arxiv.org/abs/2405.03251v1", + "title": "Exploring the Frontiers of Softmax: Provable Optimization, Applications in Diffusion Model, and Beyond", + "abstract": "The softmax activation function plays a crucial role in the success of large\nlanguage models (LLMs), particularly in the self-attention mechanism of the\nwidely adopted Transformer architecture. However, the underlying learning\ndynamics that contribute to the effectiveness of softmax remain largely\nunexplored. As a step towards better understanding, this paper provides a\ntheoretical study of the optimization and generalization properties of\ntwo-layer softmax neural networks, providing theoretical insights into their\nsuperior performance as other activation functions, such as ReLU and\nexponential. 
\u2225F(i) \u2212Y \u22252 F \u2264\u2225F(0) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2)i, \u2200i \u2208[\u03c4] \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01 for all r \u2208[m], \u2200i \u2208[\u03c4] Then, for \u03c4 + 1 and \u2200r \u2208[m], we have \u2022 Weights Induction. \u2225wr(\u03c4 + 1) \u2212wr(0)\u22252 \u2264D. \u2022 Loss Induction. \u2225F(\u03c4 + 1) \u2212Y \u22252 F \u2264(1 \u2212m\u03b7\u03bb/4)\u03c4+1 \u00b7 \u2225F(0) \u2212Y \u22252 F . \u2022 Gradient Induction. \u03b7\u2225\u2206wr(\u03c4 + 1)\u22252 \u22640.01, \u2200r \u2208[m]. Please refer to Appendix C.2, Appendix C.3 and Appendix C.4 for the proof of weights, loss, gradient induction in Lemma 5.3 respectively. Lemma 5.3 means that, at a \ufb01xed timestamp \u03c4, if the weights w(\u03c4) is close to its initialization, the loss is decreasing and the gradient is also small, then we can conclude at timestamp \u03c4 + 1, these conditions still hold as local convexity proved by Lemma 5.1. Thus, after checking the initial condition, we can conclude Theorem 4.2. 6 Application in Di\ufb00usion Now, we apply our results in learning score estimation functions in di\ufb00usion models with noisy labels. We introduce problem setup in Section 6.1 and show our results in Section 6.2. 6.1 Preliminary of Di\ufb00usion In this section, we brie\ufb02y introduce the di\ufb00usion model proposed in [SSDK+21]. Forward Process. During the forward process, we progressively inject the noise into the original data distribution, which can be characterized by the following Stochastic Di\ufb00erential Equation (SDE) [SE20, HJA20]: dx(t) = \u22121 2g(t)x(t) dt + p g(t)dBt, x(0) \u223cp0, (3) where x(t) is the data at the di\ufb00usion process time t, g(t) > 0 is a deterministic weighting function; and (Bt)t\u22650 is a standard d-dimensional Brownian motion/noise. The p0 represents the original/target data distribution that we learn, and we only have few number of accesses to it, i.e., n times. We denote pt as the distribution of x(t) at di\ufb00usion process time t. Then, we can write the explicit solution to Eq. (3) as x(t) = e\u2212 R t 0 1 2g(s)dsx(0) + e\u2212 R t 0 1 2g(s)ds Z t 0 e R s 0 1 2g(u)dup g(s)dBs. 9 \fBackward Process. We denote y(t) = x(T \u2212t) to reverse the forward process in time [HP86, F\u00a8 ol05, CCGL21] that transforms noise into samples from the target distribution. We have a backward process associated to Eq. (3) as: dy(t) = (1 2g(T \u2212t)y(t) + g(T \u2212t)\u2207log pT\u2212t(y(t)))dt + p g(T \u2212t)d \u00af Bt, y(0) \u223cq0. (4) where ( \u00af Bt)t\u22650 is another d-dim Brownian motion/noise. Following the literature, we call \u2207log pt(\u00b7) as \u201cscore function\u201d [SSDK+21]. We have q0 is the initial distribution of the backward process and the score function \u2207log pt(\u00b7) as the gradient of log density of x(t). However, In practice, Eq.(4) cannot be directly used as both the score function and the distribution pT are unknown. To solve the problem, we (1) randomly select a noise distribution as the initial distribution of the backward process pT ; (2) replace the ground-truth score function \u2207log pt(x(t)) by an estimator s\u03b8(x(t), t). The parameterized estimator s\u03b8 is learned by a neural network such as U-Net [HJA20, RBL+22] and Transformer [PX23]. 
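To illustrate the forward process in Eq. (3) and its explicit solution, the sketch below samples x(t) given x(0) in closed form under the simplifying assumption of a constant schedule g(s) = 1 (the paper only requires g to be bounded and almost everywhere continuous); the function name and toy dimensions are ours, not the paper's.

```python
import numpy as np

def sample_forward(x0: np.ndarray, t: float, rng: np.random.Generator) -> np.ndarray:
    """Sample x(t) | x(0) for the forward SDE in Eq. (3), assuming g(s) = 1.

    With a constant schedule, the explicit solution reduces to the
    Ornstein-Uhlenbeck (variance-preserving) transition
        x(t) | x(0) ~ N( exp(-t/2) * x(0), (1 - exp(-t)) * I_d ).
    """
    mean = np.exp(-t / 2.0) * x0
    std = np.sqrt(1.0 - np.exp(-t))
    return mean + std * rng.standard_normal(x0.shape)

# Hypothetical usage: build noisy training pairs y_i = x(0)_i and
# x_i = (t_i, x(t_i)_i), matching the sampling procedure described below.
rng = np.random.default_rng(0)
n, d, T = 4, 8, 5.0
X0 = rng.standard_normal((n, d))      # stand-in for n draws from p_0
ts = rng.uniform(0.0, T, size=n)      # t_i ~ Unif(0, T)
Xt = np.stack([sample_forward(X0[i], ts[i], rng) for i in range(n)])
```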
Thus, we obtain a practically implementable approximation of the backward SDE: dy(t) = (1 2g(T \u2212t)y(t) + g(T \u2212t)s\u03b8(y(t), t))dt + p g(T \u2212t)d \u00af Bt, y(0) \u223cN(0, Id), which can be used for sampling/data generation [SE20, CHZW23, CCL+23] Score Matching. When estimate the score function, usually we use L2 loss between the estimated and actual score: min \u03b8 1 T Z T 0 \u03bb(t)E[\u2225s\u03b8(x(t), t) \u2212\u2207log pt(x(t))\u22252 2]dt, (5) where \u03bb(t) is the weighting function that captures time inhomogeneity. As the hardness of estimate \u2207log pt term in Eq. (5), equivalently, we minimize the following denoising score matching [Vin11]: min \u03b8 1 T \u2212T0 Z T T0 \u03bb(t)E[\u2225s\u03b8(x(t), t) \u2212\u2207log pt|0(x(t) | x(0))\u22252 2]dt. (6) In practice, the estimator of the score function is parameterized by a neural network and we have the following sampling procedure for any i \u2208[n], x(0)i \u223cp0, ti \u223cUnif(0, T), x(ti)i \u223cpti|0(\u00b7|x(0)i), and we get the training dataset {x(0)i, (ti, x(ti)i)}n i=1, where x(0)i \u2208Rd and (ti, x(ti)i) \u2208Rd+1. We denote x(0) as the noisy label and E[x(0)|x(t)] as the true label. For simplicity, we denote x(0)i as yi \u2208Rd and (ti, x(ti)i) as xi \u2208Rd+1 and the training dataset as Dn = {(xi, yi)}n i=1. Here, y denotes the image from a dataset and x denotes the noised image with its di\ufb00usion process time t. Neural Network Parameterization. Recall that we consider a two-layer network with softmax activation function as the di\ufb00usion model in Eq. (1), satisfying \u2200\u2113\u2208[d], F(W, x, a)\u2113= m\u27e8a\u2113, exp(W \u22a4x)\u27e9\u00b7 \u27e8exp(W \u22a4x), 1m\u27e9\u22121. Note that, we do not train the top-layer weights a, so we can denote it as Fnn(W, x). Then, similar as [HJA20, HRX24], our loss function Eq. (6) can be rewrite as min W L(W) := 1 2 N X j=1 \u2225Fnn(W, xj) \u2212yj\u22252 2. We denote the target function as F\u2217(t, x(t)) := E[y | (t, x(t))]. Let H be the reproducing Hilbert space (RKHS) induced by the NTK [CDVTU10, JGH18] and let FH in the RKHS H such that \u2225FH\u22252 H \u2264RH. 10 \f6.2 Main Result of Di\ufb00usion We \ufb01rst introduce some natural assumptions we used. Assumption 6.1. Based on normalization, we assume \u2225yi\u22252 \u22641, \u2225xi\u22252 \u22641, \u2200i \u2208[n]. Assumption 6.2. Assume \u03bb = \u03bbmin(H\u2217) > 0. Assumption 6.3. The function g is almost everywhere continuous and bounded on [0, \u221e). Assumption 6.4. For all (t, x(t)) \u2208(0, \u221e) \u00d7 Rd, the function F\u2217(t, x(t)) is \u03b2x-Lipschitz in x, i.e., \u2225F\u2217(t, x(t)) \u2212F\u2217(t, x\u2032(t))\u22252 \u2264\u03b2x\u2225x(t) \u2212x\u2032(t)\u22252. We denote A(RH) := c1\u039b( \u221aRH \u039b )\u22122 d log( \u221aRH \u039b ) and \u039b = O( \u221a d) and \u0393\u03b4 := 2d2A(RH) \u03bb log3/2(e(dn)3/2A(RH) \u03bb ) + 1 \u221an !2 + d2A2(RH) \u03bb2 (log(1/\u03b4) + log(log n)). Now, we are ready to present our main Theorem for di\ufb00usion. Theorem 6.5 (Main results of score estimation and generalization). Suppose Assumptions 6.1, 6.2, 6.3, 6.4 hold and we set m = \u2126(\u03bb\u22122n3d3 exp(18B) log2(nd/\u03b4)) and \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)). Moreover, suppose b T satis\ufb01es Assumption G.3 with corresponding \u01eb(n, b T). 
Then for large enough RH, with probability at least 1 \u2212\u03b4, it holds that 1 T Z T 0 \u03bb(t)E[\u2225sW ( b T)(t, x(t)) \u2212\u2207log pt(Xt)\u22252 2]dt \u2264O \u0012 1 \u03bb\u221an + \u01eb(n, b T) + dA2(RH) + dA(RH) + p dA(RH)\u0393\u03b4 + \u0393\u03b4 \u0013 . Please refer to Appendix G.1 for the complete proof. Here we provide a proof sketch. Proof sketch of Theorem 6.5. In Theorem F.2, we show the \u201cequivalence\u201d between softmax NN learning and corresponding neural tangent kernel regression, i.e., the gap between them is always small. Then, we can borrow the generalization ability of kernel regression to the generalization ability of two-layer softmax NN. On the other hand, by Claim G.1, we can decompose the loss into a coupling gap, a label mismatch gap, an early stopping gap, and an approximation gap. By using our Theorem 4.2, Theorem F.2 with some tools from [HRX24], we \ufb01nish the proof. From Theorem 6.5, we know that, under some natural assumptions, the GD algorithm trained two-layer softmax NN can learn a provable accuracy on the score estimation functions in the di\ufb00usion model with noisy labels. We use this practical case study to demonstrate the broad applicability of our theoretical \ufb01ndings. 7 Discussion and Future Work Self-attention Learning. The self-attention can be written as F(W KX, W QX, W V X) \u2208Rd\u00d7n\u2032, (7) where W K, W Q, W V \u2208Rd\u00d7d denotes key, query, and value matrix respectively and X \u2208Rd\u00d7n\u2032 is a sequence of n\u2032 tokens. As our work is a \ufb01rst step to understanding softmax, it is natural to consider 11 \fhow to extend our results to self-attention. It is well-known that using two reformulation tricks: tensor-trick and SVM-trick [GSWY23, GSX23, AS24a], any analysis for softmax function can be naturally generalized to attention function F(W KX, W QX, W V X). Therefore, we conjecture that we can borrow the idea from [GSWY23, GSX23, AS24a] to decouple Eq (7) into the value term and the softmax term. And, we can alternatively optimize the weights for the softmax term (W k, W Q) and the value term (W V ). We leave this valuable direction as a future work. Feature Learning. Recently, there is a line of work showing that feature learning may be beyond NTK on sample complexity or time complexity, e.g., [AZL19, WLLM19, HN19, AZLL19, DM20, CBL+20, YH20, HY20, LMZ20, GMMM20, RGKZ21, MKAS21, LXMZ21, DLS22, SWL22, SWL24] and many more. It is worth studying the feature learning ability of two-layer softmax NN to \ufb01gure out what feature pattern the softmax prefers to learn and how it happens. We leave this valuable direction as a future work. 8 Conclusion This paper provides a theoretical analysis of the optimization and generalization properties of twolayer neural networks with softmax activation function. We apply our results in learning score estimation functions in di\ufb00usion models with noisy labels to verify our analysis e\ufb00ectiveness. Our \ufb01ndings contribute to a deeper understanding of the power of softmax neural networks and their potential to self-attention, advance LLMs, and generative modeling. Acknowledgement Research is partially supported by the National Science Foundation (NSF) Grants 2023239-DMS, CCF-2046710, and Air Force Grant FA9550-18-1-0166. The authors would like to thank Yufa Zhou for his helpful suggestions and feedback. 12 \fAppendix Roadmap. In Section A, we introduce some de\ufb01nitions that will be used in the proof. 
In Section B, we provide the basic concentration. In Section C, we provide the proof of our inductions. In Section D, we establish a bound for the weight of induction Part 1. In Section E, we establish a bound for the loss of induction Part 2. In Section F, we introduce the NTK regression. In Section G, we introduce the di\ufb00usion. A De\ufb01nition Claim A.1 (Restatement of Claim 3.4). We have \u2206wr(\u03c4) := m n X i=1 d X \u2113=1 (F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 \u0010 (\u27e8a\u2113,r \u00b7 1m \u2212a\u2113, Si(\u03c4)\u27e9) \u00b7 Si,r(\u03c4) \u0011 \u00b7 xi Proof of Claim 3.4. We can show that \u2206wr(\u03c4)/m = n X i=1 d X \u2113=1 (F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 (\u27e8a\u2113\u25e6er \u2212a\u2113\u00b7 Si,r(\u03c4), Si(\u03c4)\u27e9)xi = n X i=1 d X \u2113=1 (F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 \u0010 (a\u2113,r \u2212\u27e8a\u2113, Si(\u03c4)\u27e9) \u00b7 Si,r(\u03c4) \u0011 \u00b7 xi = n X i=1 d X \u2113=1 (F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 \u0010 \u27e8a\u2113,r \u00b7 1m \u2212a\u2113 | {z } m\u00d71 , Si(\u03c4) | {z } m\u00d71 \u27e9\u00b7 Si,r(\u03c4) \u0011 \u00b7 xi, where the \ufb01rst step follows from the de\ufb01nition of \u2206wr(\u03c4), the second step follows from \u27e8a\u2113\u25e6er, x\u27e9= a\u2113,rxr, and the last step is due to the Fact A.4. We present the following de\ufb01nition to simplify the notation. De\ufb01nition A.2. We de\ufb01ne D D := 4m\u22121\u03bb\u22121 exp(3B) \u221a nd \u00b7 \u2225F(0) \u2212Y \u2225F Fact A.3. For any vectors u, v \u2208Rn, the squared Euclidean distance between u and v can be expressed as: \u2225u \u2212v\u22252 2 = \u2225u\u22252 2 \u22122u\u22a4v + \u2225v\u22252 2. Fact A.4. Let 1m be a vector of dimension m consisting of all ones, and Si(\u03c4) \u2208Rm \u22650 be the indicator of some function \u03c4 at position i. We have: 1 = \u27e81m, Si(\u03c4)\u27e9 Fact A.5. For any real number |x| \u22640.1, the following inequality holds: (1 \u2212x)1/2 \u22641 \u22120.5x 13 \fFact A.6. For any real number |x| \u22640.1, we have | exp(x) \u22121| \u22642|x| Fact A.7. For any x \u2208(0, 0.1), we have \u221e X i=0 xi \u2264 1 1 \u2212x Fact A.8. For any |x| \u22640.01, we have exp(x) = 1 + x + \u0398(1)x2 We state the standard Hoe\ufb00ding inequality, Lemma A.9 (Hoe\ufb00ding inequality [Hoe63]). If the below conditions are true \u2022 Let x1, \u00b7 \u00b7 \u00b7 , xn denote n independent variables \u2022 xi \u2208[\u03b1i, \u03b2i], for all i \u2208[n] \u2022 Let x = Pn i=1 xi. Then we have Pr[|x \u2212E[x]| \u2265t] \u22642 exp \u2212 2t2 P i\u2208[n](\u03b2i \u2212\u03b1i)2 ! . Lemma A.10 (Hanson-Wright inequality [HW71, RV13]). Let x \u2208Rn denote a random vector with independent entries xi with E[xi] = 0 and |xi| \u2264K. Let A be an n \u00d7 n matrix. Then, for every t \u22650, Pr[|x\u22a4Ax \u2212E[x\u22a4Ax]| > t] \u22642 \u00b7 exp(\u2212c min{t2/(K4\u2225A\u22252 F ), t/(K2\u2225A\u2225)}). B Basic Concentration In Section B.1, we introduce some concentration basic tools. In Section B.2, given w perturbation within a small ball, we bound the changes of H. B.1 Some Concentration Basic Tools The goal of this section is to prove Lemma B.1. Lemma B.1. If the following conditions hold \u2022 Let B > 1 denote a parameter be de\ufb01ned as De\ufb01nition 4.1. \u2022 Let W = [w1, \u00b7 \u00b7 \u00b7 , wm] and wr be random Gaussian vectors from N(0, \u03c32Id). 
\u2022 Let V = [v1, \u00b7 \u00b7 \u00b7 , vm] and vr denote the vector where \u2225vr \u2212wr\u22252 \u2264R, \u2200r \u2208[m]. \u2022 Let xi \u2208Rd and \u2225xi\u22252 \u22641, \u2200i \u2208[n]. \u2022 Let R \u2208(0, 0.01). 14 \f\u2022 Let Si and e Si be the softmax function corresponding to W and V respectively. \u2022 Let \u03b1i = \u27e81m, exp(W \u22a4xi)\u27e9and e \u03b1i = \u27e81m, exp(V \u22a4xi)\u27e9, \u2200i \u2208[n]. Then, with probability at least 1 \u2212\u03b4/ poly(nd), we have \u2022 Standard inner product \u2013 Part 1. |\u27e8wr, xi\u27e9| \u2264B, \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 2. |\u27e8vr, xi\u27e9| \u2264B + R, \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 3. |\u27e8wr \u2212vr, xi + xj\u27e9| \u22642R, \u2200i, j \u2208[n], \u2200r \u2208[m] \u2022 exp function \u2013 Part 4. exp(\u2212B) \u2264exp(\u27e8wr, xi\u27e9) \u2264exp(B), \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 5. exp(\u2212B \u2212R) \u2264exp(\u27e8vr, xi\u27e9) \u2264exp(B + R), \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 6. | exp(\u27e8wr \u2212vr, xi + xj\u27e9) \u22121| \u22644R, \u2200i, j \u2208[n], \u2200r \u2208[m] \u2013 Part 7. | exp(\u27e8wr, xi\u27e9) \u2212exp(\u27e8vr, xi\u27e9)| \u2264R exp(B + R), \u2200i \u2208[n], \u2200r \u2208[m] \u2022 softmax S function \u2013 Part 8. |\u03b1i \u2212e \u03b1i| \u2264mR exp(B + R), \u2200i \u2208[n] \u2013 Part 9. |\u03b1\u22121 i \u2212e \u03b1\u22121 i | \u2264R m exp(3B + 2R), \u2200i \u2208[n] \u2013 Part 10. |Si,r| \u2264exp(2B)/m, \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 11. | e Si,r| \u2264exp(2B + 2R)/m, \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 12. |Si,r \u2212e Si,r| \u2264R m exp(4B + 3R), \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 13. for any z \u2208Rm and \u2225z\u2225\u221e\u22641, we have |\u27e8z, Si\u27e9\u2212\u27e8z, e Si\u27e9| \u2264R exp(4B+3R), \u2200i \u2208 [n] Proof. As eventually we choose m = poly(nd), we use B > 0 de\ufb01ned in De\ufb01nition 4.1. Proof of Part 1, 2, 4 and 5. We can get the proof by Gaussian tail bound. Proof of Part 3 and 6. Due to \u2225xi\u22252 \u22641 and \u2225xj\u22252 \u22641 and \u2225\u2206wr\u22252 \u2264R, we can have |\u27e8\u2206wr, (xi + xj)\u27e9| \u22642R \u22640.1. (8) Then, we have | exp(\u27e8\u2206wr, (xi + xj)\u27e9) \u22121| \u22642|\u27e8\u2206wr, (xi + xj)\u27e9| \u22644R where the \ufb01rst step follows from the Fact A.6, and the last step follows from Eq. (8). Proof of Part 7. Because \u2225xi\u22252 \u22641 and \u2225\u2206wr\u22252 \u2264R, we can have |\u27e8\u2206wr, xi\u27e9| \u2264R \u22640.1. (9) By convex increasing property of exp function, we have | exp(\u27e8wr, xi\u27e9) \u2212exp(\u27e8vr, xi\u27e9)| \u2264max{exp\u2032(\u27e8wr, xi\u27e9), exp\u2032(\u27e8vr, xi\u27e9} \u00b7 |\u27e8\u2206wr, xi\u27e9| 15 \f\u2264exp(B + R) \u00b7 |\u27e8\u2206wr, xi\u27e9| \u2264exp(B + R)R. where the \ufb01rst step follows from Taylor expansion and exp\u2032 denote the derivative of exp, the second step follows from Part 4 and Part 5 and the last step follows from Eq. (9). Proof of Part 8. |\u03b1i \u2212e \u03b1i| = | X r\u2208[m] expi,r \u2212g X r\u2208[m]expi,r| \u2264 X r\u2208[m] |expi,r \u2212g expi,r| \u2264mR exp(B + R), where the third step is due to Part 7. Proof of Part 9. Similarly, we have |\u03b1\u22121 i \u2212e \u03b1\u22121 i | = | e \u03b1i \u2212\u03b1i \u03b1ie \u03b1i | \u2264mR exp(B + R) |\u03b1ie \u03b1i| \u2264 mR exp(B + R) |m exp(\u2212B)m exp(\u2212B \u2212R)| = R m exp(3B + 2R). 
where the \ufb01rst step is due to simple algebra, the second step is from Part 8, the third step follows Part 4, 5, and the last step is because of simple algebra. Proof of Part 10 and 11. Trivially follows Part 4 and Part 5. Proof of Part 12. |Si,r \u2212e Si,r| = |\u03b1\u22121 i expi,r \u2212e \u03b1\u22121 i g expi,r| \u2264|\u03b1\u22121 i expi,r \u2212\u03b1\u22121 i g expi,r| + |\u03b1\u22121 i g expi,r \u2212e \u03b1\u22121 i g expi,r| For the \ufb01rst part, we have |\u03b1\u22121 i expi,r \u2212\u03b1\u22121 i g expi,r| = \u03b1\u22121 i | expi,r \u2212g expi,r| \u2264\u03b1\u22121 i exp(B + R)R \u2264exp(B + R)R m exp(\u2212B) = R m exp(2B + R), where the second step follows Part 7 and the third step follows Part 4. 16 \fFor the second part, we have |\u03b1\u22121 i g expi,r \u2212e \u03b1\u22121 i g expi,r| = g expi,r|\u03b1\u22121 i \u2212e \u03b1\u22121 i | \u2264g expi,r R m exp(3B + 2R) \u2264exp(B + R) R m exp(3B + 2R) = R m exp(4B + 3R), where the second step follows Part 9, and the third step follows Part 5. Thus, we have |Si,r \u2212e Si,r| \u2264R m exp(4B + 3R). Proof of Part 13. Note that \u2225z\u2225\u221e\u22641. We have |\u27e8z, Si\u27e9\u2212\u27e8z, e Si\u27e9| = |\u27e8z, Si \u2212e Si\u27e9| \u2264m\u2225Si \u2212e Si\u2225\u221e \u2264R exp(4B + 3R) where the \ufb01rst step follows from simple algebra, the second step follows from |\u27e8a, b\u27e9| \u2264m \u00b7 maxi\u2208[m] |aibi|, and the last step is due to Part 12. B.2 Kernel Perturbation The purpose of this section is to prove Lemma B.2. In the proof, we do not use concentration inequality. Please see Remark 5.2 for more details. Lemma B.2 (Restatement of Lemma 5.1). If the following conditions hold \u2022 Let B \u22651 denote a parameter be de\ufb01ned as De\ufb01nition 4.1. \u2022 Let R \u2208(0, 0.01). \u2022 Let xi \u2208Rd and \u2225xi\u22252 \u22641 for all i \u2208[n]. \u2022 Let f W = [ e w1, \u00b7 \u00b7 \u00b7 , e wm] \u2208Rd\u00d7m, where e w1, \u00b7 \u00b7 \u00b7 , e wm are are i.i.d. draw from N(0, \u03c32Id). \u2022 Let W = [w1, \u00b7 \u00b7 \u00b7 , wm] \u2208Rd\u00d7m and satisfy \u2225e wr \u2212wr\u22252 \u2264R for any r \u2208[m]. \u2022 Let v\u2113,r = a\u2113,r \u00b7 1m \u2212a\u2113\u2208Rm, for any \u2113\u2208[d] and for any r \u2208[m]. Note that a\u2113,r is the r-th in a\u2113. \u2022 Let \u03b1i = \u27e81m, exp(W \u22a4xi)\u27e9and e \u03b1i = \u27e81m, exp(V \u22a4xi)\u27e9, \u2200i \u2208[n]. \u2022 Let H be de\ufb01ned as De\ufb01nition 3.6. Then, we have \u2022 Part 1. Then with probability at least 1 \u2212\u03b4/ poly(nd), |[H\u21131,\u21132]i,j(W) \u2212[H\u21131,\u21132]i,j(f W)| \u2264R \u00b7 exp(10B). 17 \f\u2022 Part 2. Then with probability at least 1 \u2212\u03b4, we have \u2225H(W) \u2212H(f W)\u2225F \u2264Rnd \u00b7 exp(10B). Proof of Lemma 5.1. 
We de\ufb01ne \ufb01ve real numbers B1, B2, B3, B4, B5 \u2208R as follows, B1 := \u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, Sj\u27e9expi,r expj,r \u2212\u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, Sj\u27e9g expi,rg expj,r B2 := \u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, Sj\u27e9g expi,rg expj,r \u2212\u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r B3 := \u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r \u2212\u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r B4 := \u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r \u2212\u03b1\u22121 i e \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r B5 := \u03b1\u22121 i e \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r \u2212e \u03b1\u22121 i e \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r Thus, we have |[H\u21131,\u21132]i,j(W) \u2212[H\u21131,\u21132]i,j(f W)|/m2 \u2264|B1| + |B2| + |B3| + |B4| + |B5|. To bound B1 We rewrite B1 as B1 = \u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, Sj\u27e9(exp(w\u22a4 r (xi + xj)) \u2212exp( e w\u22a4 r (xi + xj))). Recall that \u2225v\u21131,r\u2225\u221e\u22642 and \u2225Si\u22251 \u22641. Thus, |\u27e8v\u21131,r, Si\u27e9| \u22642. By Fact A.4, we know that |\u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, Sj\u27e9| \u22642 \u00b7 2 = 4. By Part 4 of Lemma B.1, with probability 1 \u2212\u03b4/ poly(nd), we know that |\u03b1\u22121 i | \u22641 m exp(B). We will condition on the above event is holding in the rest of the proof. By Part 7 of Lemma B.1, | exp( e w\u22a4 r (xi + xj)) \u2212exp(w\u22a4 r (xi + xj))| \u22642R exp(2B + 2R). Finally, we know that |B1| \u22648R m2 exp(5B). To bound B2 and B3 We can rewrite B2 as follows |B2| = |\u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9g expi,rg expj,r(\u27e8v\u21132,r, Sj\u27e9\u2212\u27e8v\u21132,r, e Sj\u27e9)| \u2264\u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 |\u27e8v\u21131,r, Si\u27e9|g expi,rg expj,r|(\u27e8v\u21132,r, Sj\u27e9\u2212\u27e8v\u21132,r, e Sj\u27e9)|. 18 \fFollowing the similar strategy as B1, by Part 13 of Lemma B.1, we know that |B2| \u22641 m exp(B) \u00b7 1 m exp(B) \u00b7 2 \u00b7 exp(B + R) \u00b7 exp(B + R) \u00b7 4R exp(4B + 3R) \u22648R m2 exp(9B). Similarly, we have |B3| \u22648R m2 exp(9B). To bound B4 and B5 For the term B4, we can rewrite |B4| = |(\u03b1\u22121 j \u2212e \u03b1\u22121 j ) \u00b7 \u03b1\u22121 i 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r| \u2264|\u03b1\u22121 j \u2212e \u03b1\u22121 j | \u00b7 \u03b1\u22121 i 1 m m X r=1 |\u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9|g expi,rg expj,r. Thus, by Part 9 of Lemma B.1, using similar proof strategy as B1 as know |B4| \u2264R m exp(3B + 2R) \u00b7 1 m exp(B) \u00b7 2 \u00b7 2 \u00b7 exp(B + R) \u00b7 exp(B + R) \u22644R m2 exp(7B). Similarly, we have |B5| \u22644R m2 exp(7B). C Induction In Section C.1, we provide the proof of our main result. In Section C.2, we provide an induction lemma for weights part. 
In Section C.3, we provide an induction lemma for loss part. In Section C.4, we provide an induction lemma for gradient part. C.1 Main Result Our main result is presented as follows. Theorem C.1 (Main result. Restatement of Theorem 4.2). For any \u01eb, \u03b4 \u2208(0, 0.1), if the following conditions hold \u2022 Let \u03bb = \u03bbmin(H\u2217) > 0 \u2022 Let m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)) \u2022 Let \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) 19 \f\u2022 Let b T = \u2126((m\u03b7\u03bb)\u22121 log(nd/\u01eb)) = \u2126(\u03bb\u22122n2d2 exp(16B) \u00b7 log(nd/\u01eb)) Then, after b T iterations, we have \u2225F( b T) \u2212Y \u22252 F \u2264\u01eb. Proof of Theorem 4.2. Let \u03c3 = 1. We have \u2225F(0) \u2212Y \u22252 F \u2264nd by Lemma D.3. Using the choice of b T, it follows directly from the alternative application of Lemma C.3 and Lemma C.2. Since exp(\u0398(B)) = (nd)o(1), we can simplify the nd exp(\u0398(B)) = (nd)1+o(1). C.2 Induction Part 1. For Weights We provide an induction lemma for weights part. Lemma C.2 (Induction Part 1. For Weights). Let \u03c4 be a \ufb01xed integer. If the below conditions are true \u2022 General Property 1. Let \u03bb = \u03bbmin(H\u2217) > 0 \u2022 General Property 2. \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) \u2022 General Property 3. Let D be de\ufb01ned as De\ufb01nition A.2 \u2022 General Property 4. D < R = \u03bb/(2nd exp(10B)) \u2022 General Property 5. m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)) \u2022 Weights Property. \u2225wr(i) \u2212wr(0)\u22252 \u2264R for all i \u2208[\u03c4] \u2022 Loss Property. \u2225F(i) \u2212Y \u22252 F \u2264\u2225F(0) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2)i, \u2200i \u2208[\u03c4] \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01, \u2200r \u2208[m], \u2200i \u2208[\u03c4] Then, for \u03c4 + 1 and \u2200r \u2208[m], we have \u2225wr(\u03c4 + 1) \u2212wr(0)\u22252 \u2264D. Proof. We have \u03b7 \u221e X i=0 (1 \u2212m\u03b7\u03bb/2)i/2 \u2264\u03b7 \u221e X i=0 (1 \u2212m\u03b7\u03bb/4)i \u2264\u03b7 1 m\u03b7\u03bb/4 \u2264 4 m\u03bb (10) where the \ufb01rst step is due to the Fact A.5, the second stepis due to the Fact A.7, the last step is because of simple algebra. 20 \fWe use the gradient\u2019s norm to measure the weights di\ufb00erence: \u2225wr(0) \u2212wr(\u03c4 + 1)\u22252 \u2264\u03b7 \u03c4 X i=0 \u2225\u2206wr(i)\u22252 \u2264\u03b7 \u03c4 X i=0 exp(3B) \u221a nd \u00b7 \u2225F(i) \u2212Y \u2225F \u2264\u03b7 exp(3B) \u221a nd \u03c4 X i=0 (1 \u2212m\u03b7\u03bb/2)i/2 \u00b7 \u2225F(0) \u2212Y \u2225F \u22644m\u22121\u03bb\u22121 exp(3B) \u221a nd \u00b7 \u2225F(0) \u2212Y \u2225F = D where the \ufb01rst step follows from wr(i + 1) \u2212wr(i) = \u03b7 \u00b7 \u2206wr(i), the second step follows from Lemma D.1 for \u03c4 times, the third step follows from Loss Property in Lemma statement, the fourth step follows from Eq. (10), the last step is from General Property 3 in Lemma statement. C.3 Induction Part 2. For Loss We provide an induction lemma for loss part. Lemma C.3 (Induction Part 2. For Loss). Let \u03c4 be a \ufb01xed integer. If the following conditions hold \u2022 General Property 1. Let \u03bb = \u03bbmin(H\u2217) > 0 \u2022 General Property 2. \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) \u2022 General Property 3. Let D be de\ufb01ned as De\ufb01nition A.2 \u2022 General Property 4. D < R = \u03bb/(2nd exp(10B)) \u2022 General Property 5. m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)) \u2022 Weights Property. 
\u2225wr(\u03c4) \u2212wr(0)\u22252 \u2264D < R, \u2200r \u2208[m] \u2022 Loss Property. \u2225F(i) \u2212Y \u22252 F \u2264\u2225F(0) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2)i, \u2200i \u2208[\u03c4] \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01 \u2200r \u2208[m], \u2200i \u2208[\u03c4] Then we have \u2225F(\u03c4 + 1) \u2212Y \u22252 F \u2264(1 \u2212m\u03b7\u03bb/4)\u03c4+1 \u00b7 \u2225F(0) \u2212Y \u22252 F . Proof. We have \u2225F(\u03c4) \u2212Y \u22252 F \u2264\u2225F(\u03c4 \u22121) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2) which follows Lemma E.2. Thus, we complete the proof by induction. 21 \fC.4 Induction Part 3. For Gradient We provide an induction lemma for gradient part. Lemma C.4 (Induction Part 3. For Gradient). Let \u03c4 be a \ufb01xed integer. If the following conditions hold \u2022 General Property 1. Let \u03bb = \u03bbmin(H\u2217) > 0 \u2022 General Property 2. \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) \u2022 General Property 3. Let D be de\ufb01ned as De\ufb01nition A.2 \u2022 General Property 4. D < R = \u03bb/(2nd exp(10B)) \u2022 General Property 5. m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)) \u2022 Weights Property. \u2225wr(\u03c4) \u2212wr(0)\u22252 \u2264D < R, \u2200r \u2208[m] \u2022 Loss Property. \u2225F(i) \u2212Y \u22252 F \u2264\u2225F(0) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2)i, \u2200i \u2208[\u03c4] \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01 \u2200r \u2208[m], \u2200i \u2208[\u03c4] Then we have \u03b7\u2225\u2206wr(\u03c4 + 1)\u22252 \u22640.01, \u2200r \u2208[m] Proof. This is trivially follows from Lemma D.1 and Lemma D.2. D Induction Part 1: For Weights In Section D.1, we propose the lemma for bounding gradient and its corresponding proof. In Section D.2, we propose the bounding initialization loss and its corresponding proof. D.1 Bounding the Gradient at any Time In this section, we bound the gradient. Lemma D.1. If the following condition hold, \u2022 Let B > 1 denote a parameter be de\ufb01ned as De\ufb01nition 4.1 \u2022 Let R \u2208(0, 0.01) \u2022 \u2225wr(\u03c4) \u2212wr(0)\u22252 \u2264R \u2022 Let v\u2113,r = a\u2113,r \u00b7 1m \u2212a\u2113\u2208Rm, for any \u2113\u2208[d] and for any r \u2208[m] For any timestamp \u03c4, we have \u2225\u2206wr(\u03c4)\u22252 \u2264exp(3B) \u221a nd \u00b7 \u2225F(\u03c4) \u2212Y \u2225F . 22 \fProof. We have \u2225\u2206wr(\u03c4)\u22252 = \r \r \r \r \rm n X i=1 d X \u2113=1 (y\u2113,i \u2212F\u2113,i) \u00b7 xi \u00b7 \u27e8v\u2113,r, Si(\u03c4)\u27e9\u00b7 Si,r(\u03c4) \r \r \r \r \r 2 \u2264exp(3B) n X i=1 d X \u2113=1 |y\u2113,i \u2212F\u2113,i(\u03c4)| \u2264exp(3B) \u221a nd \u00b7 \u2225F(\u03c4) \u2212Y \u2225F where the \ufb01rst step follows from Claim 3.4 and De\ufb01nition 3.3, the second step follows from |\u27e8v\u2113,r, Si\u27e9| \u22642 and |Si,r| \u2264exp(2B + 2R)/m by Part 11 of Lemma B.1, the last step follows from Cauchy-Schwartz inequality. Lemma D.2. If the following conditions hold, \u2022 \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) \u2022 \u2225wr(\u03c4) \u2212wr(0)\u22252 \u2264R Then, for any timestamp \u03c4, we have \u03b7\u2225\u2206wr(\u03c4)\u22252 \u22640.01 Proof. This trivially follows from Lemma D.1 and choice of \u03b7. D.2 Bounding the Initialization Loss In this section, we bound the initialization loss. Lemma D.3. We have \u2225F(0) \u2212Y \u2225F \u2264O( \u221a nd). Proof. 
This trivially follows from \u2225yi\u2225\u22641, \u2200i \u2208[n] and symmetric initialization from De\ufb01nition 3.7. E Induction Part 2: For Loss In Section E.1, we decompose the loss \u2225F(k + 1) \u2212Y \u22252 F into four parts, namely C0, C1, C2, and C3. In Section E.2, we show our choices of m and \u03b7. In Section E.3, we establish bounds for C0. In Section E.4, we establish bounds for C1. In Section E.5, we establish bounds for C2. In Section E.6, we establish bounds for C3. 23 \fE.1 Decomposition for \u2225vec(F(\u03c4 + 1) \u2212Y )\u22252 2 Here, we decompose the loss \u2225vec(F(\u03c4 + 1) \u2212Y )\u22252 2 into four parts C0, C1, C2 and C3. Lemma E.1. Assuming the following condition is met: \u2022 Let \u03bb = \u03bbmin(H\u2217) \u2022 Let \u03b1i(\u03c4) := \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9. \u2022 Let scalar v0,\u2113,i \u2208R be de\ufb01ned as follows v0,\u2113,i := m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) \u2022 Let scalar v1,\u2113,i \u2208R be de\ufb01ned as follows v1,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 (\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9) \u2022 Let scalar v2,\u2113,i \u2208R be de\ufb01ned as follows v2,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 \u03b72 \u00b7 \u0398(1) \u00b7 \u27e8\u2206wr(\u03c4), xi\u27e92 \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01, \u2200r \u2208[m], \u2200i \u2208[\u03c4] \u2022 C0 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v0)\u27e9 \u2022 C1 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v1)\u27e9 \u2022 C2 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v2)\u27e9 \u2022 C3 = \u2225F(\u03c4 + 1) \u2212F(\u03c4)\u22252 F then \u2225F(\u03c4 + 1) \u2212Y \u22252 F = \u2225F(t) \u2212Y \u22252 F + C0 + C1 + C2 + C3. Proof. The expression \u2225Y \u2212F(\u03c4 + 1)\u22252 F = \u2225vec(Y \u2212F(\u03c4 + 1))\u22252 2 can be rewritten in the following: \u2225vec(Y \u2212F(\u03c4 + 1))\u22252 2 = \u2225vec(Y \u2212F(\u03c4) \u2212(F(\u03c4 + 1) \u2212F(\u03c4)))\u22252 2 = \u2225vec(Y \u2212F(\u03c4))\u22252 2 \u22122 vec(Y \u2212F(\u03c4))\u22a4vec(F(\u03c4 + 1) \u2212F(\u03c4)) + \u2225vec(F(\u03c4 + 1) \u2212F(\u03c4))\u22252 2. (11) where the \ufb01rst step follows from simple algebra, the last step follows from Fact A.3. 
Recall the update rule (De\ufb01nition 3.5), wr(\u03c4 + 1) = wr(\u03c4) \u2212\u03b7 \u00b7 \u2206wr(\u03c4) In the following manner, \u2200\u2113\u2208[d], we can express F\u2113(\u03c4 + 1) \u2212F\u2113(\u03c4) \u2208Rn: 24 \fF\u2113,i(\u03c4 + 1) \u2212F\u2113,i(\u03c4) = m X r\u2208[m] a\u2113,r \u00b7 (\u03b1i(\u03c4 + 1)\u22121 exp(\u27e8wr(\u03c4 + 1), xi\u27e9) \u2212\u03b1i(\u03c4)\u22121 exp(\u27e8wr(\u03c4), xi\u27e9)) = + m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) + m X r\u2208[m] a\u2113,r\u03b1i(\u03c4)\u22121 \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9) \u2212exp(\u27e8wr(\u03c4), xi\u27e9)) = + m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) + m X r\u2208[m] a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 (exp(\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9) \u22121) = + m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) + m X r\u2208[m] a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((wr(\u03c4)\u22a4xi) \u00b7 (\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9+ \u0398(1)\u03b72\u27e8\u2206wr(\u03c4), xi\u27e92) = v0,\u2113,i + v1,\u2113,i + v2,\u2113,i where the \ufb01rst step is due to the de\ufb01nition of F\u2113,i(\u03c4), the second step is from the simple algebra, the third step is due to |\u03b7\u2206wr(\u03c4)\u22a4xi| \u22640.01 (due to Gradient Property and \u2225xi\u22252 \u22641), the fourth step follows from the Fact A.8, the last step follows from v0,\u2113,i := m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) v1,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 (\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9) v2,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 \u03b72 \u00b7 \u0398(1) \u00b7 \u27e8\u2206wr(\u03c4), xi\u27e92 Here v0,\u2113,i and v1,\u2113,i are linear in \u03b7 and v2,\u2113,i is quadratic in \u03b7. Thus, v0,\u2113,i and v1,\u2113,i are the \ufb01rst order term, and v2,\u2113,i is the second order term. We can rewrite the second term in the Eq. (11) above as below: \u27e8vec(Y \u2212F(\u03c4)), vec(F(\u03c4 + 1) \u2212F(\u03c4))\u27e9 = \u27e8vec(Y \u2212F(\u03c4)), vec(v0 + v1 + v2)\u27e9 = \u27e8vec(Y \u2212F(\u03c4)), vec(v0)\u27e9+ \u27e8vec(Y \u2212F(\u03c4)), vec(v1)\u27e9+ \u27e8vec(Y \u2212F(\u03c4)), vec(v2)\u27e9 Therefore, we can conclude that \u2225F(\u03c4 + 1) \u2212Y \u22252 F = \u2225F(\u03c4) \u2212Y \u22252 F + C0 + C1 + C2 + C3. 25 \fE.2 Choice of Parameters Here, we show our choice of parameters m, \u03b7, R, B. Lemma E.2. If the below conditions are true \u2022 Condition 1. Let \u03bb = \u03bbmin(H\u2217) > 0 \u2022 Condition 2. m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)) \u2022 Condition 3. \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) \u2022 Condition 4. R = \u03bb/(2nd exp(10B)) \u2013 Required by Claim E.5 \u2022 Condition 5. B = max{C\u03c3 p log(nd/\u03b4), 1} \u2022 Condition 6. D = 4m\u22121\u03bb\u22121 exp(3B) \u221a nd \u00b7 \u2225F(0) \u2212Y \u2225F \u2022 Condition 7. D < R \u2022 Condition 8. 
\u03b7\u2225\u2206wr(\u03c4)\u22252 \u22640.01, \u2200r \u2208[m] \u2013 Required by Lemma E.1, Claim E.3 and Claim E.7 Then it holds that \u2225F(\u03c4 + 1) \u2212Y \u22252 F \u2264\u2225F(\u03c4) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2) holds with probability at least 1 \u2212\u03b4. Proof. We can show \u2225F(\u03c4 + 1) \u2212Y \u22252 F = \u2225F(\u03c4) \u2212Y \u22252 F + C0 + C1 + C2 + C3 \u2264(1 \u22120.8m\u03b7\u03bb + 0.1m\u03b7\u03bb + 2m\u03b72n2d2 exp(9B) + \u03b72m2 \u00b7 n2d2 \u00b7 exp(16B)) \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F \u2264(1 \u22120.7m\u03b7\u03bb + 2\u03b72m2 \u00b7 n2d2 \u00b7 exp(16B)) \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F . where the \ufb01rst step follows from Lemma E.1, the second step follows from Lemma E.3 for C0, Lemma E.4, Claim E.5 for C1, Claim E.6 for C2 and Claim E.7 for C3, the last step follows from the simple algebra. Choice of \u03b7. Next, we want to choose \u03b7 such that (1 \u22120.7m\u03b7\u03bb + 2\u03b72m2 \u00b7 n2d2 \u00b7 exp(16B)) \u2264(1 \u2212m\u03b7\u03bb/2). (12) Using the choice of \u03b7 in Condition 3 2\u03b72m2 \u00b7 n2d2 \u00b7 exp(16B) \u22640.2m\u03b7\u03bb This indicates: \u2225F(\u03c4 + 1) \u2212Y \u22252 F \u2264(1 \u2212m\u03b7\u03bb/2) \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F . (13) 26 \fLower bound for m, over-parametrization size. We require the following conditions \u2022 m \u2265\u2126(\u03bb\u22122n2d exp(18B) log2(nd/\u03b4)) (required by Lemma E.3) \u2022 m \u2265\u2126(\u03bb\u22122n2d exp(12B) log2(nd/\u03b4)) (required by Lemma E.4) \u2022 D = 4m\u22121\u03bb\u22121 exp(3B) \u221a nd \u00b7 \u2225F(0) \u2212Y \u2225F < R = \u03bb/(2nd exp(10B))} (required by Condition 7.) Therefore, by \u2225Y \u2212F(0)\u2225F = O( \u221a nd) from Lemma D.3, it su\ufb03ces to choose: m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)). E.3 Bounding C0 Here, we explain about how to bound C0. Lemma E.3. If the following conditions hold \u2022 Let scalar v0,\u2113,i \u2208R be de\ufb01ned as follows v0,\u2113,i := m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) \u2022 Let \u03b1i(\u03c4) := \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9. \u2022 Let m \u2265\u2126(\u03bb\u22122n2d exp(18B) log2(nd/\u03b4)) \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01, \u2200r \u2208[m], \u2200i \u2208[\u03c4] \u2022 We de\ufb01ne C0 as follows C0 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v0)\u27e9 Here vec(v0) \u2208Rnd is the vectorization of v0 \u2208Rn\u00d7d and vec(F(\u03c4) \u2212Y ) \u2208Rnd is the vectorization of F(\u03c4) \u2212Y \u2208Rn\u00d7d. Then we have |C0| \u22640.1m\u03b7\u03bb \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F Proof. 
We can rewrite v0,\u2113,i as follows: v0,\u2113,i = m m X r=1 a\u2113,r((\u03b1i(\u03c4 + 1))\u22121 \u2212\u03b1i(\u03c4)\u22121) exp(\u27e8wr(\u03c4 + 1), xi\u27e9) = m m X r=1 a\u2113,r\u03b1i(\u03c4 + 1)\u22121\u03b1i(\u03c4)\u22121 \u00b7 (\u27e81m, exp(W(\u03c4 + 1)xi) \u2212exp(W(\u03c4)xi)\u27e9) exp(\u27e8wr(\u03c4 + 1), xi\u27e9) = m m X r=1 a\u2113,r\u03b1i(\u03c4 + 1)\u22121\u03b1i(\u03c4)\u22121( m X r2=1 exp(wr2(\u03c4 + 1)xi) \u2212exp(wr2(\u03c4)xi)) exp(\u27e8wr(\u03c4 + 1), xi\u27e9) 27 \f= m m X r=1 a\u2113,r\u03b1i(\u03c4 + 1)\u22121\u03b1i(\u03c4)\u22121 m X r2=1 \u2212\u03b7\u27e8\u2206wr2(\u03c4), xi\u27e9exp(wr2(\u03c4)xi) exp(\u27e8wr(\u03c4 + 1), xi\u27e9) = m( m X r=1 a\u2113,r m X r2=1 \u2212\u03b7\u27e8\u2206wr2(\u03c4), xi\u27e9Si,r2(\u03c4) \u00b7 Si,r(\u03c4 + 1) | {z } \ufb01rst order term + \u03b72\u22062 | {z } second order term ) (14) where the \ufb01rst step follows from lemma statement, the second step follows from a\u22121 \u2212b\u22121 = b\u2212a ab , the third step follows from simple algebra, the fourth step follows from simple algebra, and the last step follows from |\u03b7\u2206wr(\u03c4)\u22a4xi| \u22640.01 (due to Gradient Property and \u2225xi\u22252 \u22641). The second order term \u03b72\u22062 in Eq. (14) can be bounded in a similar way as the proof of Claim E.6. Further, we can rewrite the \ufb01rst-order term in Eq. (14) m m X r=1 a\u2113,r m X r2=1 \u2212\u03b7\u27e8\u2206wr2(\u03c4), xi\u27e9Si,r2(\u03c4) \u00b7 Si,r(\u03c4 + 1) = m2(Q1,i,\u2113+ Q2,i,\u2113) (15) where Q1,i,\u2113:= 1 m m X r=1 a\u2113,r(\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9)Si,r(\u03c4) \u00b7 Si,r(\u03c4 + 1) Q2,i,\u2113:= 1 m m X r=1 a\u2113,r X r2\u0338=r (\u2212\u03b7\u27e8\u2206wr2(\u03c4), xi\u27e9)Si,r2(\u03c4) \u00b7 Si,r(\u03c4 + 1) Let us consider how to handle the \ufb01rst term in Eq. (14), Q1,i,\u2113= 1 m m X r=1 a\u2113,r(\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9)Si,r(\u03c4) \u00b7 Si,r(\u03c4 + 1) = m X r=1 a\u2113,rSi,r \u00b7 Si,r(\u03c4 + 1)(\u2212\u03b7 n X j=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u0010 (\u27e8a\u21132,r \u00b7 1m \u2212a\u21132, Sj\u27e9) \u00b7 Sj,r \u0011 \u00b7 x\u22a4 j )xi where the second step follows from computing \u2206wr(\u03c4) explicitly (see Claim 3.4). Similarly as proof of Lemma E.4, we can use concentration to bound n X i=1 d X \u2113=1 Q1,i,\u2113(F\u2113,i \u2212y\u2113,i) Note that 0 < Sj,r < exp(3B) m by Part 11 of Lemma B.1. The above small term is equivalent to \u2212\u03b7exp(9B) m3 \u00b7 n X i=1 n X j=1 m X r=1 d X \u2113=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u03c3i,j,r,\u2113,\u21132 \u00b7 Ci,j,r,\u2113,\u21132 \u00b7 (F\u2113,i(\u03c4) \u2212y\u2113,i), where \u03c3i,\u2113,\u21132,j,r \u223c[\u22121, +1] and |Ci,\u2113,\u21132,j,r| \u226410. We de\ufb01ne P1,r,\u2113,\u21132 := (F\u21132,j \u2212y\u21132,j)\u03c3i,j,r,\u2113,\u21132Ci,j,r,\u2113,\u21132(F\u2113,i \u2212y\u2113,i) 28 \fSimilarly as Lemma E.4, for each \ufb01xed i, j \u2208[n], using Hanson-Wright inequality (Lemma A.10), we can show Pr[| m X r=1 d X \u2113=1 d X \u21132=1 P1,r,\u2113,\u21132| \u2264100\u2225Fj \u2212yj\u22252\u2225Fi \u2212yi\u22252 \u00b7 \u221a md log(nd/\u03b4)] \u22651 \u2212\u03b4/ poly(nd). By mean inequality, we have n X i=1 n X j=1 \u2225Fj \u2212yj\u22252 \u00b7 \u2225Fi \u2212yi\u22252 \u2264n\u2225F \u2212y\u22252 F . 
Thus, we have the \ufb01rst term with probability at least 1 \u2212poly(nd), such that | n X i=1 d X \u2113=1 Q1,i,\u2113(F\u2113,i \u2212y\u2113,i)| \u2264\u03b7n exp(9B) m3 \u2225F \u2212y\u22252 F \u221a md log(nd/\u03b4) Similarly, we can compute n X i=1 d X \u2113=1 Q2,i,\u2113(F\u2113,i \u2212y\u2113,i) Using Hanson-Wright inequality (Lemma A.10), we have the second term with probability at least 1 \u2212poly(nd), such that | n X i=1 d X \u2113=1 Q2,i,\u2113(F\u2113,i \u2212y\u2113,i)| \u2264\u03b7n exp(9B) m2 \u2225F \u2212y\u22252 F \u221a md log(nd/\u03b4) Thus, we can complete the proof by the Lemma statement m \u2265\u2126(\u03bb\u22122n2d exp(18B) log2(nd/\u03b4)). E.4 Bounding C1 Here, we give the bound of the \ufb01rst order term C1. Note that this term is making progress. Lemma E.4. Assuming the following condition is met: \u2022 Let \u03bb = \u03bbmin(H\u2217) \u2022 Let \u03b1i(\u03c4) := \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9 \u2022 Let m \u2265\u2126(\u03bb\u22122n2d exp(12B) log2(nd/\u03b4)) \u2022 Let scalar v1,\u2113,i \u2208R be de\ufb01ned as follows v1,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 (\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9) \u2022 C1 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v1)\u27e9 29 \fthen C1 \u2264\u22121.6m\u03b7 vec(F(\u03c4) \u2212Y )\u22a4H(\u03c4) vec(F(\u03c4) \u2212Y ). Proof. To simplify the notation, we omit writing (\u03c4) in Si,r(\u03c4). Then, we can express v1,\u2113,i \u2208R as follows: v1,\u2113,i = m X r\u2208[m] a\u2113,r \u00b7 Si,r \u00b7 (\u2212\u03b7\u27e8xi, \u2206wr(\u03c4)\u27e9) = m2 X r\u2208[m] a\u2113,r \u00b7 Si,r \u00b7 (\u2212\u03b7 n X j=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u0010 (\u27e8a\u21132,r \u00b7 1m \u2212a\u21132, Sj\u27e9) \u00b7 Sj,r \u0011 \u00b7 x\u22a4 j )xi = m2(Q1,\u2113,i + Q2,\u2113,i) (16) where the second step using equation for \u2206wr(\u03c4) (see Claim 3.4). Note that \u27e8a\u2113,r \u00b7 1m, Si\u27e9= a\u2113,r, so in the above equation, Q1,\u2113,i := X r\u2208[m] \u27e8a\u2113,r \u00b7 1m \u2212a\u2113, Si\u27e9\u00b7 Si,r \u00b7 (\u2212\u03b7 n X j=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u0010 (\u27e8a\u21132,r \u00b7 1m \u2212a\u21132, Sj\u27e9) \u00b7 Sj,r \u0011 \u00b7 x\u22a4 j )xi Q2,\u2113,i := X r\u2208[m] \u27e8a\u2113, Si\u27e9\u00b7 Si,r \u00b7 (\u2212\u03b7 n X j=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u0010 (\u27e8a\u21132,r \u00b7 1m \u2212a\u21132, Sj\u27e9) \u00b7 Sj,r \u0011 \u00b7 x\u22a4 j )xi The quantity P i\u2208[n] P \u2113\u2208[d] Q1,\u2113,i(F\u2113,i \u2212Y\u2113,i) is corresponding to \ufb01rst term (Q1,\u2113,i) in Eq. (16). It is X i\u2208[n] X \u2113\u2208[d] Q1,\u2113,i(F\u2113,i \u2212Y\u2113,i) = \u22121 m\u03b7 vec(F(\u03c4) \u2212Y )\u22a4H(\u03c4)\u22a4vec(F(\u03c4) \u2212Y ) (17) The quantity P i\u2208[n] P \u2113\u2208[d] Q2,\u2113,i(F\u2113,i \u2212Y\u2113,i) is corresponding to second term (Q2,\u2113,i) in Eq. (16). Note that 0 < Sj,r < exp(3B) m by Part 11 of Lemma B.1. The quantity, X i\u2208[n] X \u2113\u2208[d] Q2,\u2113,i(F\u2113,i \u2212Y\u2113,i) (18) is equivalent to \u2212\u03b7exp(6B) m2 \u00b7 n X i=1 n X j=1 m X r=1 d X \u2113=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u03c3i,j,r,\u2113,\u21132 \u00b7 Ci,j,r,\u2113,\u21132 \u00b7 (F\u2113,i(\u03c4) \u2212y\u2113,i), where \u03c3i,j,r,\u2113,\u21132 \u2208{\u22121, +1} and |Ci,j,r,\u2113,\u21132| \u226410. 
Note that there are four cases \u2022 i = j, \u2113= \u21132, this is a p.s.d. case that always makes progress, thus we can drop it. \u2022 i \u0338= j, \u2113= \u21132 we will use random variable P1 to handle \u2022 i = j, \u2113\u0338= \u21132 we will use random variable P2 to handle \u2022 i \u0338= j, \u2113\u0338= \u21132 we will use random variable P2 to handle 30 \fFor each \ufb01xed i, j \u2208[n]. We de\ufb01ne P1,r,\u2113:= (F\u2113,j \u2212y\u2113,j)\u03c3i,j,r,\u2113Ci,j,r,\u2113(F\u2113,i \u2212y\u2113,i) P2,r,\u2113,\u21132 := (F\u21132,j \u2212y\u21132,j)\u03c3i,j,r,\u2113,\u21132Ci,j,r,\u2113,\u21132(F\u2113,i \u2212y\u2113,i) The random variables related to P1,r,\u2113are the following m X r=1 d X \u2113=1 P1,r,\u2113 The random variables related to P2,r,\u2113,\u21132 are the following m X r=1 d X \u2113=1 d X \u21132=1 P2,r,\u2113,\u21132 For each i \u0338= j \u2208[n] and \u2113= \u21132, using Hoe\ufb00ding inequality (see Lemma A.9), we can show Pr[| m X r=1 d X \u2113=1 P1,r,\u2113| \u2264100\u2225Fj \u2212yj\u22252\u2225Fi \u2212yi\u22252 \u00b7 p md log(nd/\u03b4)] \u22651 \u2212\u03b4/ poly(nd). Similarly, we consider i = j and \u2113\u0338= \u21132 by Hanson-Wright inequality (Lemma A.10), we have Pr[| m X r=1 d X \u2113=1 d X \u21132=1 P2,r,\u2113,\u21132| \u2264100\u2225Fj \u2212yj\u22252\u2225Fi \u2212yi\u22252 \u00b7 \u221a md log(nd/\u03b4)] \u22651 \u2212\u03b4/ poly(nd). By mean inequality, we have n X i=1 n X j=1 \u2225Fj \u2212yj\u22252 \u00b7 \u2225Fi \u2212yi\u22252 \u2264n\u2225F \u2212y\u22252 F . Note that by Lemma condition, we have 1 m\u03bb \u2273n exp(6B) m2 \u00b7 \u221a md log(nd/\u03b4) \u21d0 \u21d2m \u2273\u03bb\u22122, the equation (Eq. (17) and the bound for Eq. (18)) above indicates that \u27e8vec(Y \u2212F(\u03c4)), vec(v1)\u27e9 can be expressed as vec(v1)\u22a4vec(Y \u2212F(\u03c4)) \u22650.8m\u03b7 \u00b7 vec(F(\u03c4) \u2212Y )\u22a4 | {z } 1\u00d7nd H(\u03c4)\u22a4 | {z } nd\u00d7nd vec(F(\u03c4) \u2212Y ). (19) We \ufb01nish the proof. Claim E.5. If the below conditions are true \u2022 Let B \u22651 be de\ufb01ned as De\ufb01nition 4.1 \u2022 Let \u03bb = \u03bbmin(H\u2217) > 0 31 \f\u2022 C1 = \u2212m\u03b7 vec(F(\u03c4) \u2212Y )\u22a4H(\u03c4) vec(F(\u03c4) \u2212Y ). \u2022 R = \u03bb/(2nd exp(10B)) Then, we have C1 \u2264\u22121 2m\u03b7\u03bb \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F and \u03bbmin(H(\u03c4)) \u2265\u03bb/2. holds with probability at least 1 \u2212\u03b4. Proof. By Lemma 5.1, with probability at least 1 \u2212\u03b4, we have \u2225H\u2217\u2212H(\u03c4)\u2225F \u2264Rnd \u00b7 exp(10B) \u2264\u03bb/2 (20) where the \ufb01rst step follows from the de\ufb01nition of H(\u03c4), the last step comes from choice of \u03bb (see Claim Statement). Given that \u03bb = \u03bbmin(H\u2217), by eigenvalue perturbation theory \u03bbmin(H(\u03c4)) \u2265\u03bbmin(H\u2217) \u2212\u2225H\u2217\u2212H(\u03c4)\u2225 \u2265\u03bbmin(H\u2217) \u2212\u2225H\u2217\u2212H(\u03c4)\u2225F \u2265\u03bbmin(H\u2217) \u2212\u03bb/2 \u2265\u03bb/2. where the \ufb01rst step comes from triangle inequality, the second step is due to Frobenius norm, the third step is due to Eq.(20), the last step follows from \u03bbmin(H\u2217) = \u03bb. Finally, we have vec(F(\u03c4) \u2212Y )\u22a4H(\u03c4) vec(F(\u03c4) \u2212Y ) \u2265\u03bb/2 \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F . Thus, we complete the proof. E.5 Bounding C2 Here, we give the bound of the second order term C2. Claim E.6. 
If the below conditions are true \u2022 Let \u03bb = \u03bbmin(H\u2217) \u2022 Let \u03b1i(\u03c4) := \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9 \u2022 Let scalar v2,\u2113,i \u2208R be de\ufb01ned as follows v2,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 \u03b72 \u00b7 \u0398(1) \u00b7 \u27e8\u2206wr(\u03c4), xi\u27e92 32 \f\u2022 C2 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v2)\u27e9 Then we can conclude that C2 \u22642m\u03b72n2d2 exp(9B)\u2225F(\u03c4) \u2212Y \u22252 F . with probability at least 1 \u2212n \u00b7 exp(\u2212mR). Proof. Let pi,r \u2208[\u22121, 1]. We have |v2,\u2113,i| = m X r\u2208[m] a\u2113,r \u00b7 Si,r \u00b7 (\u03b72pi,r\u27e8xi, \u2206wr(\u03c4)\u27e92) \u2264m\u03b72nd exp(9B)\u2225F(\u03c4) \u2212Y \u22252 F , where the last step follows Lemma D.1 and Part 11 of Lemma B.1. Thus, C2 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v2)\u27e9 \u22642\u2225F(\u03c4) \u2212Y \u2225F \u2225v2\u2225F \u22642m\u03b72n2d2 exp(9B)\u2225F(\u03c4) \u2212Y \u22252 F , where the \ufb01rst step follows Cauchy-Schwartz inequality, and the second step follows \u2225F(\u03c4)\u2212Y \u2225F \u2264 O( \u221a nd) by induction statement (See Lemma C.3). E.6 Bounding \u2225F(\u03c4 + 1) \u2212F(\u03c4)\u22252 F Here, we give the bound of the third order term C3. Claim E.7. If the below conditions are true \u2022 Let B \u22651 be de\ufb01ned as De\ufb01nition 4.1 \u2022 C3 = \u2225F(\u03c4 + 1) \u2212F(\u03c4)\u22252 F . \u2022 R \u2208(0, 0.01) \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01, \u2200r \u2208[m], \u2200i \u2208[\u03c4] Then with probability at least 1 \u2212\u03b4, we have C3 \u2264\u03b72m2 \u00b7 n2d2 \u00b7 exp(16B) \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F . Proof. Note that we denote \u03b1i as \u27e81m, exp(W \u22a4xi)\u27e9. According to de\ufb01nition of F\u2113,i(\u03c4), we have F\u2113,i(\u03c4 + 1) \u2212F\u2113,i(\u03c4) = ma\u22a4 \u2113( + \u03b1i(\u03c4 + 1)\u22121 exp((W(\u03c4 + 1)\u22a4xi) \u2212\u03b1i(\u03c4)\u22121 exp((W(\u03c4 + 1)\u22a4xi) + \u03b1i(\u03c4)\u22121 exp((W(\u03c4 + 1)\u22a4xi) \u2212\u03b1i(\u03c4)\u22121 exp((W(\u03c4)\u22a4xi) ) 33 \fThen we have |F\u2113,i(\u03c4 + 1) \u2212F\u2113,i(\u03c4)| (21) \u2264m m X r=1 |\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121| exp(wr(\u03c4 + 1)\u22a4xi) + m m X r=1 \u03b1i(\u03c4)\u22121 exp(wr(\u03c4)\u22a4xi) \u00b7 | exp(\u2212\u03b7\u2206wr(\u03c4)\u22a4xi) \u22121| where it follows from triangle inequality. For the second term in Eq. (21), we have m m X r=1 \u03b1i(\u03c4)\u22121 exp(wr(\u03c4)\u22a4xi) \u00b7 | exp(\u2212\u03b7\u2206wr(\u03c4)\u22a4xi) \u22121| \u2264exp(B + R) exp(B + R) m X r=1 | exp(\u2212\u03b7\u2206wr(\u03c4)\u22a4xi) \u22121| \u2264exp(2B + 2R) m X r=1 2\u03b7\u2225\u2206wr(\u03c4)\u22252 = 2\u03b7 exp(2B + 2R) m X r=1 \u2225\u2206wr(\u03c4)\u22252 \u22642\u03b7 exp(2B + 2R) \u00b7 m \u00b7 exp(3B) \u221a nd\u2225F(\u03c4) \u2212Y \u2225F \u2264\u03b7m exp(6B) \u221a nd\u2225F(\u03c4) \u2212Y \u2225F where the \ufb01rst step comes from Lemma B.1, the second step is due to \u03b7\u2225\u2206wr(\u03c4)\u22252 \u22640.01 (this is stated in Claim assumption) and Fact A.8, the third step is from simple algebra, the fourth step is due to Lemma D.1, the last step follows from simple algebra. Similarly, for the \ufb01rst term in Eq. 
(21) we have m m X r=1 |\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121| exp(wr(\u03c4 + 1)\u22a4xi) \u2264m2 exp(B + R)|\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121| \u2264m exp(B + R)|\u03b7\u2206wr(\u03c4)\u22a4xi| exp(3B + 2R) \u2264\u03b7m exp(4B + 3R)\u2225\u2206wr(\u03c4)\u22252 \u2264\u03b7m exp(7B + 3R) \u221a nd\u2225F(\u03c4) \u2212Y \u2225F where the \ufb01rst step follows from Part 5 of Lemma B.1, the second step follows from Part 9 of Lemma B.1 where R = |\u03b7\u2206wr(\u03c4)\u22a4xi|, the third step follows from simple algebra, and the last step follows from Lemma D.1. Thus we have |F\u2113,i(\u03c4 + 1) \u2212F\u2113,i(\u03c4)| \u2264\u03b7m exp(8B) \u221a nd\u2225F(\u03c4) \u2212Y \u2225F . (22) Finally, we get \u2225F(\u03c4 + 1) \u2212F(\u03c4)\u22252 F \u2264nd \u00b7 (\u03b7m exp(8B) \u221a nd\u2225F(\u03c4) \u2212Y \u2225F )2 \u2264\u03b72m2 \u00b7 n2d2 \u00b7 exp(16B) \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F where the \ufb01rst step is because of Eq. (22), the last step comes from simple algebra. 34 \fF NTK Regression In this section, we introduce the NTK regression, as we will show that the neural network is \u201cequivalent\u201d to this regression so that we can give a \ufb01nal guarantee on the test data. To clarify the function, we use Fnn to denote F as a neural network function. We use xte \u2208Rd to denote the test data. We would like to control the error between the neural network Fnn and the function Fntk. For convenience, we call this error \u201ccoupling error\u201d, which is the di\ufb00erence between the trained neural network and its corresponding NTK regression. Recall that, by De\ufb01nition 3.6, we have the H\u2217= H(W(0)). Recall [H\u2217]i,j \u2208Rd\u00d7d is the kernel between xi and xj. Similarly, \u2200\u21131, \u21132 \u2208[d], for test data, we can de\ufb01ne the NTK induced feature map as [K\u2217 \u21131,\u21132]te,j := 1 mx\u22a4 texj m X r=1 \u27e8v\u21131,r, Ste(0)\u27e9\u00b7 mSte,r(0) \u00b7 \u27e8v\u21132,r, Sj(0)\u27e9\u00b7 mSj,r(0) [K(\u03c4)\u21131,\u21132]te,j := 1 mx\u22a4 texj m X r=1 \u27e8v\u21131,r, Ste(\u03c4)\u27e9\u00b7 mSte,r(\u03c4) \u00b7 \u27e8v\u21132,r, Sj(\u03c4)\u27e9\u00b7 mSj,r(\u03c4), where K\u2217 te, Kte(\u03c4) \u2208Rd\u00d7nd. Similarly, we have K\u2217 i = [H\u2217]i \u2208Rd\u00d7nd, Ki(\u03c4) = [H(\u03c4)]i \u2208Rd\u00d7nd for training data xi. Then, we de\ufb01ne the kernel regression predictor. De\ufb01nition F.1 (NTK regression predictor). We de\ufb01ne NTK regression predictor as Fntk(\u03b3(\u03c4), xte) :=mK\u2217 te\u03b3(\u03c4), (23) where \u03b3(\u03c4) \u2208Rnd is the parameter at timestamp \u03c4. Recall that we have a training dataset Dn = {(xi, yi)}n i=1. Then, we denote the corresponding objective function for Fntk as Lntk(\u03b3(\u03c4)) = 1 2 n X i=1 \u2225Fntk(\u03b3(\u03c4), xi) \u2212yi\u22252 2. (24) Thus, based on Eq. (24), the gradient desent (GD) updating rule of \u03b3(\u03c4) is given by \u03b3(\u03c4 + 1) | {z } nd\u00d71 = \u03b3(\u03c4) |{z} nd\u00d71 \u2212\u03b7 \u00b7 (m H\u2217 |{z} nd\u00d7nd \u03b3(\u03c4) |{z} nd\u00d71 \u2212vec(Y ) | {z } nd\u00d71 ), \u03b3(0) = 0nd, (25) where the Eq. (25) is according to \u03b3(\u03c4 + 1) = \u03b3(\u03c4) \u2212\u03b7\u2207\u03b3Lntk(\u03b3(\u03c4)). F.1 Equivalence between Trained Net and Kernel Regression We provide a stronger bound between Fntk and Fnn result compared to Lemma F.1 in [ADH+19b]. 
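Before stating that bound, it may help to see the kernel-regression dynamics of Eq. (25) in isolation. The sketch below is only illustrative: it uses a small synthetic positive semidefinite matrix in place of the NTK Gram matrix H* and a synthetic label vector in place of vec(Y), and simply iterates the update to confirm that m H* gamma(tau) converges to vec(Y).

```python
import numpy as np

# Minimal sketch of the kernel-regression GD update in Eq. (25).
# H_star stands in for the nd x nd NTK Gram matrix H*; it is a synthetic
# positive semidefinite matrix here, and y stands in for vec(Y).
# The names (nd, m, eta) mirror the paper's notation, but the values are toy.
rng = np.random.default_rng(0)
nd, m, eta = 20, 64, 1e-3                  # nd = n*d flattened output size
A = rng.standard_normal((nd, nd))
H_star = A @ A.T / nd + 0.1 * np.eye(nd)   # PSD stand-in for H*
y = rng.standard_normal(nd)                # stand-in for vec(Y)

gamma = np.zeros(nd)                       # gamma(0) = 0_{nd}
for tau in range(2000):
    # Eq. (25): gamma(tau+1) = gamma(tau) - eta * (m * H* gamma(tau) - vec(Y))
    gamma -= eta * (m * H_star @ gamma - y)

# The training prediction m * H* gamma should approach vec(Y) at a linear rate.
residual = np.linalg.norm(m * H_star @ gamma - y)
print(f"training residual after GD: {residual:.3e}")
```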
Our following statement is stronger in the two following senses: their result only holds when t \u2192\u221e, and our result holds for all t \u2208[0, \u221e); also their result only works for 1 dimension output space, our result holds arbitrary d dimensional output space. Theorem F.2 (Kernel value perturbation \u21d2prediction perturbation). Fix \u01ebH \u22641 2\u03bb. If for all \u03c4 \u22650, \u2225K\u2217 \u2113,te \u2212K\u2113,te(\u03c4)\u2225F \u2264\u01eb\u2113,test and \u2225H\u2217\u2212H(\u03c4)\u2225F \u2264\u01ebH, then for any xte \u2208Rd, \u2113\u2208[d] and \u03c4 \u22650, we have |Fntk(\u03b3(\u03c4), xte)\u2113\u2212Fnn(W(\u03c4), xte)\u2113| \u2264O \u221a nd \u03bb \u01eb\u2113,test + \u221a nd \u03bb2 log2 \u0012 nd \u01ebHm\u03bb \u0013 \u01ebH ! . 35 \fProof of Theorem F.2. Our proof relies on a careful analysis of the trajectories induced by gradient \ufb02ow for optimizing the neural network predictor Fnn and the NTK predictor Fntk. Then, we can have a similar argument to gradient descent at any timestamp \u03c4. Recall that for any xte, xi \u2208Rd, we have K\u2217 te, K\u2217 i \u2208Rd\u00d7nd be the feature map induced by NTK. For any x \u2208Rd, we de\ufb01ne \u03c6(x) \u2208Rd\u00d7d as following, for any \u2113\u2208[d], \u03c6(x)\u2113= 1 \u221amx m X r=1 \u27e8v\u2113,r, S(0)\u27e9\u00b7 mSr(0). We denote \u03c6(X) \u2208Rd\u00d7nd as the stack of feature map of X \u2208Rd\u00d7n. Note the optimal solution in Eq. (23) can be rewritten as min \u03b3 \u2225\u03b3\u22252 such that mK\u2217 i \u03b3 = yi for i = 1, . . . , n. We have the optimal solution for kernel regression is \u03b3\u2217:= m\u22121(H\u2217)\u22121 vec(Y ) and its corresponding prediction for xte will be Fntk(\u03b3(\u03c4), xte) = K\u2217 te(H\u2217)\u22121 vec(Y ). The solution to this program can be rewritten as applying gradient \ufb02ow on the min \u03b2 n X i=1 \u2225\u221am\u03c6(xi)\u22a4\u03b2 \u2212yi\u22252 2 with initialization \u03b2(0) = 0d. We use \u03b2(\u03c4) to denote this parameter at timestamp \u03c4 trained by gradient \ufb02ow. We denote Fntk2(\u03b2(\u03c4), xte) := \u221am\u03c6(xte)\u22a4\u03b2(\u03c4) where Fntk2(\u03b2(\u03c4), xte) be the predictor for xte at time \u03c4. Then we have Fntk2(\u03b2(\u03c4), xte) = \u221am \u03c6(xte)\u22a4 | {z } Rd\u00d7d \u03b2(\u03c4) |{z} Rd = \u221am \u03c6(xte)\u22a4 | {z } Rd\u00d7d (\u221am \u03c6(X) | {z } Rd\u00d7nd ) \u03b3(\u03c4) |{z} Rnd = m K\u2217 te |{z} Rd\u00d7nd \u03b3(\u03c4) = Fntk(\u03b3(\u03c4), xte) where the second step follows \u03b2(\u03c4) = \u221am\u03c6(X)\u03b3(\u03c4) the third step follows K\u2217 te = \u03c6(xte)\u22a4\u03c6(X). With these notations, as \u03c4 goes to in\ufb01nity, we denote, for any \u2113\u2208[d], Fntk2(xte)\u2113= Z \u221e \u03c4=0 dFntk2(\u03b2(\u03c4), xte)\u2113 d\u03c4 d\u03c4 where we have used the fact that the initial prediction is 0 as \u03b2(0) = 0d. Similarly for Fnn(xte)\u2113. Let Fntk2,i(\u03c4) = Fntk2(\u03b2(\u03c4), xi) and Fntk2(\u03c4) \u2208Rd\u00d7n. Similarly, for the NN predictor Fnn. 
Now we take a closer look at the time derivative: dFntk2(\u03b2(\u03c4), xte)\u2113 d\u03c4 = \u001c\u2202Fntk2(\u03b2(\u03c4), xte)\u2113 \u2202\u03b2(\u03c4) , d\u03b2(\u03c4) d\u03c4 \u001d 36 \f= \u001c\u2202Fntk2(\u03b2(\u03c4), xte)\u2113 \u2202\u03b2(\u03c4) , \u2212\u2202L(\u03b2(\u03c4), {xi}n i=1) \u2202\u03b2(\u03c4) \u001d = \u2212 * \u2202Fntk2(\u03b2(\u03c4), xte)\u2113 \u2202\u03b2(\u03c4) , n X i=1 d X \u21132=1 (Fntk2,i,\u21132(\u03c4) \u2212yi,\u21132) \u2202Fntk2(\u03b2(\u03c4), xi)\u21132 \u2202\u03b2(\u03c4) + = \u2212m * \u03c6(xte)\u2113, n X i=1 d X \u21132=1 (Fntk2,i,\u21132(\u03c4) \u2212yi,\u21132)\u03c6(xi)\u21132 + = \u2212m vec(K\u2217 \u2113,te)\u22a4vec(Fntk2(\u03c4) \u2212Y ) (26) where the \ufb01rst step follows from simple algebra, the second step follows from ODE formulation (we remark that this is a very standard step in all the NTK literature), the third step follows from Eq. (24), the fourth step follows from the de\ufb01nition of \u03c6(xte)\u2113, the last step follows from simple algebra. We can obtain a time derivative of the same form for Fnn. dFnn(W(\u03c4), xte)\u2113 d\u03c4 = \u001c\u2202Fnn(W(\u03c4), xte)\u2113 \u2202W(\u03c4) , dW(\u03c4) d\u03c4 \u001d = \u001c\u2202Fnn(W(\u03c4), xte)\u2113 \u2202W(\u03c4) , \u2212\u2202L(W(\u03c4), {xi}n i=1) \u2202W(\u03c4) \u001d = \u2212 * \u2202Fnn(W(\u03c4), xte)\u2113 \u2202W(\u03c4) , n X i=1 d X \u21132=1 (Fnn,i,\u21132(\u03c4) \u2212yi,\u21132)\u2202Fnn(W(\u03c4), xi)\u21132 \u2202W(\u03c4) + = \u2212m vec(K\u2113,te(\u03c4))\u22a4vec(Fnn(\u03c4) \u2212Y ) (27) where the \ufb01rst step follows from simple algebra, the second step is standard in NTK literature, the third step follows from Eq. (24), the last step follows from simple algebra. Thus we analyze the di\ufb00erence between the NN predictor and NTK predictor via this integral form |Fnn(xte)\u2113\u2212Fntk2(xte)\u2113| = \f \f \f \fFnn(W(0), xte)\u2113+ Z \u221e \u03c4=0 \u0012dFnn(W(\u03c4), xte)\u2113 d\u03c4 \u2212dFntk2(\u03b2(\u03c4), xte)\u2113 d\u03c4 \u0013 d\u03c4 \f \f \f \f = |Fnn(W(0), xte)\u2113| + \f \f \f \f\u2212m Z \u221e \u03c4=0 \u0010 vec(K\u2113,te(\u03c4))\u22a4vec(Fnn(\u03c4) \u2212Y ) \u2212vec(K\u2217 \u2113,te)\u22a4vec(Fntk2(\u03c4) \u2212Y ) \u0011 d\u03c4 \f \f \f \f = \f \f \f \f\u2212m Z \u221e \u03c4=0 \u0010 vec(K\u2113,te(\u03c4))\u22a4vec(Fnn(\u03c4) \u2212Y ) \u2212vec(K\u2217 \u2113,te)\u22a4vec(Fntk2(\u03c4) \u2212Y ) \u0011 d\u03c4 \f \f \f \f \u2264m \f \f \f \f Z \u221e \u03c4=0 vec(K\u2113,te(\u03c4) \u2212K\u2217 \u2113,te)\u22a4vec(Fnn(\u03c4) \u2212Y )d\u03c4 \f \f \f \f + m \f \f \f \f Z \u221e \u03c4=0 vec(K\u2217 \u2113,te)\u22a4vec(Fnn(\u03c4) \u2212Fntk2(\u03c4))d\u03c4 \f \f \f \f \u2264m max 0\u2264t\u2264\u221e\u2225K\u2113,te(\u03c4) \u2212K\u2217 \u2113,te\u2225F Z \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Y \u2225F d\u03c4 + m max 0\u2264t\u2264\u221e\u2225K\u2217 \u2113,te\u2225F Z \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4 \u2264m\u01eb\u2113,test Z \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Y \u2225F d\u03c4 + m max 0\u2264t\u2264\u221e\u2225K\u2217 \u2113,te\u2225F Z \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4, where the \ufb01rst step follows from the di\ufb00erence between the NN predictor and NTK predictor, the second step follows from Eq. (26) and Eq. 
(27), the third step follows |Fnn(W(0), xte)\u2113| = 0 by 37 \fsymmetric initialization from De\ufb01nition 3.7, the fourth step follows from simple algebra, the \ufb01fth step follows from Frobenius norm, the last step follows from simple algebra. For the \ufb01rst term, recall \u2225H\u2217\u2212H(\u03c4)\u2225F \u2264\u01ebH and, by Claim E.5, we have \u03bbmin(H(\u03c4)) \u22651 2\u03bb. Using this fact we know \u2225Fnn(\u03c4) \u2212Y \u2225F \u2264exp(\u2212m 2 \u03bb\u03c4)\u2225Fnn(0) \u2212Y \u2225F (The reason to obtain this is due to solve ODE). Therefore, by Lemma D.3, we can bound Z \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Y \u2225F d\u03c4 = Z \u221e \u03c4=0 exp \u0010 \u2212m 2 \u03bb\u03c4 \u0011 \u2225Fnn(0) \u2212Y \u2225F d\u03c4 = O( \u221a nd m\u03bb ). To bound R \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4, we observe that Fnn(\u03c4) \u2192y and Fntk2(\u03c4) \u2192y with linear convergence rate. Therefore, we can choose some \u03c40 = C m\u03bb log \u0010 nd \u01ebH\u00b7m\u03bb \u0011 so that Z \u221e \u03c40 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4 \u2264 Z \u221e \u03c40 \u2225Fnn(\u03c4) \u2212Y \u2225F d\u03c4 + Z \u221e \u03c40 \u2225Fntk2(\u03c4) \u2212Y \u2225F d\u03c4 \u2264O \u0012 1 m\u03bb(\u2225Fnn(\u03c40) \u2212Y \u2225F + \u2225Fntk2(\u03c40) \u2212Y \u2225F ) \u0013 \u2264O \u221a nd m\u03bb exp (\u2212m\u03bb\u03c40) ! \u2264O(\u01ebH). where the \ufb01rst step follows from simple algebra, the second step follows from integral range is \u03c40, the third step follows from Lemma D.3, the last step follows from choice of \u03c40. Thus it su\ufb03ces to bound R \u03c40 \u03c4=0 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4 \u2264\u03c40 max0\u2264t\u2264\u03c40 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F . First observe that \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F \u2264\u2225Fnn(0)\u2225F + Z \u03c4 s=0 \r \r \r \r d(Fnn(s) \u2212Fntk2(s)) ds \r \r \r \r F ds = Z \u03c4 s=0 \r \r \r \r d(Fnn(s) \u2212Fntk2(s)) ds \r \r \r \r F ds, where the last step follows symmetric initialization from De\ufb01nition 3.7. Note d(Fnn(\u03c4) \u2212Fntk2(\u03c4)) d\u03c4 = \u2212mH(\u03c4) vec(Fnn(\u03c4) \u2212Y ) + mH\u2217vec(Fntk2(\u03c4) \u2212Y ) = \u2212mH\u2217vec(Fnn(\u03c4) \u2212Fntk2(\u03c4)) + m(H\u2217\u2212H(\u03c4)) vec(Fnn(\u03c4) \u2212Y ) where the \ufb01rst step follows from de\ufb01nition of Fnn and Fntk2. Since H\u2217is positive semide\ufb01nite, \u2212H\u2217vec(Fnn(\u03c4) \u2212Fntk2(\u03c4)) term only makes \u2225Fnn(\u03c4) \u2212 Fntk2(\u03c4)\u2225F smaller. Therefore, we have \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F \u2264m Z \u03c4 s=0 \u2225Fnn(s) \u2212Y \u2225F \u2225H(\u03c4) \u2212H\u2217\u2225F ds 38 \f\u2264m\u03c4\u2225Fnn(0) \u2212Y \u2225F \u01ebH \u2264O \u0010 m\u03c4 \u221a nd\u01ebH \u0011 , where the last step is by Lemma D.3. Therefore, we have Z \u03c40 \u03c4=0 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4 \u2264O \u0010 m\u03c4 2 0 \u221a nd\u01ebH \u0011 = O \u221a nd m\u03bb2 log2 \u0012 nd \u01ebHm\u03bb \u0013 \u01ebH ! . where the \ufb01rst step follows from integral range is \u03c40, the second step follows from the choice of \u03c40. Lastly, as Fntk2(xte)\u2113= Fntk(xte)\u2113, we put things together and get |Fntk(xte)\u2113\u2212Fnn(xte)\u2113| \u2264O \u221a nd \u03bb \u01eb\u2113,test + \u221a nd \u03bb2 log2 \u0012 nd \u01ebHm\u03bb \u0013 \u01ebH ! . 
From the above, after we change the integration from (0, \u221e) to (0, \u03c4), the statement still holds. Then, based on the gradient \ufb02ow version, we can have a gradient descent version with a constant error factor by replacing integral with geometric summarization (for example P\u221e i=0 ai < 2, when a \u2208(0, 0.5) ). G Di\ufb00usion In Section G.1, we provide the proof of our main result of di\ufb00usion. In Section G.2, we provide some tools from previous works. We \ufb01rst de\ufb01ne an auxiliary function e Fntk of the same functional form as Fntk, but trained on a pseudo dataset e S := {e yi, xi}n i=1 with e yi := FH(xi) + \u01ebi and \u01ebi := yi \u2212F\u2217(xi). Then, we have the following claim. Claim G.1 (Loss decomposition). We can decompose our target function as the following 1 T Z T 0 E[\u2225Fnn(W(\u03c4), (t, x(t))) \u2212F\u2217(t, x(t))\u22252 2]dt \u2264Z1 + Z2 + Z3 + Z4, where Z1 = 1 T Z T 0 E[\u2225Fnn(W(\u03c4), (t, x(t))) \u2212Fntk(\u03b3(\u03c4), (t, x(t)))\u22252 2]dt (coupling) Z2 = 1 T Z T 0 E[\u2225Fntk(\u03b3(\u03c4), (t, x(t))) \u2212e Fntk(\u03b3(\u03c4), (t, x(t)))\u22252 2]dt (label mismatch) Z3 = 1 T Z T 0 E[\u2225e Fntk(\u03b3(\u03c4), (t, x(t))) \u2212FH(t, x(t))\u22252 2]dt (early stopping) Z4 = 1 T Z T 0 E[\u2225FH(t, x(t)) \u2212F\u2217(t, x(t))\u22252 2]dt. (approximation). The coupling error term is the gap between neural networks Fnn and a kernel function Fntk. The approximation error term is the gap between the target function F\u2217and its corresponding RKHS function FH. These two terms transfer the problem of neural networks training into the problem of kernel regression. 39 \fG.1 Main Result of Di\ufb00usion In this section, we prove the main result of di\ufb00usion. Theorem G.2 (Restatement of Theorem 6.5). Suppose Assumptions 6.1, 6.2, 6.3, 6.4 hold and we set m = \u2126(\u03bb\u22122n3d3 exp(18B) log2(nd/\u03b4)) and \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)). Moreover, suppose b T satis\ufb01es Assumption G.3 with corresponding \u01eb(n, b T). Then for large enough RH, with probability at least 1 \u2212\u03b4, it holds that 1 T Z T 0 \u03bb(t)E[\u2225sW ( b T)(t, x(t)) \u2212\u2207log pt(Xt)\u22252 2]dt \u2264O \u0012 1 \u03bb\u221an + \u01eb(n, b T) + dA2(RH) + dA(RH) + p dA(RH)\u0393\u03b4 + \u0393\u03b4 \u0013 . Proof of Theorem 6.5. Note that the m and \u03b7 satisfy the conditions in Theorem 4.2. The reason about a di\ufb00erent m is that we choose a di\ufb00erent R and apply Lemma E.2 one more time. Recall the \u01eb\u2113,test and \u01ebH are de\ufb01ned in Theorem F.2. Note that H\u2217= H(0). By Lemma 5.1, Part 2, let R = \u03bb/(2n2d2 exp(10B)), we have with probability at least 1 \u2212\u03b4 such that \u2225H\u2217 |{z} nd\u00d7nd \u2212H(\u03c4) | {z } nd\u00d7nd \u2225F \u2264\u01ebH = \u03bb 2nd. Note that K\u2217 \u2113,te and K\u2113,te share the same weight perturbation as H\u2217and H(\u03c4). Thus, by using the same proof as Lemma 5.1, Part 1, we have \u2225K\u2217 \u2113,te |{z} n\u00d7d \u2212K\u2113,te |{z} n\u00d7d \u2225F \u2264\u01eb\u2113,test = \u03bb 2n1.5d1.5 . 
We have \u2225Fntk(\u03b3(\u03c4), xte) \u2212Fnn(W(\u03c4), xte)\u22252 \u2264 \u221a d max \u2113\u2208d |Fntk(\u03b3(\u03c4)\u2113, xte) \u2212Fnn(W(\u03c4), xte)\u2113| \u2264O \u0012\u221and \u03bb max \u2113\u2208[d] \u01eb\u2113,test + \u221and \u03bb2 log2 \u0012 nd \u01ebHm\u03bb \u0013 \u01ebH \u0013 \u2264O \u0012\u221and \u03bb \u03bb n1.5d1.5 + \u221and \u03bb2 log2 \u0012 nd m\u03bb \u0013 \u03bb nd \u0013 \u2264O \u0012 1 \u03bb\u221an log2 \u0012 nd m\u03bb \u0013\u0013 \u2264O \u0012 1 \u03bb\u221an \u0013 where the \ufb01rst step follows from simple algebra, the second step is by Theorem F.2. Thus, we \ufb01nish the proof by Claim G.1, where coupling is from above, label mismatch is from Theorem G.5, early stopping is from Assumption G.3 and approximation is from Theorem G.4. 40 \fG.2 Tools From Previous Works We have the following assumption and statements from previous works [HRX24]. Assumption G.3 (Assumption 3.11 in [HRX24]). Fix any FH \u2208H with \u2225FH\u22252 H \u2264RH and assume labels are generated as e yj = FH(xj) + \u01ebj. Suppose e Fntk(\u03b3( b T), \u00b7) is obtained by GD-trained kernel regression with the number of iterations b T. We assume there exists \u01eb such that 1 T Z T 0 E[ e Fntk(\u03b3( b T ), (t, x(t))) \u2212FH(t, x(t))\u22252 2]dt \u2264\u01eb(n, b T), and \u01eb(n, b T) \u21920 as n \u2192\u221e. Theorem G.4 (Theorem 3.6 in [HRX24], universal approximation of score function). Suppose Assumptions 6.1, 6.3 and 6.4 hold. Let RH be larger than a constant c1, i.e., C(d + 1, 0) in Proposition 6 of [Bac17], which depends only on d. There exists a function FH \u2208H such that \u2225FH\u22252 H \u2264dRH and 1 T Z T 0 E[\u2225FH(t, x(t)) \u2212F\u2217(t, x(t))\u22252 2]dt \u2264dA2(RH). Theorem G.5 (Theorem 3.10 in [HRX24], label mismatch). Suppose Assumptions 6.1 and 6.2 hold. If we initialize both Fntk and e Fntk properly, then with probability at least 1 \u2212\u03b4 it holds simultaneously for all \u03c4 that 1 T Z T 0 E[\u2225Fntk(\u03b3(\u03c4), (t, x(t))) \u2212e Fntk(\u03b3(\u03c4), (t, x(t)))\u22252 2]dt \u2264dA(RH) + C0( p dA(RH)\u0393\u03b4 + \u0393\u03b4) where C0 is a constant de\ufb01ned in Theorem 1 of [RK20]." + }, + { + "url": "http://arxiv.org/abs/2402.09469v2", + "title": "Fourier Circuits in Neural Networks: Unlocking the Potential of Large Language Models in Mathematical Reasoning and Modular Arithmetic", + "abstract": "In the evolving landscape of machine learning, a pivotal challenge lies in\ndeciphering the internal representations harnessed by neural networks and\nTransformers. Building on recent progress toward comprehending how networks\nexecute distinct target functions, our study embarks on an exploration of the\nunderlying reasons behind networks adopting specific computational strategies.\nWe direct our focus to the complex algebraic learning task of modular addition\ninvolving $k$ inputs. Our research presents a thorough analytical\ncharacterization of the features learned by stylized one-hidden layer neural\nnetworks and one-layer Transformers in addressing this task. A cornerstone of\nour theoretical framework is the elucidation of how the principle of margin\nmaximization shapes the features adopted by one-hidden layer neural networks.\nLet $p$ denote the modulus, $D_p$ denote the dataset of modular arithmetic with\n$k$ inputs and $m$ denote the network width. 
We demonstrate that a neuron count\nof $ m \\geq 2^{2k-2} \\cdot (p-1) $, these networks attain a maximum $ L_{2,k+1}\n$-margin on the dataset $ D_p $. Furthermore, we establish that each\nhidden-layer neuron aligns with a specific Fourier spectrum, integral to\nsolving modular addition problems. By correlating our findings with the\nempirical observations of similar studies, we contribute to a deeper\ncomprehension of the intrinsic computational mechanisms of neural networks.\nFurthermore, we observe similar computational mechanisms in the attention\nmatrix of the one-layer Transformer. This research stands as a significant\nstride in unraveling their operation complexities, particularly in the realm of\ncomplex algebraic tasks.", + "authors": "Jiuxiang Gu, Chenyang Li, Yingyu Liang, Zhenmei Shi, Zhao Song, Tianyi Zhou", + "published": "2024-02-12", + "updated": "2024-05-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction 3 2 Related Work 4 3 Problem Setup 5 3.1 Data and Network Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 3.2 Margins of the Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 3.3 Connection between Training and the Maximum Margin Solutions . . . . . . . . . . 7 4 Main Result 7 4.1 Technique Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 5 Experiments 9 5.1 One-hidden Layer Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 5.2 One-layer Transformer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 5.3 Grokking under Different k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 6 Discussion 11 7 Conclusion 13 A Limitations 14 B Societal Impact 14 C More Notations and Definitions 14 D Tools from Previous Work 15 D.1 Tools from Previous Work: Implying Single/Combined Neurons . . . . . . . . . . . . 15 D.2 Tools from Previous Work: Maximum Margin for Multi-Class . . . . . . . . . . . . . 15 E Class-weighted Max-margin Solution of Single Neuron 16 E.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 E.2 Transfer to Discrete Fourier Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 E.3 Get Solution Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 E.4 Transfer to Discrete Fourier Space for General k Version . . . . . . . . . . . . . . . . 21 E.5 Get Solution Set for General k Version . . . . . . . . . . . . . . . . . . . . . . . . . . 22 F Construct Max Margin Solution 25 F.1 Sum-to-product Identities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 F.2 Constructions for \u03b8\u2217 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 F.3 Constructions for \u03b8\u2217for General k Version . . . . . . . . . . . . . . . . . . . . . . . . 30 G Check Fourier Frequencies 32 G.1 All Frequencies are Used . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 G.2 All Frequencies are Used for General k Version . . . . . . . . . . . . . . . . . . . . . 34 1 \fH Main Result 36 H.1 Main result for k = 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 H.2 Main Result for General k Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 I More Empirical Details and Results 38 I.1 Implement Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
38 I.2 One-hidden Layer Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 I.3 One-layer Transformer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 2 \f1 Introduction The field of artificial intelligence has experienced a significant transformation with the development of large language models (LLMs), particularly through the introduction of the Transformer architecture [VSP+17]. This advancement has revolutionized approaches to challenging tasks in natural language processing, notably in machine translation [PCR20, GHG+20] and text generation [LSX+22]. Consequently, models e.g., BERT [DCLT18], PaLM [CND+22], Mistral [JSM+23], Llama [TLI+23], Llama2 [TMS+23], Llama3 [AI24], Gemini [TAB+23], Gemma [TMH+24], Claude3 [Ant24], ChatGPT [Ope22], GPT4 [Ope23] and so on, have become predominant in NLP. Central to this study is the question of how these advanced models transcend mere pattern recognition to engage in what appears to be logical reasoning and problem-solving. This inquiry is not purely academic; it probes the core of \u201cunderstanding\u201d in artificial intelligence. While LLMs, such as Claude3 and GPT4, demonstrate remarkable proficiency in human-like text generation, their capability to comprehend and process mathematical logic is a topic of considerable debate. This line of investigation is crucial, given AI\u2019s potential to extend beyond text generation into deeper comprehension of complex subjects. Mathematics, often seen as the universal language, presents a uniquely challenging domain for these models [YC23]. Our research aims to determine whether Transformers with attention, noted for their NLP efficiency, can also demonstrate an intrinsic understanding of mathematical operations and reasoning. In a recent surprising study of mathematical operations learning, [PBE+22] train Transformers on small algorithmic datasets, e.g., a1 + a2 mod p and we let p be a prime number, and show the \u201cgrokking\u201d phenomenon, where models abruptly transition from bad generalization to perfect generalization after a large number of training steps. Nascent studies, such as those by [NCL+23], empirically reveal that Transformers can solve modular addition using Fourier-based circuits. They found that the Transformers trained by Stochastic Gradient Descent (SGD) not only reliably compute a1 +a2 mod p, but also that the networks consistently employ a specific geometric algorithm. This algorithm, which involves composing integer rotations around a circle, indicates an inherent comprehension of modular arithmetic within the network\u2019s architecture. The algorithm relies on this identity: for any a1, a2 and \u03b6 \u2208Zp\\{0}, the following two quantities are equivalent (a1 + a2) mod p = arg max c\u2208Zp {cos(2\u03c0\u03b6(a1 + a2 \u2212c)/p)}. [NCL+23] further show that the attention and MLP module in the Transformer imbues the neurons with Fourier circuit-like properties. To study why networks arrive at Fourier-based circuits computational strategies, [MEO+24] theoretically study one-hidden layer neural network learning on two inputs modular addition task and certify that the trained networks will execute modular addition by employing Fourier features aligning closely with the previous empirical observations. However, the question remains whether neural networks can solve more complicated mathematical problems. 
Inspired by recent developments in mechanistic interpretability [OCS+20, ENO+21, EHO+22] and the study of inductive biases [SHN+18, Var23] in neural networks, we extend our research to modular addition with more (k) inputs. (a1 + a2 + \u00b7 \u00b7 \u00b7 + ak) mod p. (1) This approach offers insights into why certain representations and solutions emerge from neural network training. By integrating these insights with our empirical findings, we aim to provide a comprehensive understanding of neural networks\u2019 learning mechanisms, especially in solving the 3 \fmodular addition problem. We also determine the necessary number of neurons for the network to learn this Fourier method for modular addition. Our paper\u2019s contributions are summarized as follows: \u2022 Expansion of Input for Modular Addition Problem: We extend the input parameter range for the modular addition problem from a binary set to k-element sets. \u2022 Network\u2019s Maximum Margin: For p-modular addition of k inputs, we give the closed form of the maximum margin of a network (Lemma 4.2): \u03b3\u2217= 2(k!) (2k + 2)(k+1)/2(p \u22121)p(k\u22121)/2 . \u2022 Neuron Count in One-Hidden-Layer Networks: We propose that in a general case, a one-hidden-layer network having m \u226522k\u22122 \u00b7 (p \u22121) neurons can achieve the maximum L2,k+1-margin solution. This ensures the network\u2019s capability to effectively solve the modular addition problem in a Fourier-based method (Theorem 4.1). \u2022 Empirical Validation of Theoretical Findings: We validate our theoretical finding that when m \u226522k\u22122 \u00b7 (p \u22121), for each spectrum \u03b6 \u2208{1, . . . , p\u22121 2 }, there exists a hidden-neuron utilizes this spectrum, strong empirical support of our analysis. (Figure 1 and Figure 2). \u2022 Similar Findings in Transformer: We have a similar observation in one-layer Transformer learning modular addition involving k inputs. For the 2-dimensional matrix WKWQ, where WK, WQ denotes the key and query matrix, it shows the superposition of two cosine waveforms in each dimension, each characterized by distinct frequencies (Figure 3). \u2022 Grokking under Different k: We observe that as k increases, the grokking phenomenon becomes weaker, as predicted by our analysis (Figure 4). 2 Related Work Max Margin Solutions in Neural Networks. [BBG22] demonstrated that neurons in a onehidden-layer ReLU network align with clauses in max margin solutions for read-once DNFs, employing a unique proof technique involving the construction of perturbed networks. [MEO+24] utilize max-min duality to certify maximum-margin solutions. Further, extensive research in the domain of margin maximization in neural networks, including works by [GLSS18b, SHN+18, GLSS18a, WLLM19a, LL19, JT19, MWG+20, CB20, JT20, LLWA21, FVB+22, FVBS23, SMF+23, GLL+24a] and more, has highlighted the implicit bias towards margin maximization inherent in neural network optimization. They provide a foundational understanding of the dynamics of neural networks and their inclination towards maximizing margins under various conditions and architectures. Algebraic Tasks Learning Mechanism Interpretability. The study of neural networks trained on algebraic tasks has been pivotal in shedding light on their training dynamics and inductive biases. 
Notable contributions include the work of [PBE+22, Gro23, QB23] on modular addition and subsequent follow-up studies, investigations into learning parities [DM20, BEG+22, SWL22, SWXL23, SWL23, SCL+23, ZTB+23, XSL24], and research into algorithmic reasoning capabilities [SGHK18, HBK+21, LAD+22, MBAB22, DLS22, CCN23, SYFB23, NLW23, ZLTA23, THGN23, HLV23]. The field of mechanistic interpretability, focusing on the analysis of internal representations in neural networks, has also seen significant advancements through the works of [CGC+20, OEN+22, MTS23, RSR23, VSK+23] and others. 4 \fGrokking and Emergent Ability. The phenomenon known as \u201cgrokking\u201d was initially identified by [PBE+22] and is believed to be a way of studying the emerging abilities of LLM [WTB+22]. This research observed a unique trend in two-layer transformer models engaged in algorithmic tasks, where there was a significant increase in test accuracy, surprisingly occurring well after these models had reached perfect accuracy in their training phase. In [Mil22], it was hypothesized that this might be the result of the SGD process that resembles a random path along what is termed the optimal manifold. Adding to this, [NCL+23] aligns with the findings of [Bel22], indicating a steady advancement of networks towards algorithms that are better at generalization. [LKN+22, XWF+24, LJL+24] developed smaller-scale examples of grokking and utilized these to map out phase diagrams, delineating multiple distinct learning stages. Furthermore, [TLZ+22, MSAM23] suggested the possibility of grokking occurring naturally, even in the absence of explicit regularization. They attributed this to an optimization quirk they termed the slingshot mechanism, which might inadvertently act as a regularizing factor. Theoretical Work About Fourier Transform. To calculate Fourier transform there are two main methodologies: one uses carefully chosen samples through hashing functions (referenced in works like [GMS05, HIKP12a, HIKP12b, IKP14, IK14, Kap16, Kap17]) to achieve sublinear sample complexity and running time, while the other uses random samples (as discussed in [CT06, RV08, Bou14, HR17, NSW19]) with sublinear sample complexity but nearly linear running time. There are many other works studying Fourier transform [PS15, Moi15, Son19, JLS23, GSS22, LSZ19, CLS20, SSWZ22, CKPS16, SSWZ23, CSS+23, SYYZ23, GLL+24b]. 3 Problem Setup 3.1 Data and Network Setup Data. Following [MEO+24], let Zp = [p] denote the modular group on p integers, where p > 2 is a given prime number. The input space is X := Zk p for some integer k, and the output space is Y := Zp. Then an input data point is a = (a1, . . . , ak) with ai \u2208Zp. When clear from context, we also let xi \u2208{0, 1}p be the one-hot encoding of ai, and let x = (x1, . . . , xk) denote the input point. Network. We consider single-hidden layer neural networks with polynomial activation functions: f(\u03b8, x) := m X i=1 \u03d5(\u03b8i, x), \u03d5(\u03b8i, x) := (u\u22a4 i,1x1 + \u00b7 \u00b7 \u00b7 + u\u22a4 i,kxk)kwi, (2) where \u03b8 := {\u03b81, . . . , \u03b8m}, \u03d5(\u03b8i, x) is one neuron, and \u03b8i := {ui,1, . . . , ui,k, wi} are the parameters of the neuron, with ui,1, . . . , ui,k, wi \u2208Rp. We use polynomial activation functions as the homogeneous requirement in Lemma 3.7 and easy sum-to-product identities calculation in Fourier analysis. 
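A direct implementation of the network in Eq. (2) is short. The NumPy sketch below uses toy sizes and a random initialization purely for illustration; the names p, k, m, U, W mirror the paper's notation, but the values are arbitrary and this is not the training setup used in the experiments.

```python
import numpy as np

# Minimal NumPy sketch of the one-hidden-layer network in Eq. (2) with
# one-hot inputs. Sizes and initialization are illustrative only.
p, k, m = 7, 3, 16
rng = np.random.default_rng(0)
U = rng.standard_normal((m, k, p))   # U[i, j] plays the role of u_{i,j} in R^p
W = rng.standard_normal((m, p))      # W[i] plays the role of w_i in R^p

def one_hot(a: int) -> np.ndarray:
    v = np.zeros(p)
    v[a] = 1.0
    return v

def f(a: tuple) -> np.ndarray:
    """Network output f(theta, x) in R^p for input a = (a_1, ..., a_k)."""
    x = [one_hot(aj) for aj in a]
    out = np.zeros(p)
    for i in range(m):
        # pre-activation u_{i,1}^T x_1 + ... + u_{i,k}^T x_k
        s = sum(U[i, j] @ x[j] for j in range(k))
        out += (s ** k) * W[i]        # degree-k polynomial activation times w_i
    return out

logits = f((1, 4, 6))
print(logits.shape, int(np.argmax(logits)))   # predicted class in Z_p
```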
Using the notation a instead of the one-hot encodings x, we can also write: f(\u03b8, a) := m X i=1 \u03d5(\u03b8i, a), \u03d5(\u03b8i, a) := (ui,1(a1) + \u00b7 \u00b7 \u00b7 + ui,k(ak))kwi, where with ui,j(aj) being the aj-th component of ui,j. We consider the parameter set: \u0398 := {\u2225\u03b8\u22252,k+1 \u22641}, where \u2225\u03b8\u22252,k+1 := ( m X i=1 \u2225\u03b8i\u2225k+1 2 ) 1 k+1 , and \u2225\u03b8i\u22252 := ( k X j=1 \u2225ui,j\u22252 2 + \u2225wi\u22252 2) 1 2 . 5 \fHere \u2225\u03b8\u22252,k+1 is the L2,k+1 matrix norm of \u03b8 (Definition C.2), and \u2225\u03b8i\u22252 is the L2 vector norm of the concatenated vector of the parameters in \u03b8i. The training objective over \u0398 is then as follows. Definition 3.1. Given a dataset Dp and the cross-entropy loss l, the regularized training objective is: L\u03bb(\u03b8) := 1 |Dp| X (x,y)\u2208Dp l(f(\u03b8, x), y) + \u03bb\u2225\u03b8\u22252,k+1. 3.2 Margins of the Neural Networks Now, we define the margin for a data point and the margin for a whole dataset. Definition 3.2. We denote g : RU \u00d7X \u00d7Y \u2192R as the margin function, where for given (x, y) \u2208Dp, g(\u03b8, x, y) := f(\u03b8, x)[y] \u2212maxy\u2032\u2208Y\\{y} f(\u03b8, x)[y\u2032]. Definition 3.3. The margin for a given dataset Dp is denoted as h : RU \u2192R where h(\u03b8) := min(x,y)\u2208Dp g(\u03b8, x, y). For parameter \u03b8, its normalized margin is denoted as h(\u03b8/\u2225\u03b8\u22252,k+1). For simplicity, we define \u03b3\u2217to be the maximum normalized margin as the following: Definition 3.4. The minimum of the regularized objective is denoted as \u03b8\u03bb \u2208arg min\u03b8\u2208RU L\u03bb(\u03b8). We define the normalized margin of \u03b8\u03bb as \u03b3\u03bb := h(\u03b8\u03bb/\u2225\u03b8\u03bb\u22252,k+1). We define the maximum normalized margin as \u03b3\u2217:= max\u03b8\u2208\u0398 h(\u03b8), where \u0398 = {\u2225\u03b8\u22252,k+1 \u22641}. Let P(Dp) denote as a set containing any distributions over Dp. Then \u03b3\u2217can be rewritten as \u03b3\u2217= max \u03b8\u2208\u0398 h(\u03b8) = max \u03b8\u2208\u0398 min (x,y)\u2208Dp g(\u03b8, x, y) = max \u03b8\u2208\u0398 min q\u2208P(Dp) E (x,y)\u223cq[g(\u03b8, x, y)], (3) where the first step follows from Definition 3.4, the second step follows from Definition 3.3 and the last step follows from the linearity of the expectation. Now, we introduce an important concept of a duality stationary pair (\u03b8\u2217, q\u2217). Definition 3.5. We define a stationary pair (\u03b8\u2217, q\u2217) when satisfying q\u2217\u2208arg min q\u2208P(Dp) E (x,y)\u223cq[g(\u03b8\u2217, x, y)], \u03b8\u2217\u2208arg min \u03b8\u2208\u0398 E (x,y)\u223cq\u2217[g(\u03b8, x, y)]. (4) This means that q\u2217is a distribution that minimizes the expected margin based on \u03b8\u2217, and simultaneously, \u03b8\u2217is a solution that maximizes the expected margin relative to q\u2217. The max-min inequality [BV04] indicates that presenting such a duality adequately proves \u03b8\u2217to be a maximum margin solution. Recall that there is a \u201cmax\u201d operation in Definition 3.2, which makes the swapping of expectation and summation infeasible, meaning that the expected network margin cannot be broken down into the expected margins of individual neurons. To tackle this problem, the class-weighted margin is proposed, whose intuition is similar to label smoothing. Let \u03c4 : Dp \u2192\u2206(Y) allocate weights to incorrect labels for every data point. 
Given (x, y) in Dp and for any y\u2032 \u2208Y, we have \u03c4(x, y)[y\u2032] \u22650 and P y\u2032\u2208Y\\{y} \u03c4(x, y)[y\u2032] = 1. Then, we denote a proxy g\u2032 as the following to solve the issue. Definition 3.6. Draw (x, y) \u2208Dp. The class-weighted margin g\u2032 is defined as g\u2032(\u03b8, x, y) := f(\u03b8, x)[y] \u2212 X y\u2032\u2208Y\\{y} \u03c4(x, y)[y\u2032]f(\u03b8, x)[y\u2032]. 6 \fWe have g\u2032 uses a weighted sum rather than max, so g(\u03b8, x, y) \u2264g\u2032(\u03b8, x, y). Following linearity of the expectation, we can get the expected class-weighted margin as E (x,y)[g\u2032(\u03b8, x, y)] = m X i=1 E (x,y)[\u03d5(\u03b8i, x)[y] \u2212 X y\u2032\u2208Y\\{y} \u03c4(x, y)[y\u2032]\u03d5(\u03b8i, x)[y\u2032]], where we can move the summation Pm i=1 out of the expectation E[]. 3.3 Connection between Training and the Maximum Margin Solutions 0 10 20 30 40 0.5 0.0 0.5 0 5 10 15 20 0 250 0 10 20 30 40 0.5 0.0 0.5 0 5 10 15 20 0 200 0 10 20 30 40 0.5 0.0 0.5 0 5 10 15 20 0 200 0 10 20 30 40 0.5 0.0 0.5 0 5 10 15 20 0 200 0 10 20 30 40 0.5 0.0 0.5 0 5 10 15 20 0 200 0 10 20 30 40 0 1 0 5 10 15 20 0 200 0 10 20 30 40 0.5 0.0 0.5 0 5 10 15 20 0 200 0 10 20 30 40 0.5 0.0 0.5 0 5 10 15 20 0 250 Figure 1: Cosine shape of the trained embeddings (hidden layer weights) and corresponding power of Fourier spectrum. The two-layer network with m = 2944 neurons is trained on k = 4-sum modp = 47 addition dataset. We even split the whole datasets (pk = 474 data points) into the training and test datasets. Every row represents a random neuron from the network. The left figure shows the final trained embeddings, with red dots indicating the true weight values, and the pale blue interpolation is achieved by identifying the function that shares the same Fourier spectrum. The right figure shows their Fourier power spectrum. The results in these figures are consistent with our analysis statements in Lemma 4.2. See Figure 5, 7 in Appendix I.2 for similar results when k is 3 or 5. We denote \u03bd as the network\u2019s homogeneity constant, where the equation f(\u03b1\u03b8, x) = \u03b1\u03bdf(\u03b8, x) holds for any x and any scalar \u03b1 > 0. Specifically, we focus on networks with homogeneous neurons that satisfy \u03d5(\u03b1\u03b8i, x) = \u03b1\u03bd\u03d5(\u03b8i, x) for any \u03b1 > 0. Note that our one-hidden layer networks (Eq. (2)) are k + 1 homogeneous. As the following Lemma states, when \u03bb is small enough during training homogeneous functions, we have the L\u03bb global optimizers\u2019 normalized margin converges to \u03b3\u2217. Lemma 3.7 ([WLLM19b], Theorem 4.1). Let f be a homogeneous function. For any norm \u2225\u00b7 \u2225, if \u03b3\u2217> 0, we have lim\u03bb\u21920 \u03b3\u03bb = \u03b3\u2217. Therefore, we can replace comprehending the global minimizes by exploring the maximum-margin solution as a surrogate, enabling us to bypass complex analyses in nonconvex optimization. Furthermore, [MEO+24] states that under the following condition, the maximum-margin solutions and class-weighted maximum-margin (g\u2032) solutions are equivalent with each other. Condition 3.8 (Condition C.1 in page 8 in [MEO+24]). We have g\u2032(\u03b8\u2217, x, y) = g(\u03b8\u2217, x, y) for all (x, y) \u2208spt(q\u2217). It means: {y\u2032 \u2208Y\\{y} : \u03c4(x, y)[y\u2032] > 0} \u2286arg max y\u2032\u2208Y\\{y} f(\u03b8\u2217, x)[y\u2032]. Thus, under these conditions, we only need to focus on the class-weighted maximummargin solutions in our following analysis. 
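To make the two margin notions concrete before moving to the main result, the snippet below computes g from Definition 3.2 and the uniformly class-weighted g' from Definition 3.6 for a single synthetic logit vector; the logits merely stand in for f(theta, x), and the check that g' upper bounds g follows from replacing the maximum with an average.

```python
import numpy as np

# Illustrative computation of the margin g (Definition 3.2) and the uniform
# class-weighted margin g' (Definition 3.6 with tau = 1/(p-1) on every
# incorrect label) for one data point. The logits vector is synthetic.
p = 7
rng = np.random.default_rng(1)
logits = rng.standard_normal(p)      # plays the role of f(theta, x) in R^p
y = 3                                # true label (a_1 + ... + a_k) mod p

wrong = [c for c in range(p) if c != y]
g = logits[y] - max(logits[c] for c in wrong)                     # Definition 3.2
g_weighted = logits[y] - sum(logits[c] for c in wrong) / (p - 1)  # Definition 3.6

# g' >= g because a weighted average of the wrong-class logits never
# exceeds their maximum.
assert g <= g_weighted + 1e-12
print(f"g = {g:.3f}, g' = {g_weighted:.3f}")
```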
4 Main Result We characterize the Fourier features to perform modular addition with k input in the one-hidden-layer neuron network. We show that every neuron 7 \fonly focus on a distinct Fourier frequency. Additionally, within the network, there is at least one neuron for each frequency. When we consider the uniform class weighting, where L\u03bb(\u03b8) is based on \u03c4(a1, . . . , ak)[c\u2032] := 1/(p \u22121) \u2200c\u2032 \u0338= a1 + \u00b7 \u00b7 \u00b7 + ak, we have the following main result: Theorem 4.1 (Main result, informal version of Theorem H.2). Let f(\u03b8, x) be the one-hidden layer networks defined in Eq (2). If m \u226522k\u22121 \u00b7 p\u22121 2 , then the max L2,k+1-margin network satisfies: \u2022 The maximum L2,k+1-margin for a given dataset Dp is: \u03b3\u2217= 2(k!) (2k + 2)(k+1)/2(p \u22121)p(k\u22121)/2 . \u2022 For each neuron \u03d5({u1, . . . , uk, w}; a1, . . . , ak), there is a constant scalar \u03b2 \u2208R and a frequency \u03b6 \u2208{1, . . . , p\u22121 2 } satisfying ui(ai) = \u03b2 \u00b7 cos(\u03b8\u2217 ui + 2\u03c0\u03b6ai/p), \u2200i \u2208[k] and w(c) = \u03b2 \u00b7 cos(\u03b8\u2217 w + 2\u03c0\u03b6c/p), where \u03b8\u2217 u1, . . . , \u03b8\u2217 uk, \u03b8\u2217 w \u2208R are some phase offsets satisfying \u03b8\u2217 u1 + \u00b7 \u00b7 \u00b7 + \u03b8\u2217 uk = \u03b8\u2217 w. \u2022 For each frequency \u03b6 \u2208{1, . . . , p\u22121 2 }, there exists one neuron using this frequency only. Theorem 4.1 tells us when the number of neurons is large enough, e.g., m \u226522k\u22121 \u00b7 p\u22121 2 (the lower bound of m may not be the tightest in our analysis), the one hidden neural network will exactly learn all Fourier frequencies/spectrum/basis to recover the modular addition operation. More specifically, each neuron will only focus on one Fourier frequency. Thus, our analysis provides a comprehensive understanding of why neural networks trained by SGD prefer to learn Fourierbased circuits. We provide the proof sketch of Theorem 4.1 in the following section. 4.1 Technique Overview In this section, we propose techniques overview of the proof for our main result. We use i to denote \u221a\u22121. Let f : Zp \u2192C. Then, for each frequency j \u2208Zp, we define f discrete Fourier transform (DFT) as b f(j) := P \u03b6\u2208Zp f(\u03b6) exp(\u22122\u03c0i \u00b7 j\u03b6/p). Let \u2126 \u2032\u2217 q be the single neuron classweighted maximum-margin solution set (formally defined in Definition E.6). We first show how we get \u2126\u2032\u2217 q . Lemma 4.2 (Informal version of Lemma E.8). If for any \u03b6 \u2208{1, . . . , p\u22121 2 }, there exists a scaling constant \u03b2 \u2208R, such that ui(ai) = \u03b2 \u00b7 cos(\u03b8\u2217 ui + 2\u03c0\u03b6ai/p) for any i \u2208[k] and w(c) = \u03b2 \u00b7 cos(\u03b8\u2217 w + 2\u03c0\u03b6c/p), where \u03b8\u2217 u1, . . . , \u03b8\u2217 uk, \u03b8\u2217 w \u2208R are some phase offsets satisfying \u03b8\u2217 u1 + \u00b7 \u00b7 \u00b7 + \u03b8\u2217 uk = \u03b8\u2217 w. Then, we have the following \u2126\u2032\u2217 q ={(u1, . . . , uk, w)}, \u03b3\u2217= 2(k!) (2k + 2)(k+1)/2(p \u22121)p(k\u22121)/2 . Proof sketch of Lemma 4.2. See formal proof in Appendix E.5. The proof establishes the maximummargin solution\u2019s sparsity in the Fourier domain through several key steps. Initially, by Lemma E.7, focus is directed to maximizing Eq. (14). For odd p, Eq. 
(14) can be reformulated with magnitudes and phases of b ui and b w (discrete Fourie transform of ui and w), leading to an equation involving cosine of their phase differences. Plancherel\u2019s theorem is then employed to translate the norm constraint to the Fourier domain. This allows for the optimization of the cosine term in the sum, effectively reducing the problem to maximizing the product of magnitudes of b ui and b w (Eq. (18)). 8 \fBy applying the inequality of arithmetic and geometric means we have an upper bound for the optimization problem. To achieve the upper bound, equal magnitudes are required for all b ui and b w at a single frequency, leading to Eq. (20). The neurons are finally expressed in the time domain, demonstrating that they assume a specific cosine form with phase offsets satisfying certain conditions. Next, we show the number of neurons required to solve the problem and the property of these neurons. We demonstrate how to use these neurons to construct the network \u03b8\u2217. Lemma 4.3 (Informal version of Lemma F.3). Let cos\u03b6(x) denote cos(2\u03c0\u03b6x/p). Then, we have the maximum L2,k+1-margin solution \u03b8\u2217will consist of 22k\u22121 \u00b7 p\u22121 2 neurons \u03b8\u2217 i \u2208\u2126 \u2032\u2217 q to simulate p\u22121 2 type of cosine computation, each cosine computation is uniquely determined a \u03b6 \u2208{1, . . . , p\u22121 2 }. In particular, for each \u03b6 the cosine computation is cos\u03b6(a1 + \u00b7 \u00b7 \u00b7 + ak \u2212c), \u2200a1, . . . , ak, c \u2208Zp. Proof sketch of Lemma 4.3. See formal proof in Appendix F.3. Our goal is to show that 22k\u22121 \u00b7 p\u22121 2 neurons \u03b8\u2217 i \u2208\u2126 \u2032\u2217 q are able to simulate p\u22121 2 type of cos computation. We have the following expansion function of cos\u03b6(x), which denotes cos(2\u03c0\u03b6x/p). cos\u03b6( k X i=1 ai) = X b\u2208{0,1}k k Y i=1 cos1\u2212bi(ai) \u00b7 sinbi(ai) \u00b7 1[ k X i=1 bi%2 = 0] \u00b7 (\u22121)1[Pk i=1 bi%4=2]. The above equation can decompose a cos(P) to some basic elements. Note that we have 2k terms in the above equation. By using the following fact in Lemma F.1, 2k \u00b7 k! \u00b7 k Y i=1 ai = X c\u2208{\u22121,+1}k (\u22121)(k\u2212Pk i=1 ci)/2( k X j=1 cjaj)k, where each term can be constructed by 2k\u22121 neurons. Therefore, we need 2k\u221212k total neurons. To simulate p\u22121 2 type of simulation, we need 22k\u22121 p\u22121 2 neurons. Then, using the Lemma D.1, we construct the network \u03b8\u2217. Finally, by using the Lemma D.2 from [MEO+24], we know that it is the maximum-margin solution. Now, we are ready to prove our main results. Proof sketch of Theorem 4.1. See formal proof in Appendix H.2. By Lemma 4.2, we get \u03b3\u2217and the single-neuron class-weighted maximum-margin solution set \u2126 \u2032\u2217 q . By satisfying Condition 3.8, we know it is used in the maximum-margin solution. By Lemma 4.3, we can construct the network \u03b8\u2217that uses neurons in \u2126 \u2032\u2217 q . By Lemma D.2, we know that it is the maximum-margin solution. Finally, by Lemma G.2, we know that all frequencies are covered. 5 Experiments In Section 5.1, we conduct simulation experiments to verify our analysis for k = 3, 4, 5. In Section 5.2, we show that the one-layer transformer learns 2-dimensional cosine functions in their attention weights. In Section 5.3, we show the grokking phenomenon under different k. Please refer to Appendix I.1 for more details about implementation. 
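The construction behind Lemma 4.3 rests on the polarization-style identity quoted in its proof sketch (the fact from Lemma F.1). As a sanity check, the short script below verifies that identity numerically for a few small k with random real inputs; it is a toy check, not part of the training pipeline.

```python
import itertools
import math
import numpy as np

# Numerical check of the identity from Lemma F.1 used in the proof sketch
# of Lemma 4.3:
#   2^k * k! * prod_i a_i
#     = sum_{c in {-1,+1}^k} (-1)^((k - sum_i c_i)/2) * (sum_j c_j a_j)^k.
rng = np.random.default_rng(2)
for k in (2, 3, 4):
    a = rng.standard_normal(k)
    lhs = 2 ** k * math.factorial(k) * np.prod(a)
    rhs = sum(
        (-1) ** ((k - sum(c)) // 2) * np.dot(c, a) ** k
        for c in itertools.product((-1, 1), repeat=k)
    )
    assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(lhs))
print("polarization identity verified for k = 2, 3, 4")
```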
9 \f0 5 10 15 20 Frequency 0 25 50 75 100 125 150 Number of neurons All frequency convecered (p=47, k=4, m=2944) (a) 0.0 0.2 0.4 0.6 0.8 1.0 Max normalized power 0.0% 10.0% 20.0% 30.0% 40.0% Number of neurons Initial distribution (p=47, k=4, m=2944) (b) 0.0 0.2 0.4 0.6 0.8 1.0 Frequency 0.0% 5.0% 10.0% 15.0% Number of neurons Max margine distribution (p=47, k=4, m=2944) (c) Figure 2: All Fourier spectrum frequencies being covered and the maximum normalized power of the embeddings (hidden layer weights). The one-hidden layer network with m = 2944 neurons is trained on k = 4-sum mod-p = 47 addition dataset. We denote b u[i] as the Fourier transform of u[i]. Let maxi |b u[i]|2/(P |b u[j]|2) be the maximum normalized power. Mapping each neuron to its maximum normalized power frequency, (a) shows the final frequency distribution of the embeddings. Similar to our construction analysis in Lemma 4.3, we have an almost uniform distribution over all frequencies. (b) shows the maximum normalized power of the neural network with random initialization. (c) shows, in frequency space, the embeddings of the final trained network are onesparse, i.e., maximum normalized power being almost 1 for all neurons. This is consistent with our max-margin analysis results in Lemma 4.3. See Figure 6 and Figure 8 in Appendix I.2 for similar results when k is 3 or 5. 5.1 One-hidden Layer Neural Network We conduct simulation experiments to verify our analysis. In Figure 1 and Figure 2, we use SGD to train a two-layer network with m = 2944 = 22k\u22122 \u00b7 (p \u22121) neurons, i.e., Eq. (2), on k = 4-sum mod-p = 47 addition dataset, i.e., Eq. (1). Figure 1 shows that the networks trained with SGD have single-frequency hidden neurons, which support our analysis in Lemma 4.2. Furthermore, Figure 2 demonstrates that the network will learn all frequencies in the Fourier spectrum which is consistent with our analysis in Lemma 4.3. Together, they verify our main results in Theorem 4.1 and show that the network trained by SGD prefers to learn Fourier-based circuits. There are more similar results when k is 3 or 5 in Appendix I.2. 5.2 One-layer Transformer We find similar results in the one-layer transformer. Recall that the m-heads attention layer can be written as W P \uf8eb \uf8ec \uf8ec \uf8ed W V \u22a4 1 E \u00b7 softmax \u0010 E\u22a4W K 1 W Q\u22a4 1 E \u0011 . . . W V \u22a4 m E \u00b7 softmax \u0010 E\u22a4W K m W Q\u22a4 m E \u0011 \uf8f6 \uf8f7 \uf8f7 \uf8f8, where E is input embedding and W P , W V , W K, W Q are projection, value, key and query matrix. We denote W KW Q\u22a4as W KQ and call it attention matrix. In Figure 3 , we train a one-layer transformer with m = 160 heads attention, i.e., above equation, on k = 4-sum mod-p = 31 addition dataset, i.e., Eq. (1). Figure 3 shows that the SGD trained onelayer transformer learns 2-dim cosine shape attention matrices, which is similar to the one-hidden layer neural networks in Figure 1. This means that the attention layer has a learning mechanism similar to neural networks in the modular arithmetic task. It prefers to learn (2-dim) Fourier-based circuits when trained by SGD. There are more similar results when k is 3 or 5 in Appendix I.3. 
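The Fourier analysis of the attention matrices in Figure 3 amounts to taking a 2-D discrete Fourier transform of W^KQ = W^K (W^Q)^T and inspecting its power spectrum. The sketch below illustrates this on a synthetic single-frequency 2-D cosine matrix that stands in for a trained W^KQ; with a trained model one would instead substitute the learned matrix restricted to the token embeddings.

```python
import numpy as np

# 2-D Fourier power spectrum of an attention-like matrix, as in Figure 3.
# W_kq here is a synthetic p x p matrix built from a single 2-D cosine,
# so the analysis should recover that frequency pair (up to conjugacy).
p, zeta1, zeta2 = 31, 4, 9
a = np.arange(p)
W_kq = np.cos(2 * np.pi * (zeta1 * a[:, None] + zeta2 * a[None, :]) / p)

power = np.abs(np.fft.fft2(W_kq)) ** 2        # 2-D power spectrum
i, j = np.unravel_index(np.argmax(power), power.shape)
# Frequencies (zeta1, zeta2) and (p - zeta1, p - zeta2) are conjugate,
# so either peak may be reported.
print("dominant frequency pair:", (i, j))      # expect (4, 9) or (27, 22)
```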
10 \f5.3 Grokking under Different k 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 20 10 0 10 20 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 0 500 1000 1500 2000 2500 3000 3500 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 0.10 0.05 0.00 0.05 0.10 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 0 1 2 3 4 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 0.10 0.05 0.00 0.05 0.10 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 0 1 2 3 4 5 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 10 5 0 5 10 0 3 6 9 12 15 18 21 24 27 0 3 6 9 12 15 18 21 24 27 0 200 400 600 800 Figure 3: 2-dimension cosine shape of the trained W KQ (attention weights) and their Fourier power spectrum. The one-layer transformer with attention heads m = 160 is trained on k = 4-sum modp = 31 addition dataset. We even split the whole datasets (pk = 314 data points) into training and test datasets. Every row represents a random attention head from the transformer. The left figure shows the final trained attention weights being an apparent 2-dim cosine shape. The right figure shows their 2-dim Fourier power spectrum. The results in the figures are consistent with Figure 1. See Figure 9 and Figure 10 in Appendix I.3 for similar results when k is 3 or 5. Following the experiments\u2019 protocol in [PBE+22], we show there is the grokking phenomenon under different k. We train two-layer transformers with m = 160 attention heads on k = 2, 3, 4, 5sum mod-p = 97, 31, 11, 5 addition dataset with 50% of the data in the training. We use different p to guarantee the dataset sizes are roughly equal to each other. Figure 4 shows that the grokking weakens as the number of k increases, which is consistent with our analysis. It implies that when the ground-truth function class becomes \u201ccomplicated\u201d, the transformers need to train more steps to fit the training datasets and the generalization tends to be better. Brilliant recent works by [LJL+24, KBGP23] argue that, during learning, the network will be first in the lazy training/NTK regime and then transfer to the rich/feature learning regime sharply, leading to a grokking phenomenon. We use the learning steps required for regime switch as a metric of grokking strength. \u201cUnderfitting\u201d in NTK but \u201coverfitting\u201d in feature learning. Note that NTK is a notorious overparameterized regime, which probably needs a much larger number of neurons than our max-margin convergence case, i.e., much larger than \u2126(22k) in Theorem 4.1. Thus, under the fixed m and increasing k, the model may easily escape the NTK regime, or there is no longer an NTK regime. Thus, we will see a weaker grokking phenomenon as the learning steps needed to transfer from the NTK regime to the feature learning regime become fewer. With increasing k, the model will have an \u201cunderfitting\u201d issue in the NTK regime, meaning the model must need feature learning to fit the task but cannot only fit the task by NTK. However, the model still has an \u201coverfitting\u201d in the feature learning regime. 6 Discussion Grokking in Large Language Models. The interpretability of grokking in large language models (LLMs) is explored in [NCL+23]. 
By examining various intermediate states within the 11 \f100 101 102 103 104 Training Steps 0.0 0.2 0.4 0.6 0.8 1.0 Acc k=2 test acc train acc 100 101 102 103 Training Steps 0.0 0.2 0.4 0.6 0.8 1.0 k=3 100 101 102 103 Training Steps 0.2 0.4 0.6 0.8 1.0 k=4 100 101 102 103 104 Training Steps 0.2 0.4 0.6 0.8 1.0 k=5 Figure 4: Grokking (models abruptly transition from bad generalization to perfect generalization after a large number of training steps) under learning modular addition involving k = 2, 3, 4, 5 inputs. We train two-layer transformers with m = 160 attention heads on k = 2, 3, 4, 5-sum modp = 97, 31, 11, 5 addition dataset with 50% of the data in the training set under AdamW [LH18] optimizer 1e-3 learning rate and 1e-3 weight decay. We use different p to guarantee the dataset sizes are roughly equal to each other. The blue curves show training accuracy and the red ones show validation accuracy. There is a grokking phenomenon in all figures. However, as k increases, the grokking phenomenon becomes weak. See explanation in Section 5.3. residual stream of the Transformer model, it is validated that the model employs Fourier features to tackle the modular addition task. However, fully comprehending how the Transformer model and LLMs perform modular addition remains challenging based on the current work, particularly from a theoretical standpoint. We contend that beginning with a simplistic model setup and achieving a thorough and theoretical understanding of how the network utilizes Fourier features to address the problem serves as a valuable starting point and it provides a theoretical understanding of the grokking phenomenon. We believe that further study on LLMs will be an interesting and important future direction. Grokking, Benign Overfitting, and Implicit Bias. Recently, [XWF+24] connects the grokking phenomenon to benign overfitting [BLLT20, CCBG22, TB23, FCB22, FVBS23]. It shows how the network undergoes a grokking period from catastrophic to benign overfitting. [LJL+24, KBGP23] uses implicit bias [SHN+18, GLSS18a, JT19, STR+20, MWG+20, CB20, LLWA21, Jac22, XSW+23, XSW+24] to explain grokking, where grokking happens if the early phase bias implies an overfitting solution while late phase bias implies a generalizable solution. The intuition from the benign overfitting and the implicit bias well align with our observation in Section 5.3. It is interesting and valuable to rigorously analyze the grokking or emergent ability under different function class complexities, e.g., Eq (1). We leave this challenge problem as a future work. Connection to Parity and SQ Hardness. If we let p = 2, then (a1 + \u00b7 \u00b7 \u00b7 + ak) mod p will degenerate to parity function, i.e., b1, . . . , bk \u2208{\u00b11} and determining Qk i=1 bi. Parity functions serve as a fundamental set of learning challenges in computational learning theory, often used to demonstrate computational obstacles [SSSS17]. In particular, (n, k)-sparse parity problem is notorious hard to learn, i.e., Statistical Query (SQ) hardness [BFJ+94]. [DM20] showed that onehidden layer networks need an \u2126(exp(k)) number of neurons or an \u2126(exp(k)) number of training steps to successfully learn it by SGD. In our work, we are studying Eq. (2), which is a more general function than parity and indeed is a learning hardness. Our Theorem 4.1 states that we need \u2126(exp(k)) number of neurons to represent the maximum-margin solution, which well aligns with existing works. 
High-Order Correlation Attention. [SHT23, AS23, AS24] state that, when k = 3, (a1 + a2 + a3) mod p is hard to capture with traditional attention. Thus, they introduce high-order attention to capture high-order correlations in the input sequence. However, in Section 5.2, we show that one-layer transformers have a strong learning ability and can successfully learn modular arithmetic tasks even when k = 5. This implies that traditional attention may be more powerful than we expect. We conjecture that the layer norm and residual connection contribute, as they are ignored by most theoretical analyses of transformer learning [JSL22, LWLC23, LLR23, TWCD23].

7 Conclusion

We study neural networks and transformers learning (a1 + ... + ak) mod p. We theoretically show that networks prefer to learn Fourier circuits. Our experiments on neural networks and transformers support our analysis. Finally, we study the grokking phenomenon under this new data setting.

Acknowledgement

Research is partially supported by the National Science Foundation (NSF) Grants 2008559-IIS, 2023239-DMS, CCF-2046710, and Air Force Grant FA9550-18-1-0166.

Appendix

Roadmap. In Section A, we discuss the potential limitations of this work. In Section B, we discuss the societal impacts of our work. In Section C, we introduce some definitions that will be used in the proofs. In Section D, we introduce some auxiliary lemmas from previous work that we need. In Sections E, F, G, and H, we provide the proofs of our lemmas and our main results. In particular, we provide two versions of the proofs: (1) k = 3 and (2) general k ≥ 3. We use the k = 3 version to illustrate our proof intuition and then extend the proof to the general k version. Finally, in Section I, we provide more experimental results and implementation details.

A Limitations

Our work makes progress in exploring how neural networks and Transformers can solve complex mathematical problems such as the modular addition operation, but the practical application scope of our conclusions is limited. On the other hand, we acknowledge that our theorem provides intuition but cannot fully explain the phenomena shown in Figure 4. Thus, we would like to introduce this more general data setting to the community so that grokking can be studied and understood in a broader way. Studying the relationship between the number of neurons and the grokking strength is interesting and important, and we leave it as future work.

B Societal Impact

Our work aims to understand the potential of large language models in mathematical reasoning and modular arithmetic. Our paper is purely theoretical and empirical in nature (a mathematics problem), and thus we foresee no immediate negative ethical impact. We propose that neural networks and transformers prefer to learn Fourier circuits when trained on modular addition involving k inputs under SGD, which may have a positive impact on the machine learning community. We hope our work will inspire effective algorithm design and promote a better understanding of the learning mechanisms of large language models.

C More Notations and Definitions

We use i to denote √−1. Let z = a + ib denote a complex number where a and b are real numbers.
Then we have z = a \u2212ib and |z| := \u221a a2 + b2. For any positive integer n, we use [n] to denote set {1, 2, \u00b7 \u00b7 \u00b7 , n}. We use E[] to denote expectation. We use Pr[] to denote probability. We use z\u22a4to denote the transpose of a vector z. Considering a vector z, we denote the \u21132 norm as \u2225z\u22252 := (Pn i=1 z2 i )1/2. We denote the \u21131 norm as \u2225z\u22251 := Pn i=1 |zi|. The number of non-zero entries in vector z is defined as \u2225z\u22250. \u2225z\u2225\u221eis defined as maxi\u2208[n] |zi|. We define the vector norm and matrix norm as the following. Definition C.1 (Lb (vector) norm). Given a vector v \u2208Rn and b \u22651, we have \u2225v\u2225b := (Pn i=1 |vi|b)1/b. Definition C.2 (La,b (matrix) norm). The La,b norm of a network with parameters \u03b8 = {\u03b8i}m i=1 is \u2225\u03b8\u2225a,b := (Pm i=1 \u2225\u03b8i\u2225b a)1/b, where \u03b8i denotes the vector of concatenated parameters for a single neuron. 14 \fWe define our regularized training objective function. Definition C.3. Let l be the cross-entropy loss. Our regularized training objective function is L\u03bb(\u03b8) := 1 |Dp| X (x,y)\u2208Dp l(f(\u03b8, x), y) + \u03bb\u2225\u03b8\u22252,k+1. Definition C.4. We define \u0398\u2217:= arg max\u03b8\u2208\u0398 h(\u03b8). Finally, let \u2126:= Rp\u00d7(k+1) denote the domain of each \u03b8i, and let \u2126\u2032 be a subset of \u2126. We say the parameter set \u03b8 = {\u03b81, . . . , \u03b8m} has directional support on \u2126\u2032, if for every i \u2208[m], either \u03b8i = 0 or there exists \u03b1i > 0 such that \u03b1i\u03b8i \u2208\u2126\u2032. D Tools from Previous Work Section D.1 states that we can use the single neuron level optimization to get the maximum-margin network. Section D.2 introduces the maximum-margin for multi-class. D.1 Tools from Previous Work: Implying Single/Combined Neurons Lemma D.1 (Lemma 5 in page 8 in [MEO+24]). If the following conditions hold \u2022 Given \u0398 := {\u03b8 : \u2225\u03b8\u2225a,b \u22641}. \u2022 Given \u0398\u2032\u2217 q := arg max\u03b8\u2208\u0398 E(x,y)\u223cq[g\u2032(\u03b8, x, y)]. \u2022 Given \u2126:= {\u03b8i : \u2225\u03b8i\u2225a \u22641}. \u2022 Given \u2126\u2032\u2217 q := arg max\u03b8i\u2208\u2126E(x,y)\u223cq[\u03c8\u2032(\u03b8, x, y)]. Then: \u2022 Let \u03b8 \u2208\u0398\u2032\u2217 q . We have \u03b8 only has directional support on \u2126\u2032\u2217 q . \u2022 Given \u03b8\u2217 1, . . . , \u03b8\u2217 m \u2208\u2126\u2032\u2217 q , we have for any set of neuron scalars where Pm i=1 \u03b1\u03bd i = 1, \u03b1i \u22650, the weights \u03b8 = {\u03b1i\u03b8\u2217 i }m i=1 is in \u0398\u2032\u2217 q . Given q\u2217, then we can get the \u03b8\u2217satisfying \u03b8\u2217\u2208arg min \u03b8\u2208\u0398 E (x,y)\u223cq\u2217[g\u2032(\u03b8, x, y)]. (5) D.2 Tools from Previous Work: Maximum Margin for Multi-Class Lemma D.2 (Lemma 6 in page 8 in [MEO+24]). If the following conditions hold \u2022 Given \u0398 = {\u03b8 : \u2225\u03b8\u2225a,b \u22641} and \u0398\u2032\u2217 q = arg max\u03b8\u2208\u0398 E(x,y)\u223cq[g\u2032(\u03b8, x, y)]. \u2022 Given \u2126= {\u03b8i : \u2225\u03b8i\u2225a \u22641} and \u2126\u2032\u2217 q = arg max\u03b8i\u2208\u2126E(x,y)\u223cq[\u03c8\u2032(\u03b8, x, y)]. \u2022 Suppose that \u2203{\u03b8\u2217, q\u2217} such that Equations (4) and (5), and 3.8 holds. 
Then, we can show: \u2022 \u03b8\u2217\u2208arg max\u03b8\u2208\u0398 g(\u03b8, x, y) \u2022 \u2200b \u03b8 \u2208arg max\u03b8\u2208\u0398 min(x,y)\u2208D g(\u03b8, x, y) the below properties hold: b \u03b8 only has directional support on \u2126\u2032\u2217 q\u2217. \u2200(x, y) \u2208spt(q\u2217), f(b \u03b8, x, y) \u2212maxy\u2032\u2208Y\\{y} f(b \u03b8, x, y\u2032) = \u03b3\u2217. 15 \fE Class-weighted Max-margin Solution of Single Neuron Section E.1 introduces some definitions. Section E.2 shows how we transfer the problem to discrete Fourier space. Section E.3 proposes the weighted margin of the single neuron. Section E.4 shows how we transfer the problem to discrete Fourier space for general k version. Section E.5 provides the solution set for general k version and the maximum weighted margin for a single neuron. E.1 Definitions Definition E.1. When k = 3, let \u03b7u1,u2,u3,w(\u03b4) := E a1,a2,a3[(u1(a1) + u2(a2) + u3(a3))3w(a1 + a2 + a3 \u2212\u03b4)]. Definition E.2. Let \u03b7 be defined in Definition E.1. When k = 3, provided the following conditions are met \u2022 We denote B as the ball that \u2225u1\u22252 + \u2225u2\u22252 + \u2225u3\u22252 + \u2225w\u22252 \u22641. We define \u2126\u2032\u2217 q = arg max u1,u2,u3,w\u2208B (\u03b7u1,u2,u3,w(0) \u2212E \u03b4\u0338=0[\u03b7u1,u2,u3,w(\u03b4)]). E.2 Transfer to Discrete Fourier Space The goal of this section is to prove the following Lemma, Lemma E.3. When k = 3, provided the following conditions are met \u2022 We denote B as the ball that \u2225u1\u22252 + \u2225u2\u22252 + \u2225u3\u22252 + \u2225w\u22252 \u22641. \u2022 We define \u2126 \u2032\u2217 q in Definition E.2. \u2022 We adopt the uniform class weighting: \u2200c\u2032 \u0338= a1 + a2 + a3, \u03c4(a1, a2, a3)[c\u2032] := 1/(p \u22121). We have the following \u2126\u2032\u2217 q = arg max u1,u2,u3,,w\u2208B 6 (p \u22121)p3 X j\u0338=0 b u1(j)b u2(j)b u3(j) b w(\u2212j). Proof. We have \u03b7u1,u2,u3,w(\u03b4) = E a1,a2,a3[(u1(a1) + u2(a2) + u3(a3))3w(a1 + a2 + a3 \u2212\u03b4)] = E a1,a2,a3[(u1(a1)3 + 3u1(a1)2u2(a2) + 3u1(a1)2u3(a3) + 3u1(a1)u2(a2)2 + 6u1(a1)u2(a2)u3(a3) + 3u1(a1)u3(a3)2 + u2(a2)3 + 3u2(a2)2u3(a3) + 3u2(a2)u3(a3)2 + u3(a3)3)w(a1 + a2 + a3 \u2212\u03b4)]. Recall B is defined as Lemma Statement. The goal is to solve the following mean margin maximization problem: arg max u1,u2,u3,w\u2208B (\u03b7u1,u2,u3,w(0) \u2212E \u03b4\u0338=0[\u03b7u1,u2,u3,w(\u03b4)]) 16 \f= p p \u22121(\u03b7u1,u2,u3,w(0) \u2212E \u03b4 [\u03b7u1,u2,u3,w(\u03b4)]), (6) where the equation follows \u03c4(a1, a2, a3)[c\u2032] := 1/(p \u22121) \u2200c\u2032 \u0338= a1 + a2 + a3 and 1 \u2212 1 p\u22121 = p p\u22121. First, note that E a1,a2,a3[u1(a1)3w(a1 + a2 + a3 \u2212\u03b4)] = E a1[u1(a1)3 E a2,a3[w(a1 + a2 + a3 \u2212\u03b4)]] = 0, where the first step follows from taking out the u1(a1) from the expectation for a2, a3, and the last step is from the definition of w. Similarly for the u2(a2)3,u3(a3)3 components of \u03b7, they equal to 0. Note that E a1,a2,a3[u1(a1)2u2(a2)w(a1 + a2 + a3 \u2212\u03b4)] = E a1[u1(a1)2 E a2[u2(a2) E a3[w(a1 + a2 + a3 \u2212\u03b4)]]] = 0, where the first step follows from simple algebra and the last step comes from the definition of w. Similarly for the u1(a1)2u3(a3), u2(a2)2u1(a1), u2(a2)2u3(a3), u3(a3)2u1(a1), u3(a3)2u2(a2) components of \u03b7, they equal to 0. Hence, we can rewrite Eq. 
(6) as arg max u1,u2,u3,w\u2208B 6p p \u22121(e \u03b7u1,u2,u3,w(0) \u2212E \u03b4 [e \u03b7u1,u2,u3,w(\u03b4)]), where e \u03b7u1,u2,u3,w(\u03b4) := E a1,a2,a3[u1(a1)u2(a2)u3(a3)w(a1 + a2 + a3 \u2212\u03b4)]. Let \u03c1 := e2\u03c0i/p, and let b u1, b u2, b u3, b w be the DFT of u1, u2, u3, and w respectively: e \u03b7u1,u2,u3,w(\u03b4) = E a1,a2,a3[(1 p p\u22121 X j1=0 b u1(j1)\u03c1j1a1)(1 p p\u22121 X j2=0 b u2(j2)\u03c1j2a2)(1 p p\u22121 X j3=0 b u3(j3)\u03c1j3a3)(1 p p\u22121 X j4=0 b w(j4)\u03c1j4(a1+a2+a3\u2212\u03b4))] = 1 p4 X j1,j2,j3,j4 b u1(j1)b u2(j2)b u3(j3) b w(j4)\u03c1\u2212j4\u03b4(E a1[\u03c1(j1+j4)a1])(E a2[\u03c1(j2+j4)a2])(E a3[\u03c1(j3+j4)a3]) = 1 p4 X j b u1(j)b u2(j)b u3(j) b w(\u2212j)\u03c1j\u03b4 where the first step follows from \u03c1 := e2\u03c0i/p and b u1, b u2, b u3, b w are the discrete Fourier transforms of u1, u2, u3, w, the second step comes from simple algebra, the last step is from that only terms where j1 + j4 = j2 + j4 = j3 + j4 = 0 survive. 17 \fHence, we need to maximize 6p p \u22121(e \u03b7u1,u2,u3,w(0) \u2212E \u03b4 [e \u03b7u1,u2,u3,w(\u03b4)]) = 6p p \u22121( 1 p4 X j b u1(j)b u2(j)b u3(j) b w(\u2212j) \u22121 p4 X j b u1(j)b u2(j)b u3(j) b w(\u2212j)(E \u03b4 \u03c1j\u03b4)) = 6 (p \u22121)p3 X j\u0338=0 b u1(j)b u2(j)b u3(j) b w(\u2212j). = 6 (p \u22121)p3 X j\u2208[\u2212(p\u22121)/2,+(p\u22121)/2]\\0 b u1(j)b u2(j)b u3(j) b w(\u2212j). (7) where the first step is from e \u03b7u1,u2,u3,w(\u03b4) definition, the second step is from E\u03b4 \u03c1j\u03b4 = 0 when j \u0338= 0, and the last step follows from simple algebra. E.3 Get Solution Set Lemma E.4. When k = 3, provided the following conditions are met \u2022 We denote B as the ball that \u2225u1\u22252 + \u2225u2\u22252 + \u2225u3\u22252 + \u2225w\u22252 \u22641. \u2022 We define \u2126 \u2032\u2217 q in Definition E.2. \u2022 We adopt the uniform class weighting: \u2200c\u2032 \u0338= a1 + a2 + a3, \u03c4(a1, a2, a3)[c\u2032] := 1/(p \u22121). \u2022 For any \u03b6 \u2208{1, . . . , p\u22121 2 }, there exists a scaling constant \u03b2 \u2208R and u1(a1) = \u03b2 \u00b7 cos(\u03b8\u2217 u1 + 2\u03c0\u03b6a1/p) u2(a2) = \u03b2 \u00b7 cos(\u03b8\u2217 u2 + 2\u03c0\u03b6a2/p) u3(a3) = \u03b2 \u00b7 cos(\u03b8\u2217 u3 + 2\u03c0\u03b6a3/p) w(c) = \u03b2 \u00b7 cos(\u03b8\u2217 w + 2\u03c0\u03b6c/p) where \u03b8\u2217 u1, \u03b8\u2217 u2, \u03b8\u2217 u3, \u03b8\u2217 w \u2208R are some phase offsets satisfying \u03b8\u2217 u1 + \u03b8\u2217 u2 + \u03b8\u2217 u3 = \u03b8\u2217 w. Then, we have the following \u2126\u2032\u2217 q ={(u1, u2, u3, w)}, and max u1,u2,u3,w\u2208B(\u03b7u1,u2,u3,w(0) \u2212E \u03b4\u0338=0[\u03b7u1,u2,u3,w(\u03b4)]) = 3 16 \u00b7 1 p(p \u22121). Proof. By Lemma E.3, we only need to maximize Equation (7). Thus, the mass of b u1, b u2, b u3, and b w must be concentrated on the same frequencies. For all j \u2208Zp, we have b u1(\u2212j) = b u1(j), b u2(\u2212j) = b u2(j), b u3(\u2212j) = b u3(j), b w(\u2212j) = b w(j) (8) as u1, u2, u3, w are real-valued. 18 \fFor all j \u2208Zp and for u1, u2, u3, w, we denote \u03b8u1, \u03b8u2, \u03b8u3, \u03b8w \u2208[0, 2\u03c0)p as their phase, e.g.: b u1(j) = |b u1(j)| exp(i\u03b8u1(j)). 
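Before continuing the proof, the Fourier-space rewriting derived in Section E.2 can be confirmed numerically. The sketch below is illustrative only; it uses NumPy's FFT, whose convention û(j) = Σ_a u(a) e^{−2πi·ja/p} matches the transform used here, and compares the direct expectation defining η̃ with its Fourier-space expression for a small p and random real-valued u1, u2, u3, w.

```python
# Illustrative numerical check of the identity
#   eta~(delta) = (1/p^4) * sum_j u1^(j) u2^(j) u3^(j) w^(-j) rho^{j*delta},  rho = exp(2*pi*i/p).
import numpy as np

p = 7
rng = np.random.default_rng(0)
u1, u2, u3, w = (rng.standard_normal(p) for _ in range(4))

def eta_direct(delta):
    total = 0.0
    for a1 in range(p):
        for a2 in range(p):
            for a3 in range(p):
                total += u1[a1] * u2[a2] * u3[a3] * w[(a1 + a2 + a3 - delta) % p]
    return total / p**3                      # expectation over uniform a1, a2, a3

U1, U2, U3, W = np.fft.fft(u1), np.fft.fft(u2), np.fft.fft(u3), np.fft.fft(w)
rho = np.exp(2j * np.pi / p)
js = np.arange(p)

def eta_fourier(delta):
    return (U1 * U2 * U3 * W[(-js) % p] * rho ** (js * delta)).sum().real / p**4

assert all(np.isclose(eta_direct(d), eta_fourier(d)) for d in range(p))
print("Direct expectation matches the Fourier-space expression for every delta.")
```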
Consider the odd p, Equation (7) becomes: (7) = 6 (p \u22121)p3 X j\u2208[\u2212(p\u22121)/2,+(p\u22121)/2]\\0 b u1(j)b u2(j)b u3(j) b w(\u2212j) = 6 (p \u22121)p3 (p\u22121)/2 X j=1 (b u1(j)b u2(j)b u3(j) b w(j) + b u1(j)b u2(j)b u3(j) b w(j)) = 6 (p \u22121)p3 (p\u22121)/2 X j=1 |b u1(j)||b u2(j)||b u3(j)|| b w(j)|\u00b7 \u0000exp(i(\u03b8u1(j) + \u03b8u2(j) + \u03b8u3(j) \u2212\u03b8w(j)) + exp(i(\u2212\u03b8u1(j) \u2212\u03b8u2(j) \u2212\u03b8u3(j) + \u03b8w(j)) \u0001 = 12 (p \u22121)p3 (p\u22121)/2 X j=1 |b u1(j)||b u2(j)||b u3(j)|| b w(j)| cos(\u03b8u1(j) + \u03b8u2(j) + \u03b8u3(j) \u2212\u03b8w(j)). where the first step comes from definition (7), the second step follows from Eq. (8), the third step comes from b u1(\u2212j) = b u1(j) and b u1(j) = |b u1(j)| exp(i\u03b8u1(j)), the last step follow from Euler\u2019s formula. Thus, we need to optimize: max u1,u2,u3,w\u2208B 12 (p \u22121)p3 (p\u22121)/2 X j=1 |b u1(j)||b u2(j)||b u3(j)|| b w(j)| cos(\u03b8u1(j) + \u03b8u2(j) + \u03b8u3(j) \u2212\u03b8w(j)). (9) The norm constraint \u2225u1\u22252 + \u2225u2\u22252 + \u2225u3\u22252 + \u2225w\u22252 \u22641 is equivalent to \u2225b u1\u22252 + \u2225b u2\u22252 + \u2225b u3\u22252 + \u2225b w\u22252 \u2264p by using Plancherel\u2019s theorem. Thus, we need to select them in such a way that \u03b8u1(j) + \u03b8u2(j) + \u03b8u3(j) = \u03b8w(j), ensuring that, for each j, the expression cos(\u03b8u1(j) + \u03b8u2(j) + \u03b8u3(j) \u2212\u03b8w(j)) = 1 is maximized, except in cases where the scalar of the j-th term is 0. This further simplifies the problem to: max |b u1|,|b u2|,|b u3|,| b w|:\u2225b u1\u22252+\u2225b u2\u22252+\u2225b u3\u22252+\u2225b w\u22252\u2264p 12 (p \u22121)p3 (p\u22121)/2 X j=1 |b u1(j)||b u2(j)||b u3(j)|| b w(j)|. (10) Then, we have |b u1(j)||b u2(j)||b u3(j)|| b w(j)| \u2264(1 4 \u00b7 (|b u1(j)|2 + |b u2(j)|2 + |b u3(j)|2 + | b w(j)|2))2. (11) where the first step is from inequality of quadratic and geometric means. 19 \fWe define z : {1, . . . , p\u22121 2 } \u2192R as z(j) := |b u1(j)|2 + |b u2(j)|2 + |b u3(j)|2 + | b w(j)|2. We need to have b u1(0) = b u2(0) = b u3(0) = b w(0) = 0. Then, the upper-bound of Eq. (10) is given by 12 (p \u22121)p3 \u00b7 max \u2225z\u22251\u2264p 2 (p\u22121)/2 X j=1 (z(j) 4 )2 = 3 4(p \u22121)p3 \u00b7 max \u2225z\u22251\u2264p 2 (p\u22121)/2 X j=1 z(j)2 = 3 4(p \u22121)p3 \u00b7 max \u2225z\u22251\u2264p 2 \u2225z\u22252 2 \u2264 3 4(p \u22121)p3 \u00b7 p2 4 = 3 16 \u00b7 1 p(p \u22121), where the first step follows from simple algebra, the second step comes from the definition of L2 norm, the third step follows from \u2225z\u22252 \u2264\u2225z\u22251 \u2264p 2, the last step comes from simple algebra. For the inequality of quadratic and geometric means, Eq. (11) becomes equality when |b u1(j)| = |b u2(j)| = |b u3(j)| = | b w(j)|. To achieve \u2225z\u22252 = p 2, all the mass must be placed on a single frequency. Hence, for some frequency \u03b6 \u2208{1, . . . , p\u22121 2 }, to achieve the upper bound, we have: |b u1(j)| = |b u2(j)| = |b u3(j)| = | b w(j)| = n p p/8 if j = \u00b1\u03b6 0 otherwise , (12) In this case, Eq. (10) matches the upper bound. 12 (p \u22121)p3 \u00b7 (p 8)2 = 3 16 \u00b7 1 p(p \u22121), where the first step is by simple algebra. Hence, the maximum-margin is 3 16 \u00b7 1 p(p\u22121). Let \u03b8\u2217 u1 := \u03b8u1(\u03b6). 
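As a numerical sanity check of the margin value just derived (illustrative, not part of the proof): plugging the cosine neurons from the statement of Lemma E.4 into the definition of η and evaluating η(0) − E_{δ≠0}[η(δ)] by brute force reproduces 3/(16 p (p − 1)). The prime p, the frequency ζ, and the phase offsets below are arbitrary choices.

```python
# Brute-force check that the cosine neurons of Lemma E.4 attain margin 3/(16*p*(p-1)).
import numpy as np

p, zeta = 11, 3
theta = np.array([0.3, -1.1, 0.7])            # phases of u1, u2, u3 (arbitrary)
theta_w = theta.sum()                         # constraint: theta_u1 + theta_u2 + theta_u3 = theta_w
beta = np.sqrt(1.0 / (2 * p))                 # gives ||u1||^2 + ||u2||^2 + ||u3||^2 + ||w||^2 = 1
a = np.arange(p)
u = [beta * np.cos(theta[i] + 2 * np.pi * zeta * a / p) for i in range(3)]
w = beta * np.cos(theta_w + 2 * np.pi * zeta * a / p)

a1, a2, a3 = np.meshgrid(a, a, a, indexing="ij")
pre_activation = u[0][a1] + u[1][a2] + u[2][a3]

def eta(delta):
    return np.mean(pre_activation ** 3 * w[(a1 + a2 + a3 - delta) % p])

margin = eta(0) - np.mean([eta(d) for d in range(1, p)])
assert np.isclose(margin, 3.0 / (16 * p * (p - 1)))
print(margin, 3.0 / (16 * p * (p - 1)))
```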
Combining all the results, up to scaling, it is established that all neurons which maximize the expected class-weighted margin conform to the form: u1(a1) = 1 p p\u22121 X j=0 b u1(j)\u03c1ja1 = 1 p \u00b7 (b u1(\u03b6)\u03c1\u03b6a1 + b u1(\u2212\u03b6)\u03c1\u2212\u03b6a1) = 1 p \u00b7 ( rp 8 exp(i\u03b8\u2217 u1)\u03c1\u03b6a1 + rp 8 exp(\u2212i\u03b8\u2217 u1)\u03c1\u2212\u03b6a1) = r 1 2p cos(\u03b8\u2217 u1 + 2\u03c0\u03b6a1/p), 20 \fwhere the first step comes from the definition of u1(a), the second step and third step follow from Eq. (12), the last step follows from Euler\u2019s formula. Similarly, u2(a2) = r 1 2p cos(\u03b8\u2217 u2 + 2\u03c0\u03b6a2/p) u3(a3) = r 1 2p cos(\u03b8\u2217 u3 + 2\u03c0\u03b6a3/p) w(c) = r 1 2p cos(\u03b8\u2217 w + 2\u03c0\u03b6c/p), for some phase offsets \u03b8\u2217 u1, \u03b8\u2217 u2, \u03b8\u2217 u3, \u03b8\u2217 w \u2208R satisfying \u03b8\u2217 u1 + \u03b8\u2217 u2 + \u03b8\u2217 u3 = \u03b8\u2217 w and some \u03b6 \u2208Zp\\{0}, where u1, u2, u3, and w shares the same \u03b6. E.4 Transfer to Discrete Fourier Space for General k Version Definition E.5. Let \u03b7u1,...,uk,w(\u03b4) := E a1,...,ak[(u1(a1) + \u00b7 \u00b7 \u00b7 + uk(ak))kw(a1 + \u00b7 \u00b7 \u00b7 + ak \u2212\u03b4)]. Definition E.6. Let \u03b7 be defined in Definition E.5. Provided the following conditions are met \u2022 We denote B as the ball that \u2225u1\u22252 + \u00b7 \u00b7 \u00b7 + \u2225uk\u22252 + \u2225w\u22252 \u22641. We define \u2126\u2032\u2217 q = arg max u1,...,uk,w\u2208B (\u03b7u1,...,uk,w(0) \u2212E \u03b4\u0338=0[\u03b7u1,...,uk,w(\u03b4)]). The goal of this section is to prove the following Lemma, Lemma E.7. Provided the following conditions are met \u2022 Let B denote the ball that \u2225u1\u22252 + \u00b7 \u00b7 \u00b7 + \u2225uk\u22252 + \u2225w\u22252 \u22641. \u2022 We define \u2126 \u2032\u2217 q in Definition E.6. \u2022 We adopt the uniform class weighting: \u2200c\u2032 \u0338= a1 + \u00b7 \u00b7 \u00b7 + ak, \u03c4(a1, . . . , ak)[c\u2032] := 1/(p \u22121). We have the following \u2126\u2032\u2217 q = arg max u1,...,uk,w\u2208B k! (p \u22121)pk X j\u0338=0 b w(\u2212j) k Y i=1 b ui(j). Proof. We have \u03b7u1,...,uk,w(\u03b4) = E a1,...,ak[(u1(a1) + \u00b7 \u00b7 \u00b7 + uk(ak))kw(a1 + \u00b7 \u00b7 \u00b7 + ak \u2212\u03b4)]. The goal is to solve the following mean margin maximization problem: arg max u1,...,uk,w\u2208B (\u03b7u1,...,uk,w(0) \u2212E \u03b4\u0338=0[\u03b7u1,...,uk,w(\u03b4)]) 21 \f= p p \u22121(\u03b7u1,...,uk,w(0) \u2212E \u03b4 [\u03b7u1,...,uk,w(\u03b4)]), (13) where the equation follows \u03c4(a1, . . . , ak)[c\u2032] := 1/(p \u22121) \u2200c\u2032 \u0338= a1 + \u00b7 \u00b7 \u00b7 + ak and 1 \u2212 1 p\u22121 = p p\u22121. We note that all terms are zero rather than w(\u00b7) \u00b7 Qk i=1 ui(ai). Hence, we can rewrite Eq. (13) as arg max u1,...,uk,w\u2208B k!p p \u22121(e \u03b7u1,...,uk,w(0) \u2212E \u03b4 [e \u03b7u1,...,uk,w(\u03b4)]), where e \u03b7u1,...,uk,w(\u03b4) := E a1,...,ak[w(a1 + \u00b7 \u00b7 \u00b7 + ak \u2212\u03b4) k Y i=1 ui(ai)]. Let \u03c1 := e2\u03c0i/p, and b u1, . . . , b uk, b w denote the discrete Fourier transforms of u1, . . . , uk, and w respectively. We have e \u03b7u1,...,uk,w(\u03b4) = 1 pk+1 p\u22121 X j=0 b w(\u2212j)\u03c1j\u03b4 k Y i=1 b ui(j) which comes from \u03c1 := e2\u03c0i/p and b u1, . . . , b uk, b w are the discrete Fourier transforms of u1, . . . , uk, w. 
Hence, we need to maximize k!p p \u22121(e \u03b7u1,...,uk,w(0) \u2212E \u03b4 [e \u03b7u1,...,uk,w(\u03b4)]) = k!p p \u22121 \u00b7 \uf8eb \uf8ed 1 pk+1 p\u22121 X j=0 b w(\u2212j) k Y i=1 b ui(j) \u2212 1 pk+1 p\u22121 X j=0 b w(\u2212j)(E \u03b4 [\u03c1j\u03b4]) k Y i=1 b ui(j) \uf8f6 \uf8f8 = k! (p \u22121)pk X j\u0338=0 b w(\u2212j) k Y i=1 b ui(j). = k! (p \u22121)pk X j\u2208[\u2212(p\u22121)/2,+(p\u22121)/2]\\0 b w(\u2212j) k Y i=1 b ui(j). (14) where the first step follows from the definition of e \u03b7u1,...,uk,w(\u03b4), the second step follows from E\u03b4[\u03c1j\u03b4] = 0 when j \u0338= 0, the last step is from simple algebra. E.5 Get Solution Set for General k Version Lemma E.8 (Formal version of Lemma 4.2). Provided the following conditions are met \u2022 We denote B as the ball that \u2225u1\u22252 + \u00b7 \u00b7 \u00b7 + \u2225uk\u22252 + \u2225w\u22252 \u22641. \u2022 Let \u2126 \u2032\u2217 q be defined as Definition E.6. \u2022 We adopt the uniform class weighting: \u2200c\u2032 \u0338= a1 + \u00b7 \u00b7 \u00b7 + ak, \u03c4(a1, . . . , ak)[c\u2032] := 1/(p \u22121). 22 \f\u2022 For any \u03b6 \u2208{1, . . . , p\u22121 2 }, there exists a scaling constant \u03b2 \u2208R and u1(a1) = \u03b2 \u00b7 cos(\u03b8\u2217 u1 + 2\u03c0\u03b6a1/p) u2(a2) = \u03b2 \u00b7 cos(\u03b8\u2217 u2 + 2\u03c0\u03b6a2/p) . . . uk(ak) = \u03b2 \u00b7 cos(\u03b8\u2217 uk + 2\u03c0\u03b6ak/p) w(c) = \u03b2 \u00b7 cos(\u03b8\u2217 w + 2\u03c0\u03b6c/p) where \u03b8\u2217 u1, . . . , \u03b8\u2217 uk, \u03b8\u2217 w \u2208R are some phase offsets satisfying \u03b8\u2217 u1 + \u00b7 \u00b7 \u00b7 + \u03b8\u2217 uk = \u03b8\u2217 w. Then, we have the following \u2126\u2032\u2217 q ={(u1, . . . , uk, w)}, and max u1,...,uk,w\u2208B(\u03b7u1,...,uk,w(0) \u2212E \u03b4\u0338=0[\u03b7u1,...,uk,w(\u03b4)]) = 2(k!) (2k + 2)(k+1)/2(p \u22121)p(k\u22121)/2 . Proof. By Lemma E.7, we only need to maximize Equation (14). Thus, the mass of b u1, . . . , b uk, and b w must be concentrated on the same frequencies. For all j \u2208Zp, we have b ui(\u2212j) = b ui(j), b w(\u2212j) = b w(j) (15) as u1, . . . , uk, w are real-valued. For all j \u2208Zp and for u1, u2, u3, w, we denote \u03b8u1, . . . , \u03b8uk, \u03b8w \u2208 [0, 2\u03c0)p as their phase, e.g.: b u1(j) = |b u1(j)| exp(i\u03b8u1(j)). (16) Considering odd p, Equation (14) becomes: (14) = k! (p \u22121)pk X j\u2208[\u2212(p\u22121)/2,+(p\u22121)/2]\\0 b w(\u2212j) k Y i=1 b ui(j) = k! (p \u22121)pk (p\u22121)/2 X j=1 ( k Y i=1 b ui(j) b w(j) + b w(j) k Y i=1 b ui(j)) = 2(k!) (p \u22121)pk (p\u22121)/2 X j=1 | b w(j)| cos( k X i=1 \u03b8ui(j) \u2212\u03b8w(j)) k Y i=1 |b ui(j)|. where the first step follows from definition (14), the second step comes from Eq. (15), the last step follows from Eq. (16), i.e., Euler\u2019s formula. Thus, we need to optimize: max u1,...,uk,w\u2208B 2(k!) (p \u22121)pk (p\u22121)/2 X j=1 | b w(j)| cos( k X i=1 \u03b8ui(j) \u2212\u03b8w(j)) k Y i=1 |b ui(j)|. (17) We can transfer the norm constraint to \u2225b u1\u22252 + \u00b7 \u00b7 \u00b7 + \u2225b uk\u22252 + \u2225b w\u22252 \u2264p 23 \fby using Plancherel\u2019s theorem. Therefore, we need to select them in a such way that \u03b8u1(j) + \u00b7 \u00b7 \u00b7 + \u03b8uk(j) = \u03b8w(j), ensuring that, for each j, the expression cos(\u03b8u1(j) + \u00b7 \u00b7 \u00b7 + \u03b8uk(j) \u2212\u03b8w(j)) = 1 is maximized, except in cases where the scalar of the j-th term is 0. 
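The value claimed in Lemma E.8 can likewise be confirmed by brute force for a small p and k before completing the argument. The sketch below is illustrative only: it instantiates the cosine neurons from the lemma statement (with arbitrary phases and frequency) and checks that η(0) − E_{δ≠0}[η(δ)] matches 2(k!)/((2k + 2)^{(k+1)/2}(p − 1)p^{(k−1)/2}).

```python
# Brute-force check of the general-k margin value in Lemma E.8 for small p and k.
import math
import numpy as np

p, k, zeta = 7, 4, 2
theta = np.array([0.5, -0.3, 1.2, 0.8])          # arbitrary phases for u_1, ..., u_k
theta_w = theta.sum()                            # constraint: the phases of u_1, ..., u_k sum to theta_w
beta = np.sqrt(2.0 / ((k + 1) * p))              # gives ||u_1||^2 + ... + ||u_k||^2 + ||w||^2 = 1
grid = np.arange(p)
u = [beta * np.cos(theta[i] + 2 * np.pi * zeta * grid / p) for i in range(k)]
w = beta * np.cos(theta_w + 2 * np.pi * zeta * grid / p)

A = np.stack(np.meshgrid(*([grid] * k), indexing="ij"))   # all p^k input tuples
pre_activation = sum(u[i][A[i]] for i in range(k))
total = A.sum(axis=0)

def eta(delta):
    return np.mean(pre_activation ** k * w[(total - delta) % p])

margin = eta(0) - np.mean([eta(d) for d in range(1, p)])
target = 2 * math.factorial(k) / ((2 * k + 2) ** ((k + 1) / 2) * (p - 1) * p ** ((k - 1) / 2))
assert np.isclose(margin, target)
print(margin, target)
```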
This further simplifies the problem to: max \u2225b u1\u22252+\u00b7\u00b7\u00b7+\u2225b uk\u22252+\u2225b w\u22252\u2264p 2(k!) (p \u22121)pk (p\u22121)/2 X j=1 | b w(j)| k Y i=1 |b ui(j)|. (18) Then, we have | b w(j)| k Y i=1 |b ui(j)| \u2264( 1 k + 1 \u00b7 (|b u1(j)|2 + \u00b7 \u00b7 \u00b7 + |b uk(j)|2 + | b w(j)|2))(k+1)/2. (19) where the first step follows from inequality of quadratic and geometric means. We define z : {1, . . . , p\u22121 2 } \u2192R, where z(j) := |b u1(j)|2 + \u00b7 \u00b7 \u00b7 + |b uk(j)|2 + | b w(j)|2. We need to have b u1(0) = \u00b7 \u00b7 \u00b7 = b uk(0) = b w(0) = 0. Then, the upper-bound of Equation (18) is given by 2(k!) (p \u22121)pk \u00b7 max \u2225z\u22251\u2264p 2 (p\u22121)/2 X j=1 ( z(j) k + 1)(k+1)/2 = 2(k!) (k + 1)(k+1)/2(p \u22121)pk \u00b7 max \u2225z\u22251\u2264p 2 (p\u22121)/2 X j=1 z(j)(k+1)/2 \u2264 2(k!) (k + 1)(k+1)/2(p \u22121)pk \u00b7 (p/2)(k+1)/2 = 2(k!) (2k + 2)(k+1)/2(p \u22121)p(k\u22121)/2 , where the first step follows from simple algebra, the second step comes from the definition of L2 norm, the third step follows from \u2225z\u22252 \u2264\u2225z\u22251 \u2264p 2, the last step follows from simple algebra. For the inequality of quadratic and geometric means, Eq. (19) becomes equality when |b u1(j)| = \u00b7 \u00b7 \u00b7 = |b uk(j)| = | b w(j)|. To achieve \u2225z\u22252 = p 2, all the mass must be placed on a single frequency. Hence, for some frequency \u03b6 \u2208{1, . . . , p\u22121 2 }, to achieve the upper bound, we have: |b u1(j)| = \u00b7 \u00b7 \u00b7 = |b uk(j)| = | b w(j)| = n q p 2(k+1), if j = \u00b1\u03b6; 0, otherwise. (20) In this case, Equation (18) matches the upper bound. Hence, this is the maximum-margin. Let \u03b8\u2217 u1 := \u03b8u1(\u03b6). Combining all the results, up to scaling, it is established that all neurons which maximize the expected class-weighted margin conform to the form: u1(a1) = 1 p p\u22121 X j=0 b u1(j)\u03c1ja1 24 \f= 1 p \u00b7 (b u1(\u03b6)\u03c1\u03b6a1 + b u1(\u2212\u03b6)\u03c1\u2212\u03b6a1) = 1 p \u00b7 ( r p 2(k + 1) exp(i\u03b8\u2217 u1)\u03c1\u03b6a1 + r p 2(k + 1) exp(\u2212i\u03b8\u2217 u1)\u03c1\u2212\u03b6a1) = s 2 (k + 1)p cos(\u03b8\u2217 u1 + 2\u03c0\u03b6a1/p), where the first step comes from the definition of u1(a), the second step and third step follow from Eq. (20), the last step follows from Eq. (16) i.e., Euler\u2019s formula. We have similar results for other neurons where \u03b8\u2217 u1, . . . , \u03b8\u2217 uk, \u03b8\u2217 w \u2208R satisfying \u03b8\u2217 u1+\u00b7 \u00b7 \u00b7+\u03b8\u2217 uk = \u03b8\u2217 w and some \u03b6 \u2208Zp\\{0}, where u1, . . . , uk, and w shares the same \u03b6. F Construct Max Margin Solution Section F.1 proposed the sum-to-product identities for k inputs. Section F.2 shows how we construct \u03b8\u2217when k = 3. Section F.3 gives the constructions for \u03b8\u2217for general k version. F.1 Sum-to-product Identities Lemma F.1 (Sum-to-product Identities). If the following conditions hold \u2022 Let a1, . . . , ak denote any k real numbers We have \u2022 Part 1. 22 \u00b7 2! \u00b7 a1a2 = (a1 + a2)2 \u2212(a1 \u2212a2)2 \u2212(\u2212a1 + a2)2 + (\u2212a1 \u2212a2)2 \u2022 Part 2. 23 \u00b7 3! \u00b7 a1a2a3 = (a1 + a2 + a3)3 \u2212(a1 + a2 \u2212a3)3 \u2212(a1 \u2212a2 + a3)3 \u2212(\u2212a1 + a2 + a3)3 + (a1 \u2212a2 \u2212a3)3 + (\u2212a1 + a2 \u2212a3)3 + (\u2212a1 \u2212a2 + a3)3 \u2212(\u2212a1 \u2212a2 \u2212a3)3 \u2022 Part 3. 2k \u00b7 k! 
\u00b7 k Y i=1 ai = X c\u2208{\u22121,+1}k (\u22121)(k\u2212Pk i=1 ci)/2( k X j=1 cjaj)k. Proof. Proof of Part 1. We define A1, A2, A3, A4 as follows A1 := (a1 + a2)2, A2 := (a1 \u2212a2)2, A3 := (\u2212a1 + a2)2, A4 := (\u2212a1 \u2212a2)2, For the first term, we have A1 = a2 1 + a2 2 + 2a1a2. 25 \fFor the second term, we have A2 = a2 1 + a2 2 \u22122a1a2. For the third term, we have A3 = a2 1 + a2 2 \u22122a1a2. For the fourth term, we have A4 = a2 1 + a2 2 + 2a1a2. Putting things together, we have (a1 + a2)2 \u2212(a1 \u2212a2)2 \u2212(\u2212a1 + a2)2 + (\u2212a1 \u2212a2)2 = A1 \u2212A2 \u2212A3 + A4 = 8a1a2 = 23a1a2 Proof of Part 2. We define B1, B2, B3, B4, B5, B6, B7, B8 as follows B1 := (a1 + a2 + a3)3, B2 := (a1 + a2 \u2212a3)3, B3 := (a1 \u2212a2 + a3)3, B4 := (\u2212a1 + a2 + a3)3, B5 := (a1 \u2212a2 \u2212a3)3, B6 := (\u2212a1 + a2 \u2212a3)3, B7 := (\u2212a1 \u2212a2 + a3)3, B8 := (\u2212a1 \u2212a2 \u2212a3)3, For the first term, we have B1 = a3 1 + a3 2 + a3 3 + 3a2a2 3 + 3a2a2 1 + 3a1a2 3 + 3a1a2 2 + 3a3a2 1 + 3a3a2 2 + 6a1a2a3. For the second term, we have B2 = a3 1 + a3 2 \u2212a3 3 + 3a2a2 3 + 3a2a2 1 + 3a1a2 3 + 3a1a2 2 \u22123a3a2 1 \u22123a3a2 2 \u22126a1a2a3. For the third term, we have B3 = a3 1 \u2212a3 2 + a3 3 \u22123a2a2 3 \u22123a2a2 1 + 3a1a2 3 + 3a1a2 2 + 3a3a2 1 + 3a3a2 2 \u22126a1a2a3. For the fourth term, we have B4 = \u2212a3 1 + a3 2 + a3 3 + 3a2a2 3 + 3a2a2 1 \u22123a1a2 3 \u22123a1a2 2 + 3a3a2 1 + 3a3a2 2 \u22126a1a2a3. For the fifth term, we have B5 = a3 1 \u2212a3 2 \u2212a3 3 \u22123a2a2 3 \u22123a2a2 1 + 3a1a2 3 + 3a1a2 2 \u22123a3a2 1 \u22123a3a2 2 + 6a1a2a3. 26 \fFor the sixth term, we have B6 = \u2212a3 1 + a3 2 \u2212a3 3 + 3a2a2 3 + 3a2a2 1 \u22123a1a2 3 \u22123a1a2 2 \u22123a3a2 1 \u22123a3a2 2 + 6a1a2a3. For the seventh term, we have B7 = \u2212a3 1 \u2212a3 2 + a3 3 \u22123a2a2 3 \u22123a2a2 1 \u22123a1a2 3 \u22123a1a2 2 + 3a3a2 1 + 3a3a2 2 + 6a1a2a3. For the eighth term, we have B8 = \u2212a3 1 \u2212a3 2 \u2212a3 3 \u22123a2a2 3 \u22123a2a2 1 \u22123a1a2 3 \u22123a1a2 2 \u22123a3a2 1 \u22123a3a2 2 \u22126a1a2a3. Putting things together, we have (a1 + a2 + a3)3 \u2212(a1 + a2 \u2212a3)3 \u2212(a1 \u2212a2 + a3)3 \u2212(\u2212a1 + a2 + a3)3 + (a1 \u2212a2 \u2212a3)3 + (\u2212a1 + a2 \u2212a3)3 + (\u2212a1 \u2212a2 + a3)3 \u2212(\u2212a1 \u2212a2 \u2212a3)3 = B1 \u2212B2 \u2212B3 \u2212B4 + B5 + B6 + B7 \u2212B8 = 48a1a2a3 = 3 \u00b7 24a1a2a3 Proof of Part 3. 2k \u00b7 k! \u00b7 k Y i=1 ai = X c\u2208{\u22121,+1}k (\u22121)(k\u2212Pk i=1 ci)/2( k X j=1 cjaj)k. We first let a1 = 0. Then each term on RHS can find a corresponding negative copy of this term. In detail, let c1 change sign and we have, (\u22121)(k\u2212c1\u2212Pk i=2 ci)/2(c1 \u00b7 0 + Pk j=2 cjaj)k = \u2212(\u22121)(k+c1\u2212Pk i=2 ci)/2(\u2212c1 \u00b7 0 + Pk j=2 cjaj)k. We can find this mapping is always one-to-one and onto mapping with each other. Thus, we have RHS is constant 0 regardless of a2, . . . , ak. Thus, a1 is a factor of RHS. By symmetry, a2, . . . , ak also are factors of RHS. Since RHS is k-th order, we have RHS= \u03b1 Qk i=1 ai where \u03b1 is a constant. Take a1 = \u00b7 \u00b7 \u00b7 = ak = 1, we have \u03b1 = 2k \u00b7 k! =RHS. Thus, we finish the proof. F.2 Constructions for \u03b8\u2217 Lemma F.2. When k = 3, provided the following conditions are met \u2022 We denote B as the ball that \u2225u1\u22252 + \u2225u2\u22252 + \u2225u3\u22252 + \u2225w\u22252 \u22641. \u2022 We define \u2126 \u2032\u2217 q in Definition E.2. 
\u2022 We adopt the uniform class weighting: \u2200c\u2032 \u0338= a1 + a2 + a3, \u03c4(a1, a2, a3)[c\u2032] := 1/(p \u22121). \u2022 Let cos\u03b6(x) denote cos(2\u03c0\u03b6x/p) \u2022 Let sin\u03b6(x) denote sin(2\u03c0\u03b6x/p) Then, we have 27 \f\u2022 The maximum L2,4-margin solution \u03b8\u2217will consist of 16(p \u22121) neurons \u03b8\u2217 i \u2208\u2126 \u2032\u2217 q to simulate p\u22121 2 type of cosine computation, each cosine computation is uniquely determined a \u03b6 \u2208{1, . . . , p\u22121 2 }. In particular, for each \u03b6 the cosine computation is cos\u03b6(a1 + a2 + a3 \u2212 c), \u2200a1, a2, a3, c \u2208Zp. Proof. Referencing Lemma E.4, we can identify elements within \u2126 \u2032 q. Our set \u03b8\u2217will be composed of 16(p\u22121) neurons, including 32 neurons dedicated to each frequency in the range 1, . . . , p\u22121 2 . Focusing on a specific frequency \u03b6, for the sake of simplicity, let us use cos\u03b6(x) to represent cos(2\u03c0\u03b6x/p) and sin\u03b6(x) likewise. We note: cos\u03b6(a1 + a2 + a3 \u2212c) = cos\u03b6(a1 + a2 + a3) cos\u03b6(c) + sin\u03b6(a1 + a2 + a3) sin\u03b6(c) = cos\u03b6(a1 + a2) cos\u03b6(a3) cos\u03b6(c) \u2212sin\u03b6(a1 + a2) sin\u03b6(a3) cos\u03b6(c) + sin\u03b6(a1 + a2) cos\u03b6(a3) sin\u03b6(c) + cos\u03b6(a1 + a2) sin\u03b6(a3) sin\u03b6(c) = (cos\u03b6(a1) cos\u03b6(a2) \u2212sin\u03b6(a1) sin\u03b6(a2)) cos\u03b6(a3) cos\u03b6(c) \u2212(sin\u03b6(a1) cos\u03b6(a2) + cos\u03b6(a1) sin\u03b6(a2)) sin\u03b6(a3) cos\u03b6(c) + (sin\u03b6(a1) cos\u03b6(a2) + cos\u03b6(a1) sin\u03b6(a2)) cos\u03b6(a3) sin\u03b6(c) + ((cos\u03b6(a1) cos\u03b6(a2) \u2212sin\u03b6(a1) sin\u03b6(a2))) sin\u03b6(a3) sin\u03b6(c) = cos\u03b6(a1) cos\u03b6(a2) cos\u03b6(a3) cos\u03b6(c) \u2212sin\u03b6(a1) sin\u03b6(a2) cos\u03b6(a3) cos\u03b6(c) \u2212sin\u03b6(a1) cos\u03b6(a2) sin\u03b6(a3) cos\u03b6(c) \u2212cos\u03b6(a1) sin\u03b6(a2) sin\u03b6(a3) cos\u03b6(c) + sin\u03b6(a1) cos\u03b6(a2) cos\u03b6(a3) sin\u03b6(c) + cos\u03b6(a1) sin\u03b6(a2) cos\u03b6(a3) sin\u03b6(c) + cos\u03b6(a1) cos\u03b6(a2) sin\u03b6(a3) sin\u03b6(c) \u2212sin\u03b6(a1) sin\u03b6(a2) sin\u03b6(a3) sin\u03b6(c) (21) where all steps comes from trigonometric function. Each of these 8 terms can be implemented by 4 neurons \u03d51, \u03d52, \u00b7 \u00b7 \u00b7 , \u03d54. Consider the first term, cos\u03b6(a1) cos\u03b6(a2) cos\u03b6(a3) cos\u03b6(c). For the i-th neuron, we have \u03d5i = (ui,1(a1) + ui,2(a2) + ui,3(a3))3 \u00b7 wi(c). By changing (\u03b8i,j)\u2217, we can change the constant factor of cos\u03b6(\u00b7) to be +\u03b2 or \u2212\u03b2. Hence, we can view ui,j(\u00b7), wi(\u00b7) as the following: ui,1(\u00b7) := pi,1 \u00b7 cos\u03b6(\u00b7), ui,2(\u00b7) := pi,2 \u00b7 cos\u03b6(\u00b7), ui,3(\u00b7) := pi,3 \u00b7 cos\u03b6(\u00b7), wi(\u00b7) := pi,4 \u00b7 cos\u03b6(\u00b7) where pi,j \u2208{\u22121, 1}. For simplicity, let di denote cos\u03b6(ai). We set (\u03b8\u2217 u1, \u03b8\u2217 u2, \u03b8\u2217 u3, \u03b8\u2217 w) = (0, 0, 0, 0), then p1,1, p1,2, p1,3, p1,4 = 1, then we have \u03d51 = (d1 + d2 + d3)3 cos\u03b6(c). 28 \fWe set (\u03b8\u2217 u1, \u03b8\u2217 u2, \u03b8\u2217 u3, \u03b8\u2217 w) = (0, 0, \u03c0, \u03c0), then p2,1, p2,2 = 1 and p2,3, p2,4 = \u22121, then we have \u03d52 = \u2212(d1 + d2 \u2212d3)3 cos\u03b6(c). We set (\u03b8\u2217 u1, \u03b8\u2217 u2, \u03b8\u2217 u3, \u03b8\u2217 w) = (0, \u03c0, 0, \u03c0), then p3,1, p3,3 = 1 and p3,2, p3,4 = \u22121, then we have \u03d53 = \u2212(d1 \u2212d2 + d3)3 cos\u03b6(c). 
We set (\u03b8\u2217 u1, \u03b8\u2217 u2, \u03b8\u2217 u3, \u03b8\u2217 w) = (\u03c0, 0, 0, \u03c0), then p4,1, p4,4 = \u22121 and p2,2, p2,3 = 1, then we have \u03d54 = \u2212(\u2212d1 + d2 + d3)3 cos\u03b6(c). Putting them together, we have 4 X i=1 \u03d5i(a1, a2, a3) = 4 X i=1 (ui,1(a1) + ui,2(a2) + ui,3(a3))3wi(c) = 4 X i=1 (pi,1 cos\u03b6(a1) + pi,2 cos\u03b6(a2) + pi,3 cos\u03b6(a3))3wi(c) = [(d1 + d2 + d3)3 \u2212(d1 + d2 \u2212d3)3 \u2212(d1 \u2212d2 + d3)3 \u2212(\u2212d1 + d2 + d3)3] cos\u03b6(c) = 24d1d2d3 cos\u03b6(c) = 24 cos\u03b6(a1) cos\u03b6(a2) cos\u03b6(a3) cos\u03b6(c) (22) where the first step comes from the definition of \u03d5i, the second step comes from the definition of ui,j, the third step comes from di = cos\u03b6(ai), the fourth step comes from simple algebra, the last step comes from di = cos\u03b6(ai). Similarly, consider \u2212sin\u03b6(a1) sin\u03b6(a2) cos\u03b6(a3) cos\u03b6(c). We set (\u03b8\u2217 u1, \u03b8\u2217 u2, \u03b8\u2217 u3, \u03b8\u2217 w) = (\u03c0/2, \u03c0/2, 0, \u03c0), then we have \u03d51 = \u2212(sin\u03b6(a1) + sin\u03b6(a2) + cos\u03b6(a3))3 cos\u03b6(c). We set (\u03b8\u2217 u1, \u03b8\u2217 u2, \u03b8\u2217 u3, \u03b8\u2217 w) = (\u03c0/2, \u03c0/2, \u2212\u03c0, 0), then we have \u03d52 = (sin\u03b6(a1) + sin\u03b6(a2) \u2212cos\u03b6(a3))3 cos\u03b6(c). We set (\u03b8\u2217 u1, \u03b8\u2217 u2, \u03b8\u2217 u3, \u03b8\u2217 w) = (\u03c0/2, \u2212\u03c0/2, 0, 0), then we have \u03d53 = (sin\u03b6(a1) \u2212sin\u03b6(a2) + cos\u03b6(a3))3 cos\u03b6(c). We set (\u03b8\u2217 u1, \u03b8\u2217 u2, \u03b8\u2217 u3, \u03b8\u2217 w) = (\u2212\u03c0/2, \u03c0/2, 0, 0), then we have \u03d54 = (\u2212sin\u03b6(a1) + sin\u03b6(a2) + cos\u03b6(a3))3 cos\u03b6(c). Putting them together, we have 4 X i=1 \u03d5i(a1, a2, a3) = \u221224 sin\u03b6(a1) sin\u03b6(a2) cos\u03b6(a3) cos\u03b6(c) (23) 29 \fSimilarly, all other six terms in Eq. (21) can be composed by four neurons with different (\u03b8\u2217 u1, \u03b8\u2217 u2, \u03b8\u2217 u3, \u03b8\u2217 w). When we include such 56 neurons for all frequencies \u03b6 \u2208{1, . . . , p\u22121 2 }, we have that the network will calculate the following function f(a1, a2, a3, c) = (p\u22121)/2 X \u03b6=1 cos\u03b6(a1 + a2 + a3 \u2212c) = p\u22121 X \u03b6=1 1 2 \u00b7 exp(2\u03c0i\u03b6(a1 + a2 + a3 \u2212c)/p) = ( p\u22121 2 if a1 + a2 + a3 = c 0 otherwise where the first step comes from the definition of f(a1, a2, a3, c), the second step comes from Euler\u2019s formula, the last step comes from the properties of discrete Fourier transform. The scaling factor \u03b2 for each neuron can be selected such that the entire network maintains an L2,4-norm of 1. In this setup, every data point lies exactly on the margin, meaning q = unif(Zp) uniformly covers points on the margin, thus meeting the criteria for q\u2217as outlined in Definition 3.5. Furthermore, for any input (a1, a2, a3), the function f yields an identical result across all incorrect labels c\u2032, adhering to Condition 3.8. F.3 Constructions for \u03b8\u2217for General k Version Lemma F.3 (Formal version of Lemma 4.3). Provided the following conditions are met \u2022 We denote B as the ball that \u2225u1\u22252 + \u00b7 \u00b7 \u00b7 + \u2225uk\u22252 + \u2225w\u22252 \u22641. \u2022 We define \u2126 \u2032\u2217 q in Definition E.2. \u2022 We adopt the uniform class weighting: \u2200c\u2032 \u0338= a1 + \u00b7 \u00b7 \u00b7 + ak, \u03c4(a1, . . . , ak)[c\u2032] := 1/(p \u22121). 
\u2022 Let cos\u03b6(x) denote cos(2\u03c0\u03b6x/p) \u2022 Let sin\u03b6(x) denote sin(2\u03c0\u03b6x/p) Then, we have \u2022 The maximum L2,k+1-margin solution \u03b8\u2217will consist of 22k\u22121 \u00b7 p\u22121 2 neurons \u03b8\u2217 i \u2208\u2126 \u2032\u2217 q to simulate p\u22121 2 type of cosine computation, each cosine computation is uniquely determined a \u03b6 \u2208{1, . . . , p\u22121 2 }. In particular, for each \u03b6 the cosine computation is cos\u03b6(a1 + \u00b7 \u00b7 \u00b7 + ak \u2212 c), \u2200a1, . . . , ak, c \u2208Zp. Proof. By Lemma E.4, we can get elements of \u2126 \u2032\u2217 q . Our set \u03b8\u2217will be composed of 22k\u22121 \u00b7 p\u22121 2 neurons, including 22k\u22121 neurons dedicated to each frequency in the range 1, . . . , p\u22121 2 . Focusing on a specific frequency \u03b6, for the sake of simplicity, let us use cos\u03b6(x) to represent cos(2\u03c0\u03b6x/p) and sin\u03b6(x) likewise. We define a[k] := k X i=1 ak 30 \fand we also define ak+1 := \u2212c. For easy of writing, we will write cos\u03b6 as cos and sin\u03b6 as sin. We have the following. cos\u03b6( k X i=1 ai \u2212c) = cos( k X i=1 ai \u2212c) = cos(a[k+1]) = cos(a[k]) cos(ak+1) \u2212sin(a[k]) sin(ak+1) = cos(a[k\u22121] + ak) cos(ak+1) \u2212sin(a[k\u22121] + ak) sin(ak+1) = cos(a[k\u22121]) cos(ak) cos(ak+1) \u2212sin(a[k\u22121]) sin(ak) cos(ak+1) \u2212sin(a[k\u22121]) cos(ak) sin(ak+1) \u2212cos(a[k\u22121]) sin(ak) sin(ak+1) = X b\u2208{0,1}k+1 k+1 Y i=1 cos1\u2212bi(ai) \u00b7 sinbi(ai) \u00b7 1[ k+1 X i=1 bi%2 = 0] \u00b7 (\u22121)1[Pk+1 i=1 bi%4=2], (24) where the first step comes from the simplicity of writing, the second step comes from the definition of a[k+1] and ak+1, the third step comes from the trigonometric function, the fourth step also follows trigonometric function, and the last step comes from the below two observations: \u2022 First, we observe that cos(a+b) = cos(a) cos(b)\u2212sin(a) sin(b) and sin(a+b) = sin(a) cos(b)+ cos(a) sin(b). When we split cos once, we will remove one cos product and we may add zero or two sin products. When we split sin once, we may remove one sin product and we will add one sin product as well. Thus, we can observe that the number of sin products in each term is always even. \u2022 Second, we observe only when we split cos and add two sin products will introduce a \u22121 is this term. Thus, when the number of sin products %4 = 2, the sign of this term will be \u22121. Otherwise, it will be +1. Note that we have 2k non-zero term in Eq. (24). Each of these 2k terms can be implemented by 2k\u22121 neurons \u03d51, \u00b7 \u00b7 \u00b7 , \u03d52k\u22121. For the i-th neuron, we have \u03d5i = ( k X j=1 ui,j(aj))k \u00b7 wi(c). By changing (\u03b8i,j)\u2217, we can change the ui,j(aj) from cos\u03b6(\u00b7) to be \u2212cos\u03b6(\u00b7) or sin\u03b6(\u00b7) or \u2212sin\u03b6(\u00b7). Denote \u03b8\u2217 ui as (\u03b8i,ai)\u2217. For simplicity, let di denote the i-th product in one term of Eq. (24). By fact that 2k \u00b7 k! \u00b7 k Y i=1 di = X c\u2208{\u22121,+1}k (\u22121)(k\u2212Pk i=1 ci)/2( k X j=1 cjdj)k, 31 \feach term can be constructed by 2k\u22121 neurons (note that there is a symmetric effect so we only need half terms). Based on Eq. (24) and the above fact with carefully check, we can see that \u03b8\u2217 u1 + \u00b7 \u00b7 \u00b7 + \u03b8\u2217 uk = \u03b8\u2217 w. Thus, we need 2k \u00b7 2k\u22121 \u00b7 p\u22121 2 neurons in total. 
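The combinatorial identity of Lemma F.1 that underpins this neuron count is easy to verify numerically. The check below is illustrative only; it tests Part 3 for a few small k with random real inputs.

```python
# Illustrative check of Lemma F.1 (Part 3):
#   2^k * k! * prod_i a_i = sum_{c in {-1,+1}^k} (-1)^{(k - sum_i c_i)/2} * (sum_j c_j a_j)^k.
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)
for k in range(2, 6):
    a = rng.standard_normal(k)
    lhs = 2**k * math.factorial(k) * np.prod(a)
    rhs = sum(
        (-1) ** ((k - sum(c)) // 2) * np.dot(c, a) ** k
        for c in itertools.product((-1, 1), repeat=k)
    )
    assert np.isclose(lhs, rhs)
print("Sum-to-product identity verified for k = 2, ..., 5.")
```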
When we include such 2k \u00b72k\u22121 neurons for all frequencies \u03b6 \u2208{1, . . . , p\u22121 2 }, we have the network will calculate the following function f(a1, . . . , ak, c) = (p\u22121)/2 X \u03b6=1 cos\u03b6( k X i=1 ai \u2212c) = p\u22121 X \u03b6=1 1 2 \u00b7 exp(2\u03c0i\u03b6( k X i=1 ai \u2212c)/p) = ( p\u22121 2 if Pk i=1 ai = c 0 otherwise where the first step comes from the definition of f(a1, . . . , ak, c), the second step comes from Euler\u2019s formula, the last step comes from the properties of discrete Fourier transform. The scaling parameter \u03b2 for each neuron can be adjusted to ensure that the network possesses an L2,k+1-norm of 1. For this network, all data points are positioned on the margin, which implies that q = unif(Zp) naturally supports points along the margin, aligning with the requirements for q\u2217presented in Definition 3.5. Additionally, for every input (a1, . . . , ak), the function f assigns the same outcome to all incorrect labels c\u2032, thereby fulfilling Condition 3.8. G Check Fourier Frequencies Section G.1 proves all frequencies are used. Section G.2 proves all frequencies are used for general k version. G.1 All Frequencies are Used Let f : Z4 p \u2192C. Its multi-dimensional discrete Fourier transform is defined as: b f(j1, j2, j3, j4) := X a1\u2208Zp e\u22122\u03c0i\u00b7j1a1/p( X a2\u2208Zp e\u22122\u03c0i\u00b7j2a2/p( X a3\u2208Zp e\u22122\u03c0i\u00b7j3a3/p( X c\u2208Zp e\u22122\u03c0i\u00b7j4c/pf(a1, a2, a3, c))). Lemma G.1. When k = 3, if the following conditions hold \u2022 We adopt the uniform class weighting: \u2200c\u2032 \u0338= a1 + a2 + a3, \u03c4(a1, a2, a3)[c\u2032] := 1/(p \u22121). \u2022 f is the maximum L2,4-margin solution. Then, for any j1 = j2 = j3 = \u2212j4 \u0338= 0, we have b f(j1, j2, j3, j4) > 0. Proof. In this proof, let j1, j2, j3, j4 \u2208Z, and \u03b8u = \u03b8\u2217 u \u00b7 p 2\u03c0 to simplify the notation. By Lemma E.4, u1(a1) = r 1 2p cosp(\u03b8u1 + \u03b6a1). (25) 32 \fLet f(a1, a2, a3, c) = H X h=1 \u03d5h(a1, a2, a3, c) = H X h=1 (uh,1(a1) + uh,2(a2) + uh,3(a3))3wh(c) = ( 1 2p)2 H X h=1 (cosp(\u03b8uh,1 + \u03b6ha1) + cosp(\u03b8uh,2 + \u03b6ha2) + cosp(\u03b8uh,3 + \u03b6ha3))3 cosp(\u03b8wh + \u03b6hc) where each neuron conforms to the previously established form, and the width H function is an arbitrary margin-maximizing network. The first step is from the definition of f(a1, a2, a3, c), the subsequent step on the definition of \u03d5h(a1, a2, a3, c), and the final step is justified by Eq. (25). We can divide each \u03d5 into ten terms: \u03d5(a1, a2, a3, c) = \u03d5(1)(a1, a2, a3, c) + \u00b7 \u00b7 \u00b7 + \u03d5(10)(a1, a2, a3, c) = \u0000u1(a1)3 + u2(a2)3 + u3(a3)3 + 3u1(a1)2u2(a2) + 3u1(a1)2u3(a3) + 3u2(a2)2u1(a1) + 3u2(a2)2u3(a3) + 3u3(a3)2u1(a1) + 3u3(a3)2u2(a2) + 6u1(a1)u2(a2)u3(a3) \u0001 w(c). Note, \u03c1 = e2\u03c0i/p. b \u03d51(j1, j2, j3, j4) is nonzero only for j1 = 0, and b \u03d54(j1, j2, j3, j4) is nonzero only for j1 = j2 = 0. Similar to other terms. For the tenth term, we have b \u03d510(j1, j2, j3, j4) = 6 X a1,a2,a3,c\u2208Zp u1(a1)u2(a2)u3(a3)w(c)\u03c1\u2212(j1a1+j2a2+j3a3+j4c) = 6b u1(j1)b u2(j2)b u3(j3) b w(j4). 
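Stepping back to the network assembled in Section F.3, the frequency-summed cosine selector it implements can also be checked numerically. In the illustrative sketch below, summing cos_ζ(a_1 + ... + a_k − c) over ζ = 1, . . . , (p − 1)/2 peaks at the correct label with value (p − 1)/2; the off-target values are a common constant (−1/2), so the correct label always wins by a gap of exactly p/2, which is what matters for the margin.

```python
# Illustrative check: f(a, c) = sum_{zeta=1}^{(p-1)/2} cos(2*pi*zeta*(a_1+...+a_k - c)/p)
# equals (p-1)/2 at the true label and a constant -1/2 at every other label.
import numpy as np

p, k = 13, 4
rng = np.random.default_rng(1)
a = rng.integers(0, p, size=k)
true_label = a.sum() % p

zetas = np.arange(1, (p - 1) // 2 + 1)
labels = np.arange(p)
scores = np.cos(2 * np.pi * np.outer(zetas, a.sum() - labels) / p).sum(axis=0)

assert scores.argmax() == true_label
assert np.isclose(scores[true_label], (p - 1) / 2)
assert np.allclose(np.delete(scores, true_label), -0.5)   # gap of p/2 to every wrong label
print(np.round(scores, 3))
```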
In particular, b u1(j1) = X a1\u2208Zp r 1 2p cosp(\u03b8u1 + \u03b6a1)\u03c1\u2212j1a1 = (8p)\u22121/2 X a1\u2208Zp (\u03c1\u03b8u1+\u03b6a1 + \u03c1\u2212(\u03b8u1+\u03b6a1))\u03c1\u2212j1a1 = (8p)\u22121/2(\u03c1\u03b8u1 X a1\u2208Zp \u03c1(\u03b6\u2212j1)a1 + \u03c1\u2212\u03b8u1 X a1\u2208Zp \u03c1\u2212(\u03b6+j1)a1) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 p p/8 \u00b7 \u03c1\u03b8u1 if j1 = +\u03b6 p p/8 \u00b7 \u03c1\u2212\u03b8u1 if j1 = \u2212\u03b6 0 otherwise where the first step comes from b u1(j1) definition, the second step comes from Euler\u2019s formula, the third step comes from simple algebra, the last step comes from the properties of discrete Fourier transform. Similarly for b u2, b u3 and b w. As we consider \u03b6 to be nonzero, we ignore the \u03b6 = 0 case. Hence, b \u03d510(j1, j2, j3, j4) is nonzero only when j1, j2, j3, j4 are all \u00b1\u03b6. We can summarize that b \u03d5(j1, j2, j3, j4) can only be nonzero if one of the following satisfies: 33 \f\u2022 j1 \u00b7 j2 \u00b7 j3 = 0 \u2022 j1, j2, j3, j4 = \u00b1\u03b6. Setting aside the previously discussed points, it\u2019s established in Lemma D.2 that the function f maintains a consistent margin for various inputs as well as over different classes, i.e., f can be broken down as f(a1, a2, a3, c) = f1(a1, a2, a3, c) + f2(a1, a2, a3, c) where f1(a1, a2, a3, c) = F(a1, a2, a3) for some F : Zp \u00d7 Zp \u00d7 Zp \u2192R, and f2(a1, a2, a3, c) = \u03bb \u00b7 1a1+a2+a3=c where \u03bb > 0 is the margin of f. Then, we have the DFT of f1 and f2 are b f1(j1, j2, j3, j4) = ( b F(j1, j2, j3) if j4 = 0 0 otherwise and b f2(j1, j2, j3, j4) = ( \u03bbp3 if j1 = j2 = j3 = \u2212j4 0 otherwise . Hence, when j1 = j2 = j3 = \u2212j4 \u0338= 0, we must have b f(j1, j2, j3, j4) > 0. G.2 All Frequencies are Used for General k Version Let f : Zk+1 p \u2192C. Its multi-dimensional discrete Fourier transform is defined as: b f(j1, . . . , jk+1) := X a1\u2208Zp e\u22122\u03c0i\u00b7j1a1/p(. . . ( X ak\u2208Zp e\u22122\u03c0i\u00b7jkak/p( X c\u2208Zp e\u22122\u03c0i\u00b7jk+1c/pf(a1, . . . , ak, c))). Lemma G.2. If the following conditions hold \u2022 We adopt the uniform class weighting: \u2200c\u2032 \u0338= a1 + \u00b7 \u00b7 \u00b7 + ak, \u03c4(a1, . . . , ak)[c\u2032] := 1/(p \u22121). \u2022 f is the maximum L2,k+1-margin solution. Then, for any j1 = \u00b7 \u00b7 \u00b7 = jk = \u2212jk+1 \u0338= 0, we have b f(j1, . . . , jk+1) > 0. 34 \fProof. For this proof, for all j1, . . . , jk+1 \u2208Z, to simplify the notation, let \u03b8u = \u03b8\u2217 u \u00b7 p 2\u03c0, by Lemma E.8, so u1(a1) = s 2 (k + 1)p cosp(\u03b8u1 + \u03b6a1). (26) Let f(a1, . . . , ak, c) = H X h=1 \u03d5h(a1, . . . , ak, c) = H X h=1 (uh,1(a1) + \u00b7 \u00b7 \u00b7 + uh,k(ak))kwh(c) = ( 2 (k + 1)p)(k+1)/2 H X h=1 (cosp(\u03b8uh,1 + \u03b6ha1) + \u00b7 \u00b7 \u00b7 + cosp(\u03b8uh,k + \u03b6hak))k cosp(\u03b8wh + \u03b6hc) where each neuron conforms to the previously established form, and the width H function is an arbitrary margin-maximizing network. The first step is based on the definition of f(a1, . . . , ak, c), the subsequent step on the definition of \u03d5h(a1, . . . , ak, c), and the final step is justified by Eq. (26). Each neuron \u03d5 we have b \u03d5(j1, . . . , jk, jk+1) = k! X a1,...,ak,c\u2208Zp w(c)\u03c1\u2212(j1a1+\u00b7\u00b7\u00b7+jkak+jk+1c) k Y i=1 ui(ai) = k! b w(jk+1) k Y i=1 b ui(ji). 
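The single-frequency support of û1 computed in the k = 3 case above (and re-derived next for general k) is easy to confirm with an FFT. The sketch below is illustrative only: it checks that the transform of √(1/(2p)) · cos_p(θ + ζa) is supported exactly on j = ±ζ with magnitude √(p/8).

```python
# Illustrative check: the DFT of a single cosine neuron is supported only on +/- zeta.
import numpy as np

p, zeta, theta = 31, 5, 0.9
a = np.arange(p)
u1 = np.sqrt(1.0 / (2 * p)) * np.cos(theta + 2 * np.pi * zeta * a / p)

U1 = np.fft.fft(u1)                     # hat{u}_1(j) = sum_a u_1(a) * exp(-2*pi*i*j*a/p)
mags = np.abs(U1)
support = np.flatnonzero(mags > 1e-9)

assert set(support) == {zeta, p - zeta}             # only j = +zeta and j = -zeta (mod p)
assert np.allclose(mags[support], np.sqrt(p / 8))   # |hat{u}_1(+/- zeta)| = sqrt(p/8)
print(support, mags[zeta], np.sqrt(p / 8))
```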
In particular, b u1(j1) = X a1\u2208Zp s 2 (k + 1)p cosp(\u03b8u1 + \u03b6a1)\u03c1\u2212j1a1 = s 1 2(k + 1)p X a1\u2208Zp (\u03c1\u03b8u1+\u03b6a1 + \u03c1\u2212(\u03b8u1+\u03b6a1))\u03c1\u2212j1a1 = s 1 2(k + 1)p(\u03c1\u03b8u1 X a1\u2208Zp \u03c1(\u03b6\u2212j1)a1 + \u03c1\u2212\u03b8u1 X a1\u2208Zp \u03c1\u2212(\u03b6+j1)a1) = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 q p 2(k+1) \u00b7 \u03c1\u03b8u1 if j1 = +\u03b6 q p 2(k+1) \u00b7 \u03c1\u2212\u03b8u1 if j1 = \u2212\u03b6 0 otherwise, where the first step comes from b u1(j1) definition, the second step comes from Euler\u2019s formula, the third step comes from simple algebra, the last step comes from the properties of discrete Fourier transform. Similarly for b ui and b w. We consider \u03b6 to be nonzero, so we ignore the \u03b6 = 0 case. Hence, b \u03d5(j1, . . . , jk, jk+1) is nonzero only when j1, . . . , jk, jk+1 are all \u00b1\u03b6. We can summarize that b \u03d5(j1, . . . , jk, jk+1) can only be nonzero if one of the below conditions satisfies: 35 \f\u2022 Qk i=1 ji = 0 \u2022 j1, . . . , jk, jk+1 = \u00b1\u03b6. Setting aside the previously discussed points, it\u2019s established in Lemma D.2 that the function f maintains a consistent margin for various inputs as well as over different classes, i.e., f can be broken down as f(a1, . . . , ak, c) = f1(a1, . . . , ak, c) + f2(a1, . . . , ak, c) where f1(a1, . . . , ak, c) = F(a1, . . . , ak) for some F : Zk p \u2192R, and f2(a1, . . . , ak, c) = \u03bb \u00b7 1a1+\u00b7\u00b7\u00b7+ak=c where \u03bb > 0 is the margin of f. Then, we have the DFT of f1 and f2 are b f1(j1, . . . , jk, jk+1) = ( b F(j1, . . . , jk) if jk+1 = 0 0 otherwise and b f2(j1, . . . , jk, jk+1) = ( \u03bbpk if j1 = \u00b7 \u00b7 \u00b7 = jk = \u2212jk+1 0 otherwise . Hence, when j1 = \u00b7 \u00b7 \u00b7 = jk = \u2212jk+1 \u0338= 0, we must have b f(j1, . . . , jk, jk+1) > 0. H Main Result Section H.1 proves the main result for k = 3. Section H.2 proves the general k version of our main result. H.1 Main result for k = 3 Theorem H.1. When k = 3, let f(\u03b8, x) be the one-hidden layer networks defined in Section 3. If the following conditions hold \u2022 We adopt the uniform class weighting: \u2200c\u2032 \u0338= a1 + a2 + a3, \u03c4(a1, a2, a3)[c\u2032] := 1/(p \u22121). \u2022 m \u226516(p \u22121) neurons. Then we have the maximum L2,4-margin network satisfying: 36 \f\u2022 The maximum L2,4-margin for a given dataset Dp is: \u03b3\u2217= 3 16 \u00b7 1 p(p \u22121). \u2022 For each neuron \u03d5({u1, u2, u3, w}; a1, a2, a3), there is a constant scalar \u03b2 \u2208R and a frequency \u03b6 \u2208{1, . . . , p\u22121 2 } satisfying u1(a1) = \u03b2 \u00b7 cos(\u03b8\u2217 u1 + 2\u03c0\u03b6a1/p) u2(a2) = \u03b2 \u00b7 cos(\u03b8\u2217 u2 + 2\u03c0\u03b6a2/p) u3(a3) = \u03b2 \u00b7 cos(\u03b8\u2217 u3 + 2\u03c0\u03b6a3/p) w(c) = \u03b2 \u00b7 cos(\u03b8\u2217 w + 2\u03c0\u03b6c/p) where \u03b8\u2217 u1, \u03b8\u2217 u2, \u03b8\u2217 u3, \u03b8\u2217 w \u2208R are some phase offsets satisfying \u03b8\u2217 u1 + \u03b8\u2217 u2 + \u03b8\u2217 u3 = \u03b8\u2217 w. \u2022 For each frequency \u03b6 \u2208{1, . . . , p\u22121 2 }, there exists one neuron using this frequency only. Proof. By Lemma E.4, we get the single neuron class-weighted margin solution set \u2126 \u2032\u2217 q satisfying Condition 3.8 and \u03b3\u2217. By Lemma F.2 and Lemma D.1, we can construct network \u03b8\u2217which uses neurons in \u2126 \u2032\u2217 q and satisfies Condition 3.8 and Definition 3.5 with respect to q = unif(Zp). 
By Lemma D.2, we know it is the maximum-margin solution. By Lemma G.1, when j1 = j2 = j3 = \u2212j4 \u0338= 0, we must have b f(j1, j2, j3, j4) > 0. However, as discrete Fourier transform b \u03d5 of each neuron is nonzero, for each frequency, we must have that there exists one neuron using it. H.2 Main Result for General k Version Theorem H.2 (Formal version of Theorem 4.1). Let f(\u03b8, x) be the one-hidden layer networks defined in Section 3. If the following conditions hold \u2022 We adopt the uniform class weighting: \u2200c\u2032 \u0338= a1 + \u00b7 \u00b7 \u00b7 + ak, \u03c4(a1, . . . , ak)[c\u2032] := 1/(p \u22121). \u2022 m \u226522k\u22121 \u00b7 p\u22121 2 neurons. Then we have the maximum L2,k+1-margin network satisfying: \u2022 The maximum L2,k+1-margin for a given dataset Dp is: \u03b3\u2217= 2(k!) (2k + 2)(k+1)/2(p \u22121)p(k\u22121)/2 . \u2022 For each neuron \u03d5({u1, . . . , uk, w}; a1, . . . , ak) there is a constant scalar \u03b2 \u2208R and a frequency \u03b6 \u2208{1, . . . , p\u22121 2 } satisfying u1(a1) = \u03b2 \u00b7 cos(\u03b8\u2217 u1 + 2\u03c0\u03b6a1/p) . . . uk(ak) = \u03b2 \u00b7 cos(\u03b8\u2217 uk + 2\u03c0\u03b6ak/p) w(c) = \u03b2 \u00b7 cos(\u03b8\u2217 w + 2\u03c0\u03b6c/p) where \u03b8\u2217 u1, . . . , \u03b8\u2217 uk, \u03b8\u2217 w \u2208R are some phase offsets satisfying \u03b8\u2217 u1 + \u00b7 \u00b7 \u00b7 + \u03b8\u2217 uk = \u03b8\u2217 w. 37 \f\u2022 For every frequency \u03b6 \u2208{1, . . . , p\u22121 2 }, there exists one neuron using this frequency only. Proof. Follow the same proof sketch as Theorem H.1 by Lemma E.8, Condition 3.8, Lemma F.3, Lemma D.1, Definition 3.5, Lemma D.2, Lemma G.2. I More Empirical Details and Results I.1 Implement Details Licenses for Existing Assets & Open Access to Data and Code. Our code is based on a brilliant open source repository, https://github.com/Sea-Snell/grokking, which requires MIT License. We provide all of our codes in the supplemental material, including dataset generation code. We do not require open data access as we run experiments on synthetic datasets, i.e., modular addition. Experimental Result Reproducibility. We provide all of our codes in the supplemental material with a clear README file and clear configuration files for our experiments reproducibility. Experimental Setting/Details & Experiment Statistical Significance. The detailed configuration can be found in supplemental material. We make a copy version here for convenience. For two-layer neural network training, we have the following details: \u2022 number of data loader workers: 4 \u2022 batch size: 1024 \u2022 learning rate: 5 \u00d7 10\u22123 \u2022 regularization strength \u03bb: 0.005 \u2022 AdamW hyper-parameter (\u03b21, \u03b22): (0.9, 0.98) \u2022 warm-up steps: 10 For one-layer Transformer training, we have the following details: \u2022 number of data loader workers: 4 \u2022 batch size: 1024 \u2022 learning rate: 1 \u00d7 10\u22123 \u2022 regularization strength \u03bb: 0.001 \u2022 AdamW hyper-parameter (\u03b21, \u03b22): (0.9, 0.98) \u2022 warm-up steps: 10 All results we ran 3 times with different random seeds. In Figure 4, we reported the mean and variance range. Experiments Compute Resources. All experiments is conducted on single A100 40G NVIDIA GPU. All experiments can be finished in at most three days. 38 \fI.2 One-hidden Layer Neural Network In Figure 5 and Figure 6, we use SGD to train a two-layer network with m = 1536 = 22k\u22122 \u00b7 (p \u22121) neurons, i.e., Eq. 
(2), on k = 3-sum mod-p = 97 addition dataset, i.e., Eq. (1). In Figure 7 and Figure 8, we use SGD to train a two-layer network with m = 5632 = 22k\u22122 \u00b7 (p \u22121) neurons, i.e., Eq. (2), on k = 5-sum mod-p = 23 addition dataset, i.e., Eq. (1). Figure 5 and Figure 7 show that the networks trained with stochastic gradient descent have single-frequency hidden neurons, which support our analysis in Lemma 4.2. Furthermore, Figure 6 and Figure 8 demonstrate that the network will learn all frequencies in the Fourier spectrum which is consistent with our analysis in Lemma 4.3. Together, they verify our main results in Theorem 4.1 and show that the network trained by SGD prefers to learn Fourier-based circuits. I.3 One-layer Transformer In Figure 9 , we train a one-layer transformer with m = 160 heads attention, on k = 3-sum modp = 61 addition dataset, i.e., Eq. (1). In Figure 10 , we train a one-layer transformer with m = 160 heads attention, on k = 5-sum mod-p = 17 addition dataset, i.e., Eq. (1). Figure 9 and Figure 10 show that the one-layer transformer trained with stochastic gradient descent learns 2-dim cosine shape attention matrices, which is similar to one-hidden layer neural networks in Figure 5 and Figure 7. This means that the attention layer has a similar learning mechanism to neural networks in the modular arithmetic task, where it prefers to learn Fourierbased circuits when trained by SGD. 39 \f0 20 40 60 80 100 0.5 0.0 0.5 0 10 20 30 40 50 0 500 1000 0 20 40 60 80 100 0.5 0.0 0.5 0 10 20 30 40 50 0 500 1000 0 20 40 60 80 100 0.5 0.0 0.5 0 10 20 30 40 50 0 500 0 20 40 60 80 100 0.5 0.0 0.5 0 10 20 30 40 50 0 500 1000 0 20 40 60 80 100 0.5 0.0 0.5 0 10 20 30 40 50 0 500 0 20 40 60 80 100 0.5 0.0 0.5 0 10 20 30 40 50 0 500 1000 0 20 40 60 80 100 0.5 0.0 0.5 0 10 20 30 40 50 0 500 1000 0 20 40 60 80 100 0.5 0.0 0.5 0 10 20 30 40 50 0 500 1000 Figure 5: Cosine shape of the trained embeddings (hidden layer weights) and corresponding power of Fourier spectrum. The two-layer network with m = 1536 neurons is trained on k = 3-sum modp = 97 addition dataset. We even split the whole datasets (pk = 973 data points) into the training and test datasets. Every row represents a random neuron from the network. The left figure shows the final trained embeddings, with red dots indicating the true weight values, and the pale blue interpolation is achieved by identifying the function that shares the same Fourier spectrum. The right figure shows their Fourier power spectrum. The results in these figures are consistent with our analysis statements in Lemma 4.2. 40 \f0 10 20 30 40 50 Frequency 0 10 20 30 40 Number of neurons All frequency convecered (p=97, k=3, m=1536) 0.0 0.2 0.4 0.6 0.8 1.0 Max normalized power 0.0% 10.0% 20.0% 30.0% 40.0% Number of neurons Initial distribution (p=97, k=3, m=1536) 0.0 0.2 0.4 0.6 0.8 1.0 Frequency 0% 20% 40% 60% Number of neurons Max margine distribution (p=97, k=3, m=1536) Figure 6: All Fourier spectrum frequencies being covered and the maximum normalized power of the embeddings (hidden layer weights). The one-hidden layer network with m = 1536 neurons is trained on k = 3-sum mod-p = 97 addition dataset. We denote b u[i] as the Fourier transform of u[i]. Let maxi |b u[i]|2/(P |b u[j]|2) be the maximum normalized power. Mapping each neuron to its maximum normalized power frequency, (a) shows the final frequency distribution of the embeddings. Similar to our construction analysis in Lemma 4.3, we have an almost uniform distribution over all frequencies. 
(b) shows the maximum normalized power of the neural network with random initialization. (c) shows that, in frequency space, the embeddings of the final trained network are one-sparse, i.e., the maximum normalized power is almost 1 for all neurons. This is consistent with our maximum-margin analysis results in Lemma 4.3. 41 \f[Figure 7 panels: per-neuron trained weights (left) and their Fourier power spectra (right); axis ticks omitted.] Figure 7: Cosine shape of the trained embeddings (hidden layer weights) and the corresponding power of the Fourier spectrum. The two-layer network with m = 5632 neurons is trained on the k = 5-sum mod-p = 23 addition dataset. We evenly split the whole dataset (p^k = 23^5 data points) into the training and test datasets. Every row represents a random neuron from the network. The left figure shows the final trained embeddings, with red dots indicating the true weight values, and the pale blue interpolation is achieved by identifying the function that shares the same Fourier spectrum. The right figure shows their Fourier power spectrum. The results in these figures are consistent with our analysis statements in Lemma 4.2. 42 \f[Figure 8 panels: (a) all frequencies covered, (b) initial distribution, (c) max-margin distribution (p=23, k=5, m=5632); axis ticks omitted.] Figure 8: All Fourier spectrum frequencies being covered and the maximum normalized power of the embeddings (hidden layer weights). The one-hidden layer network with m = 5632 neurons is trained on the k = 5-sum mod-p = 23 addition dataset. We denote b u[i] as the Fourier transform of u[i]. Let max_i |b u[i]|^2 / (\u2211_j |b u[j]|^2) be the maximum normalized power. Mapping each neuron to its maximum normalized power frequency, (a) shows the final frequency distribution of the embeddings. Similar to our construction analysis in Lemma 4.3, we have an almost uniform distribution over all frequencies. (b) shows the maximum normalized power of the neural network with random initialization. (c) shows that, in frequency space, the embeddings of the final trained network are one-sparse, i.e., the maximum normalized power is almost 1 for all neurons. This is consistent with our maximum-margin analysis results in Lemma 4.3.
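To make the maximum normalized power statistic used in Figures 6 and 8 concrete, the following is a minimal NumPy sketch (our own illustrative code, not the authors' released implementation); it assumes the power is summed over the nonzero frequencies 1, . . . , (p\u22121)/2 only, and the variable names are placeholders.

import numpy as np

def max_normalized_power(u):
    """Return the dominant frequency of a length-p weight vector u and its
    normalized Fourier power max_i |u_hat[i]|^2 / sum_j |u_hat[j]|^2,
    restricted to the nonzero frequencies 1, ..., (p-1)/2."""
    p = len(u)
    u_hat = np.fft.rfft(u)                         # coefficients for frequencies 0..(p-1)/2
    power = np.abs(u_hat[1:(p - 1) // 2 + 1]) ** 2 # drop the DC component
    freq = int(np.argmax(power)) + 1               # frequency index in {1, ..., (p-1)/2}
    return freq, float(power.max() / power.sum())

# Example: a single-frequency neuron has maximum normalized power close to 1.
p, zeta = 97, 5
u = 0.3 * np.cos(0.7 + 2 * np.pi * zeta * np.arange(p) / p)
print(max_normalized_power(u))                     # -> (5, ~1.0)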
43 \f[Figure 9 panels: per-head trained attention weights W^{KQ} (left) and their 2-dim Fourier power spectra (right); axis ticks omitted.] Figure 9: 2-dimensional cosine shape of the trained W^{KQ} (attention weights) and their Fourier power spectrum. The one-layer transformer with m = 160 attention heads is trained on the k = 3-sum mod-p = 61 addition dataset. We evenly split the whole dataset (p^k = 61^3 data points) into training and test datasets. Every row represents a random attention head from the transformer. The left figure shows the final trained attention weights being an apparent 2-dim cosine shape. The right figure shows their 2-dim Fourier power spectrum. The results in these figures are consistent with Figure 5. 44 \f[Figure 10 panels: per-head trained attention weights W^{KQ} (left) and their 2-dim Fourier power spectra (right); axis ticks omitted.] Figure 10: 2-dimensional cosine shape of the trained W^{KQ} (attention weights) and their Fourier power spectrum. The one-layer transformer with m = 160 attention heads is trained on the k = 5-sum mod-p = 17 addition dataset. We evenly split the whole dataset (p^k = 17^5 data points) into training and test datasets. Every row represents a random attention head from the transformer. The left figure shows the final trained attention weights being an apparent 2-dim cosine shape. The right figure shows their 2-dim Fourier power spectrum. The results in these figures are consistent with Figure 7. 45" + }, + { + "url": "http://arxiv.org/abs/2204.10939v2", + "title": "Unified Pretraining Framework for Document Understanding", + "abstract": "Document intelligence automates the extraction of information from documents\nand supports many business applications. Recent self-supervised learning\nmethods on large-scale unlabeled document datasets have opened up promising\ndirections towards reducing annotation efforts by training models with\nself-supervised objectives. However, most of the existing document pretraining\nmethods are still language-dominated. We present UDoc, a new unified\npretraining framework for document understanding.
UDoc is designed to support\nmost document understanding tasks, extending the Transformer to take multimodal\nembeddings as input. Each input element is composed of words and visual\nfeatures from a semantic region of the input document image. An important\nfeature of UDoc is that it learns a generic representation by making use of\nthree self-supervised losses, encouraging the representation to model\nsentences, learn similarities, and align modalities. Extensive empirical\nanalysis demonstrates that the pretraining procedure learns better joint\nrepresentations and leads to improvements in downstream tasks.", + "authors": "Jiuxiang Gu, Jason Kuen, Vlad I. Morariu, Handong Zhao, Nikolaos Barmpalios, Rajiv Jain, Ani Nenkova, Tong Sun", + "published": "2022-04-22", + "updated": "2022-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CV" + ], + "main_content": "Introduction Document intelligence is a broad research area that includes techniques for information extraction and understanding. Unlike plain-text documents in natural language processing (NLP) [1, 2], a physical document can be composed of multiple elements: tables, \ufb01gures, charts, etc. In addition, a document usually includes rich visual information, and can be one of various types of documents (scienti\ufb01c paper, form, resume, etc.), with various combinations of multiple elements and layouts. Complex content and layout, noisy data, font and style variations make automatic document understanding very challenging. For example, to understand text-rich documents such as letters, a system needs to focus almost exclusively on text content, paying attention to a long sequential context, while processing semi-structured documents such as forms requires the system to analyze spatially distributed short words, paying particular attention to the spatial arrangement of the words. Following the success of BERT [3] on NLP tasks, there has been growing interest in developing pretraining methods for document understanding [4, 5, 6]. Pretrained models have achieved state-of-the-art (SoTA) performance across diverse document understanding tasks [7, 8]. Huge training datasets help pretraining models to learn a good representation for downstream tasks. However, we observe three major problems with the current pretraining setup: (1) documents are composed of semantic regions. Most of the recent document pretraining works follow BERT and split documents into words. However, unlike the sequence-to-sequence learning in NLP, documents have a hierarchical structure (words form sentences, sentences form a semantic region, and semantic regions form a document). Also, the importance of words and sentences are highly context-dependent, i.e., the same word or sentence may have different importance in a different context. Moreover, current transformer-based document pretraining models suffer from input length constraints. Also, input 35th Conference on Neural Information Processing Systems (NeurIPS 2021). arXiv:2204.10939v2 [cs.CL] 28 Apr 2022 \flength becomes a problem for text-rich documents or multi-page documents. (2) documents are more than words. The semantic structure of the document is not only determined by the text within it but also the visual features such as table, font size and style, and \ufb01gure, etc. Moreover, the visual appearance of the text within a block are often overlooked. 
Most of recent BERT-based pretraining works only take the words as input without considering multimodal content and alignment of multimodal information within semantic regions. (3) documents have spatial layout. Visual and layout information is critical for document understanding. Recent works encode spatial information via 2D position encoding and model spatial relationships with self-attention, which computes attention weights for long inputs [4, 5]. However, for semi-structured documents, such as forms and receipts, words are more related to their local surroundings. This corresponds strongly with human intuition \u2013 when we look at magazines or newspapers, the receptive \ufb01elds are modulated by our reading order and attention. Based on the above observations, we ask the following question: Can uni\ufb01ed document pretraining bene\ufb01t all of these different kinds of documents? We propose a uni\ufb01ed pretraining framework for document understanding, shown in Fig. 1. Our model integrates image information in the pretraining stage by taking advantage of the transformer architecture to learn cross-modal interactions between visual and textual information. To handle textual information, we encode sentences using a hierarchical transformer encoder. The \ufb01rst level of the hierarchical encoder models the formation of the sentences from words. The second level models the formation of the document from sentences. With the help of the hierarchical structure, UDoc learns how words form sentences and how sentences form documents. Meanwhile, it reduces model computation complexity exponentially and increases the number of input words. This also mimics human reading behaviors since the sentence/paragraph is a reasonable unit for people to read and understand\u2014people rarely check the interactions between arbitrary words across different regions in order to understand an article. Convolution has been very successful in the extraction of local features that encode visual and spatial information [9], so we use convolution layers as a more ef\ufb01cient complement to self-attention for addressing local intra-region dependencies in a document image. Meanwhile, self-attention uses all input tokens to generate attention weights for capturing global dependencies. Thus, we combine convolution with self-attention to form a mixed attention mechanism that combines the advantages of the two operations. We depart from previous vision-language pretraining [10, 11] by extracting both the textual and visual features for each semantic region. We propose a novel gated cross-attentional transformer that enables information exchange between modalities. A visually-rich region (\ufb01gure, chart, etc) may have stronger visual information than textual information. Instead of treating outputs from both modalities identically, we design a gating mechanism that can dynamically control the in\ufb02uence of textual and visual features. This approach enables cross-modal connections and allows for variable highlight the relevant information in visual and textual modality and enables cross-modal connections. During pretraining, the CNN-based visual backbone and multi-layer gated cross-attention encoder are jointly trained in both pretraining and \ufb01ne-tuning phases. Our contributions are summarized as follows: (1) We introduce UDoc, a powerful pretraining framework for document understanding. 
UDoc is capable of learning contextual textual and visual information and cross-modal correlations within a single framework, which leads to better performance. (2) We present Masked Sentence Modeling for language modeling, Visual Contrastive Learning for vision modeling, and Vision-Language Alignment for pretraining. (3) We present extensive experiments and analyses to validate the effectiveness of the proposed UDoc. Extensive experiments and analysis provide useful insights on the effectiveness of the pretraining tasks and show outstanding performance on various downstream tasks. 2 Related Work Self-supervised learning has shown great success in producing generic representations that learn from large-scale unlabeled corpora [3]. Like the development of pretraining in computer vision [12] and NLP [3], there has been a surging interest in self-supervised learning for Vision-Language (VL) tasks [10, 13, 14, 11]. Transformers [3] are the key technology that enables learning contextualized representations from large-scale unlabeled training data. The unique characteristics of document images (spatial layout and multiple elements) distinguish document image pretraining from pretraining works in NLP and VL domains. In the NLP domain, the inputs are pure texts without spatial layouts (bounding boxes). In the VL domain, the inputs are the visual objects and captions. While for 2 \fPretraining Tas k: Visual Contrastive Learning Pretraining Tas k: Vision-Language Alignment .... OCR [CLS] Img Feat Locations + Words + RoI Features Sentence 1 RoI Feat 1 Sentence 2 RoI Feat 2 Sentence 3 RoI Feat 3 [SEP] Img Feat Cross-Attention Cross-Attention Feature Extraction Gated Cross-Attention Feature Embedding Pretraining Tasks ... VLA Quantization Quantization Quantization Negative Positive ... VLA Pretraining Tas k: Masked Sentence Modeling MSM VLA Unlabeled Document Images MASK MASK ... Figure 1: Overview of the proposed approach, UDoc. UDoc \ufb01rst uses a CNN-based visual backbone to learn visual representations. The model then extracts the Region of Interest (RoI) features with OCR bounding boxes and generates a multimodal embedding by combining the textual embedding and position encoding. The transformer-based encoder takes a set of masked multimodal embeddings as input and is pretrained with three pretraining tasks. All the network parameters except those of the textual encoder are jointly trained during both pretraining and \ufb01ne-tuning phases. document images, the input elements are spatially distributed, and the visual and textual information co-occur within the semantic regions. Several recent works have explored pretraining on document images [4, 5, 15]. LayoutLM [4] extends BERT to learn contextualized word representations for document images through multi-task learning. It takes a sequence of OCR words as input during pretraining and incorporates the 2D position embedding as input for each token. However, LayoutLM only considers textual information during pretraining without modeling the alignment between visual and textual information\u2013visual information is only incorporated into the model during the \ufb01ne-tuning stage. The most recent version, LayoutLMv2 [5], improves on this by incorporating the image encoder into pretraining and jointly training the image encoder along with the BERT model. LayoutLMv2 splits the document image into several parts and concatenates the visual embeddings and text embeddings into a single sequence. 
Apart from masked language learning (MLM), LayoutLMv2 also considers image-text alignment and image-text matching during pretraining. The most related work to ours is SelfDoc [6], which proposes a multimodal document pretraining framework. It \ufb01rst extracts the document object proposals from pre-trained Faster R-CNN [16] and then applies OCR for each proposal to get the words. It takes the pre-extracted RoI features and sentence embeddings as input, and models the perform learning over the textual and visual information using the cross-modality encoder. There is a noticeable difference between our proposed method, UDoc, and other concurrent works in document image pretraining. UDoc is a multimodal end-to-end pretraining framework for document images. Unlike the \ufb01xed document object detector in [6], the parameters of the image encoder with RoI align, which derive the visual features for semantic regions, are also updated in UDoc. In contrast to [5], our visual features come from the semantic regions instead of splitting the image into \ufb01xed regions. Like the object-level semantic elements in natural images, for document images, we represent the typical document layout elements such as paragraph, title, \ufb01gure, and table as semantic regions. Moreover, to learn the contextualized visual representations, UDoc masks visual information in the latent space and learns contextualized representations by solving a contrastive learning task de\ufb01ned over a quantization of the latent visual embeddings. 3 Method 3.1 Model Architecture Fig. 1 illustrates our approach, UDoc, which consists of four components: feature extraction, feature embedding, multi-layer gated cross-attention encoder, and pretraining tasks. Given a document image and the locations of document elements (sentence or RoI), UDoc takes image regions and words that correspond to each document elements as inputs, and extracts their respective embeddings through a visual feature extractor and a sentence encoder. These embeddings are then fed into a transformer-based encoder to learn the cross-modal contextualized embeddings that integrate both visual features and textual features. 3 \fIn the feature extraction step, we \ufb01rst employ an off-the-shelf OCR tool [17] to extract text from a document image I, where the words are grouped into sentences S = {s1, . . . , sN} whose corresponding bounding boxes are P = {p1, . . . , pN}. For each sentence bounding box pi, we use a ConvNet-based backbone fImEnc and RoI Align [18] fRoIAlign to extract the pooled RoI features vi. To obtain a feature embedding, we extract the sentence embedding si for each sentence si via a pretrained sentence encoder fSentEnc. Each region\u2019s RoI feature vi is discretized into a \ufb01nite set of visual representations vQ i \u2208VQ via product quantization [19]. The multi-layer Gated Cross-Attention encoder takes the position information, masked visual features \u02dc V and masked textual features \u02dc S as inputs, and then it generates the contextualized multimodal representations (Hl V and Hl S, l \u2208[1, L]) and outputs the predicted features ( \u02c6 V and \u02c6 S), where L is the number of stacked transformer blocks. 
More formally, the pretraining procedure can be decomposed into the following steps: I OCR \u2212 \u2212 \u2192 \u0012P S \u0013 fImEnc+fRoIAlign \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 fSentEnc \u0012V, VQ S \u0013 \u2212 \u2212 \u2212 \u2192 fMask \u0012 \u02dc V \u02dc S \u0013 \u2212 \u2192 \u0012Hl V Hl S \u0013 \u2212 \u2192 \u0012 \u02c6 V \u02c6 S \u0013 \u2212 \u2192LPretraining (1) where fMask denotes the masking function that randomly masks RoI features and sentence embeddings with the respective probabilities pv Mask and ps Mask. LPretraining is composed of three pretraining tasks: Masked Sentence Modeling (MSM), Visual Contrastive Learning (VCL), and Vision-Language Alignment (VLA). Next, we provide details mentioned in Eq. 1. Feature Extraction and Embedding. Formally, a document image I \u2208RW \u00d7H consists of N regions, where each region\u2019s bounding box is characterized by a 6-d vector, as pi = { xLT W , yLT H , xRB W , yRB H , w W , h H }, where w and h are of the width and height the region, W and H are the width and height of I, while (xLT, yLT) and (xRB, yRB) denote the coordinates of the top-left and bottom-right corners respectively. The 6-d vector is mapped into a high-dimensional representation via a linear mapping function. The visual embedding is the sum of the mapped RoI feature and position embedding. Likewise, textual embedding is the sum of sentence embedding and position embedding. We also have different types of segments to distinguish different modalities. The input sequence to the transformer-based encoder starts with a special start element ([CLS] and full visual features), then it is followed by multimodal elements, and it ends with a special ending element ([SEP]+full visual features). For the special elements ([CLS] and [SEP]), the corresponding full visual features are features extracted from the whole input image, by applying fImEnc to an RoI covering the whole input image. Quantization Module. Unlike the \ufb01xed image encoder in [6], we jointly learn the image encoder in an end-to-end fashion alongside the multimodal model. A visual representation can be learned by predicting the visual features of the masked regions, but it is challenging to predict such features exactly, since they are unconstrained and of continuous representation. To constrain the representation space of the visual features and facilitate the end-to-end learning of image encoder (see Task #2 in Sec. 3.2), we follow [20, 21] and use vector quantization to discretize the visual features V = {v1, . . . , vN} into a \ufb01nite set of representations VQ = {vQ 1 , . . . , vQ N}. Speci\ufb01cally, we de\ufb01ne latent embedding spaces e \u2208RC\u00d7E, where C is the number of codebooks, and E is the number of entries for each codebook. For each vi, we \ufb01rst map it to logits v\u2113 i \u2208RC\u00d7E, and calculate the probability for the j-th codebook entry in i-th group as pc,e = exp((v\u2113 c,e + ge)/\u03c4)/ PE k=1 exp((v\u2113 c,k + gk)/\u03c4), where \u03c4 is a non-negative temperature, g1:E are i.i.d samples drawn from Gumbel(0,1) distribution. During the forward pass, we choose one entry vector from each codebook by \u02dc ei \u223cargmaxepc,e and generate the quantized representation vQ i by a concatenation of {\u02dc e1, . . . , \u02dc eG} which is then followed by a linear transformation. During the backward pass, the gradients are computed through a Gumbel-Softmax estimator [22]. Gated Cross-Attention. 
To model the interactions among multimodal inputs, we introduce a multimodal transformer with gated cross-attention to model the cross-modality relationships. Let Hl+1 m be output features at the l-th layer for one modality m, and let n be another modality (m, n \u2208 {V, S}). We obtain the features at (l + 1)-th layer as: Hl+1 m = fLN \u0010 fLN \u0000Hl m + f l Cross-Att(Hl m|Hl n) \u0001 + f l FF \u0000fLN(Hl m + f l Cross-Att(Hl m|Hl n)) \u0001\u0011 (2) 4 \fwhere fLN denotes layer normalization [23]. The feed-forward sub-layer fFF in Eq. 2 is further composed of two fully-connected sub-layers, both wrapped in residual adds and fLN. The core part of Eq. 2 is the cross-attention fCross-Att(\u00b7). Given the intermediate representations Hl m and Hl n, the cross-attention output for modality m is computed as: fCross-Att(Hl m|Hl n) = [Cross-Att1(Hl m|Hl n); . . . ; Cross-Atth(Hl m|Hl n)]U (3) Cross-Atti(Hl m|Hl n) = softmax \u0010 f i q(Hl m)f i k(Hl n)T / \u221a d \u0011 f i v(Hl n) (4) where f i q(Hl m), f i k(Hl n), and f i v(Hl n) are the query, key, and value calculated by linear mapping layers for the i-th head. d is the model dimension, h is the number of heads, and U is the weight matrix that combines the outputs of the heads. Considering the substantial diversity of document images and the different information needs of differing document types, we use a gating mechanism [24] to dynamically weight the outputs of the visual and textual branches. Speci\ufb01cally, we feed the concatenated the visual and textual features to a non-linear network fGate([Hl+1 m ; Hl+1 n ]), which generates the modality-speci\ufb01c attention weights \u03b1l m and \u03b1l n, and returns the weights separately to their respective modality-speci\ufb01c branches to perform element-wise products. We multiply the features for modality m with its modality-speci\ufb01c attention weight, and compute the updated feature as: Hl+1 m = Hl+1 m (1 + \u03b1l m), same that for modality n. 3.2 Training Tasks and Objectives The full pretraining objective of UDoc (right block in Fig. 1) is de\ufb01ned as: LPretraining = LMSM + LVCL + LVLA. In the rest of this section, we describe each task in detail. Task #1: Masked Sentence Modeling. This task is similar to the MLM task utilized in BERT. The key difference is that we mask sentences instead of tokens. During pretraining, each sentence and RoI of the input document is randomly and independently masked. For the masked sentence, its token is replaced with a special sentence of [MASK]. The model is trained to predict the masked sentence feature, based on the unmasked words and the visual features. The goal is to predict the masked sentence embeddings based on the contextual information from the surrounding sentences and image regions, by minimizing the smooth L1 loss [16]: LMSM(\u0398) = X i smoothL1(si \u2212fUDoc(si|s\\i, \u02dc V)) (5) where \u0398 is the trainable parameters and fUDoc(.) outputs the unmasked textual feature, s\\i is the surrounding features for the i-th input, \u02dc V are the image features with random masking. Task #2: Visual Contrastive Learning. We learn visual feature representations by solving a visual contrastive learning task which requires estimating the true quantized latent RoI representation. Given a prediction \u02c6 vi \u2208\u02c6 V for the masked RoI \u02dc vi \u2208\u02dc V, the model needs to estimate the positive quantized representation vQ i in a set of quantized candidate representations VQ. 
Good representations are learned by maximizing the agreement between output representation and quantized representation of the same RoIs as follows: LVCL(\u0398) = \u2212 X \u02dc vi\u2208\u02dc V \u0010 log exp(sim(\u02c6 vi, vQ i )/\u03ba) P vQ j exp(sim(\u02c6 vi, vQ j )/\u03ba) \u0011 + \u03bb 1 CE C X c=1 E X e=1 pc,e log pc,e (6) where sim(\u00b7, \u00b7) computes the cosine similarity between two vectors, \u03bb is a hyperparameter, and \u03ba is a temperature scalar. The second term encourages the model to use the codebook entries more equally. Task #3: Vision-Language Alignment. To enforce the alignment among different modalities, we explicitly encourage alignment between words and image regions via similarity-preserving knowledge distillation [25]. Note that, unlike the text-image alignment in LayoutLMv2 [5] which splits the image into four regions and predicts whether the given word is covered or not on the image side, we align the image and text belonging to the same region. The goal is to minimize the differences 5 \fbetween the pairwise similarities of sentence embeddings and the pairwise similarities of image region features: LVLA(\u0398) = 1 N \u00d7 N ||fNorm(S \u00b7 S\u22a4) \u2212fNorm(HL V \u00b7 HL\u22a4 V )||2 F (7) where S is the unmasked input sentence embeddings, HL V is the mapped visual representations of the \ufb01nal layer, || \u00b7 ||F is the Frobenius norm, and fNorm performs L2 normalization. 4 Experiment 4.1 Pretraining UDoc Pretraining corpus. We build our pretraining corpus based on IIT-CDIP Test Collection 1.0 [26], which contains more than 11M scanned document images. To differentiate pretraining from \ufb01netuning, we \ufb01lter out the document images of RVL-CDIP [8] from IIT-CDIP since it is a subset of IIT-CDIP, and sample 1M document images as our pretraining corpus. Table 1: Comparison of the datasets used for pretraining and \ufb01netuning process. \u2018Box\u2019, \u2018Label\u2019, and \u2018Text\u2019 indicate the availability of location, label and text annotations for document entities. \u2018Tag\u2019 denotes the document class label availability. Dataset Type Size Box Label Text Tag IIT-CDIP [26] Misc 11M \u0017 \u0017 \u0013 \u0017 RVL-CDIP [8] Misc 400K \u0017 \u0017 \u0017 \u0013 CORD [7] Receipt 1K \u0013 \u0013 \u0013 \u0017 FUNSD [27] Form 0.2K \u0013 \u0013 \u0013 \u0017 PubLayNet [28] Article 347K \u0013 \u0013 \u0017 \u0017 Table 1 shows the dataset statistics. IIT-CDIP only provides the OCR texts in XML format. We extract words and their locations by applying EasyOCR [17] on document images. As shown in Fig. 3 (a), EasyOCR provides two kinds of output modes: non-paragraph and paragraph. The paragraph mode groups the non-paragraph results into text regions. We think document image pretraining should be treated differently than sequence-based pretraining in NLP, since the words in the document (2D) are arranged according to spatial layouts, while the words in NLP corpora are sequential (1D). Considering the special characteristics of documents (complex layout, multi-pages) and the limited input length of BERT models, it is not intuitive to formulate the input at the word level. Hence, we adopt the paragraph-level outputs as the basic input elements since textual regions provide semantically more meaningful information than independent words. Figure 2: Distribution of words per region on RVLCDIP according to the categories. 
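As a concrete illustration of the vision-language alignment objective LVLA defined in Eq. 7 above, one possible PyTorch-style sketch is given below. This is our own code rather than the authors' implementation: we assume row-wise L2 normalization for fNorm and (N, D)-shaped sentence and visual features, and all names are placeholders.

import torch
import torch.nn.functional as F

def vla_loss(sent_emb, vis_feat):
    """Similarity-preserving alignment loss (Eq. 7).
    sent_emb: (N, D) unmasked sentence embeddings S.
    vis_feat: (N, D) mapped visual representations H^L_V of the final layer."""
    n = sent_emb.size(0)
    sim_s = F.normalize(sent_emb @ sent_emb.t(), p=2, dim=1)  # normalized S S^T
    sim_v = F.normalize(vis_feat @ vis_feat.t(), p=2, dim=1)  # normalized H H^T
    return ((sim_s - sim_v) ** 2).sum() / (n * n)             # squared Frobenius norm / N^2

# Shape check with random features only.
s = torch.randn(64, 768)
v = torch.randn(64, 768)
print(vla_loss(s, v))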
There are some advantages to our design: (1) the region-level design hierarchically encodes document elements and this facilitates the modeling of latent relationships at the region level which has higher-level semantics than the word level. (2) the hierarchical encoding also overcomes the input size limitation of word-level BERT-based models [4, 5]. Fig. 2 shows the distribution of words per region on RVL-CDIP. It can be seen that even though we consider regionlevel input, for some semi-structured documents, single-words dominate the inputs; this somehow forces UDoc to pay attention to word-level inputs. Unlike MLM that predicts the masked word, UDoc predicts the textual embedding of the masked input with MSM. Pretraining setting. We initialize the sentence encoder fSentEnc with BERT-NLI-STSb-base [29] pretrained for NLI [30] and STS-B [31]. The ResNet-50 backbone in the image encoder is pretrained on the PubLayNet training set [28]. All the parameters (except fSentEnc and fImEnc) are randomly initialized. During pretraining, we freeze the parameters of fSentEnc and jointly train the visual encoder and multi-modal UDoc model in an end-to-end fashion. Such an end-to-end training allows the ConvNet and Transformer to realize their full potentials in spatial and sequence modeling for pretraining. UDoc contains 12 layers of gated cross-attention transformer blocks. We set the hidden size to 768 and the number of heads to 12, the maximum number of regions N to 64, and the maximum input sequence length for fSentEnc to 512. The pretraining is conducted on 8 NVIDIA Tesla V100 32GB GPUs with a batch size of 64. It is trained with Adam optimizer [32], with an initial learning rate of 10\u22125, weight decay of 10\u22124, and learning rate warmup in the \ufb01rst 20% iterations. 6 \f(a) IIT-CDIP (b) FUNSD (c) CORD (d) PubLayNet Figure 3: Document image samples. The boxes in red/green are OCR bounding boxes obtained with/without paragraph mode, while the boxes in blue are of\ufb01cially-provided bounding boxes. To learn a useful multimodal representation, random masking is applied to both textual and visual inputs. For MSM, we set the mask probability ps Mask for input sentences to 15%. 80% among the masked sentences are replaced by special sentence [CLS, MASK, SEP], while 10% sentences are replaced by random sentences sampled from other documents, and 10% remains unchanged. For VCL, the \u03bb is set to 0.1, \u03ba is set to 0.1, the mask probability pv Mask is set to 7.5% and the masked RoI features are \ufb01lled with zeros. The temperature \u03c4 is annealed from 2.0 to 0.5 by a factor of of 0.999995 at every iteration. We select the pretraining checkpoint with the lowest LPretraining for \ufb01netuning stage. 4.2 Finetuning Tasks Form Understanding. Form understanding requires the model to predict the label for each semantic entity. We use FUNSD [27] as the evaluation dataset. It contains 149/50 training/testing images. Fig. 3 (b) shows a sample from FUNSD. Each semantic entity comprises a list of words, a label, and a bounding box. The of\ufb01cially-provided OCR texts and bounding boxes are used during training and testing. We take the semantic entities as input and feed the concatenated visual and textual output representations to a classi\ufb01er. We apply cross-entropy loss for \ufb01netuning. The model is \ufb01netuned for 100 epochs with a learning rate of 10\u22125 and batch size of 16. All the parameters except fSentEnc are trained. 
One of question, answer, header or other is predicted for each semantic entity. We use entity-level F1 score as the evaluation metric. Receipt Understanding. Receipt understanding requires the model to recognize a list of text lines with bounding boxes. The performance on this task is evaluated on CORD [7] dataset. The of\ufb01cial data contains 800/100/100 receipts for training/validation/testing. The receipts are labeled with 30 types of entities under 4 categories: company, date, address, and total. Like FUNSD, we feed the concatenated visual and textual output representations to the classi\ufb01er. The model is \ufb01netuned for 200 epochs with a batch size of 16 and a learning rate of 10\u22125. The evaluation metric is entity-level F1 score. Document Classi\ufb01cation. Document classi\ufb01cation involves predicting the category for each document image. We use RVL-CDIP [8] as the target dataset. It consists of 320K/40K/40K training/validation/testing images under 16 categories. The OCR words and bounding boxes are extracted by EacyOCR. To \ufb01ne-tune UDoc on RVL-CDIP, we compute the overall representation as an elementwise product between the visual and textual representations averaged from all sentences/regions, and learn a classi\ufb01er on top of the overall representation with cross-entropy loss. We \ufb01ne-tune the model for 30 epochs with a batch size of 64 and a learning rate of 10\u22125. Classi\ufb01cation accuracy over 16 categories is used to measure model performance. Document Object Detection. Document object detection involves decomposing a document image into semantic units. We evaluate the effectiveness of our pretrained visual backbone on PubLayNet [28]. As shown Fig. 3 (d), the documents in PubLayNet are scienti\ufb01c articles. PubLayNet consists of 336K/11K training/validation images with six category labels (text, title, list, \ufb01gure, and table). We train Faster-RCNN (F-RCNN) using Detectron2 [33] and initialize the visual backbone with the pretrained ResNet-50 from UDoc. The model is trained for 180k iterations with a base 7 \flearning rate of 0.01 and a batch size of 8. Mean average precision (MAP) @ intersection over union (IOU) [0.50:0.95] of bounding boxes is used to measure the performance. 4.3 Results and Discussion The importance of multimodal learning. To study the effect of multimodal learning, we experiment in three different settings (1) Vision only (V): this setting omits the textual components of UDoc and adopts multilayer self-attention transformer to learn the visual representation. (2) Language only (L): this setting omits the visual encoder and keeps only the textual components. (3) Vision-Language (V+L): this setting considers both vision and language information. We \ufb01rst train three settings without pretraining. Table 2 shows consistent improvement across tasks for V+L over the single-stream baselines (V or L). This demonstrates that our UDoc model is able to learn important visual-linguistic relationships that bene\ufb01t downstream tasks even without pretraining. In Table 2, we \ufb01nd that visual information dominates the performance of document classi\ufb01cation, while language information contributes a lot to form understanding and receipt understanding. The results also indicate that different document tasks rely on different information. For document entity recognition tasks, language information is more important than visual features. As can be seen in Fig. 3 (b) and (c), entity recognition is more word-oriented. 
On the other hand, document classi\ufb01cation is more focused on global-level understanding. As a result, visual and layout information contribute a lot to the \ufb01nal prediction of the document classi\ufb01cation model. This matches well with the innate abilities of humans to distinguish between document types without fully understanding the words. We also observe that gated cross-attention (V+L) achieves a better performance than the non-gated version (V+L\u266f), as its gating mechanism can learn to adaptively determine how much each modality contributes to the output features. Effect of pretraining tasks. We analyze the effectiveness of different pretraining settings through ablation studies over FUNSD, CORD\u2020, and RVL-CDIP, which are representative document benchmarks. Table 2 ablates the key design choices in pretraining UDoc. Note that, for CORD\u2020, \u2020 indicates that we use different splits for the ablation study instead of the of\ufb01cial ones. For experimental ef\ufb01ciency, UDoc models evaluated here are trained with 5 epochs on 300k training corpus. Overall, the pretraining of UDoc consistently improves the performance over all three downstream tasks. The improvement gains vary among different tasks. Table 2: Experimental results and comparison on FUNSD, CORD\u2020, and RVL-CDIP test sets. Pretraining FUNSD CORD\u2020 RVL-CDIP Enable #Data Modality Max #Words #Param. Tasks Epoch F1 F1 Accuracy \u0017 \u2013 V \u2013 85M \u2013 \u2013 77.49 57.08 91.35 \u2013 L \u2013 153M \u2013 \u2013 78.46 71.52 86.82 \u2013 V+L\u266f \u2013 255M \u2013 \u2013 80.60 95.98 92.76 \u2013 V+L \u2013 267M \u2013 \u2013 83.34 96.59 92.93 \u0013 300K V+L 64 \u00d7 512 270M MSM + MVM 5 84.37 97.44 93.10 300K V+L 64 \u00d7 512 272M MSM + VCL 5 86.87 98.70 93.59 300K V+L 64 \u00d7 512 272M MSM + VCL + VLA 5 87.38 98.75 93.92 300K V+L 64 \u00d7 512 274M MSM + VCL + VLA + REL 5 87.20 98.13 93.64 We \ufb01rst establish two baselines: MSM+MVM in Table 2 indicates the combination of masked sentence learning and masked visual feature prediction. Similar to MSM, for MVM, we freeze the visual backbone and perform masked visual feature prediction via RoI-feature regression. MSM+VCL jointly trains the visual backbone end-to-end with contrastive learning. As shown in Table 2, MSM+MVM achieves better results than the model without pretraining. Furthermore, when combining VCL together with MSM, consistent performance gains are observed across all the benchmarks. Among the three \ufb01netuning tasks, the improvements on FUNSD and CORD are bigger than on RVL-CDIP. We think the local context modeling capability of the ConvNet-based image encoder brings more bene\ufb01ts to entity recognition, since entities are heavily linked and correlated to their local surroundings. When MSM, VCL, and VLA are jointly trained, we observe further performance gains across all the benchmarks. For VCL, instead of sampling the negatives from the same input document, we also try including the negative samples from other document images of the same batch. However, we \ufb01nd that sampling negatives from the entire batch of document images hurt the performance. This is likely because the negatives from other document images are easy to distinguish from each other. We also consider the image-text matching task (Rel) [5], and combine Rel with MSM+VCL+VLA. It 8 \f(a) Samples from RVL-CDIP (b) Performance Comparsion on 16 Classes Figure 4: For (a) we show the samples from RVL-CDIP. 
The boxes in orange color are grouped OCR bounding boxes. For (b) we plot the accuracies on 16 classes achieved by different models that are represented by different colors in the bar chart. hurts the performance of all three downstream tasks. We conjecture that the image-text matching task introduces mismatched pairs of image and OCR texts as negative examples that potentially hamper the training of other tasks. What if Masked Language Modeling is included? To study the feasibility of that, we consider MLM during pretraining. Since the number of words may be very large, we select the tokens by randomly applying a sliding window (window size 128) across all sequenced OCR words. Each word is formulated as a single-word sentence ([CLS] [Token] [SEP]). We randomly mask 15% of those sampled words ([CLS] [MASK] [SEP]) and concatenate them along with the region-based inputs. During pretraining, we add the word prediction head on top of UDoc and predict the masked words. We conduct \ufb01netuning experiments on entity recognition tasks (FUNSD and CORD), and \ufb01nd that such a direct combination hurts the performance: FUNSD: 87.38 (UDoc) vs. 83.76\u2193(UDoc+MLM), CORD: 98.75 (UDoc) vs. 98.63\u2193(UDoc+MLM). There are consistent performance drops from adding MLM. One possible reason is that the RoI features extracted by token bounding boxes might not be discriminative enough due to the tiny word-level bounding boxes. Table 3: Comparison with state-of-the-art methods. The symbol \u2021 implies using Google OCR engine.. Method Pretraining FUNSD CORD RVL-CDIP Source #Data Scale Max #Words Modality #Param. F1 F1 Accuracy BERTBASE [5] \u2013 \u2013 Word 512 L 110M 60.26 89.68 89.81 BERTLARGE [5] \u2013 \u2013 Word 512 L 340M 65.63 90.25 89.92 LayoutLMBASE [5] IIT-CDIP 11M Word 512 L 113M 78.66 94.72 94.42 LayoutLMLARGE [5] IIT-CDIP 11M Word 512 L 343M 78.95 94.93 94.43 LayoutLMv2BASE [5] IIT-CDIP 11M Word 512 V+L 200M 82.76 94.95 95.25 LayoutLMv2LARGE [5] IIT-CDIP 11M Word 512 V+L 426M 84.20 96.01 95.64 SelfDoc [6] RVL-CDIP 320K Region 50\u00d7512 V+L \u2013 83.36 \u2013 92.81 SelfDoc+VGG-16 [6] RVL-CDIP 320K Region 50\u00d7512 V+L \u2013 \u2013 \u2013 93.81 TILT-Base [34] RVL-CDIP+ 1.1M Word 512 V+L 230M \u2013 95.11 95.25 TILT-Large [34] RVL-CDIP+ 1.1M Word 512 V+L 780M \u2013 96.33 95.52 UDoc IIT-CDIP 1M Region 64\u00d7512 V+L 272M 87.96 96.64 93.96 UDoc\u2217 IIT-CDIP 1M Region 64\u00d7512 V+L 272M 87.93 96.86 95.05\u2021 Performance Comparison with SoTA. We further pretrain UniDoc on 1M document images with 5 epochs and report the \ufb01netuning results in Table 3. UniDoc outperforms previous models on the of\ufb01cial test set of FUNSD and CORD, by a signi\ufb01cantly large margin, demonstrating that our proposed approach is highly effective, partially due to the end-to-end training of the image encoder that improves the semantic alignments between images and texts. Note that UniDoc is pretrained on a subset of IIT-CDIP (1M document images), which is considerably less than the 11M document images used in LayoutLM [4] and LayoutLMv2 [5]. TILT [34] builds a 1.1M pretraining corpus by combining RVL-CDIP, UCSF Industry Documents Library, and Common Crawl. UniDoc also achieves promising results on document classi\ufb01cation. Note that both LayoutLM v2 and TILT use Microsoft OCR, which is a commercial service with a stronger OCR performance than EasyOCR, which is used in our experiments. We \ufb01nd that OCR plays a key role in document classi\ufb01cation performance. As shown in Fig. 
4, UniDoc performs the best on the \u2018email\u2019 category but worst on 9 \fthe \u2018form\u2019 category. We also report the results with different OCR engines: 93.42 (Tesseract [35]) vs. 93.96 (EasyOCR [17]) vs. 94.10 (Google OCR [36]). UniDoc with EasyOCR achieves a better performance than with Tesseract since EasyOCR is powered by an advanced neural network, while Tesseract is based on less sophisticated techniques. Since different tasks require task-speci\ufb01c input embeddings to perform well, instead of \ufb01netuning the sentence encoder during pretraining, we explore unfreezing the sentence encoder during the \ufb01netuning stage (named as UniDoc\u2217) and report the results in Table 3. Unsurprisingly, we see performance improvements on several downstream applications. E.g., RVL-CDIP: 93.96 (UniDoc) vs. 95.05\u2191(UniDoc\u2217). However, this also makes the training more challenging in terms of computational resources and training time. Table 4: MAP @ IOU [0.50:0.95] of the document detection models on PubLayNet dev set. Method Text Title List Table Figure mAP F-RCNN (ResNet-101) [28] 91.0 82.6 88.3 95.4 93.7 90.0 M-RCNN (ResNet-101) [28] 91.6 84.0 88.6 96.0 94.9 90.7 F-RCNN (ResNet-50) 92.2 84.4 89.5 96.5 94.5 91.4 F-RCNN (UDoc, ResNet-50) 93.9 88.5 93.7 97.3 96.4 93.9 Effect of visual backbone. Additionally, we apply the trained visual backbone to document object detection on PubLayNet. The performance of the F-RCNN on the validation set is depicted in Table 4. To better compare, we establish two F-RCNN models with: (1) backbone initialized with ResNet-50 pretrained on ImageNet; (2) backbone initialized from UDoc\u2019s pretrained visual backbone. It can be seen that our pretrained backbone outperforms ImageNet-pretrained backbones. By leveraging UDoc, we can train different variants of the visual backbone and apply them to document-speci\ufb01c downstream applications, without relying on incompatible pretrained backbones from other domains (e.g., natural image). Moreover, the visual backbone of UDoc does not require any custom layers, and thus any ConvNet architecture can be used in place of ResNet. 5 Conclusion, Limitations, and Future Works We develop UDoc, a uni\ufb01ed pretraining framework for document understanding. Our model introduces a novel joint training framework that effectively exploits the visual and textual information during pretraining and \ufb01netuning. We evaluate the UDoc comprehensively on three downstream tasks: form understanding, receipt understanding, and document image classi\ufb01cation. Extensive empirical analysis demonstrates that the pretraining procedure can take advantage of multimodal inputs. Also, it can effectively aggregate and align visual and textual information of document images with the proxy tasks. This work has a broader impact on document applications. By \ufb01netuning the pretrained UDoc on task-speci\ufb01c data, document processing systems can provide better results and reduce the expensive data annotations costs. In terms of negative social impact, the document images used for pretraining may contain sensitive information and therefore the models trained on such data may inappropriately leak some private information. To address the privacy leakage, it is worthwhile to explore the combination of privacy-preserving learning and self-supervised learning. 
There are interesting shortand long-term research directions for UDoc: (1) we freeze the sentence encoder during pretraining and \ufb01ne-tuning phases due to computational constraints. A better document representation can be learned by jointly training the sentence encoder, visual backbone and cross-attention encoder in a completely end-to-end fashion. (2) Although impressive performance has been achieved in document entity recognition tasks such as form and receipt understanding, the classi\ufb01cation accuracy on semi-structured documents such as forms is still inferior to that of rich-text documents. It is possible to devise a better method to model the spatial relationship among words. (3) An interesting direction is to extend UDoc to multipage/multilingual document pretraining. Additionally, there exist many text-based labeled document datasets in the NLP domain, such as document summarization. Can we transfer the knowledge learned from the text-based document domain to the image-based document domain? How to unify the pretraining of the pure-text document (1D) and image-based document (2D) in a single framework is also worth to try. Lastly, the use of different OCR tools is one of the major sources of inconsistency among the existing document pretraining works. It is worthwhile and essential to build standardized pretraining document image datasets with preprovided OCR results. In addition to scanned documents, using digital PDF as part of the pretraining data is a direction worth exploring since it provides rich metadata which could be bene\ufb01cial for multimodal learning. 10" + }, + { + "url": "http://arxiv.org/abs/1904.00560v1", + "title": "Scene Graph Generation with External Knowledge and Image Reconstruction", + "abstract": "Scene graph generation has received growing attention with the advancements\nin image understanding tasks such as object detection, attributes and\nrelationship prediction,~\\etc. However, existing datasets are biased in terms\nof object and relationship labels, or often come with noisy and missing\nannotations, which makes the development of a reliable scene graph prediction\nmodel very challenging. In this paper, we propose a novel scene graph\ngeneration algorithm with external knowledge and image reconstruction loss to\novercome these dataset issues. In particular, we extract commonsense knowledge\nfrom the external knowledge base to refine object and phrase features for\nimproving generalizability in scene graph generation. To address the bias of\nnoisy object annotations, we introduce an auxiliary image reconstruction path\nto regularize the scene graph generation network. Extensive experiments show\nthat our framework can generate better scene graphs, achieving the\nstate-of-the-art performance on two benchmark datasets: Visual Relationship\nDetection and Visual Genome datasets.", + "authors": "Jiuxiang Gu, Handong Zhao, Zhe Lin, Sheng Li, Jianfei Cai, Mingyang Ling", + "published": "2019-04-01", + "updated": "2019-04-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction With recent breakthroughs in deep learning and image recognition, higher-level visual understanding tasks, such as visual relationship detection, has been a popular research topic [9, 19, 15, 40, 44]. Scene graph, as an abstraction of objects and their complex relationships, provides rich semantic information of an image. 
It involves the detection of all \u27e8subject-predicate-object\u27e9triplets in an image and the localization of all objects. Scene graph provides a structured representation of an image that can support a wide range of high-level visual tasks, including image captioning [12, 14, 13, 43], visual question answering [36, 38, 47], image retrieval [11, 21], and image generation [20]. How\u2217This work was done during the author\u2019s internship at Adobe Research. Figure 1: Conceptual illustration of our scene graph learning model. The left (green) part illustrates the image to scene graph generation, the right (blue) part illustrates the image-level regularizer that reconstructs the image based on object labels and bounding boxes. The commonsense knowledge reasoning (top) is introduced to the scene graph generation process. ever, it is not easy to extract scene graphs from images, since it involves not only detecting and localizing pairs of interacting objects but also recognizing their pairwise relationships. Currently, there are two categories of approaches for scene graph generation. Both categories group object proposals into pairs and use the phrase features (features of their union area) for predicate inference. The difference of the two categories lies in the different procedures. The \ufb01rst category detects the objects \ufb01rst and then recognizes the relationships between those objects [5, 28, 29]. The second category jointly identi\ufb01es the objects and their relationships based on the object and relationship proposals [27, 25, 37]. Despite the promising progress introduced by these approaches, most of them suffer from the limitations of existing scene graph datasets. First, to comprehensively depict an image using the scene graph, it requires a wide variety of relation triplets \u27e8subject-predicate-object\u27e9. Unfortunately, current datasets only capture a small portion of the knowledge [29], e.g., Visual Relationship Detection (VRD) dataset. Training on such a dataset with long-tail arXiv:1904.00560v1 [cs.CV] 1 Apr 2019 \fdistributions will cause the prediction model bias towards those most-frequent relationships. Second, predicate labels are highly determined by the identi\ufb01cation of object pairs [46]. However, due to the dif\ufb01culty of exhaustively labeling bounding boxes of all instances of each object, the current large-scale crowd-sourced datasets like Visual Genome (VG) [22] are contaminated by noises (e.g., missing annotations and meaningless proposals). Such a noisy dataset will inevitably result in a poor performance of the trained object detector [3], which further hinders the performance of predicate detection. For human beings, we are capable of reasoning over visual elements of an image based on our commonsense knowledge. For example, in Figure 1, humans have the background knowledge: the subject (woman) appears / stands on something; the object (snow) enhances the evidence of the predicate (skiing). Commonsense knowledge can also help correct object detection. For example, the speci\ufb01c external knowledge for skiing bene\ufb01ts inference of the object (snow) as well. This motivates us to leverage commonsense knowledge to help scene graph generation. Meanwhile, despite the crucial role of object labels for relationship prediction, existing datasets are very noisy due to the signi\ufb01cant amount of missing object annotations. However, our goal is to obtain scene graphs with more complete scene representation. 
Motivated by this goal, we regularize our scene graph generation network by reconstructing the image from detected objects. Considering the case in Figure 1, a method might recognize snow as grass by mistake. If we generate an image based on the falsely predicted scene graph, this minor error would be heavily penalized, even though most of the snow\u2019s relationships might be correctly identi\ufb01ed. The contributions of this paper are threefold. 1) We propose a knowledge-based feature re\ufb01nement module to incorporate commonsense knowledge from an external knowledge base. Speci\ufb01cally, the module extracts useful information from ConceptNet [35] to re\ufb01ne object and phrase features before scene graph generation. We exploit Dynamic Memory Network (DMN) [23] to implement multihop reasoning over the retrieved facts and infer the most probable relations accordingly. 2) We introduce image-level supervision module by reconstructing the image to regularize our scene graph generation model. We view this auxiliary branch as a regularizer, which is only present during training. 3) We conduct extensive experiments on two benchmark datasets: VRD and VG datasets. Our empirical results demonstrate that our approach can signi\ufb01cantly improve the state-of-the-art on scene graph generation. 2. Related Works Incorporating Knowledge in Neural Networks. There has been growing interest in improving data-driven models with external Knowledge Bases (KBs) in natural language processing [17, 4] and computer vision communities [24, 1, 6]. Large-scale structured KBs are constructed either by manual effort (e.g., Wikipedia, DBpedia [2]), or by automatic extraction from unstructured or semi-structured data (e.g., ConceptNet). One direction to improve the datadriven model is to distill external knowledge into Deep Neural Networks [39, 45, 18]. Wu et al. [38] encode the mined knowledge from DBpedia [2] into a vector and combine it with visual features to predict answers. Instead of aggregating the textual vectors with average-pooling operation [38], Li et al. [24] distill the retrieved context-relevant external knowledge triplet through a DMN for open-domain visual question answering. Unlike [38, 24], Yu et al. [45] extract linguistic knowledge from training annotations and Wikipedia, and distill knowledge to regularize training and provide extra cues for inference. A teacher-student framework is adopted to minimize the KL-divergence of the prediction distributions of teacher and student. Visual Relationship Detection. Visual relationship detection has been investigated by many works in the last decade [21, 8, 7, 31]. Lu et al. [29] introduce generic visual relationship detection as a visual task, where they detect objects \ufb01rst, and then recognize predicates between object pairs. Recently, some works have explored the message passing for context propagation and feature re\ufb01nement [41, 27]. Xu et al. [41] construct the scene graph by re\ufb01ning the object and relationship features jointly with message passing. Dai et al. [5] exploit the statistical dependencies between objects and their relationships and re\ufb01ne the posterior probabilities iteratively with a Conditional Random Field (CRF) network. More recently, Zeller et al. [46] achieve a strong baseline by predicting relationships with frequency priors. To deal with the large number of potential relations between objects, Yang et al. 
[42] propose a relation proposal network that prunes out uncorrelated object pairs, and captures the contextual information with an attentional graph convolutional network. In [25], they propose a clustering method which factorizes the full graph into subgraphs, where each subgraph is composed of several objects and a subset of their relationships. Most related to our work are the approaches proposed by Li et al. [25] and Yu et al. [45]. Unlike [25], which focuses on the ef\ufb01cient scene graph generation, our approach addresses the long tail distribution of relationships by commonsense cues along with visual cues. Unlike [45], which leverages linguistic knowledge to regularize the network, our knowledge-based module improves the feature re\ufb01ning procedure by reasoning over a basket of commonsense knowledge retrieved from ConceptNet. \fFigure 2: Overview of the proposed scene graph generation framework. The left part generates a scene graph from the input image. The right part is an auxiliary image-level regularizer which reconstructs the image based on the detected object labels and bounding boxes. After training, we discard the image reconstruction branch. 3. Methodology Figure 2 gives an overview of our proposed scene graph generation framework. The entire framework can be divided into the following steps: (1) generate object and subgraph proposals for a given image; (2) re\ufb01ne object and subgraph features with external knowledge; (3) generate the scene graph by recognizing object categories with object features and recognizing object relations by fusing subgraph features and object feature pairs; (4) reconstruct the input image via an additional generative path. During training, we use two types of supervisions: scene graph level supervision and image-level supervision. For scene graph level supervision, we optimize our model by guiding the generated scene graph with the ground truth object and predicate categories. The image-level supervision is introduced to overcome the aforementioned missing annotations by reconstructing the image from objects and enforcing the reconstructed image close to the original image. 3.1. Proposal Generation Object Proposal Generation. Given an image I, we \ufb01rst use the Region Proposal Network (RPN) [33] to extract a set of object proposals: [o0, \u00b7 \u00b7 \u00b7 , oN\u22121] = fRPN(I) (1) where fRPN(\u00b7) stands for the RPN module, and oi is the i-th object proposal represented by a bounding box ri = [xi, yi, wi, hi] with (xi, yi) being the coordinates of the top left corner and wi and hi being the width and the height of the bounding box, respectively. For any two different objects \u27e8oi, oj\u27e9, there are two possible relationships in opposite directions. Thus, for N object proposals, there are totally N(N \u22121) potential relations. Although more object proposals lead to a bigger scene graph, the number of potential relations will increase dramatically, which signi\ufb01cantly increases the computational cost and deteriorates the inference speed. To address this issue, subgraph is introduced in [25] to reduce the number of potential relations by clustering. Subgraph Proposal Construction. We adopt the clustering approach proposed in [25]. In particular, for a pair of object proposals, a subgraph proposal is constructed as the union box with the con\ufb01dence score being the product of the scores of the two object proposals. Then, subgraph proposals are suppressed by non-maximum-suppression (NMS). 
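The construction just described, taking the union box of an object pair, scoring it by the product of the two objects' confidences, and pruning with NMS, can be sketched as follows; the IoU threshold and helper names are illustrative rather than the exact implementation of [25].

```python
def union_box(b1, b2):
    """Union of two boxes given as [x1, y1, x2, y2]."""
    return [min(b1[0], b2[0]), min(b1[1], b2[1]), max(b1[2], b2[2]), max(b1[3], b2[3])]

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-8)

def build_subgraph_proposals(boxes, scores, nms_thresh=0.5):
    """All ordered object pairs -> union boxes scored by the product of object scores, then NMS."""
    cands = []
    for i in range(len(boxes)):
        for j in range(len(boxes)):
            if i != j:
                cands.append((union_box(boxes[i], boxes[j]), scores[i] * scores[j]))
    cands.sort(key=lambda c: -c[1])            # highest confidence first
    kept = []
    for box, s in cands:
        if all(iou(box, k[0]) < nms_thresh for k in kept):
            kept.append((box, s))
    return kept
```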
In this way, a candidate relation can be represented by two objects and one subgraph: \u27e8oi, oj, si k\u27e9, where i \u0338= j and si k is the k-th subgraph of all the subgraphs associated with oi, which contains oj as well as some other object proposals. Following [25], we represent a subgraph and an object as a feature map, si k \u2208RD\u00d7Ks\u00d7Ks, and a feature vector, oi \u2208RD, respectively, where D and Ks are the dimensions. 3.2. Feature Re\ufb01nement with External Knowledge Object and Subgraph Inter-re\ufb01nement. Considering that each object oi is connected to a set of subgraphs Si and each subgraph sk is associated with a set of objects Ok, we re\ufb01ne the object vector (resp. the subgraph) by attending the associated subgraph feature maps (resp. the associated object vectors): \u00af oi =oi + fs\u2192o \uf8eb \uf8edX si k\u2208Si \u03b1s\u2192o k \u00b7 si k \uf8f6 \uf8f8 (2) \u00af sk =sk + fo\u2192s \uf8eb \uf8edX ok i \u2208Ok \u03b1o\u2192s i \u00b7 ok i \uf8f6 \uf8f8 (3) where \u03b1s\u2192o k (resp. \u03b1o\u2192s i ) is the output of a softmax layer indicating the weight for passing si k (resp. ok i ) to oi (resp. sk), and fs\u2192o and fo\u2192s are non-linear mapping functions. This part is similar to [25]. Note that due to different dimensions of oi and sk, pooling or spatial location based attention needs to be respectively applied for s \u2192o or o \u2192s \fFigure 3: Illustration of our proposed knowledge-based feature re\ufb01nement module. Given the object labels, we retrieve the facts (or symbolic triplets) from the ConceptNet (bottom), and then reason those facts with dynamic memory network using two passes (top right). re\ufb01nement. Interested readers are referred to [25] for details. Knowledge Retrieval and Embedding. To address the relationship distribution bias of the current visual relationship datasets, we propose a novel feature re\ufb01nement network to further improve the feature representation by taking advantage of the commonsense relationships in external knowledge base (KB). In particular, we predict the object label ai from the re\ufb01ned object vector \u00af oi, and match ai with the corresponding semantic entities in KB. Afterwards, we retrieve the corresponding commonsense relationships from KB using the object label ai: ai retrieve \u2212 \u2192\u27e8ai, ar i,j, ao j, wi,j\u27e9, j \u2208[0, K \u22121] (4) where ar i,j, ao j and wi,j are the top-K corresponding relationships, the object entity and the weight, respectively. Note that the weight wi,j is provided by KB (i.e., ConceptNet [35]), indicating how common a triplet \u27e8ai, ar i,j, ao j\u27e9is. Based on the weight wi,j, we can identify the top-K most common relationships for ai. Figure 3 illustrates the process of our proposed knowledge-based feature re\ufb01nement module. To encode the retrieved commonsense relationships, we \ufb01rst transform each symbolic triplet \u27e8ai, ar i,j, ao j\u27e9into a sequence of words: [X0, \u00b7 \u00b7 \u00b7 , XTa\u22121], and then map each word in the sentence into a continuous vector space with word embedding xt = WeXt. The embedded vectors are then fed into an RNN-based encoder [39] as ht k = RNNfact(xt k, ht\u22121 k ), t \u2208[0, Ta \u22121] (5) where xt k is the t-th word embedding of the k-th sentence, and ht k is the hidden state of the encoder. 
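The inter-refinement of Eqs. (2)-(3) above admits a compact sketch. For readability the subgraphs are treated here as already-pooled vectors, whereas the paper keeps them as feature maps and applies pooling or spatial attention, so the module below is an approximation of the subgraph-to-object direction only (the object-to-subgraph direction is symmetric).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubgraphToObjectRefine(nn.Module):
    """o_i <- o_i + f_{s->o}( sum_k alpha_k * s_k ), i.e. Eq. (2) with pooled subgraph vectors."""
    def __init__(self, dim=512):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)                   # attention over the subgraphs attached to o_i
        self.f_s2o = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, o, subgraphs):
        # o: (dim,) object vector; subgraphs: (K, dim) pooled features of its associated subgraphs
        pair = torch.cat([o.expand_as(subgraphs), subgraphs], dim=-1)
        alpha = F.softmax(self.score(pair), dim=0)           # (K, 1) attention weights
        context = (alpha * subgraphs).sum(dim=0)             # weighted message from subgraphs
        return o + self.f_s2o(context)                       # residual refinement

refine = SubgraphToObjectRefine()
o_new = refine(torch.randn(512), torch.randn(4, 512))        # refined object feature
```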
We use a bidirectional Gated Recurrent Unit (GRU) for RNNfact and the \ufb01nal hidden state hTa\u22121 k is treated as the vector representation for the k-th retrieved sentence or fact, denoted as f i k for object oi. Attention-based Knowledge Fusion. The knowledge units are stored in memory slots for reasoning and updating. Our target is to incorporate the external knowledge into the procedure of feature re\ufb01ning. However, for N objects, we have N \u00d7 K relevant fact vectors in memory slots. This makes it dif\ufb01cult to distill the useful information from the candidate knowledge when N \u00d7K is large. DMN [23] provides a mechanism to pick out the most relevant facts by using an episodic memory module. Inspired by this, we adopt the improved DMN [39] to reason over the retrieved facts F, where F denotes the set of fact embedding {fk}. It consists of an attention component which generates a contextual vector using the episode memory mt\u22121. Speci\ufb01cally, we feed the object vector \u00af o to a non-linear fully-connected layer and attend the facts as follows: q = tanh(Wq\u00af o + bq) (6) zt =[F \u25e6q; F \u25e6mt\u22121; |F \u2212q|; |F \u2212mt\u22121|] (7) gt =softmax(W1 tanh(W2zt + b2) + b1) (8) et =AGRU(F, gt) (9) where zt is the interactions between the facts F, the episode memory mt\u22121 and the mapped object vector q, gt is the output of a softmax layer, \u25e6is the element-wise product, |\u00b7| is the element-wise absolute value, and [ ; ] is the concatenation operation. Note that q and m need to be expanded via duplication in order to have the same dimension as F for the interactions. In (9), AGRU(\u00b7) refers to the Attention based GRU [39] which replaces the update gate in GRU with the output attention weight gt k for fact k: et k =gt kGRU(fk, et k\u22121) + (1 \u2212gt k)et k\u22121 (10) where et K is the \ufb01nal state of the episode which is the state of the GRU after all the K sentences have been seen. After one pass of the attention mechanism, the memory is updated using the current episode state and the previous memory state: mt = ReLU(Wm[mt\u22121; et K; q] + bm). (11) where mt is the new episode memory state. By the \ufb01nal pass Tm, the episodic memory mTm\u22121 can memorizes useful knowledge information for relationship prediction. The \ufb01nal episodic memory mTm\u22121 is passed to re\ufb01ne the object feature \u00af o as \u02dc o = ReLU(Wc[\u00af o; mTm\u22121] + bc) (12) where Wc and bc are parameters to be learned. In particular, we re\ufb01ne objects with KB via (12) as well as jointly re\ufb01ning objects and subgraphs by replacing {oi, si} with {\u02dc oi,\u00af si} in (2) and (3), in an iterative fashion (see Alg. 1). \fFigure 4: Illustration of our proposed object-to-image generation module Geno2i. 3.3. Scene Graph Generation Relation Prediction. After the feature re\ufb01nement, we can predict object labels as well as predicate labels with the re\ufb01ned object and subgraph features. For object label, we can predict it directly with the object features. For relationship label, as the subgraph feature is related to several object pairs, we predict the label based on subject and object feature vectors along with their corresponding subgraph feature map. 
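The interaction, attention-gated GRU, and memory update of Eqs. (7)-(11) above can be condensed into a single module; hidden sizes and layer shapes here are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EpisodicMemory(nn.Module):
    """One pass of the attention GRU (Eqs. 9-10) followed by the memory update (Eq. 11)."""
    def __init__(self, dim=512):
        super().__init__()
        self.att = nn.Sequential(nn.Linear(4 * dim, dim), nn.Tanh(), nn.Linear(dim, 1))
        self.gru = nn.GRUCell(dim, dim)
        self.update = nn.Linear(3 * dim, dim)

    def forward(self, facts, q, m):
        # facts: (K, dim) encoded ConceptNet facts; q: (dim,) mapped object vector; m: (dim,) memory
        z = torch.cat([facts * q, facts * m, (facts - q).abs(), (facts - m).abs()], dim=-1)   # Eq. (7)
        g = F.softmax(self.att(z), dim=0).squeeze(-1)        # one attention weight per fact, Eq. (8)
        e = torch.zeros_like(q)
        for k in range(facts.size(0)):                       # attention-gated GRU, Eq. (10)
            e = g[k] * self.gru(facts[k:k + 1], e.unsqueeze(0)).squeeze(0) + (1 - g[k]) * e
        return F.relu(self.update(torch.cat([m, e, q], dim=-1)))   # new episode memory, Eq. (11)
```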
We formulate the inference process as Pi,j \u223csoftmax(frel([\u02dc oi \u2297\u00af sk; \u02dc oj \u2297\u00af sk;\u00af sk])) (13) Vi \u223csoftmax(fnode(\u02dc oi)) (14) where frel(\u00b7) and fnode(\u00b7) denote the mapping layers for predicate and object recognition, respectively, and \u2297denotes the convolution operation [25]. Then, we can construct the scene graph as: G = \u27e8Vi, Pi,j, Vj\u27e9, i \u0338= j. Scene Graph Level Supervision. Like other approaches [26, 25, 37], during training we want the generated scene graph close to the ground-truth scene graph by optimizing the scene graph generation process with object detection loss and relationship classi\ufb01cation loss Lim2sg = \u03bbpredLpred + \u03bbobjLobj + \u03bbreg1u\u22651Lreg (15) where Lpred, Lobj and Lreg are the predicate classi\ufb01cation loss, the object classi\ufb01cation loss and the bounding box regression loss, respectively, \u03bbobj, \u03bbpred and \u03bbreg are hyperparameters, and 1 is the indicator function with u being the object label, u \u22651 for object categories and u = 0 for background. For the predicate detection, the output is the probability over all the candidate predicates. Lpred is de\ufb01ned as the softmax loss. Like the predicate classi\ufb01cation, the output of the object detection is the probability over all the object categories. Lcls is also de\ufb01ned as the softmax loss. For the bounding box regression loss Lreg, we use smooth L1 loss [33]. 3.4. Image Generation To better regularize the networks, an object-to-image generative path is added. Figure 4 depicts our proposed object-to-image generation module Geno2i. In particular, we \ufb01rst compute a scene layout based on the object labels and their corresponding locations. For each object i, we expand the object embedding vectors oi \u2208RD to shape D \u00d7 8 \u00d7 8, and then wrap it to the position of the bounding box ri using bilinear interpolation to give an object layout olayout i \u2208RD\u00d7H\u00d7W , where D is the dimension of the embedding vectors for objects and H \u00d7 W = 64 \u00d7 64 is the output image resolution. We sum all object layouts to obtain the scene layout Slayout = P i olayout i . Given the scene layout, we synthesize an image that respects the object positions with an image generator G. Here, we adopt a cascaded re\ufb01nement network [20] which consists of a series of convolutional re\ufb01nement modules to generate the image. The spatial resolution doubles between the convolutional re\ufb01nement modules. This allows the generation to proceed in a coarse-to-\ufb01ne manner. For each module, it takes two inputs. One is the output from the previous module (the \ufb01rst module takes Gaussian noise), and the other one is the scene layout Slayout, which is downsampled to the input resolution of the module. These inputs are concatenated channel-wisely and passed to a pair of 3 \u00d7 3 convolution layers. The outputs are then upsampled using nearest-neighbor interpolation before being passed to the next module. The output from the last module is passed to two \ufb01nal convolution layers to produce the output image. Image-level Supervision. In addition to the common pixel reconstruction loss Lpixel, we also adopt a conditional GAN loss [32], considering the image is generated based on the objects. In particular, we train the discriminator Di and the generator Gi by alternatively maximizing LDi in Eq. (16) and LGi in Eq. 
(17): LDi =EI\u223cpreal[log Di(I)] (16) LGi =E\u02c6 I\u223cpG[log(1 \u2212Di(\u02c6 I)] + \u03bbpLpixel (17) where \u03bbp is the tuning parameter. For the generator loss, we maximize log Di(Gi(z|Slayout)) rather than minimizing the original log(1 \u2212Di(Gi(z|Slayout))) for better gradient behavior. For the pixel reconstruction loss, we calculate the \u21131 distance between the real image I and a corresponding synthetic image \u02c6 I as ||I \u2212\u02c6 I||1. As shown in Figure 2, we view the object-to-image generation branch as a regularizer. It can be seen as a corrective model for scene graph generation by improving the performance of object detection. During training, backpropagation from losses (15), (16), and (17) in\ufb02uences the model parameter updates. This image-level supervision can be seen as a corrective model for scene graph generation by improving the performance of object detection. The gradi\fents back-propagated from the object-to-image branch update the parameters of our object detector and the feature re\ufb01nement module which is followed by the relation prediction. Alg. 1 summarizes the entire training procedure. Algorithm 1 Training procedure. Input: Image I, number of training steps Ts. 1: Pretrain image generation module Geno2i (GT objects) 2: for t = 0 : Tm \u22121 do 3: Get objects and relationship triples. 4: Proposal Generation: (O, S) \u2190I {RPN} 5: /*Knowledge-based Feature Re\ufb01ning*/ 6: for r = 0 : Tr \u22121 do 7: \u00af oi \u2190{oi, Si} /*Re\ufb01ning using (2)*/ 8: \u00af sk \u2190{sk, Ok} /*Re\ufb01ning using (3)*/ 9: \u02dc oi \u2190{F, \u00af oi} /*Re\ufb01ning using (12)*/ 10: oi \u2190\u02dc oi, si \u2190\u00af si 11: end for 12: Update parameters with Geno2i (predicted objects) 13: Update parameters with (15) 14: end for Function: Geno2i Input: Real image I, objects (GT / predicted). 1: Object Layout Generation: olayout i \u2190{oi, ri} 2: Scene Layout Generation: Slayout = P i olayout i 3: Image Reconstruction: \u02c6 I = Gi(z, Slayout) 4: Update image generator Gi parameters using (17). 5: Update image discriminator Di parameters using (16). 4. Experiments 4.1. Datasets We evaluate our approach on two datasets: VRD [29] and VG [26]. VRD is the most widely used benchmark dataset for visual relationship detection. Compared with VRD, the raw VG [22] contains a large number of noisy labels. In our experiment, we use a cleansed-version VGMSDN in [26]. Detailed statistics of both datasets are shown in Table 1. For the external KB, we employ the English subgraph of ConceptNet [35] as our knowledge graph. ConceptNet is a large-scale graph of general knowledge which aims to align its knowledge resources on its core set of 40 relations. A large portion of these relation types can be considered as visual relations, such as, spatial co-occurrence (e.g., AtLocation, LocatedNear), visual properties of objects (e.g., HasProperty, PartOf), and actions (e.g., CapableOf, UsedFor). 4.2. Implementation Details As shown in Alg. 1, we train our model in two phrases. The initial phase looks only at the object annotations of Table 1: Dataset statistics. #Img and #Rel denote the number of images and relation pairs respectively, #Obj denotes the number of object categories, and #Pred denotes the number of predicate categories. Dataset Training Set Testing Set #Obj #Pred #Img #Rel #Img #Rel VRD [29] 4,000 30,355 1,000 7,638 100 70 VG-MSDN [26] 46,164 507,296 10,000 111,396 150 50 the training set, ignoring the relationship triplets. 
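This first phase optimizes exactly the object-to-image path of Section 3.4: compose the scene layout from object embeddings and boxes, generate an image, and alternately update Di and Gi with Eqs. (16)-(17) (written here in the common non-saturating form, plus the l1 pixel term). The sketch below makes several simplifying assumptions: boxes are normalized to [0, 1], the warping is plain bilinear resizing of the 8x8 seed, the discriminator ends in a sigmoid, and the networks and optimizers are placeholders.

```python
import torch
import torch.nn.functional as F

def object_layout(emb, box, H=64, W=64):
    """Expand one object embedding (D,) to 8x8, resize it to its box, place it on a D x H x W canvas."""
    D = emb.shape[0]
    seed = emb.view(D, 1, 1).expand(D, 8, 8).unsqueeze(0)                 # 1 x D x 8 x 8
    x, y, w, h = box                                                      # normalized [x, y, w, h]
    bw, bh = max(1, int(round(w * W))), max(1, int(round(h * H)))
    patch = F.interpolate(seed, size=(bh, bw), mode="bilinear", align_corners=False)[0]
    canvas = torch.zeros(D, H, W)
    x0, y0 = int(round(x * W)), int(round(y * H))
    canvas[:, y0:y0 + bh, x0:x0 + bw] = patch[:, : H - y0, : W - x0]
    return canvas

def scene_layout(embs, boxes):
    """S_layout = sum_i o_i^layout over all objects in the image."""
    return sum(object_layout(e, b) for e, b in zip(embs, boxes))

def gan_step(G, D, real_img, layout, opt_g, opt_d, lambda_p=1.0):
    """One alternating update of Eqs. (16)-(17): discriminator first, then generator."""
    z = torch.randn(real_img.size(0), 100)                                # noise fed to the first module
    fake_img = G(z, layout)
    # discriminator: maximize log D(I) + log(1 - D(I_hat)), Eq. (16); D outputs sigmoid probabilities
    d_real, d_fake = D(real_img), D(fake_img.detach())
    loss_d = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator: maximize log D(I_hat) (non-saturating form) plus the l1 pixel term, Eq. (17)
    d_fake = D(fake_img)
    loss_g = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake)) + \
             lambda_p * (real_img - fake_img).abs().mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```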
For each dataset, we \ufb01lter the objects according to the category and relation vocabularies in Table 1. We then learn an imagelevel regularizer that reconstructs the image based on the object labels and bounding boxes. The output size of the image generator is 64\u00d764\u00d73, and the real image is resized before inputting to the discriminator. We train the regularizer with learning rate 10\u22124 and batch size 32. For each mini-batch we \ufb01rst update Gi, and then update Di. The second phase jointly trains the scene graph generation model and the auxiliary reconstruction branch. We adopt the Faster R-CNN [33] associated with VGG-16 [34] as the backbone. During training, the number of object proposals is 256. For each proposal, we use ROI align [16] pooling to generate object and subgraph features. The subgraph regions are pooled to 5 \u00d7 5 feature maps. The dimension D of the pooled object vector and the subgraph feature map is set to 512. For the knowledge-based re\ufb01nement module, we set the dimension of word embedding to 300 and initialize it with the GloVe 6B pre-trained word vectors [30]. We keep the top-8 commonsense relationships. The number of hidden units of the fact encoder is set to 300, and the dimension of episodic memory is set to 512. The iteration number Tm of DMN update is set to 2. For the relation inference module, we adopt the same bottleneck layer as [25]. All the newly introduced layers are randomly initialized except the auxiliary regularizer. We set \u03bbpred = 2.0, \u03bbcls = 1.0, and \u03bbreg = 0.5 in Eq (15). The hyperparameter \u03bbp in Eq (17) is set to 1.0. The iteration number Tr of the feature re\ufb01nement is set to 2. We \ufb01rst train RPNs and then jointly train the entire network. The initial learning rate is 0.01, decay rate is 0.1, and stochastic gradient descent (SGD) is used as the optimizer. We deploy weight decay and dropout to prevent over-\ufb01tting. During testing, the image reconstruction branch will be discarded. We respectively set the RPN non-maximum suppression (NMS) [33] threshold to 0.6 and subgraph clustering [25] threshold to 0.5. We output all the predicates and use the top-1 category as the prediction for objects and relations. Models are evaluated on two tasks: Visual Phrase Detection (PhrDet) and Scene Graph Generation (SGGen). PhrDet is to detect the \u27e8subject-predicate-object\u27e9phrases. SGGen is to detect the objects within the image and recognize their pairwise relationships. Following [29, 25], \fthe Top-K Recall (denoted as Rec@K) is used as the performance metric; it calculates how many labeled relationships are hit in the top K predictions. In our experiments, Rec@50 and Rec@100 are reported. Note that, Li et al. [26] and Yang et al. [42] reported the results on two more metrics: Predicate Recognition and Phrase Recognition. These two evaluation metrics are based on ground-truth object locations, which is not the case we consider. In our setting, we use detected objects for image reconstruction and scene graph generation. To be consistent with the training, we choose PhrDet and SGGen as the evaluation metrics, which is also more practical. 4.3. Baseline Approaches for Comparisons Baseline. This baseline model is the re-implementation of Factorizable Net [25]. We re-train it based on our backbone. Speci\ufb01cally, we use the same RPN model, and jointly train the scene graph generator until convergence. KB. This model is a KB-enhanced version of the baseline model. 
External knowledge triples are incorporated in DMN. The explicit knowledge-based reasoning is incorporated in the feature re\ufb01ning procedure. GAN. This model improves the baseline model by attaching an auxiliary branch that generates the image from objects with GAN. We train this model in two phases. The \ufb01rst phase trains the image reconstruction branch only with the object annotations. Then we re\ufb01ne the model jointly with the scene graph generation model. KB-GAN. This is our full model containing both KB and GAN. It is initialized with the trained parameters from KB and GAN, and \ufb01ne-tuned with Alg. 1. 4.4. Quantitative Results In this section, we present our quantitative results and analysis. To verify the effectiveness of our approach and analyze the contribution of each component, we \ufb01rst compare different baselines in Table 2, and investigate the improvement in recognizing objects in Table 3. Then, we conduct a simulation experiment on VRD to investigate the effectiveness of our auxiliary regularizer in Table 4. The comparison of our approach with the state-of-the-art methods is reported in Table 5. Component Analysis. In our framework, we proposed two novel modules \u2013 KB-based feature re\ufb01nement (KB) and auxiliary image generation (GAN). To get a clear sense of how these components affect the \ufb01nal performance, we perform ablation studies in Table 2. The left-most columns in Table 2 indicate whether or not we use KB and GAN in our approach. To further investigate the improvement of our approach on recognizing objects, we also report object detection performance mAP [10] in Table 3. In Table 2, we observe that KB boosts PhrDet and SGGen signi\ufb01cantly. This indicates our knowledge-based Table 2: Ablation studies of individual components of our method on VRD. KB GAN PhrDet SGGen Rec@50 Rec@100 Rec@50 Rec@100 25.57 31.09 18.16 22.30 \u2713 27.02 34.04 19.85 24.58 \u2713 26.65 34.06 19.56 24.64 \u2713 \u2713 27.39 34.38 20.31 25.01 Table 3: Ablation study of the object detection on VRD. Model Faster R-CNN [33] ViPCNN [27] Baseline KB GAN KBGAN mAP 14.35 20.56 20.70 22.26 22.10 22.49 feature re\ufb01nement can effectively learn the commonsense knowledge of objects to achieve high recall for the correct relationships. By adding the image-level supervision to the baseline model, the performance is further improved. This improvement demonstrates that the proposed imagelevel supervision is capable of capturing meaningful context across the objects. These results align with our intuitions discussed in the introduction. With KB and GAN, our model can generate scene graphs with high recall. Table 3 demonstrates the improvement in recognizing objects. We can see that our full model (KB-GAN) outperforms Faster R-CNN [33], ViP-CNN [27] measured by mAP. It is worth noticing that the huge gain of KB illustrates that the introduction of commonsense knowledge substantially contributes to the object detection task. Table 4: Ablation study of image-level supervision on subsampled VRD. KB GAN PhrDet SGGen Rec@50 Rec@100 Rec@50 Rec@100 15.44 20.96 10.94 14.53 \u2713 24.07 30.89 17.50 22.31 \u2713 \u2713 26.62 31.13 19.78 24.17 Investigation on Image-level Supervision. As aforementioned, our image-level supervision can exploit the instances of rare categories. To demonstrate that our introduced image-level supervision can help on this issue, we exaggerate the problem by randomly removing 20% object instances as well as their corresponding relationships from the dataset. 
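Such a subsample can be produced with a simple filter over the annotations: keep a random 80% of the object instances and drop every relationship that touches a removed instance. The record layout assumed below is hypothetical.

```python
import random

def subsample_annotations(objects, relations, keep_ratio=0.8, seed=0):
    """objects: list of instance records; relations: list of (subj_idx, predicate, obj_idx)."""
    rng = random.Random(seed)
    keep = sorted(rng.sample(range(len(objects)), int(keep_ratio * len(objects))))
    remap = {old: new for new, old in enumerate(keep)}
    kept_objects = [objects[i] for i in keep]
    kept_relations = [(remap[s], p, remap[o]) for s, p, o in relations
                      if s in remap and o in remap]       # drop relations touching removed objects
    return kept_objects, kept_relations
```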
In Table 4, we can see that training on such a subsampled dataset (with only 80% object instances), Rec@50 of the baseline model drops from 25.57 (resp. 18.16) to 15.44 (resp. 10.94) for PhrDet and SGGen. However, with the help of GAN, Rec@50 of our \ufb01nal model decreases only slightly from 27.39 (resp. 20.31) to 26.62 (resp. 19.78) for PhrDet and SGGen, respectively. We give our explanation on this signi\ufb01cant performance improvement as below. Too many low-frequency categories deteriorate the training gain when only utilizing the class la\fTable 5: Comparison with existing methods on PhrDet and SGGen. Dataset Model PhrDet SGGen Rec@50 Rec@100 Rec@50 Rec@100 VRD [29] ViP-CNN [27] 22.78 27.91 17.32 20.01 DR-Net [5] 19.93 23.45 17.73 20.88 U+W+SF+LK: T+S [45] 26.32 29.43 19.17 21.34 Factorizable Net [25] 26.03 30.77 18.32 21.20 KB-GAN 27.39 34.38 20.31 25.01 VG-MSDN [26] ISGG [41] 15.87 19.45 8.23 10.88 MSDN [26] 19.95 24.93 10.72 14.22 Graph R-CNN [42] \u2013 \u2013 11.40 13.70 Factorizable Net [25] 22.84 28.57 13.06 16.47 KB-GAN 23.51 30.04 13.65 17.57 Figure 5: Qualitative results from KB-GAN. In each example, the left image is the original input image; the scene graph is generated by KB-GAN; and the right image is reconstructed from the detected objects. bel as training targets. With the explicit image-level supervision, the proposed image reconstruction path can utilize the large quantities of instances of rare classes. This imagelevel supervision idea is generic, which can apply to many potential applications such as object detection. Comparison with Existing Methods. Table 5 shows the comparison of our approach with the existing methods. We can see that our proposed method outperforms all the existing methods in the recall on both datasets. Compared with these methods, our model recognizes the objects and their relationships not only in the graph domain but also in the image domain. 4.5. Qualitative Results Figure 5 visualizes some examples of our full-model. We show the generated scene graph as well as the reconstructed image for each sample. It is clear that our method can generate high-quality relationship predictions in the generated scene graph. Also notable is that our auxiliary output images are reasonable. This demonstrates our model\u2019s capability to generate rich scene graph by learning with both external KB and auxiliary image-level regularizer. 5. Conclusion In this work, we have introduced a new model for scene graph generation which includes a novel knowledge-base feature re\ufb01nement network that effectively propagates contextual information across the graph, and an image-level supervision that regularizes the scene graph generation from image domain. Our framework outperforms state-of-theart methods for scene graph generation on VRD and VG datasets. Our experiments show that it is fruitful to incorporate the commonsense knowledge as well as the imagelevel supervision into the scene graph generation. Our work shows a promising way to improve high-level image understanding via scene graph. Acknowledgments This work was supported in part by Adobe Research, NTU-IGS, NTU-Alibaba Lab, and NTU ROSE Lab." + }, + { + "url": "http://arxiv.org/abs/1903.10658v4", + "title": "Unpaired Image Captioning via Scene Graph Alignments", + "abstract": "Most of current image captioning models heavily rely on paired image-caption\ndatasets. However, getting large scale image-caption paired data is\nlabor-intensive and time-consuming. 
In this paper, we present a scene\ngraph-based approach for unpaired image captioning. Our framework comprises an\nimage scene graph generator, a sentence scene graph generator, a scene graph\nencoder, and a sentence decoder. Specifically, we first train the scene graph\nencoder and the sentence decoder on the text modality. To align the scene\ngraphs between images and sentences, we propose an unsupervised feature\nalignment method that maps the scene graph features from the image to the\nsentence modality. Experimental results show that our proposed model can\ngenerate quite promising results without using any image-caption training\npairs, outperforming existing methods by a wide margin.", + "authors": "Jiuxiang Gu, Shafiq Joty, Jianfei Cai, Handong Zhao, Xu Yang, Gang Wang", + "published": "2019-03-26", + "updated": "2019-08-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Today\u2019s image captioning models heavily depend on paired image-caption datasets. Most of them employ an encoder-decoder framework [34, 9, 13, 12, 39], which uses a convolutional neural network (CNN) [16] to encode an image into a feature vector and then a recurrent neural network (RNN) to decode it into a text sequence. However, it is worthwhile noticing that the overwhelming majority of image captioning studies are conducted in English [1]. The bottleneck is the lack of large scale image-caption paired datasets in other languages, and getting such paired data for each target language requires human expertise in a timeconsuming and labor-intensive process. Several encoder-decoder models have been proposed in recent years for unsupervised neural machine translation [23, 4]. The key idea of these methods mainly relies on training denoising auto-encoders for language modeling and on sharing latent representations across the source and target languages for the encoder and the decoder. Despite the promising results achieved by the unsupervised neural Figure 1: Illustration of our graph-based learning method. Our model consists of one visual scene graph detector (TopLeft), one \ufb01xed off-the-shelf scene graph language parser (Bottom-Left), a scene graph encoder GS Enc, a sentence decoder GS Dec, and a feature mapping module. machine translation, unpaired image-to-sentence translation is far from mature. Recently, there have been few attempts at relaxing the requirement of paired image-caption data for this task. The \ufb01rst work in this direction is the pivot-based semisupervised solution proposed by Gu et al. [14], where they take a pivot language as a bridge to connect the source image and the target language caption. Their method requires an image-text paired data for the pivot language (Chinese), and a parallel corpus for the pivot to target translation. Feng et al. [10] move a step further, where they conduct purely unsupervised image captioning without relying on any labeled image-caption pairs. Their method uses a sentence discriminator along with a visual concept detector to connect the image and the text modalities through adversarial training. Although promising, the results of the existing methods are still far below compared to their paired counterparts. Unlike unsupervised neural machine translation where the encoders can be shared across the source and target languages, due to the different structures and characteristics of image and text modalities, the encoders of image and sentence cannot be shared to connect the two modalities. 
The critical challenge in unpaired image captioning is, therefore, the gap of information misalignment in images and sentences, so as to \ufb01t the encoder-decoder framework. arXiv:1903.10658v4 [cs.CV] 17 Aug 2019 \fFortunately, with recent breakthroughs in deep learning and image recognition, higher-level visual understanding tasks such as scene graph construction have become popular research topics with signi\ufb01cant advancements [43, 6, 8, 7, 42, 37, 17]. Scene graph, as an abstraction of objects and their complex relationships, provide rich semantic information of an image. The value of scene graph representation has been proven in a wide range of vision-language tasks, such as visual question answering [32] and paired image captioning [37]. Considering the signi\ufb01cant challenges that unpaired image captioning problem poses in terms of different characteristics between visual and textual modalities, in this paper, we propose a scene graph-based method that exploits the rich semantic information captured by scene graphs. Our framework comprises an image scene graph generator, a sentence scene graph generator, a scene graph encoder, a sentence decoder, and a feature alignment module that maps the features from image to sentence modality. Figure 1 sketches our solution. We \ufb01rst extract the sentence scene graphs from the sentence corpus and train the scene graph encoder and the sentence decoder on the text modality. To align the scene graphs between images and sentences, we use CycleGAN [44] to build the data correspondence between the two modalities. Speci\ufb01cally, given the unrelated image and sentence scene graphs, we \ufb01rst encode them with the scene graph encoder trained on the sentence corpus. Then, we perform unsupervised cross-modal mapping for feature level alignments with CycleGAN. By mapping the features, the encoded image scene graph is pushed close to the sentence modality, which is then used effectively as input to the sentence decoder to generate meaningful sentences. The main contributions of this work include: (1) a novel scene graph-based framework for unpaired image captioning; (2) an unsupervised feature alignment method that learns the cross-modal mapping without any paired data. Our experimental results demonstrate the effectiveness of our proposed model in producing quite promising image captions. The comparison with recent unpaired image captioning methods validates the superiority of our method. 2. Background Paired Image Captioning. Image captioning has been extensively studied in the past few years. Most of the existing approaches are under the paired setting, that is, input images come with their corresponding ground-truth captions [34, 19, 40, 13, 12, 38]. One classic work in this setting is [34], in which an image is encoded with a CNN, and the sentence is decoded with a Long Short-Term Memory (LSTM) network. Following this, many methods have been proposed to improve this encoder-decoder method. One of the notable improvements is the attention mechanism [36, 13, 3], which allows the sentence decoder to dynamically focus on some related image regions during the caption generation process. Some other works explore other architectures for language modeling [15, 31]. For example, Gu et al. [15] introduce a CNN-based language model for image captioning. Another theme of improvements is to use reinforcement learning (RL) to address the exposure bias and loss-evaluation mismatch problems for sequence prediction [29, 13]. 
The self-critical learning approach proposed in [29] is a pioneering work, which well addresses the above two problems. Our work in this paper is closely related to [37], which uses scene graph to connect images and sentences to incorporate inductive language bias. The key difference is that the framework in [37] is based on the paired setting, while in this work, we learn the scene graph-based network under the unsupervised training setting. Unpaired Image Captioning. More recently, some researchers started looking into the problem of image captioning in the unpaired setting [14, 10], where there is no correspondence between images and sentences during training. The \ufb01rst work on this task is the pivot-based solution proposed by Gu et al. [14]. In their setting, although they do not have any correspondence between images and sentences in the target language, they do require a paired image-caption dataset in the pivot language and another machine translation dataset which consists of sentences in the pivot language and the paired sentences in the target language. They connect the pivot language sentences in different domains by shared word embeddings. The most recent work on unpaired image captioning is proposed by Fang et al. [10]. They generate pseudo image-sentence pairs by feeding the visual concepts of images to a concept-to-sentence model and performing the alignment between image features and sentence features in an adversarial manner. While several attempts have been made for the unpaired image captioning problem, this challenging task is far from mature. Arguably, compared to unpaired sentenceto-sentence [23] and image-to-image [44] translations, unpaired image-to-sentence translation is more challenging because of the signi\ufb01cantly different characteristics of the two modalities. In contrast to the existing unpaired image captioning methods [14, 10], our proposed method adopts scene graph as an explicit representation to bridge the gap between image and sentence domains. 3. Method In this section, we describe our unpaired image captioning framework. We \ufb01rst revisit the paired captioning setting. 3.1. Paired Image Captioning Revisited In the paired captioning setting, our training goal is to generate a caption S from an image I such that S is similar \fto its ground-truth caption. The popular encoder-decoder framework for image captioning can be formulated as: P(S|I) = P(V |I) | {z } Encoder P(S|V ) | {z } Decoder (1) where the encoder P(V |I) encodes the image I into the image features V with a CNN model [16], and the decoder P(S|V ) predicts the image description S from the image features V . The most common training objective is to maximize the probability of the ground-truth caption words given the image: P t log p\u03b8I\u2192S(St|S0:t\u22121, I), where p\u03b8I\u2192S(St|S0:t\u22121, I) corresponds to the Softmax output at time step t. During inference, the word St is drawn from the dictionary DS according to the Softmax distribution. 3.2. Unpaired Image Captioning In the unpaired image captioning setting, we have a dataset of images I = {I1, . . . , INI}, and a dataset of sentences S = {S1, . . . , SNS}, where NI and NS are the total numbers of images and sentences, respectively. In this setting, there is no alignment between I and S. In fact, I and S can be completely unrelated coming from two different domains. Our goal is to train an image captioning model in a completely unsupervised way. 
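For reference, the paired factorization of Eq. (1), a CNN encoder P(V|I) followed by a word-by-word decoder P(S|V) trained with the log-likelihood objective, corresponds roughly to the following. Layer sizes are illustrative, and this is the conventional paired baseline rather than the proposed unpaired model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairedCaptioner(nn.Module):
    """Minimal CNN-feature encoder + LSTM decoder corresponding to Eq. (1); sizes are illustrative."""
    def __init__(self, vocab_size, feat_dim=2048, hid=512):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hid)            # image features V -> decoder state space
        self.embed = nn.Embedding(vocab_size, hid)
        self.lstm = nn.LSTM(hid, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, feats, captions):
        # feats: (B, feat_dim) pooled CNN features; captions: (B, T) ground-truth word ids
        h0 = torch.tanh(self.proj(feats)).unsqueeze(0)  # (1, B, hid) initial hidden state
        hs, _ = self.lstm(self.embed(captions[:, :-1]), (h0, torch.zeros_like(h0)))
        return self.out(hs)                             # teacher-forced next-word logits

def xe_loss(logits, captions):
    """sum_t -log p(S_t | S_0:t-1, I), the standard paired training objective."""
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), captions[:, 1:].reshape(-1))
```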
In our setup, we assume that we have access to an off-the-shelf image scene graph detector and a sentence (or text) scene graph parser. As shown in Figure 1, our proposed image captioning model consists of one image scene graph generator, one sentence scene graph generator, one scene graph encoder GS Enc, one attention-based decoder for sentence generation GS Dec, and a cycle-consistent feature alignment module. Given an image I as input, our method \ufb01rst extracts an image scene graph GI using the scene graph generator. It then maps GI to the sentence scene graph GS from which the RNN-based decoder generates a sentence S. More formally, the image captioner P(S|I) in the unpaired setting can be decomposed into the following submodels, I \u2192GI (2) P(S|I) = P(GS|GI) | {z } Unpaired Mapping P(S|GS) | {z } Decoder (3) where GI and GS are the image scene graph and the sentence scene graph, respectively. The most crucial component in Eq. (3) is the unpaired mapping of image and text scene graphs. In our approach, this mapping is done in the feature space. In particular, we encode the image and sentence scene graphs into feature vectors and learn to map the feature vectors across the two modalities. We reformulate Eq. (3) as follows: P(S|I) = P(GS|GI)P(S|GS) \u2248 P(f I|GI)P(f S|f I)P(S|f S) (4) where P(f I|GI) is a graph encoder, P(S|f S) is an RNNbased sentence decoder, and P(f S|f I) is a cross-modal feature mapper in the unpaired setting. In our implementation, we learn the scene graph encoder and the RNN-based decoder on the text modality \ufb01rst, and then we try to map the image scene graph into a common feature space (i.e., the text space) so that the same sentence decoder can be used to decode the sentence from the mapped image features. The sentence encoding and decoding processes can be formulated as the following two steps: S \u2192GS (5) \u02c6 S = arg max S P(S|f S)P(f S|GS) (6) where \u02c6 S is the reconstructed sentence. We train the model to enforce \u02c6 S to be close to the original sentence S. In the following, we describe the scene graph generator in Sec. 3.2.1, the scene graph encoder in Sec. 3.2.2, the sentence decoder in Sec. 3.2.3, and our unpaired feature mapping process in Sec. 3.2.4. 3.2.1 Scene Graph Generator Formally, a scene graph is a graph G = (V, E) containing a set of nodes V and a set of edges E. As exempli\ufb01ed in Figure 1, the nodes can be of three types: object node, attribute node, and relationship node. We denote oi as the i-th object, ri,j as the relation between object oi and oj, and al i as the l-th attribute of object oi. An image scene graph generator contains an object detector, an attribute classi\ufb01er, and a relationship classi\ufb01er. We use Faster-RCNN [28] as the object detector, MOTIFS [41] as the relationship detector, and an additional classi\ufb01er for attribute identi\ufb01cation [37]. To generate the sentence scene graph GS for a sentence, we \ufb01rst parse the sentence into a syntactic tree using the parser provided by [2], which uses a syntactic dependency tree built by [21]. Then, we transform the tree into a scene graph with a rule-based method [30]. 3.2.2 Scene Graph Encoder We follow [37] to encode a scene graph. Speci\ufb01cally, we represent each node as a de-dimensional feature vector, and use three different spatial graph convolutional encoders to encode the three kinds of nodes by considering their neighborhood information in the scene graph. Encoding objects. 
In a scene graph (image or sentence), an object oi can play either a subject or an object role in a relation triplet depending on the direction of the edge. Therefore, for encoding objects, we consider what relations they are associated with and what roles they play in that relation. Let \u27e8oi, oj, ri,j\u27e9denote the triplet for relation ri,j, where oi \fplays a subject role and oj plays an object role. The encoding for object oi, that is xoi \u2208Rdx is computed by xoi = 1 Nri X oj gs(eoi, eoj, eri,j) + 1 Nri X ok go(eok, eoi, erk,i) (7) where eoi \u2208Rde and eri,j \u2208Rde are the embeddings (randomly initialized) representing the object oi and the relation ri,j, respectively; gs(\u00b7) and go(\u00b7) are the spatial graph convolution operations for objects as a subject and as an object, respectively; and Nri is the total number of relation triplets that oi is associated with in the scene graph. Encoding attributes. An object oi may have multiple attributes in the scene graph. The encoding of an object based on its attributes, i.e., xai \u2208Rdx is computed by: xai = 1 Nai X l ga(eoi, eal i) (8) where Nai is the total number of attributes that object oi has, and ga(\u00b7) is the spatial convolutional operation for attribute based encoding. Encoding relations. Each relation ri,j is encoded into xri,j \u2208Rdx by considering the objects that the relation connects in the relation triplet, xri,j = gr(eoi, eoj, eri,j) (9) where gr(\u00b7) is the associated convolutional operation. After graph encoding, for each image scene graph or sentence scene graph, we have three sets of embeddings: X k o =[xk o1, \u00b7 \u00b7 \u00b7 , xk oNk o ] X k r =[xk r1, \u00b7 \u00b7 \u00b7 , xk rNk r ] X k a =[xk a1, \u00b7 \u00b7 \u00b7 , xk aNk a ] , k \u2208{I, S} (10) where N k o , N k r , and N k a can be different from each other. Figure 2 illustrates the encoding process. 3.2.3 Sentence Decoder The goal of the sentence decoder is to generate a sentence \u02c6 S from the encoded embeddings, X k o , X k r , and X k a . However, these three sets of embeddings are of different lengths and contain different information. Therefore, their importance for the sentence decoding task also vary. In order to compute a relevant context for the decoding task effectively, we use three attention modules, one for each type of embeddings. The attention module go Att over X k o is de\ufb01ned as: f k o = N k o X i=1 \u03b1ixk oi; \u03b1i = exp(wT o xk oi) P m exp(wT o xk om) (11) Figure 2: The architectures for scene graph encoding, attention, and sentence decoding; go Att, gr Att, and ga Att are attention modules for each kind of features, respectively. where wo is the associated (learnable) weight vector. The attentions over X k r , and X k a are similarly de\ufb01ned to get the respective attention vectors, f k r \u2208Rdf and f k a \u2208Rdf . The attention vectors are then combined to get a triplet level embedding, which is then fed into an RNN-based decoder to generate the sentence \u02c6 S. The following sequence of operations formally describes the process. f k ora = gora([f k o, f k r, f k a]) (12) ot, ht = RNNDec(f k ora, ht\u22121, \u02c6 St\u22121) (13) \u02c6 St \u2248softmax(Woot) (14) where gora(\u00b7) is a neural network that generates a triplet level embedding, and ot is the cell output of the decoder at time step t. 3.2.4 Training and Inference We \ufb01rst train the graph encoder and the sentence decoder in the text modality (Eq. 
(6)), and then perform a feature level alignments for cross-modal unsupervised mapping. Training in Text Modality. The graph convolutional encoding of a sentence scene graph GS into a feature representation f S ora, and reconstructing the original sentence S from it are shown at the bottom part of Figure 1, where the encoder and the decoder are denoted as GS Enc and GS Dec, respectively. We \ufb01rst train GS Enc and GS Dec models by minimizing the cross-entropy (XE) loss: LXE(\u03b8G\u2192S) = \u2212 X t log p\u03b8G\u2192S(St|S0:t\u22121) (15) where \u03b8G\u2192S are the parameters of GS Enc and GS Dec, p\u03b8G\u2192S(St|S0:t\u22121) is the output probability of t-th word in the sentence given by the sentence decoder. We further employ a reinforcement learning (RL) loss that takes the entire sequence into account. Speci\ufb01cally, we take the CIDEr [33] score as the reward and optimize \u03b8G\u2192S \fFigure 3: Conceptual illustration of our unpaired feature mapping. For each kind of embedding p, there are two mapping functions F p I\u2192S and F p S\u2192I, and two associated adversarial discriminators Dp I and Dp S. by minimizing the negative expected rewards as follows: LRL(\u03b8G\u2192S) = \u2212E \u02dc S\u223cP\u03b8G\u2192S [r( \u02dc S)] (16) where r( \u02dc S) is the reward calculated by comparing the sampled sentence \u02dc S with the ground-truth sentence S using the CIDEr metric. In our model, we follow the RL approach proposed in [29, 13]. Unsupervised Mapping of Scene Graph Features. To adapt the learned model from sentence modality to the image modality, we need to translate the scene graph from the image to the sentence modality. We take the discrepancy in the modality of scene graphs directly into account by aligning the representation of the image scene graph with the sentence scene graph. We propose to use CycleGAN [44] to learn the feature alignment across domains. Figure 3 illustrates our idea. Given two sets of unpaired features f I p and f S p , where p \u2208{o, r, a}, we have two mapping functions F p I\u2192S(\u00b7) and F p S\u2192I(\u00b7), and two discriminators Dp S and Dp I. F p I\u2192S(\u00b7) maps the image features to the sentence features, and F p S\u2192I(\u00b7) maps the sentence features to the image features. The discriminators are trained to distinguish the real (original modality) features from the fake (mapped) features. The mappers are trained to fool the respective discriminators through adversarial training. For image to text mapping, the adversarial loss is LGAN(F p I\u2192S, Dp S) = ES[log Dp S(f S p,Real)] +EI[log(1 \u2212Dp S(F p I\u2192S(f I p,Real))] (17) Similarly, for sentence to image mapping, we have the similar adversarial loss, LGAN(F p S\u2192I, Dp I) = EI[log Dp I(f I p,Real)] +ES[log(1 \u2212Dp I(F p S\u2192I(f S p,Real))] (18) Due to the unpaired setting, the mapping from the source to the target modality is highly under-constrained. To make the mapping functions cycle-consistent, CycleGAN introduces a cycle consistency loss to regularize the training, Lcyc(F p S\u2192I, F p I\u2192S) =EI[\u2225f I p,Rec \u2212f I p,Real\u22251] +ES[\u2225f S p,Rec \u2212f S p,Real\u22251] (19) where f I p,Rec and f S p,Rec are the reconstructed features in the image and text modalities, respectively. 
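After the cross-entropy warm-up of Eq. (15), two further training steps are used: fine-tuning the text-side model with the CIDEr-reward policy gradient of Eq. (16), shown here in the common self-critical form with a greedy baseline, and training the cross-modal mappers with the adversarial terms of Eqs. (17)-(18) plus the cycle loss of Eq. (19). In the sketch below, `model.sample`, `model.greedy_decode` and `cider` are assumed helpers, the discriminators are assumed to end in a sigmoid, and the mappers are placeholders.

```python
import torch
import torch.nn.functional as F

def rl_step(model, scene_graph, gt_caption, cider, optimizer):
    """Eq. (16): minimize -E[r(S~)], using the greedy caption's reward as a variance-reducing baseline."""
    sample, sample_logp = model.sample(scene_graph)          # stochastic roll-out and its log-probs
    with torch.no_grad():
        greedy = model.greedy_decode(scene_graph)
    advantage = cider(sample, gt_caption) - cider(greedy, gt_caption)
    loss = -advantage * sample_logp.sum()
    optimizer.zero_grad(); loss.backward(); optimizer.step()

def mapping_step(f_img, f_sent, F_i2s, F_s2i, D_i, D_s, optimizer, lam=10.0):
    """Update the two mappers for one feature type p: adversarial terms (Eqs. 17-18) + cycle loss (Eq. 19).
    The discriminators D_i / D_s are updated in a separate, symmetric step (omitted here)."""
    fake_s, fake_i = F_i2s(f_img), F_s2i(f_sent)
    d_fake_s, d_fake_i = D_s(fake_s), D_i(fake_i)
    adv = F.binary_cross_entropy(d_fake_s, torch.ones_like(d_fake_s)) + \
          F.binary_cross_entropy(d_fake_i, torch.ones_like(d_fake_i))    # fool both discriminators
    cyc = (F_s2i(fake_s) - f_img).abs().mean() + (F_i2s(fake_i) - f_sent).abs().mean()
    loss = adv + lam * cyc
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss
```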
Formally, our overall objective for unpaired feature mapping is to optimize the following loss: LAdv(\u03b8I\u2194S) = X p\u2208{o,r,a} LAdv(\u03b8p I\u2194S) (20) LAdv(\u03b8p I\u2194S) = LGAN(F p S\u2192I, Dp I) + LGAN(F p I\u2192S, Dp S) + \u03bbLcyc(F p S\u2192I, F p I\u2192S) (21) where \u03b8p I\u2194S are the parameters of the two mapping functions and the discriminators for each kind of embedding p, and \u03bb is a hyperparameter to control the regularization. Cross-modal Inference. During inference, given an image I, we generate its corresponding scene graph GI using a pre-trained image scene graph generator, use the scene graph encoder to get the image features f I p, which are then mapped through the image-to-text mapper F p I\u2192S. The mapped features are then used for sentence generation using the sentence decoder. The cross-modal inference process can be formally expressed as: \u02c6 S = arg max S P(S|f I ora)P(f I r, f I o, f I a|GI) (22) f I ora =gora([F o I\u2192S(f I o), F r I\u2192S(f I r ), F a I\u2192S(f I a)]) (23) where gora(\u00b7) is the same module as Eq. (12). 4. Experiments In this section, we evaluate the effectiveness of our proposed method. We \ufb01rst introduce the datasets and the experimental settings. Then, we present the performance comparisons as well as ablation studies to understand the impact of different components of our framework. 4.1. Datasets and Setting Table 1 shows the statistics of the training datasets used in our experiments. We use Visual Genome (VG) dataset [22] to train our image scene graph generator. We \ufb01lter the object, attribute, and relation annotations by keeping those that appear more than 2,000 times in the training set. The resulting dataset contains 305 objects, 103 attributes, and 64 relations (a total of 472 items). We collect the image descriptions from the training split of MSCOCO [24] and use them as our sentence corpus to train the scene graph encoder and the sentence decoder. In pre-processing, we tokenize the sentences and convert all the tokens to lowercase. The tokens that appear less than \f\ufb01ve times are treated as \u27e8UNK\u27e9tokens. The maximum caption length is \ufb01xed to 16, and all the captions longer than 16 are truncated. This results in a base vocabulary of 9,487 words. For sentence scene graph generation, we generate the scene graph using the language parser in [2, 37]. We perform a \ufb01ltering process by removing objects, relations, and attributes which appear less than 10 times in all the parsed scene graphs. After this \ufb01ltering, we obtain 5,364 objects, 1,308 relations, and 3,430 attributes. This gives an extended vocabulary where the previous 9,487 words are consistent with the base vocabulary. The embeddings for the vocabulary items are randomly initialized. Table 1: Statistics of the training datasets. Scene Graph Vocabulary Size #Object #Attribute #Relation Image (VG) 305 103 64 Sentence (MSCOCO) 5,364 3,430 1,308 For learning the mapping between the modalities, the unpaired training data is intentionally collected by shuf\ufb02ing the images and the sentences from MSCOCO randomly. We validate the effectiveness our method on the same test splits as used in [14, 10] for a fair comparison. The widely used CIDEr-D [33], BLEU [27], METEOR [5], and SPICE [2] are used to measure the quality of the generated captions. 4.2. Implementation Details We follow [37] to train our image scene graph generator on VG. 
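At test time, the trained pieces compose as in Eqs. (22)-(23): detect the image scene graph, encode it, map each feature type into the sentence space, and decode. A minimal sketch, where the module interfaces are assumptions:

```python
def caption_image(image, sg_detector, sg_encoder, mappers, g_ora, decoder, beam_size=5):
    """Unpaired inference: I -> G^I -> (f_o, f_r, f_a) -> mapped features -> sentence."""
    g_img = sg_detector(image)                          # image scene graph G^I
    f_o, f_r, f_a = sg_encoder(g_img)                   # attended object / relation / attribute features
    f_o, f_r, f_a = mappers["o"](f_o), mappers["r"](f_r), mappers["a"](f_a)   # F^p_{I->S}, Eq. (23)
    f_ora = g_ora(f_o, f_r, f_a)                        # triplet-level embedding, same module as Eq. (12)
    return decoder.beam_search(f_ora, beam_size)        # sentence decoder trained on the text corpus
```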
We \ufb01rst train a Faster-RCNN and use it to identify the objects in each image. We select at least 10 and at most 100 objects for an image. The object features extracted by RoI pooling are used as input to the object detector, the relation classi\ufb01er, and the attribute classi\ufb01er. We adopt the LSTM-based relation classi\ufb01er from [41]. Our attribute classi\ufb01er is a single hidden layer network with ReLU activation (i.e., fc-ReLU-fc-Softmax), and we keep only the three most probable attributes for each object. For scene graph encoding, we set de = dx = df = 1000. We implement gs, go, gr, ga, and gora (Eq. (7) (12)) as fullyconnected layers with ReLU activations. The two mapping functions in Eq. (17) and Eq. (18) are implemented as fullyconnected layers with leaky ReLU activations. The sentence decoder has two LSTM layers. The input to the \ufb01rst LSTM is the word embeddings and its previous hidden state. The input to the second LSTM is the concatenation of three terms: the triplet embedding f ora, the output from the \ufb01rst LSTM, and its previous hidden state. We set the number of hidden units in each LSTM to 1,000. During training, we \ufb01rst train the network with the crossentropy loss (Eq. (15)) for 20 epochs and then \ufb01ne-tune it with RL loss in Eq. (16). The learning rate is initialized to 4 \u00d7 10\u22124 for all parameters and decayed by 0.8 after every 5 epoch. We use Adam [20] for optimization with a batch size of 50. During the (unpaired) alignment learning, we freeze the parameters of the scene graph encoder and the sentence decoder, and only learn the mapping functions and the discriminators. For all the experiments, we empirically set \u03bb to 10 in Eq. (21). During inference, we use beam search with a beam size of 5. For quantifying the ef\ufb01cacy of the proposed framework, we use several baselines for performance comparison. Graph-Enc-Dec (Avg). This baseline learns the graph encoder GS Enc and the sentence decoder GS Dec only on sentence corpus. It takes the average operation (as opposed to attention) over the three sets of features: X k o , X k r , and X k a . During testing, we directly feed the image scene graph GI to this model and get the image description. Graph-Enc-Dec (Att\u2217). This model shares the same setting with Graph-Enc-Dec (Avg) but replaces the average operation with a shared attention mechanism for all three sets (i.e., same attention for object, attribution, and relation). Graph-Enc-Dec (Att). This model modi\ufb01es the GraphEnc-Dec (Att\u2217) with an independent attention mechanism for each set of features. Graph-Align. This is our \ufb01nal model. It is initialized with the trained parameters from Graph-Enc-Dec (Att) that uses separate attentions, and then it also learns the feature mapping functions using adversarial training. 4.3. Quantitative Results Investigation on Sentence Decoding. In this experiment, we \ufb01rst train the network with Eq. (15), and then \ufb01ne-tune it with Eq. (16) on the sentence corpus. Table 2 compares the results of three baseline models on the sentence corpus. It can be seen that the attention-based model performs better than the average-based model in all metrics, which demonstrates that weighting over features can better model the global dependency of features. Note that separate attention model for each set of features can signi\ufb01cantly improve the performance. 
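The gap between the shared-attention (Att*) and independent-attention (Att) variants comes down to whether one attention vector is reused across the object, relation, and attribute sets or each set gets its own weight; schematically, with dimensions and module names assumed:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attend(feats, w):
    """f = sum_i alpha_i x_i with alpha_i = softmax_i(w^T x_i), as in Eq. (11)."""
    alpha = F.softmax(feats @ w, dim=0)
    return (alpha.unsqueeze(-1) * feats).sum(dim=0)

class SharedAttention(nn.Module):           # Graph-Enc-Dec (Att*): one weight vector for all three sets
    def __init__(self, dx=1000):
        super().__init__()
        self.w = nn.Parameter(torch.randn(dx))
    def forward(self, X_o, X_r, X_a):
        return attend(X_o, self.w), attend(X_r, self.w), attend(X_a, self.w)

class IndependentAttention(nn.Module):      # Graph-Enc-Dec (Att): separate weights per feature type
    def __init__(self, dx=1000):
        super().__init__()
        self.w_o, self.w_r, self.w_a = (nn.Parameter(torch.randn(dx)) for _ in range(3))
    def forward(self, X_o, X_r, X_a):
        return attend(X_o, self.w_o), attend(X_r, self.w_r), attend(X_a, self.w_a)
```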
The inconsistent alignment of three kinds of features in Figure 4 also supports that we should treat these sets of features separately. Table 2: Results for different sentence scene graph decoders on MSCOCO test split, where B@n refers to BLEU-n, M refers to METEOR, and C refers to CIDEr. All values are reported in percentage (bold numbers are the best results). Methods B@1 B@2 B@3 B@4 M C Graph-Enc-Dec(Avg) 84.3 71.8 58.8 47.1 31.0 129.4 Graph-Enc-Dec(Att\u2217) 91.8 80.3 67.5 55.5 34.3 151.4 Graph-Enc-Dec(Att) 94.1 84.6 72.9 61.5 36.3 168.8 Table 3: Results for different baselines without GAN training on the test split of the MSCOCO. Methods B@1 B@2 B@3 B@4 M C Graph-Enc-Dec(Avg) 52.1 34.1 23.8 17.6 14.9 41.4 Graph-Enc-Dec(Att\u2217) 54.3 37.0 26.8 20.3 15.9 47.2 Graph-Enc-Dec(Att) 56.0 33.6 20.1 11.9 17.0 48.5 \f(a) Object Features (Raw) (b) Relation Features (Raw) (c) Attribute Features (Raw) (d) Triplet Features (Raw) (e) Object Features (Aligned) (f) Relation Features (Aligned) (g) Attribute Features (Aligned) (h) Triplet Features (Aligned) Figure 4: Visualization of features in 2D space by t-SNE [25]. We plot the scatter diagrams for 1,500 samples. Investigation on Unpaired Setting without GAN. Table 3 shows the comparisons among different baselines when no explicit cross-modal mapping of the features is done. By feeding the image scene graph directly to the trained scene graph encoder and the sentence decoder, we can achieve promising performance on the test set. Graph-Enc-Dec (Att) still achieves the best performance in all metrics. This is reasonable since both scene graphs and captions are highlevel understandings of the image, and by capturing rich semantic information about objects and their relationships, scene graphs provide an effective way to connect an image to its natural language description. This \ufb01nding also validates the feasibility of our approach to unpaired image captioning through the use of scene graphs. However, compared to the paired setup (see Table 6), these results are still inferior, meaning that only scene graph is not enough to achieve comparable performance. Investigation on Unpaired Setting with GAN. To align the features from the image modality to the text modality, we use CycleGAN with our Graph-Enc-Dec(Att) model. Table 4 shows the comparisons of three kinds of GAN loss: binary cross-entropy (BCE) loss with logits (the vanilla GAN loss [11]), mean squared error (MSE) loss, and gradient penalty (GP) [18]. We also compare the results for using different output dimensions in the discriminator.1 We can see that most of the CycleGAN variants improve the performance substantially compared to the results in Table 3. The GP with 64-dimension discriminator output achieves the best performance. Note that, when we set the output dimension to 1, the performance drops. This in1For example, for a dimension of 64, the output is a 64-dimensional vector, which is compared against an all-one vector of length 64 for a \u2018Real\u2019 input, and with an all-zero vector of length 64 for a \u2018Fake\u2019 input. Table 4: Ablation studies of different GAN losses for Graph-Align model. 
GAN Loss Discriminator B@1 B@2 B@3 B@4 C BCE df \u2192df 64.9 44.2 28.6 18.1 63.0 df \u219264 66.0 46.0 30.3 19.7 65.5 df \u21921 65.5 45.4 29.6 18.8 65.2 MSE df \u2192df 65.3 44.8 28.9 18.3 62.9 df \u219264 66.0 45.9 29.7 18.8 63.8 df \u21921 58.4 36.3 21.7 12.6 46.7 GP df \u2192df 66.1 46.1 30.3 19.5 65.5 df \u219264 67.1 47.8 32.3 21.5 69.5 df \u21921 64.5 44.2 28.5 17.9 61.1 Table 5: The performances of using different feature mappings on MSCOCO test split. Shared GAN learns a shared feature mapping for three sets of features with CycleGAN. Single GAN concatenates the three kinds of embeddings together and learns a mapping with CycleGAN. Methods B@1 B@2 B@3 B@4 M C Shared GAN 60.7 41.3 26.9 17.6 20.0 60.1 Single GAN 61.8 42.1 27.3 17.7 20.1 61.2 Graph-Align 67.1 47.8 32.3 21.5 20.9 69.5 dicates that a strong discriminator is crucial for unpaired feature alignments. From the bottom row of Figure 4, we can see that with the help of the mapping module, the three kinds of embeddings are aligned very well, especially the attribute embedding (Figure 4g). It is also worth noting that the triplet features in Figure 4h are better aligned compared to the raw triplet features in Figure 4d. To further demonstrate the effectiveness of the proposed three feature mapping functions, we conduct additional experiments in Table 5. It can be seen that treating the three set of embeddings (X k o , X k r , and X k a ) without distinction performs worse than Graph-Align. \fFigure 5: Qualitative examples of different methods. In each example, the left image is the original input image; the middle is the image scene graph; the right image is the ground-truth sentence scene graph for compassion. Table 6: Performance comparisons on the test split of the MSCOCO dataset. Method BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE CIDEr SPICE Paired Setting Soft-Attention [36] 71.8 50.4 35.7 25.0 23.0 \u2013 90.0 \u2013 Stack-Cap [13] 78.6 62.5 47.9 36.1 27.4 56.9 120.4 20.9 SGAE (base) [38] 79.9 \u2013 \u2013 36.8 27.7 57.0 120.6 20.9 Unpaired Setting Language Pivoting [14] 46.2 24.0 11.2 5.4 13.2 \u2013 17.7 Adversarial+Reconstruction [10] 58.9 40.3 27.0 18.6 17.9 43.1 54.9 11.1 Graph-Align 67.1 47.8 32.3 21.5 20.9 47.2 69.5 15.0 Finally, Table 6 compares the results of the Graph-Align model with those of the existing unpaired image captioning methods [14, 10] on the MSCOCO test split. We can notice that our proposed Graph-Align achieves the best performance in all metrics. This demonstrates the effectiveness of our scene graph-based unpaired image captioning model. Figure 6: Examples of unpaired image captioning failure cases. Although the accuracy of image scene graph highly in\ufb02uences the performance of captioning results, our Graph-Align can still generate relevant image captions. 4.4. Qualitative Results Figure 5 visualizes some examples of our models. We show the generated image descriptions using different models along with the ground-truth captions (bottom part). In the generated image and sentence scene graphs, we mark object, relation, attribute nodes in orange, blue, and green, respectively. From these exemplary results, we observe that our method can generate reasonable image descriptions by aligning the unpaired visual-textual modalities with the help of scene graphs. Also, we observe that the number of attributes (words in green) in the sentence scene graph is less than that in the image scene graph. 
This observation potentially explains why there is a huge feature embedding gap between image and text in Figure 4c. Figure 6 presents some failure cases of our Graph-Align model. We can see that the image scene graphs mainly focus on local regions/objects, while sentence scene graphs convey more information about the images. Such information misalignment leads to generating different captions. 5. Conclusions In this paper, we have proposed a novel framework to train an image captioning model in an unsupervised manner without using any paired image-sentence data. Our method uses scene graph as an intermediate representation of the image and the sentence, and maps the scene graphs in their feature space through cycle-consistent adversarial training. We used graph convolution and attention methods to encode the objects, their attributes, their relationships in a scene graph. Our experimental results based on quantitative and qualitative evaluations show the effectiveness of our method in generating meaningful captions, which also outperforms existing methods by a good margin. In future, we would like to evaluate our method on other datasets and explore other mapping methods such as optimal transport. Acknowledgments This work was supported in part by NTU-IGS, NTUAlibaba Lab, NTU DSAIR Center, and NTU ROSE Lab." + }, + { + "url": "http://arxiv.org/abs/1711.06420v2", + "title": "Look, Imagine and Match: Improving Textual-Visual Cross-Modal Retrieval with Generative Models", + "abstract": "Textual-visual cross-modal retrieval has been a hot research topic in both\ncomputer vision and natural language processing communities. Learning\nappropriate representations for multi-modal data is crucial for the cross-modal\nretrieval performance. Unlike existing image-text retrieval approaches that\nembed image-text pairs as single feature vectors in a common representational\nspace, we propose to incorporate generative processes into the cross-modal\nfeature embedding, through which we are able to learn not only the global\nabstract features but also the local grounded features. Extensive experiments\nshow that our framework can well match images and sentences with complex\ncontent, and achieve the state-of-the-art cross-modal retrieval results on\nMSCOCO dataset.", + "authors": "Jiuxiang Gu, Jianfei Cai, Shafiq Joty, Li Niu, Gang Wang", + "published": "2017-11-17", + "updated": "2018-06-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction As we are entering the era of big data, data from different modalities such as text, image, and video are growing at an unprecedented rate. Such multi-modal data exhibit heterogeneous properties, making it dif\ufb01cult for users to search information of interest effectively and ef\ufb01ciently. This paper focuses on the problem in multi-modal information retrieval, which is to retrieve the images (resp. texts) that are relevant to a given textual (resp. image) query. The fundamental challenge in cross-modal retrieval lies in the heterogeneity of different modalities of data. Thus, the learning of a common representation shared by data with different modalities plays the key role in cross-modal retrieval. In recent years, a great deal of research has been devoted to bridge the heterogeneity gap between different modalities [15, 12, 16, 8, 38, 7, 6, 3, 34]. 
For textural-visual crossmodal embedding, the common way is to \ufb01rst encode individual modalities into their respective features, and then map them into a common semantic space, which is often Figure 1: Conceptual illustration of our proposed crossmodal feature embedding with generative models. The cross-modal retrievals (Image-to-Text and Text-to-Image) are shown in different color. The two blue boxes are crossmodal data, and the generated data are shown in two dashed yellow clouds. optimized via a ranking loss that encourages the similarity of the mapped features of ground-truth image-text pairs to be greater than that of any other negative pair. Once the common representation is obtained, the relevance / similarity between the two modalities can be easily measured by computing the distance (e.g. l2) between their representations in the common space. Although the feature representations in the learned common representation space have been successfully used to describe high-level semantic concepts of multi-modal data, they are not suf\ufb01cient to retrieve images with detailed local similarity (e.g., spatial layout) or sentences with word-level similarity. In contrast, as humans, we can relate a textual (resp. image) query to relevant images (resp. texts) more accurately, if we pay more attention to the \ufb01ner details of the images (resp. texts). In other words, if we can ground the representation of one modality to the objects in the other arXiv:1711.06420v2 [cs.CV] 13 Jun 2018 \fmodality, we can learn a better mapping. Inspired by this concept, in this paper we propose to incorporate generative models into textual-visual feature embedding for cross-modal retrieval. In particular, in addition to the conventional cross-modal feature embedding at the global semantic level, we also introduce an additional cross-modal feature embedding at the local level, which is grounded by two generative models: image-to-text and text-to-image. Figure 1 illustrates the concept of our proposed cross-modal feature embedding with generative models at high level, which includes three learning steps: look, imagine, and match. Given a query in image or text, we \ufb01rst look at the query to extract an abstract representation. Then, we imagine what the target item (text or image) in the other modality should look like, and get a more concrete grounded representation. We accomplish this by asking the representation of one modality (to be estimated) to generate the item in the other modality, and comparing the generated items with gold standards. After that, we match the right image-text pairs using the relevance score which is calculated based on a combination of grounded and abstract representations. The contributions of this paper are twofold. First, we incorporate two generative models into the conventional textual-visual feature embedding, which is able to learn concrete grounded representations that capture the detailed similarity between the two modalities. Second, we conduct extensive experimentations on the benchmark dataset, MSCOCO. Our empirical results demonstrate that the combination of the grounded and the abstract representations can signi\ufb01cantly improve the state-of-the-art performance on cross-modal image-caption retrieval. 2. Related Works Our work is closely related to the existing works on supervised cross-modal feature learning/embedding for cross image-text applications such as image captioning and image-text cross-modal retrieval. 
Particularly, the pairwise ranking is often adopted to utilize similar or dissimilar cross-modal data pairs to learn a proper similarity or distance metric between different modalities [36, 2, 24, 10]. Frome et al. [5] proposed a cross-modal feature embedding framework that use CNN and Skip-Gram [21] to extract cross-modal feature representations, and then associated them with a structured objective in which the distance between the matched image-caption pair is smaller than that between the mismatched pair. A similar framework is proposed by Kiros et al. [15], in which a Gated Recurrent Unit (GRU) was used as the sentence encoder. They also mapped the images and sentences to a common space and adopted the rank loss to penalize the model by averaging the individual violations across the negatives. Vendrov et al. [38] introduced an improved objective, which can preserve the partial order structure of a visual-semantic hierarchy. Klein et al. [17] adopted a similar objective and employed Fisher Vectors (FV) [29] as a pooling strategy of word embeddings for caption representation. In [18], they sold the visual word embedding idea and proposed a joint image-caption embedding model for image captioning. However, their model is based on cartoon-like images, which is dif\ufb01cult to be applied to real images. Considering the strong ability of Generative Adversarial Networks (GANs) in learning discriminative representation, Peng et al. [28] explored intermodality and intra-modality with a cross-modal GAN. Recently, Faghri et al. [4] improved Kiros\u2019s work by replacing the sum violations across the negative samples with the hardest negative samples. Several works have explored the alignment of visual objects and textual words [13, 30, 11, 25]. Karpathy et al. [13] used local alignment to embed the fragments of images and the sentences into a common space. Plummer et al. [30] went a step further and used all pairwise instances for similarity measurement. Jiang et al. [11] learned a multi-modal embedding by optimizing pairwise ranking, while enhancing both local alignment and global alignment. In [10], they introduced the context-modulated attention mechanism into the cross-modal embedding. Their attention scheme can selectively attend to pairwise instances of image and sentence, and then dynamically aggregate the measured similarity to obtain a global similarity between image and text. Instead of embedding the sentence with chain-structured RNNs, the recent work of Niu et al. [25] adopted a tree-structured LSTM to learn the hierarchical relations between sentences and images, and between phrases and visual objects. However, to align visual objects with textual words, a suf\ufb01cient amount of annotations need to be acquired as well, which induces expensive human annotations. Most of the existing studies on cross-modal textualvisual retrieval mainly focus on learning a high-level common space with ranking loss. In contrast, our approach learns not only the high-level global common space but also the local common space through generative models. 3. Proposed Generative Cross-modal Learning Network 3.1. System Overview Figure 2 shows the overall architecture for the proposed generative cross-modal feature learning framework, named GXN. The entire system consists of three training paths: multi-modal feature embedding (the entire upper part), image-to-text generative feature learning (the blue path), and text-to-image generative adversarial feature learning (the green path). 
The \ufb01rst path is similar to the existing cross-modal feature embedding that maps different modality features into a common space. However, the difference \fFigure 2: The proposed generative cross-modal learning framework (GXN). The entire framework consists of three training paths: cross-modal feature embedding (the entire upper part), image-to-text generative feature learning (the blue path), and text-to-image generative adversarial feature learning (the green path). It includes six networks: two sentence encoders RNNh Enc (dark green) and RNNl Enc (light green), one image encoder CNNEnc (blue), one sentence decoder RNNDec, one image decoder CNNDec and one discriminator Di. here is that we use two branches of feature embedding, i.e., making the embedded visual feature vh (resp. vl) and the textual feature th (resp. tl) closer. We consider (vh, th) as high-level abstract features and (vl, tl) as detailed grounded features. The grounded features will be used and regularized in the other two generative feature learning paths. The entire \ufb01rst training path mainly includes one image encoder CNNEnc and two sentence encoders RNNh Enc and RNNl Enc. The second training path (the blue path) is to generate a sentence from the embedded generative visual feature vl. It consists of the image encoder CNNEnc and a sentence detector RNNDec. With a proper loss against ground-truth sentences, the grounded feature vl will be adjusted via back propagation. The third training path (the green path) is to generate an image from the textual feature tl. Here we adopt the generative adversarial model, which comprises a generator / decoder CNNDec and a discriminator Di. Overall, through these two paths of cross-modal generative feature learning, we hope to learn powerful cross-modal feature representations. During the testing stage, {vh, vl} and {th, tl} will be used as the \ufb01nal feature representations for cross-modal retrieval, although the proposed GXN also produces other byproducts such as image-to-text generation and text-to-image generation, which are not the main focus of this paper. In the following, we describe each of the three training paths in detail. 3.2. Cross-modal Feature Embedding We follow the common cross-modal feature embedding approach to embed the representations of the image and the caption into a common space, and then use a pairwise ranking loss to learn the model parameters [38]. In particular, given an image-caption pair (i, c), where i is the image and c = (w0, \u00b7 \u00b7 \u00b7 , wT \u22121) is the corresponding description with wi being the one-hot word encoding, we encode a caption by embedding each word in c into a distributed representation using Wewi, where We is a shared word embedding matrix to be learned. We can be initialized randomly or using pre-trained embeddings like word2vec [22]. Then we use two sequential sentence encoders (e.g., GRU) to get the sentence representations. As for image encoding, we use a CNN that is pre-trained on ImageNet. More formally, we formulate the embedding and mapping of each modality as: vk =P k v (CNNEnc(i; \u03b8i)) tk =P k t (RNNk Enc(c; \u03b8k c )) , k \u2208{h, l} (1) where \u03b8i and \u03b8k c are the parameters of the image and caption encoders, P k v and P k t are the linear transformation functions which map the encoded vectors into a common embedding space, and vk and tk are the resulting mapped vectors for the image and the caption, respectively. 
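A minimal sketch of the two-branch embedding in Eq. (1) is given below: a shared word embedding, two GRU sentence encoders, and four linear projections into the joint space. The dimensions follow the experimental setup reported later (300-d word embeddings, 1024-d joint space), but the module itself is only an illustration and not the authors' implementation; in particular, both encoders are shown uni-directional here, whereas the abstract-branch encoder is bi-directional in the experiments.

import torch
import torch.nn as nn

class TwoBranchEmbedding(nn.Module):
    def __init__(self, img_dim=2048, word_dim=300, rnn_dim=1024, joint_dim=1024, vocab_size=27012):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_dim)            # shared word embedding W_e
        self.rnn_h = nn.GRU(word_dim, rnn_dim, batch_first=True)   # encoder for the abstract branch
        self.rnn_l = nn.GRU(word_dim, rnn_dim, batch_first=True)   # encoder for the grounded branch
        self.P_v_h = nn.Linear(img_dim, joint_dim)                 # P_v^h
        self.P_v_l = nn.Linear(img_dim, joint_dim)                 # P_v^l
        self.P_t_h = nn.Linear(rnn_dim, joint_dim)                 # P_t^h
        self.P_t_l = nn.Linear(rnn_dim, joint_dim)                 # P_t^l

    def forward(self, cnn_feat, caption_ids):
        # cnn_feat: (B, img_dim) image code from the CNN encoder
        # caption_ids: (B, T) word indices of the caption
        words = self.embed(caption_ids)
        _, h_h = self.rnn_h(words)                                 # final hidden state as sentence code
        _, h_l = self.rnn_l(words)
        v_h, v_l = self.P_v_h(cnn_feat), self.P_v_l(cnn_feat)
        t_h, t_l = self.P_t_h(h_h[-1]), self.P_t_l(h_l[-1])
        return v_h, v_l, t_h, t_l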
We first consider the same pairwise ranking loss proposed in [15, 36, 12, 38]. We refer (i, c) as positive pairs and denote the negative samples by i' and c', where i' goes over images not described by c and c' goes over captions that do not describe i. We want the objective function to encourage the similarity of ground truth caption-image pairs to be greater than that of all other negative pairs. We, therefore, optimize the ranking loss of L_{Rank} = \frac{1}{N} \sum_{n=1}^{N} L_R(i_n, c_n), where the single sample ranking loss L_R is defined as:

L_R = \sum_{t'} [\alpha - s(t, v) + s(t', v)]_+ + \sum_{v'} [\alpha - s(t, v) + s(t, v')]_+    (2)

where \alpha is a margin, s(t, v) = -\|\max(0, v - t)\|^2 is the order-violation penalty [38] used as a similarity, t' and v' denote the representations of the negative samples. Here, [x]_+ represents \max(x, 0). Considering we have two branches of cross-modal feature embedding, which result in two pairs of cross-modal features: the abstract features (t_h, v_h) and the grounded features (t_l, v_l), we modify the ranking loss as

L_{R+} = \sum_{t'} [\alpha - s^*(t_{h,l}, v_{h,l}) + s^*(t'_{h,l}, v_{h,l})]_+ + \sum_{v'} [\alpha - s^*(t_{h,l}, v_{h,l}) + s^*(t_{h,l}, v'_{h,l})]_+    (3)

where s^*(t_{h,l}, v_{h,l}) = \lambda s(t_h, v_h) + (1 - \lambda) s(t_l, v_l) is a combined score with \lambda being the tradeoff weight.

3.3. Image-to-text Generative Feature Learning

For the image-to-text training path (i2t, blue path in Figure 2), our goal is to encourage the grounded visual feature v_l to be able to generate sentences that are similar to the ground-truth captions. In particular, we first encode the image with CNN_Enc, and then decode the grounded visual feature into a sentence with RNN_Dec. Like the traditional RNN-based text generation models, we first train our model on a cross-entropy (XE) loss defined as:

L_{xe} = -\sum_{t=0}^{T-1} \log p_{\theta_t}(w_t | w_{0:t-1}, v_l; \theta_t)    (4)

where w_t is the ground-truth word, p_{\theta_t}(w_t | w_{0:t-1}, v_l) is the output probability of word w_t given by the decoder with parameter \theta_t. However, the XE loss is a word-level cost, and models trained on this suffer from the exposure bias problem [1] and the loss-evaluation mismatch problem [6, 31]. Thus we further employ a loss that takes the entire sequence into account. Specifically, to directly optimize the sentence-level metrics, we optimize our model by minimizing the negative expected reward given by:

L_{rl} = -\mathbb{E}_{\tilde{c} \sim p_{\theta_t}}[r(\tilde{c})]    (5)

where \tilde{c} = (\tilde{w}_0, \cdots, \tilde{w}_{T-1}) is the word sequence sampled from the decoder, r(\tilde{c}) is the reward calculated by comparing the generated sentence with the corresponding reference sentences using a standard evaluation metric like BLEU [26] or CIDEr [37]. Following the reinforcement learning (RL) approach described in [33, 6], the expected gradients of Equation (5) using Monte-Carlo sample \tilde{c} from p_{\theta_t} can be approximated as:

\nabla_{\theta_t} L_{rl} = -\mathbb{E}_{\tilde{c} \sim p_{\theta_t}}[r(\tilde{c}) \cdot \nabla_{\theta_t} \log p_{\theta_t}(\tilde{c})] \approx -r(\tilde{c}) \nabla_{\theta_t} \log p_{\theta_t}(\tilde{c}) \approx -(r(\tilde{c}) - r_b) \nabla_{\theta_t} \log p_{\theta_t}(\tilde{c})    (6)

where r_b is the baseline estimator used to reduce the variance without changing the expected gradient. In our model, we use the inference process reward as the baseline.
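Returning to the ranking objective, Eqs. (2)-(3) can be sketched as follows. Treating the other pairs in a mini-batch as the negatives t' and v' is a common implementation choice and an assumption here, not something the text specifies.

import torch

def order_sim(t, v):
    # s(t, v) = -||max(0, v - t)||^2 for every caption/image pair in the batch
    # t: (B, D) caption embeddings, v: (B, D) image embeddings -> (B, B) scores
    diff = torch.clamp(v.unsqueeze(0) - t.unsqueeze(1), min=0)
    return -(diff ** 2).sum(dim=-1)

def combined_ranking_loss(t_h, v_h, t_l, v_l, lam=0.5, margin=0.05):
    # Eq. (3): hinge over negatives with s*(t, v) = lam*s(t_h, v_h) + (1 - lam)*s(t_l, v_l)
    scores = lam * order_sim(t_h, v_h) + (1.0 - lam) * order_sim(t_l, v_l)
    pos = scores.diag()
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    # negative captions t' for each image: vary the row, fix the column
    cost_c = torch.clamp(margin - pos.view(1, -1) + scores, min=0).masked_fill(mask, 0)
    # negative images v' for each caption: vary the column, fix the row
    cost_i = torch.clamp(margin - pos.view(-1, 1) + scores, min=0).masked_fill(mask, 0)
    return cost_c.sum() + cost_i.sum()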
During the early stage of training, optimization of Equation (6) alone does not ensure the readability and \ufb02uency of the generated caption [27]. To deal with this, we use a mixture of XE and RL losses: Lxe+rl = (1 \u2212\u03b3)Lxe + \u03b3Lrl (7) where \u03b3 is a tuning parameter used to balance the two losses. Equation (7) improves results on the metric used to compute the reward through the reinforcement loss but also ensures better readability and \ufb02uency due to the XE loss. For annealing and faster convergence, we start with optimizing XE loss in Equation (4), and then move to optimizing the joint loss in Equation (7). 3.4. Text-to-image Generative Adversarial Feature Learning For the text-to-image training path (t2i, green path in Figure 2), our goal is to encourage the grounded text feature tl to be able to generate an image that is similar to the ground-truth one. However, unlike the image-to-text path in Section 3.3, where the model is trained to predict the word conditioned on image and history words, the reverse path suffers from the highly multi-modal distribution of images conditioned on a text representation. The natural way to model such a conditional distribution is to use a conditional GAN [23, 32], which consists of a discriminator and a generator. The discriminator is trained to distinguish the real samples \u27e8real image, true caption\u27e9 from the generated samples of \u27e8fake image, true caption\u27e9 as well as samples of \u27e8real image, wrong caption\u27e9. Specifically, the discriminator Di and the generator Gi (CNNDec in Figure 2) play the min-max game on the following value function V (Di, Gi): min Gi max Di V (Di, Gi) = LDi + LGi. (8) The discriminator loss LDi and the generator loss LGi are de\ufb01ned as: LDi =Ei\u223cpdata[log Di(i, tl)] + \u03b2fE\u02c6 i\u223cpG[log(1 \u2212Di(\u02c6 i, tl))] + \u03b2wEi\u223cpdata[log(1 \u2212Di(i, t\u2032 l))] (9) LGi =E\u02c6 i\u223cpG[log(1 \u2212Di(\u02c6 i, tl))] (10) \fwhere tl and t\u2032 l denote the encoded grounded feature vectors for a matched and a mismatched captions, respectively, i is the matched real image from the true data distribution pdata, \u03b2f and \u03b2w are the tuning parameters, and \u02c6 i = Gi(z, tl) is the generated image by the generator Gi conditioned on tl and a noise sample z. The variable z is sampled from a \ufb01xed distribution (e.g., uniform or Gaussian distribution). In implementation, we compress tl to a lower dimension and then combine it with z. However, directly combining tl with z cannot produce satisfactory results. This is because of the limited amount of data and the unsmoothness between tl and z. Thus, we introduce another variable tc, which is sampled from a Gaussian distribution of N(\u00b5(\u03d5(tl)), \u03c3(\u03d5(tl))) [40], where \u00b5(\u03d5(tl)) and \u03c3(\u03d5(tl)) are the mean and the standard deviation of tl, \u03d5(tl) compresses tl to a lower dimension. We now generate the image conditioned on z and tc with \u02c6 i = Gi(z, tc). The discriminator loss LDi and the generator loss LGi are then modi\ufb01ed to: LDi =Ei\u223cpdata[log Di(i, tl)] + \u03b2fE\u02c6 i\u223cpG[log(1 \u2212Di(\u02c6 i, tl))] + \u03b2wEi\u223cpdata[log(1 \u2212Di(i, t\u2032 l))] (11) LGi =E\u02c6 i\u223cpG[log(1 \u2212Di(\u02c6 i, tl))] + \u03b2sDKL(N(\u00b5(\u03d5(tl)), \u03c3(\u03d5(tl))) \u2225N(0, 1)) (12) where \u03b2f, \u03b2w and \u03b2s are the tuning parameters, and the KLdivergence term is to enforce the smoothness of the latent data manifold. Alg. 
1 summarizes the entire training procedure. 4. Experiments 4.1. Dataset and Implementation Details We evaluate our approach on the MSCOCO dataset [19]. For cross-modal retrieval, we use the setting of [12], which contains 113,287 training images with \ufb01ve captions each, 5,000 images for validation and 5,000 images for testing. We experiment with two image encoders: VGG19 [35] and ResNet152 [9]. For VGG19, we extract the features from the penultimate fully connected layer. For ResNet152, we obtain the global image feature by taking a mean-pooling over the last spatial image features. The dimensions of the image feature vectors is 4096 for VGG19 and 2048 for ResNet152. As for text preprocessing, we convert all sentences to lower case, resulting in a vocabulary of 27,012 words. We set the word embedding size to 300 and the dimensionality of the joint embedding space to 1024. For the sentence encoders, we use a bi-directional GRU-based encoder to get the abstract feature representation th and one GRU-based encoder to get the grounded feature representation tl. The number of hidden units of both GRUs is set to 1024. For the sentence decoder, we adopt a one-layer GRUbased decoder which has the same hidden dimensions as the Algorithm 1 GXN training procedure. Input: Positive image i, negative image i\u2032, positive text c, negative text c\u2032, number of training batch steps S 1: for n = 1 : S do 2: /*Look*/ 3: Draw image-caption pairs: (i, c), i\u2032 and c\u2032. 4: vh, vl, v\u2032 h, v\u2032 l \u2190i, i\u2032 {Image encoding} 5: th, tl, t\u2032 h, t\u2032 l \u2190c, c\u2032 {Text encoding} 6: Update parameters with Geni2t-GXN 7: Update parameters with Gent2i-GXN 8: end for Function: Geni2t-GXN 1: /*Imagine*/ 2: \u02c6 c = RNNDec(vl, c) {Scheduled sampling} 3: Compute XE loss Lxe using (4). 4: \u02dc c \u2190RNNDec(vl){Sampling} 5: \u00af c \u2190RNNDec(vl){Greedy decoding} 6: Compute RL loss Lrl using (5). 7: Update model parameters by descending stochastic gradient of (7) with rb = r(\u00af c) (see (6)). 8: /*Match*/ 9: Update model parameters using (3). Function: Gent2i-GXN 1: /*Imagine*/ 2: tc \u223cN(\u00b5(\u03d5(tl)), \u03c3(\u03d5(tl))) 3: \u02c6 i = Gi(z, tc) 4: Update image discriminator Di using (11). 5: Update image generator Gi using (12). 6: /*Match*/ 7: Update model parameters using (3). two GRU-based encoders. During the RL training, we use CIDEr score as the sentence-level reward. We set \u03b2f = 0.5, \u03b2w = 0.5 and \u03b2s = 2.0 in Eq. (11) and (12), margin \u03b1 and \u03bb in Eq. (3) to be 0.05 and 0.5 respectively, and \u03b3 in Eq. (7) is increased gradually based on the epoch from 0.05 to 0.95. The output size of the image decoder CNNDec is 64 \u00d7 64 \u00d7 3, and the real image is resized before inputting to the discriminator. All the modules are randomly initialized before training except for the CNN encoder and decoder. Dropout and batch normalization are used in all our experiments. We use Adam [14] for optimization with a mini-batch size of 128 in all our experiments. The initial learning rate is 0.0002, and the momentum is 0.9. For evaluation, we use the same measures as those in [38], i.e., R@K, de\ufb01ned as the percentage of queries in which the ground-truth matchings are contained in the \ufb01rst K retrieved results. The higher value of R@K means better performance. Another metric we use is Med r, which is the median rank of the \ufb01rst retrieved ground-truth sentence or image. The lower its value, the better. 
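For completeness, a small sketch of how R@K and Med r can be computed from a query-by-candidate similarity matrix is given below. It assumes a single ground-truth match per query (the diagonal), so the actual protocol with five captions per image needs a minor extension, and it is not tied to the authors' evaluation code.

import numpy as np

def recall_and_median_rank(sim, ks=(1, 5, 10)):
    # sim[i, j]: similarity between query i and candidate j; the ground truth is j == i
    order = np.argsort(-sim, axis=1)                                   # candidates sorted by score
    ranks = np.array([int(np.where(order[i] == i)[0][0]) for i in range(sim.shape[0])])
    recalls = {k: 100.0 * float(np.mean(ranks < k)) for k in ks}       # R@K in percent
    med_r = float(np.median(ranks)) + 1.0                              # Med r, 1-based
    return recalls, med_r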
We also compute another score, denoted as \u2018Sum\u2019, to evaluate the overall per\fformance for cross-modal retrieval, which is the summation of all R@1 and R@10 scores de\ufb01ned as follows: Sum = R@1 + R@10 | {z } Image-to-Text + R@1 + R@10 | {z } Text-to-Image (13) In addition, we evaluate the quality of the generated captions with the standard evaluation metrics: CIDEr and BLEU-n. BLEU-n rates the quality of the retrieved captions by comparing n-grams of the candidate with the ngrams of the \ufb01ve gold captions and count the number of matches. CIDEr is a consensus-based metric which is more correlated with human assessment of caption quality. 4.2. Baseline Approaches for Comparisons GRU (VGG19) and GRUBi (VGG19): These two baselines use the pre-trained VGG19 as the image encoder. GRU (VGG19) adopts a one layer GRU as the sentence encoder, while GRUBi (VGG19) adopts a bi-directional GRU as the sentence encoder. These two models are trained using Eq. (2). GXN (ResNet152) and GXN (\ufb01ne-tune): These two baselines use the same two GRU sentence encoders as our proposed GXN framework, but without the generation components. In other words, they only contain the cross-modal feature embedding training path using Eq. (3). Here, the pre-trained ResNet152 is adopted as the image encoder. GXN (ResNet152) and GXN (\ufb01ne-tune) refer to the models without or with \ufb01ne-tuning ResNet152, respectively. The \ufb01ne-tuned ResNet152 model is used as the image encoder for all other GXN models. GXN (i2t, xe) and GXN (i2t, mix): These two GXN baseline models contain not only the cross-modal feature embedding training path but also the image-to-text generative training path. GXN (i2t, xe) and GXN (i2t, mix) are the two models optimized with Eq. (4) and (7), respectively. GXN (t2i): This baseline model contains both the crossmodal feature embedding training path and the text-toimage generative training path, and is trained with Gent2iGXN in Algorithm 1. GXN (i2t+t2i): This is our proposed full GXN model containing all the three training paths. It is initialized with the trained parameters from GXN (i2t, mix) and GXN (t2i) and \ufb01ne-tuned with Algorithm 1. 4.3. Quantitative Results In this section, we present our quantitative results and analysis. To verify the effectiveness of our approach and to analyze the contribution of each component, we compare different baselines in Table 1 and 2. The comparison of our approach with the state-of-the-art methods is shown in Table 3. Effect of a Better Text Encoder. The \ufb01rst two rows in Table 1 compare the effectiveness of the two sentence encoders. Compared with GRU (VGG19), GRUBi (VGG19) Table 1: Cross-modal retrieval results on MSCOCO 1Kimage test set (bold numbers are the best results). Image-to-Text Text-to-Image Model R@1 R@10 Med R@1 R@10 Med GRU(VGG19) 51.4 91.4 1.0 39.1 86.7 2.0 GRUBi(VGG19) 53.6 90.2 1.0 40.0 87.8 2.0 GXN(ResNet152) 59.4 94.7 1.0 47.0 92.6 2.0 GXN(\ufb01ne-tune) 64.0 97.1 1.0 53.6 94.4 1.0 GXN(i2t,xe) 68.2 98.0 1.0 54.5 94.8 1.0 GXN(i2t,mix) 68.4 98.1 1.0 55.6 94.6 1.0 GXN(t2i) 67.1 98.3 1.0 56.5 94.8 1.0 GXN (i2t+t2i) 68.5 97.9 1.0 56.6 94.5 1.0 can make full use of the context information from both directions and achieve better performance, i.e., GRUBi (VGG19) increases the caption retrieval R@1 from 51.4 to 53.6 and image retrieval R@1 from 39.1 to 40.0. Effect of a Better Image Encoder. We further investigate the effect of image encoding model on the cross-modal feature embedding. 
By replacing the VGG19 model in GRUBi (VGG19) with ResNet152, we achieve huge performance gains. The caption retrieval R@1 increases from 53.6 to 64.0, and the image retrieval R@1 increases from 40.0 to 53.6. Effect of the Generative Models. We \ufb01rst consider the incorporation of the image-to-caption generation process into our GXN model. From Table 1, it can be seen that, compared with GXN (\ufb01ne-tune), GXN (i2t, xe) achieves signi\ufb01cantly better performance on the image-to-text retrieval. This validates our assumption that by combining the abstract representation with the grounded representation learned by caption generation (imagining), we can retrieve more relevant captions. Then, as we further enrich the model with the mixed RL+XE loss of Eq. (7), we observe further improvements (see GXN (i2t, mix)). We also evaluate the effect of incorporating the text-toimage generation process into our GXN model. It can be seen from Table 1 that, compared with GXN (\ufb01ne-tune), GXN (t2i) signi\ufb01cantly improves the text-to-image retrieval performance. This is because the grounded text feature tl is well learned via the text-to-image generation process (imagining). Although the image-to-text retrieval performance of GXN (t2i) is not as good as GXN (i2t, mix), it is still much better than GXN (\ufb01ne-tune), which does not incorporate any generative process. The \ufb01nal row in Table 1 shows the performance of our complete model, i.e., GXN (i2t+t2i), which incorporates both image and text generations. We can see that GXN (i2t+t2i) achieves the best performances in general, having the advantages of both GXN (i2t, mix) and GXN (t2i). Quality of the retrieved captions. For the image-to-text retrieval task in Table 1, Table 2 reports the quality of the retrieved captions using the sentence-level metrics, BLEU \fTable 2: Evaluating the quality of the retrieved captions on MSCOCO 1K test set using the sentence-level metrics, where B@n is a short form for BLEU-n, and C is a short form for CIDEr. All values are reported in percentage. The 2nd column is the rank order of the retrieved caption. Model No. B@1 B@2 B@3 B@4 C GXN(\ufb01ne-tune) 1 54.6 34.5 21.0 12.9 56.3 GXN(i2t,xe) 1 56.5 36.2 22.6 14.1 59.2 GXN(i2t,mix) 1 57.0 36.7 23.0 14.4 60.0 GXN(t2i) 1 56.0 36.0 22.4 14.3 58.8 1 57.1 36.9 23.3 14.9 61.1 2 55.8 35.8 22.4 13.7 58.3 GXN(t2i+t2i) 3 54.2 33.6 20.5 12.7 54.0 4 53.1 32.9 19.9 11.9 51.2 5 53.2 32.8 19.6 11.3 51.1 Figure 3: Visual results of image-to-text retrieval, where the top-5 retrieved captions and the generated caption are shown in red color. and CIDEr. Both BLEU and CIDEr have been shown to correlate well with human judgments [37]. As shown in Table 2, incorporating the generative models into GXN yields better results than GXN (\ufb01ne-tune) that does not incorporate any generation process. Note that those scores are calculated over \ufb01ve reference sentences. This demonstrates that our proposed GXN model can retrieve captions that are closer to the ground-truth ones. 4.3.1 Comparisons with the State-of-the-art Table 3 shows the comparisons of our cross-modal retrieval results on MSCOCO dataset with state-of-the-art methods. We can see that our framework achieves the best performance in all metrics, which clearly demonstrates the advantages of our model. To make our approach more convincing and generic, we also conduct experiments on Flickr30K dataset with results shown in Table 4. 4.4. 
Qualitative Results In this section, we present a qualitative analysis of our GXN (i2t+t2i) framework on cross-modal retrieval. Results of image-to-text retrieval. Figure 3 depicts some examples for image-to-text retrieval, where the results of Figure 4: Visual results of text-to-image retrieval. 2nd row: retrieved images. 3rd row: image samples generated by our conditional GAN. VSE0 and VSE++ are adopted from [4]. We show the top-5 retrieved captions as well as the ground-truth captions. We can see that the retrieved captions of our model can better describe the query images. Results of text-to-image retrieval. Figure 4 depicts some examples for text-to-image retrieval, where we show the top-5 retrieved images as well as the generated images. Compared to the ground-truth image and the retrieved images, although the generated images are of limited quality for complex multi-object scenes, they still contain certain plausible shapes, colors, and backgrounds. This suggests that our model can capture the complex underlying language-image relations. Some more samples are shown in Figure 5. We show the retrieved and generated results for both image-to-text and text-to-image on the same image-caption pairs. Results of word embedding. As a byproduct, a word embedding matrix We (mentioned at the beginning of Section 3.2) is also learned in our GXN models. We visualize the learned word embedding by projecting some selected word vectors into a 2-D space in Figure 6. We can see that compared with the embeddings learned from GXN (\ufb01netune), our GXN (i2t+t2i) can learn word embedding with more related visual meaning. For example, we \ufb01nd that words like \u2018eats\u2019 and \u2018stares\u2019 of GXN (i2t+t2i) are closer to each other compared to those of GXN (\ufb01ne-tune). This is also consistent with the fact that when we \u2018eat\u2019 some food; we also tend to \u2018stare\u2019 at it. 5. Conclusion In this paper, we have proposed a novel cross-modal feature embedding framework for cross image-text retrieval. The uniqueness of our framework is that we incorporate the image-to-text and the text-to-image generative models into the conventional cross-modal feature embedding. We learn both the high-level abstract representation and the lo\fTable 3: Comparisons of the cross-modal retrieval results on MSCOCO dataset with the state-of-the-art methods. We mark the unpublished work with \u2217symbol. Note that \u2018Sum\u2019 is the summation of the two R@1 scores and the two R@10 scores. Image-to-Text Retrieval Text-to-Image Retrieval Model R@1 R@10 Med r R@1 R@10 Med r Sum 1K Test Images m-CNN [20] 42.8 84.1 2.0 32.6 82.8 3.0 242.3 HM-LSTM [25] 43.9 87.8 2.0 36.1 86.7 3.0 254.5 Order-embeddings [38] 46.7 88.9 2.0 38.9 85.9 2.0 260.4 DSPE+Fisher Vector [39] 50.1 89.2 39.6 86.9 265.8 sm-LSTM [10] 53.2 91.5 1.0 40.7 87.4 2.0 272.8 \u2217VSE++ (ResNet152, \ufb01ne-tune) [4] 64.7 95.9 1.0 52.0 92.0 1.0 304.6 GXN (i2t+t2i) 68.5 97.9 1.0 56.6 94.5 1.0 317.5 5K Test Images Order-embeddings [38] 23.3 65.0 5.0 18.0 57.6 7.0 163.9 \u2217VSE++ (ResNet152, \ufb01ne-tune) [4] 41.3 81.2 2.0 30.3 72.4 4.0 225.2 GXN(t2i+t2i) 42.0 84.7 2.0 31.7 74.6 3.0 233.0 Figure 5: More visual results of cross-modal retrieval. Table 4: Experimental results on Flickr30K 1k image test set. Image-to-Text Text-to-Image Model R@1 R@10 Med r R@1 R@10 Med r *VSE++ [4] 52.9 87.2 1.0 39.6 79.5 2.0 GXN (i2t+t2i) 56.8 89.6 1.0 41.5 80.1 2.0 (a) GXN (\ufb01ne-tune) (b) GXN (i2t+t2i) Figure 6: Visualization of word embedding. 
cal grounded representation of multi-modal data in a maxmargin learning-to-rank framework. Our framework signi\ufb01cantly outperforms state-of-the-art methods for texturalvisual cross-modal retrieval on MSCOCO dataset. Future research directions include considering pixel-level visual quality assessment and other strong discriminators to improve the quality of the generated images. Acknowledgments This research was supported by the National Research Foundation, Prime Minister\u2019s Of\ufb01ce, Singapore, under its IDM Futures Funding Initiative, and NTU CoE Grant. This research was carried out at the Rapid-Rich Object Search (ROSE) Lab at the Nanyang Technological University, Singapore. The ROSE Lab is supported by the National Research Foundation, Singapore, and the Infocomm Media Development Authority, Singapore. We gratefully acknowledge the support of NVAITC (NVIDIA AI Tech Center) for our research at NTU ROSE Lab, Singapore." + }, + { + "url": "http://arxiv.org/abs/1709.03376v3", + "title": "Stack-Captioning: Coarse-to-Fine Learning for Image Captioning", + "abstract": "The existing image captioning approaches typically train a one-stage sentence\ndecoder, which is difficult to generate rich fine-grained descriptions. On the\nother hand, multi-stage image caption model is hard to train due to the\nvanishing gradient problem. In this paper, we propose a coarse-to-fine\nmulti-stage prediction framework for image captioning, composed of multiple\ndecoders each of which operates on the output of the previous stage, producing\nincreasingly refined image descriptions. Our proposed learning approach\naddresses the difficulty of vanishing gradients during training by providing a\nlearning objective function that enforces intermediate supervisions.\nParticularly, we optimize our model with a reinforcement learning approach\nwhich utilizes the output of each intermediate decoder's test-time inference\nalgorithm as well as the output of its preceding decoder to normalize the\nrewards, which simultaneously solves the well-known exposure bias problem and\nthe loss-evaluation mismatch problem. We extensively evaluate the proposed\napproach on MSCOCO and show that our approach can achieve the state-of-the-art\nperformance.", + "authors": "Jiuxiang Gu, Jianfei Cai, Gang Wang, Tsuhan Chen", + "published": "2017-09-11", + "updated": "2018-03-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction The challenge of image captioning lies in designing a model that can effectively utilize the image information and generate more human-like rich image descriptions. Motivated by the recent advances in natural language processing, current image captioning approaches typically follow the encodingdecoding framework (Ranzato et al. 2016), which consists of a Convolutional Neural Network (CNN) based image encoder and a Recurrent Neural Network (RNN) based sentence decoder, with various variants for image captioning (Fang et al. 2015; Mao et al. 2014; Wu et al. 2016). Most of these existing image captioning approaches are trained by maximizing the likelihood of each ground-truth word given the previous ground-truth words and the image with back propagation. There are three major problems in these existing image captioning methods. Firstly, it is extremely hard for them to generate rich \ufb01ne-grained descriptions. 
This is because rich descriptions require high-complexity models, where the problem of vanishing gradients often occurs, considering Copyright c \u20dd2018, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. Figure 1: Illustration of our proposed coarse-to-\ufb01ne framework. Our model consists of one image encoder (CNN) and a sequence of sentence decoders (attention-based LSTM networks), and it takes the image as input and re\ufb01nes the image descriptions from coarse to \ufb01ne. Here we show the increasingly improved image descriptions in two stages (gray and dark gray). the back-propagated gradients diminish in strength as they propagate through many layers of a complex network. Secondly, there is an exposure bias between the training and the testing (Ranzato et al. 2016; Wiseman and Rush 2016; Gu, Cho, and Li 2017). Speci\ufb01cally, the sentence decoder is trained to predict a word given the previous ground-truth words, while at testing time, the caption generation is accomplished by greedy search or with beam search, which predicts the next word based on the previously generated words that is different from the training mode. Since the model has never been exposed to its own predictions, it will result in error accumulation at test time. To address the exposure bias problem, scheduled sampling (Bengio et al. 2015), i.e., randomly selecting between previous groundtruth words and previously generated words, has become the current dominant training procedure to \ufb01t RNNs based models. However, it can only mitigate the exposure bias but cannot largely solve it. Thirdly, there is a loss-evaluation mismatch (Ranzato et al. 2016). Speci\ufb01cally, language models are usually trained to minimize the cross-entropy loss at each time-step, while at testing time, we evaluate the generated captions with the sentence-level evaluation metarXiv:1709.03376v3 [cs.CV] 14 Mar 2018 \frics, e.g., BLEU-n (Papineni et al. 2002), CIDEr (Vedantam, Lawrence Zitnick, and Parikh 2015), SPICE (Anderson et al. 2016), etc., which are non-differentiable and cannot be directly used as training loss. In this paper, considering the great challenge of generating rich image descriptions in one stage, we propose a coarse-to-\ufb01ne multi-stage prediction framework. Our model consists of an image encoder and a sequence of sentence decoders that repeatedly generate image descriptions in \ufb01ner details. However, directly composing such multi-stage decoders in an image captioning model faces the risk of the vanishing gradients problem. Motivated by the works on image recognition (Zhang, Lee, and Lee 2016; Fu, Zheng, and Mei 2017), which show that supervising very deep networks at intermediate layers aids in learning, we also enforce intermediate supervisions for each stage. Furthermore, inspired by the recent image captioning work (Rennie et al. 2017), which uses Reinforcement Learning (RL) to address the loss-evaluation mismatch problem and include the inference process as a baseline in training to address the exposure bias problem, we also design a similar RL-based training method but extend it from one-stage (Rennie et al. 2017) to our multi-stage framework, where rewards are introduced at each stage as intermediate supervision. Particularly, we optimize our model with a RL-based approach which utilizes the output of each intermediate decoder\u2019s test-time inference algorithm as well as the output of its preceding decoder to normalize the rewards. 
In addition, to cope with our coarseto-\ufb01ne learning framework, we adopt a stacked attention model to extract more \ufb01ne-grained visual attention information for word prediction at each stage. Figure 1 illustrates our proposed coarse-to-\ufb01ne framework, which consists of three stacked Long Short-Term Memory (LSTM) networks. The \ufb01rst LSTM generates the coarse-scale image description, and the subsequent LSTM networks serve as the \ufb01nescale decoders. At each stage in our model, attention weights and hidden vector produced by the preceding stage are used as inputs, which are taken as the disambiguating cues to the subsequent stage. As a result, each stage of the decoder generates words with increasingly re\ufb01ned attention weights as well as words. The main contributions of this work include: (a) a coarseto-\ufb01ne framework which increases the model complexity gradually with increasingly re\ufb01ned attention weights for image captioning and (b) a reinforcement learning method that directly optimizes model with the normalized intermediate rewards. Experiments show outstanding performance of our approach on MSCOCO (Lin et al. 2014). Related Works Image Captioning with Maximum Likelihood Estimation. The information gap between the visual content of the images and their corresponding descriptions has been extensively studied (Vinyals et al. 2015; Fang et al. 2015; Mao et al. 2014; Wu et al. 2016). The classical image captioning framework is based on the CNN image encoder and the RNN based sentence decoder (Vinyals et al. 2015). Only providing the global image feature is not suf\ufb01cient, as the power of RNNs lies in its capability to model the contextual information between time steps, while the global image representation weakens the RNN\u2019s memory of the visual information. To better incorporate the image information into the language processing, a few approaches have been proposed (You et al. 2016; Yang et al. 2016b). Visual attention for image captioning was \ufb01rst being introduced by (Xu et al. 2015) which incorporates the spatial attention on convolutional features of images into the encoderdecoder framework through the soft and hard attention mechanisms. Their work was later followed by (Yang et al. 2016a) and (Liu et al. 2017b) which further improves the visual attention mechanism. However, all these approaches are typically trained by maximising the likelihood estimation, often called as Teacher-Forcing (Williams and Zipser 1989). Instead of training the model with the handcrafted loss, some researchers applied the adversarial training for image captioning, called Professor-Forcing (Lamb et al. 2016), which uses adversarial training to encourage the dynamics of the RNNs to be the same as that of training conditioned on previous ground truth words. Recently, some works have proposed to encode more discriminative visual information into the captioning model. They leverage visual attributes of the image to enhance the visual information using some weakly supervised approach. In (You et al. 2016; Yao et al. 2017), they incorporate high-level attributes into the encoder-decoder framework and achieve large improvements. Both of (You et al. 2016) and (Wu et al. 2016) treat the attribute detection problem as a multi-instance learning (MIL) problem and train a corresponding CNN by minimizing the element-wise logistic loss function. (Liu et al. 2017a) uses R-FCN (Li et al. 
2016) to detect the visual attributes and adopts a sequential attention mechanism to translate the attributes to a word sequence. Image Captioning with Reinforcement Learning. Several attempts have been made to use reinforcement learning to address the discrepancy between the training and the testing objectives for image captioning (Rennie et al. 2017). The \ufb01rst work of training RNN-based sequence model with policy gradient was proposed by (Ranzato et al. 2016), in which a REINFORCE-based approach was used to calculate the sentence-level reward and a Monte-Carlo technique was employed for training. Similarly, (Liu et al. 2017c) estimates the action value by averaging three roll-out sequences which is the same as (Yu et al. 2017). Instead of using the sentence-level reward in training, (Bahdanau et al. 2017) use the token-level reward in temporal difference training for sequence generation. Recently, the self-critical learning approach proposed by (Rennie et al. 2017) utilizes an improved REINFORCE algorithm with a reward obtained by the current model against the baseline, i.e., the inference algorithm. All these existing researches on image captioning mainly focus on one-stage training (Mao et al. 2014; Vinyals et al. 2015; Rennie et al. 2017). However, it is challenging to generate a rich description for the image in one stage. Rather than generating image description in one-step, in this paper, we propose a coarse-to-\ufb01ne model by stacking multiple intermediate sentence decoders and optimizing them with \fsentence-level evaluation metrics, where the coarse decoder generates the coarse caption and reduces the computational burden for the \ufb01ne-scale sentence decoders to generate complex and rich image descriptions. Note that our coarse-to\ufb01ne concept at high level is similar to the coarse-to-\ufb01ne reasoning (Kiddon and Domingos 2011), while the latter is not for image captioning. Our RL-based supervision for solving the loss-evaluation mismatch problem is related to (Rennie et al. 2017), while ours is designed for our multi-stage coarse-to-\ufb01ne model and (Rennie et al. 2017) is for the conventional one-stage model. Methodology In this paper, we consider the problem of learning to generate image description \u02c6 Y = { \u02c6 Y0, . . . , \u02c6 YT \u22121} for an image I, where \u02c6 Yt \u2208D is the predicted word, D is the dictionary, and T denotes the sequence length. Our algorithm builds a coarse-to-\ufb01ne model with the same target as those onestage models, but with the additional intermediate layers between the output layer and the input layer. We \ufb01rst train the model by maximizing log-likelihood of each successive target word conditioned on the input image and the gold history of target words Y = {Y0, . . . , YT \u22121}, and then optimize the model with sentence-level evaluation metrics. We denote by \u02c6 Yi, i \u2208{0, \u00b7 \u00b7 \u00b7 , Nf} the predicted word sequence of the ith stage decoder, and Nf is the total number of \ufb01ne stages. As a result, each intermediate sentence decoder predicts the increasingly re\ufb01ned image description, and the prediction of the last decoder is taken as the \ufb01nal image description. Note that we treat stage i = 0 as the coarse decoder, and stages i >= 1 as the \ufb01ne decoders. 
Image Encoding We \ufb01rst encode the given image I to the spatial image features V = {V0, \u00b7 \u00b7 \u00b7 , Vk\u00d7k\u22121}, Vi \u2208Rdv with CNN: V = CNN(I), where k \u00d7 k is the number of regions, each feature channel Vi depicts a region of the image, and dv is the dimension of the feature vector for each region. Specifically, we extract the image features from the \ufb01nal convolutional layer of CNN, and use spatial adaptive average pooling to resize the features to a \ufb01xed-size spatial representation of k \u00d7 k \u00d7 dv. Coarse-to-Fine Decoding The overall coarse-to-\ufb01ne sentence decoder consists of one coarse decoder and a sequence of attention-based \ufb01ne decoders that repeatedly produce re\ufb01ned attention maps for the prediction of each word based on the cues from the preceding decoder. The \ufb01rst stage of our model is a coarse decoder which predicts coarse description from the global image feature. In the subsequent stages, each stage i \u2208{1, \u00b7 \u00b7 \u00b7 , Nf} is a \ufb01ne decoder which predicts the improved image description based on image features and the outputs of the preceding stage. Particularly, we use the attention weights of the preceding stage to provide the following stage beliefs of regions for word prediction. More formally, we decode the image features in multiple stages, where the prediction \u02c6 Yi of each stage is a re\ufb01nement of the prediction \u02c6 Yi\u22121 of previous stage. Figure 2 illustrates the coarse-to-\ufb01ne decoding architecture, where the top row contains one coarse decoder and two stacked attention-based \ufb01ne decoders under the training mode, and the bottom row shows the \ufb01ne decoders under its inference mode (greedy decoding) for computing rewards so as to incorporate intermediate supervisions. In the following, we will introduce the adopted coarse decoder, our proposed \ufb01ne decoder, our proposed stacked attention model and our proposed RL-based process for incorporating intermediate supervisions. Coarse Decoder. We start by decoding in a coarse search space in the \ufb01rst stage (i = 0), where we learn a coarse decoder with an LSTM network, called LSTMcoarse. At each time step t \u2208[0, T \u22121], the input to LSTMcoarse consists of the previous target word yt\u22121, concatenated with the global image feature, and the previous hidden states. The operation of the LSTMcoarse can be described as: o0 t, h0 t = LSTMcoarse(h0 t\u22121, i0 t, yt\u22121) (1) i0 t =[f(V); hNf t\u22121] (2) where h0 t\u22121 and hNf t\u22121 are the hidden states, o0 t is the cell output, yt\u22121 = WeYt\u22121 is the embedding of previous word Yt\u22121. We obtain the global image feature f(V) by taking a mean-pooling over the spatial image features as 1 k\u00d7k Pk\u00d7k\u22121 i=0 Vi. The t-th decoded word \u02c6 Y 0 t of LSTMcoarse is drawn from the dictionary D according to the softmax probability: \u02c6 Y 0 t \u223cSoftmax(W0 oo0 t + b0 o). Fine Decoder. In the subsequent stages, each \ufb01ne decoder predicts the word \u02c6 Y i t based on the image features V again, and the attention weights \u03b1i\u22121 t and the hidden state hi\u22121 t from the preceding LSTM. Each \ufb01ne decoder consists of an LSTMi \ufb01ne network and an attention model. At each time step t, the input to LSTMi \ufb01ne consists of the attended image feature, the previous word embedding yt\u22121, its previous hidden state hi t\u22121, and the updated hidden state hi\u22121 t from the preceding LSTM. 
Note that when t = 1, h0 t is the hidden output of LSTMcoarse; otherwise hi\u22121 t is the hidden output of the preceding LSTMi\u22121 \ufb01ne . Therefore, the updating procedure of LSTMi \ufb01ne can be written as: oi t, hi t = LSTMi \ufb01ne(hi t\u22121, ii t, yt\u22121) (3) ii t = [g(V, \u03b1i\u22121 t , hi\u22121 t ); hi\u22121 t ] (4) where oi t is the cell output of LSTMi \ufb01ne, and g(\u00b7) is the spatial attention function which feeds attended visual representations as the additional inputs to LSTMi \ufb01ne at each time step to emphasise the detailed visual information. During the inference, the \ufb01nal output word \u02c6 Yt is drawn from D according to the softmax probability: \u02c6 Yt \u223cSoftmax(WNf o oNf t +bNf o ). Stacked Attention Model. As aforementioned, our coarse decoder generates words based on the global image features. However, in many cases, each word is only related to \fFigure 2: Illustration of the proposed coarse-to-\ufb01ne decoding using intermediate supervision (reward) after each stage. The top row (gray) contains one coarse decoder (left) and two visual attention-based \ufb01ne decoders under the training mode. The bottom row shows the \ufb01ne decoders under its inference mode (greedy decoding) for computing rewards. a small region of an image. Using the global image feature for word prediction could lead to sub-optimal results due to the noises introduced from the irrelevant regions for each prediction (Gu et al. 2017b). Therefore, the attention mechanism has been introduced to signi\ufb01cantly improve the performance of image captioning. It typically produces a spatial map highlighting image regions relevant to each predicted word. In this research, to extract more \ufb01ne-grained visual information for word prediction, we adopt a stacked attention model to \ufb01lter out noises gradually and pinpoint the regions that are highly relevant to the word prediction. In each \ufb01ne stage i, our attention model operates on both image features V and attention weights \u03b1i\u22121 t from the preceding stage. Formally, for the time step t of stage i, the stacked attention model is de\ufb01ned as: g(V, \u03b1i\u22121 t , hi\u22121 t ) = k\u00d7k\u22121 X n=0 \u03b1i,n t \u00b7 (Wi v\u03b1Vn + bi v\u03b1) (5) where \u03b1i,n t corresponds to the attention probability of each image region. We compute the attention probability \u03b1i,n t as follows: \u03b1i t =softmax(Wi \u03b1Ai t + bi \u03b1) (6) Ai,n t = tanh(Wi vaVn + Wi ha\u00af hi\u22121 t ) (7) \u00af hi\u22121 t =hi\u22121 t + k\u00d7k\u22121 X n=0 \u03b1i\u22121,n t \u00b7 (Wi\u22121 v\u03b1 Vn + bi\u22121 v\u03b1 ) (8) where hi\u22121 t is the updated hidden state of LSTMi\u22121 \ufb01ne , which is added to the aggregated image features to form a new hidden representation \u00af hi\u22121 t . Note that when i = 1, we set \u03b10 t to zero. Learning The coarse-to-\ufb01ne approach described above results in a deep architecture. Training such a deep network can be prone to the vanishing gradient problem, where the magnitude of gradients decreases in strength when backpropagated through multiple intermediate layers. A natural approach to address this problem is to incorporate supervised training objectives into the intermediate layers. Each stage of the coarse-to-\ufb01ne sentence decoder is trained to predict the words repeatedly. 
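Before turning to the stage-wise losses, the stacked attention step of Eqs. (5)-(8) admits a compact sketch. The dimensions are illustrative assumptions, and for brevity a single value projection is shared between Eq. (5) and Eq. (8), whereas the equations index it by stage; this is not the authors' released implementation.

import torch
import torch.nn as nn

class StackedAttention(nn.Module):
    def __init__(self, v_dim=2048, h_dim=512, att_dim=512):
        super().__init__()
        self.W_va = nn.Linear(v_dim, att_dim, bias=False)   # image term in Eq. (7)
        self.W_ha = nn.Linear(h_dim, att_dim, bias=False)   # hidden-state term in Eq. (7)
        self.W_a = nn.Linear(att_dim, 1)                    # scoring in Eq. (6)
        self.W_valpha = nn.Linear(v_dim, h_dim)             # value projection in Eqs. (5) and (8)

    def forward(self, V, h_prev, alpha_prev):
        # V: (B, R, v_dim) spatial features, h_prev: (B, h_dim) hidden state of stage i-1,
        # alpha_prev: (B, R) attention weights of stage i-1 (zeros for the first fine stage)
        values = self.W_valpha(V)                                       # (B, R, h_dim)
        h_bar = h_prev + (alpha_prev.unsqueeze(-1) * values).sum(1)     # Eq. (8)
        A = torch.tanh(self.W_va(V) + self.W_ha(h_bar).unsqueeze(1))    # Eq. (7)
        alpha = torch.softmax(self.W_a(A).squeeze(-1), dim=-1)          # Eq. (6)
        attended = (alpha.unsqueeze(-1) * values).sum(1)                # Eq. (5)
        return attended, alpha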
We \ufb01rst train the network by de\ufb01ning a loss function for each stage i that minimizes the cross-entropy (XE) loss, i.e., Li XE(\u03b80:i) = \u2212 T \u22121 X t=0 log(p\u03b80:i(Yt | Y0:t\u22121, I)), (9) where the Yt is the ground-truth word, and \u03b80:i is the parameters up to the stage-i decoder. By adding the losses at each stage i, we obtain the overall learning objective for the full architecture: LXE(\u03b8) = Nf X i=0 Li XE(\u03b80:i) = \u2212 Nf X i=0 T \u22121 X t=0 log(p\u03b80:i(Yt | Y0:t\u22121, I)) (10) where p\u03b80:i(Yt | Y0:t\u22121, I) is the output probability of word Yt given by the LTSMi decoder. We share the weights of the models across all time steps. However, training with the loss function of Equation 10 is not suf\ufb01cient. As mentioned in Section 1, the existing loglikelihood training methods have the problem of the discrepancy between their training and testing modes, where the model is often trained with scheduled sampling, while in testing, greed decoding or beam search is commonly used to get higher scores. Besides, the log-likelihood score of the prediction does not correlate well with the standard evaluation metrics such as BLEU, and CIDEr. Many researchers have explored in the direction of optimizing the image captioning model with the evaluation metrics (e.g., CIDEr in (Rennie et al. 2017)). To optimise the evaluation metrics during each stage, we consider the image caption generation process as a reinforcement learning problem, i.e., given an environment (previous states), we want to get an agent (e.g., RNN, LSTM or GRU) to look at the environment (image features, hidden states, and previous words), and make an action (the prediction of the next word). After generating a complete sentence, the agent will observe a sentence-level reward and update its internal state. We cast our generative model in the reinforcement learning terminology as in (Ranzato et al. 2016; Rennie et al. 2017). The LSTM-based decoder of each stage can be viewed as an agent that interacts with the external environment. The policy network parametrized by \u03b80:i de\ufb01nes a policy p\u03b80:i, which receives a state (preceding outputs, internal state of LSTM and image features) and produces an action \u02dc Y i t \u223cp\u03b80:i which is the prediction of the next word sampled \ffrom the LSTM at time step t. Once we have a complete predicted sentence \u02dc Yi, the agent observes a reward r( \u02dc Yi) (e.g., CIDEr score) of the sentence. The goal of RL-based training is to minimize the negative expected rewards (punishments) of multi-stages, : LRL(\u03b8) = \u2212 Nf X i=1 E \u02dc Yi\u223cp\u03b80:i [r( \u02dc Yi)] \u2248\u2212 Nf X i=1 r( \u02dc Yi) (11) where \u02dc Yi = { \u02dc Y i 0 , \u00b7 \u00b7 \u00b7 , \u02dc Y i T \u22121}, and \u02dc Y i t is sampled from the stage i at time step t. r( \u02dc Yi) is calculated by comparing the generated sentence to the corresponding reference sentences using the standard evaluation metric. Note that we do not consider i = 0 in Equation 11 as the coarse decoder does not has a preceding stage. After that, we calculate the expected gradient using the Monte-Carlo sample \u02dc Yi from p\u03b80:i as: \u2207\u03b8LRL(\u03b8) = Nf X i=1 \u2207\u03b80:iLRL(\u03b80:i) (12) \u2248\u2212 Nf X i=1 r( \u02dc Yi) \u00b7 \u2207\u03b80:i log p\u03b80:i( \u02dc Yi) (13) To reduce the variance of the gradient estimate in Equation 13, we follow the REINFORCE approach from SCST (Rennie et al. 
2017) to approximate Equation 13 as: $\nabla_\theta L_{RL}(\theta) \approx -\sum_{i=1}^{N_f} \Delta r(\tilde{Y}^i) \cdot \nabla_{\theta_{0:i}} \log p_{\theta_{0:i}}(\tilde{Y}^i)$ (14), where $\Delta r(\tilde{Y}^i)$ is a relative reward that reduces the variance of the gradient estimate. The principal idea of our RL-based coarse-to-fine learning approach is to baseline the REINFORCE algorithm with the reward $r(\hat{Y}^i)$ obtained in each stage under the inference algorithm used at test time, as well as the reward $r(\tilde{Y}^{i-1})$ obtained by its preceding decoder at train time. Particularly, $\Delta r(\tilde{Y}^i)$ is defined as: $\Delta r(\tilde{Y}^i) = \big[r(\tilde{Y}^i) - r(\hat{Y}^i)\big] + \big[r(\tilde{Y}^i) - r(\tilde{Y}^{i-1})\big]$ (15), where $\tilde{Y}^i$ is a sampled caption of the i-th stage and $\hat{Y}^i$ is obtained by conventional greedy decoding. The first term in Equation 15 tends to increase the probability of the samples of stage i that score higher than the results of stage i in test mode (greedy decoding). In other words, we suppress those samples that score worse than the greedy decoding results. The second term increases the probability of the samples from stage i that outperform the samples from stage i - 1, and suppresses the inferior samples. Experiments In this section, we first describe the dataset used in our experiments, then introduce the baseline methods for comparison and the implementation details, followed by the detailed results. We report all results using the MSCOCO caption evaluation tool (https://github.com/tylin/coco-caption). Datasets and Setting We evaluate the proposed approach on the MSCOCO dataset. The dataset contains 123,000 images, where each image has five reference captions. We follow the setting of (Karpathy and Fei-Fei 2015) by using 5,000 images for offline validation and 5,000 images for offline testing. The widely used BLEU, METEOR, ROUGE, CIDEr, and SPICE scores are used to measure the quality of the generated captions. We further test on the MSCOCO test set consisting of 40,775 images, and then conduct the online comparison against the state-of-the-art via the online MSCOCO evaluation server. Baseline Approaches for Comparisons To gain insight into the effectiveness of our proposed approach, we compare the following models with each other: LSTM and LSTM3 layers. We implement a one-layer LSTM-based image captioning model based on the framework proposed by (Vinyals et al. 2015). We also add two additional LSTM networks after the one-layer LSTM model, which is named LSTM3 layers. We first train these two models with XE loss, and then optimize the CIDEr metric with SCST (Rennie et al. 2017). LSTM+ATTSoft and LSTM+ATTTop-Down. We implement two types of visual attention-based image captioning models: the Soft-attention model (LSTM+ATTSoft) proposed by (Xu et al. 2015) and the Top-Down attention model (LSTM+ATTTop-Down) proposed by (Anderson et al. 2017). We encode the image with ResNet-101 and apply spatially adaptive pooling to get a fixed-size output of 14 x 14 x 2048. At each time step, the attention model produces an attention mask over the 196 spatial locations. LSTM+ATTTop-Down consists of two LSTM networks, where the first LSTM takes the mean-pooled image feature as input, and the second LSTM predicts the words based on the attended image features and the hidden state of the first LSTM. Similarly, we also train these two models with XE Loss and the RL-based sentence-level metric.
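As a compact reference for the two training signals used throughout (cross-entropy and the RL-based sentence-level objective), the snippet below is an illustrative reading of Equations 10 and 14-15, not the authors' implementation. It assumes a sentence-level reward function (e.g., CIDEr) and per-stage log-probabilities are already available.

```python
import torch

def coarse_to_fine_xe_loss(stage_log_probs):
    """Eq. (10): sum over stages of the per-word negative log-likelihood.

    stage_log_probs: list of tensors, one per stage, each of shape (B, T) holding
    log p_{theta_{0:i}}(Y_t | Y_{0:t-1}, I) evaluated at the ground-truth words.
    """
    return -sum(lp.sum(dim=1).mean() for lp in stage_log_probs)

def coarse_to_fine_rl_loss(sample_log_probs, sample_rewards, greedy_rewards):
    """Eqs. (14)-(15): REINFORCE with the two-term relative reward.

    sample_log_probs[i]: (B,) summed log-prob of the sampled caption of stage i (fine stages i >= 1).
    sample_rewards[i]:   (B,) reward r(~Y^i) of the sampled caption of stage i (stage 0 = coarse,
                         used only as a baseline for stage 1).
    greedy_rewards[i]:   (B,) reward r(^Y^i) of the greedily decoded caption of stage i.
    """
    loss = 0.0
    for i in range(1, len(sample_rewards)):
        delta_r = (sample_rewards[i] - greedy_rewards[i]) + \
                  (sample_rewards[i] - sample_rewards[i - 1])            # Eq. (15)
        loss = loss - (delta_r.detach() * sample_log_probs[i]).mean()    # Eq. (14)
    return loss
```

Gradients flow only through the sampled log-probabilities; the rewards act as fixed scaling factors, matching the REINFORCE estimator described above.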
Stack-Cap and Stack-Cap\u2217. Stack-Cap is our proposed method and Stack-Cap\u2217is a simpli\ufb01ed version. In particular, Stack-Cap\u2217incorporates the multiple attention models into LSTM3 layers. Here we treat the \ufb01rst LSTM as the coarse decoder, and the subsequent two attention-based LSTM networks (Nf = 2) as the \ufb01ne decoders. Stack-Cap has the architecture similar to Stack-Cap\u2217, except that it applies the proposed stacked attention model instead of the independent attention model. We train these two models (Stack-Cap\u2217and Stack-Cap) with the proposed coarse-to-\ufb01ne (C2F) learning approach. Implementation Details In this paper, we set the number of hidden units of each LSTM to 512, the number of hidden units in the attention layer to 512, and the vocabulary size of the word embedding to 9,487. In our implementation, the parameters are randomly initialized except the image CNN, for which we encode the full image with the ResNet-101 pre-trained on ImageNet. We \ufb01rst train our model under the cross-entropy cost using Adam (Kingma and Ba 2015) optimizer with an initial \flearning rate of 4 \u00d7 10\u22124 and a momentum parameter of 0.9. After that, we run the proposed RL-based approach on the just trained model to be optimized for the CIDEr metric. During this stage, we use Adam with a learning rate 5 \u00d7 10\u22125. After each epoch, we evaluate the model on the validation set and select the model with the best CIDEr score for testing. During testing, we apply beam search which can increase the performance of greedy decoding. Unlike greedy decoding which keeps only a single hypothesis during decoding, Beam search keeps K > 1 (K = 5 in our experiments) hypotheses that have the highest scores at each time step, and returns the hypothesis with the highest log probability at the end. Quantitative Analysis In this experiment, we \ufb01rst optimize the models with the standard cross-entropy (XE) loss. We report the performance of our model and the baselines on the Karpathy test split in Table 1. Note that all results are reported without \ufb01ne-tuning of the ResNet-101. Approach B@1 B@2 B@3 B@4 M C LSTM (XE) 72.1 54.8 39.6 28.5 24.3 91.4 LSTM3 layers (XE) 70.5 53.1 38.9 28.3 23.2 85.7 LSTM+AttSoft (XE) 73.8 57.2 43.1 33.0 25.7 101.0 LSTM+AttTop-Down (XE) 74.9 58.6 44.5 33.3 25.8 103.4 Stack-Cap\u2217(XE) 75.6 59.6 45.6 34.6 26.3 108.0 Stack-Cap (XE) 76.2 60.4 46.4 35.2 26.5 109.1 Table 1: Performance comparisons on MSCOCO, where B@n is short for BLEU-n, M is short for METEOR, and C is short for CIDEr. All values are reported as the percentage (Bold numbers are the best results). It can be seen from Table 1 that our coarse-to-\ufb01ne image captioning model (Stack-Cap) achieves the best performances in all metrics. The two coarse-to-\ufb01ne models, Stack-Cap and Stack-Cap\u2217, give similar performance. Note that although these two coarse-to-\ufb01ne models have the same number of LSTM units as LSTM3 layers, directly adding two additional LSTM layers in LSTM3 layers without intermediate supervision decreases the performance of LSTM as the model experiences over\ufb01tting. Our coarse-to-\ufb01ne approach can optimize the network gradually with the intermediate supervision and avoid over\ufb01tting. We also observe that Soft attention (LSTM+ATTSoft) and Top-Down attention (LSTM+ATTTop-Down) can signi\ufb01cantly improve the performance of image captioning. 
Our best model (Stack-Cap) with stacked attention networks outperforms the Stack-Cap\u2217, which demonstrates that adjusting the attention on the relevant visual clues progressively can generate better image descriptions. After optimizing the models with XE loss, we optimize them for the CIDEr metric with the RL-based algorithms. The performances of the four models optimized for CIDEr with the SCST (Rennie et al. 2017) and the performances of two models optimized with the proposed coarse-to-\ufb01ne (C2F) learning are also reported in Table 2. We can see Approach B@1 B@2 B@3 B@4 M C LSTM (CIDEr) 76.7 58.3 42.8 30.8 25.5 100.2 LSTM3 layers (CIDEr) 73.0 56.1 41.1 29.9 25.1 95.9 LSTM+AttSoft (CIDEr) 77.3 59.3 44.1 32.1 25.9 104.8 LSTM+AttTop-Down (CIDEr) 76.7 60.4 45.6 33.9 26.5 112.7 Stack-Cap\u2217(C2F) 77.9 61.6 46.7 35.0 26.9 115.9 Stack-Cap (C2F) 78.6 62.5 47.9 36.1 27.4 120.4 Table 2: Performance comparisons with the baselines on MSCOCO Karpathy test split. Our Stack-Cap (C2F) model achieves signi\ufb01cant grains across all metrics. that our Stack-Cap model obtains signi\ufb01cant gains across all metrics. Table 3 compares the results of our Stack-Cap (C2F) model with those of the existing methods on MSCOCO Karpathy test split, where Stack-Cap achieves the best performance in all metrics. Online Evaluation. Table 4 reports the performance of our proposed Stack-Cap model trained with the coarse-to\ufb01ne learning on the of\ufb01cial MSCOCO evaluation server2. We can see that our approach achieves very competitive performance, compared to the state-of-the-art. Note that the results of SCST:Att2in (Ens. 4) are achieved by the ensemble of four models, while our results are generated by the single model. Figure 3: Visualizations of the generated captions and image attention maps on MSCOCO. Ground-Truth (GT) descriptions and the generated description of each stage are shown for each example. The columns from left to right correspond to the outputs of the three LSTM decoders from coarse to \ufb01ne (coarse: black, re\ufb01ned: purple, \ufb01nal: red). Qualitative Analysis To demonstrate that using the proposed coarse-to-\ufb01ne approach can generate better image descriptions stage-bystage that correlate well with the adaptively attended regions, we visualize the spatial attention weight for word in 2https://competitions.codalab.org/competitions/3221 \fApproach BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L CIDEr SPICE Google NIC (Vinyals et al. 2015) \u2014 \u2014 \u2014 27.7 \u2014 23.7 85.5 \u2014 Hard-Attention (Xu et al. 2015) 70.7 49.2 34.4 24.3 23.9 \u2014 \u2014 \u2014 Soft-Attention (Xu et al. 2015) 71.8 50.4 35.7 25.0 23.0 \u2014 \u2014 \u2014 VAE (Pu et al. 2016) 72.0 52.0 37.0 28.0 24.0 \u2014 90.0 \u2014 Google NICv2 (Vinyals et al. 2016) \u2014 \u2014 \u2014 32.1 25.7 \u2014 99.8 \u2014 Attributes-CNN+RNN (Wu et al. 2016) 74.0 56.0 42.0 31.0 26.0 \u2014 94.0 \u2014 CNNL+RHN (Gu et al. 2017a) 72.3 55.3 41.3 30.6 25.2 \u2014 98.9 18.3 PG-SPIDEr-TAG (Liu et al. 2017c) 75.4 59.1 44.5 33.2 25.7 55.0 101.3 \u2014 Adaptive (Lu et al. 2017) 74.2 58.0 43.9 33.2 26.6 \u2014 108.5 \u2014 SCST:Att2in (Rennie et al. 2017) \u2014 \u2014 \u2014 33.3 26.3 55.3 111.4 \u2014 SCST:Att2in (Ens. 4) (Rennie et al. 2017) \u2014 \u2014 \u2014 34.8 26.9 56.3 115.2 \u2014 Stack-Cap (C2F) 78.6 62.5 47.9 36.1 27.4 56.9 120.4 20.9 Table 3: Comparisons of the image captioning performance of the existing methods on MSCOCO Karpathy test split. 
Our Stack-Cap (C2F) model with the coarse-to-\ufb01ne learning achieves signi\ufb01cant gains across all metrics. BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L CIDEr Approach c5 c40 c5 c40 c5 c40 c5 c40 c5 c40 c5 c40 c5 c40 Google NIC 71.3 89.5 54.2 80.2 40.7 69.4 30.9 58.7 25.4 34.6 53.0 68.2 94.3 94.6 Hard-Attention 70.5 88.1 52.8 77.9 38.3 65.8 27.7 53.7 24.1 32.2 51.6 65.4 86.5 89.3 PG-SPIDEr-TAG 75.1 91.6 59.1 84.2 44.5 73.8 33.1 62.4 25.5 33.9 55.1 69.4 104.2 107.1 Adaptive 74.8 92.0 58.4 84.5 44.4 74.4 33.6 63.7 26.4 35.9 55.0 70.5 104.2 105.9 SCST:Att2in (Ens. 4) 78.1 93.1 61.9 86.0 47.0 75.9 35.2 64.5 27.0 35.5 56.3 70.7 114.7 116.7 Ours: Stack-Cap (C2F) 77.8 93.2 61.6 86.1 46.8 76.0 34.9 64.6 27.0 35.6 56.2 70.6 114.8 118.3 Table 4: Leaderboard of the published image captioning models (as of 10/09/2017) on the online MSCOCO test server. Our single Stack-Cap model trained with the coarse-to-\ufb01ne learning yields comparable performance with the state-of-the-art approaches on all reported metrics. the generated captions. We upsample the attention weights by a factor of 16 and apply a Gaussian \ufb01lter to make it the same size as the input image, and stack all the upsamped spatial attention maps into the original input image. Figure 3 shows some generated captions. By reasoning via multiple attention layers progressively, the Stack-Cap model can gradually \ufb01lter out noises and pinpoint the regions that are highly relevant to the current word prediction. We can \ufb01nd that our Stack-Cap model learns alignments that correspond strongly with human intuition. Taking the \ufb01rst image as an example, compared with the caption generated in the coarse stage, the \ufb01rst re\ufb01ned caption generated by the \ufb01rst \ufb01ne decoder contains \u201cdog,\u201d and the second \ufb01ne decoder not only produces \u201cdog,\u201d but also identi\ufb01es \u201cumbrella.\u201d Besides, our approach can generate more descriptive sentences. For example, the attention visualizations of the jets image show that the Stack-Cap model can query the relationship of those \u201cjets\u201d as well as the long trail of smoke behind them, as there are strong attention weights that encompass this salient region. This, together with other examples, suggests that the stacked attention can more effectively explore the visual information for sequence prediction. In other words, our approach via the stacked attention can consider visual information in the image from coarse to \ufb01ne, aligning well with the human visual system, where we usually use a coarse-to-\ufb01ne procedure to understand pictures. Conclusion In this paper, we have presented a coarse-to-\ufb01ne image captioning model which utilizes a stacked visual attention model in conjunction with multiple LSTM networks to achieve better image descriptions. Unlike the conventional one-stage models, our approach allows generating captions from coarse to \ufb01ne, which we found to be very bene\ufb01cial for image captioning. Our model achieves comparable performance with the state-of-the-art approach using ensemble on the online MSCOCO test server. Future research directions include integrating extra attributes learning into image captioning, and incorporating beam search into the training procedure. Acknowledgements: This research is supported by the National Research Foundation, Prime Ministers Of\ufb01ce, Singapore, under its IDM Futures Funding Initiative, and NTU CoE Grant. 
This research was carried out at the ROSE Lab at the Nanyang Technological University, Singapore. The ROSE Lab is supported by the National Research Foundation, Prime Ministers Of\ufb01ce, Singapore, under its IDM Futures Funding Initiative and administered by the Interactive and Digital Media Programme Of\ufb01ce. We gratefully acknowledge the support of NVAITC (NVIDIA AI Tech Center) for our research at NTU ROSE Lab, Singapore." + }, + { + "url": "http://arxiv.org/abs/1612.07086v3", + "title": "An Empirical Study of Language CNN for Image Captioning", + "abstract": "Language Models based on recurrent neural networks have dominated recent\nimage caption generation tasks. In this paper, we introduce a Language CNN\nmodel which is suitable for statistical language modeling tasks and shows\ncompetitive performance in image captioning. In contrast to previous models\nwhich predict next word based on one previous word and hidden state, our\nlanguage CNN is fed with all the previous words and can model the long-range\ndependencies of history words, which are critical for image captioning. The\neffectiveness of our approach is validated on two datasets MS COCO and\nFlickr30K. Our extensive experimental results show that our method outperforms\nthe vanilla recurrent neural network based language models and is competitive\nwith the state-of-the-art methods.", + "authors": "Jiuxiang Gu, Gang Wang, Jianfei Cai, Tsuhan Chen", + "published": "2016-12-21", + "updated": "2017-08-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "main_content": "Introduction Image caption generation is a fundamental problem that involves Computer Vision, Natural Language Processing (NLP), and Machine Learning. It can be analogous to \u201ctranslating\u201d an image to proper sentences. While this task seems to be easy for human beings, it is quite challenging for machines because it requires the model to understand the image content and express their relationships in a natural language. Also, the image captioning model should be capable of capturing implicit semantic information of an image and generating humanlike sentences. As a result, generating accurate captions for an image is not an easy task. The recent surge of research interest in image caption generation task is due to the advances in Neural Machine Translation (NMT) [44] and large datasets [39, 29]. Most image captioning models follow the encoder-decoder pipeline [4, 24, 35, 19, 41]. The encoder-decoder framework is recently introduced for sequence-to-sequence learning based on Recurrent Neural Networks (RNNs) or LongShort Term Memory (LSTM) networks. Both RNNs and LSTM networks can be sequence learners. However, due to the vanishing gradient problem, RNNs can only remember the previous status for a few time steps. LSTM network is a special type of RNN architecture designed to solve the vanishing gradient problem in RNNs [46, 15, 6]. It introduces a new component called memory cell. Each memory cell is composed of three gates and a neuron with the selfrecurrent connection. These gates allow the memory cells to keep and access information over a long period of time and make LSTM network capable of learning long-term dependencies. Although models like LSTM networks have memory cells which aim to memorize history information for longterm, they are still limited to several time steps because long-term information is gradually diluted at every time step [49]. 
Besides, vanilla RNNs-based image captioning models recursively accumulate history information without explicitly modeling the hierarchical structure of word sequences, which clearly have a bottom-up structure [28]. To better model the hierarchical structure and long-term dependencies in word sequences, in this paper, we adopt a language CNN which applies temporal convolution to extract features from sequences. Such a method is inspired by works in NLP which have shown CNN is very powerful for text representation [18, 48]. Unlike the vanilla CNN architecture, we drop the pooling operation to keep the relevant information for words representation and investigate the optimum convolutional \ufb01lters by experiments. However, only using language CNN fails to model the dynamic temporal behavior. Hence, we still need to combine language CNN with recurrent networks (e.g., RNN or LSTM). Our extensive studies show that adding language CNN to a recurrent network helps model sequences consistently and more effectively, and leads to improved results. To summarize, our primary contribution lies in incorporating a language CNN, which is capable of capturing long-range dependencies in sequences, with RNNs for image captioning. Our model yields comparable performance with the state-of-the-art approaches on Flickr30k [39] and arXiv:1612.07086v3 [cs.CV] 2 Aug 2017 \fMS COCO [29]. 2. Related Works The problem of generating natural language descriptions for images has become a hot topic in computer vision community. Prior to using neural networks for generating descriptions, the classical approach is to pose the problem as a retrieval and ranking problem [12, 9, 37]. The main weakness of those retrieval-based approaches is that they cannot generate proper captions for a new combination of objects. Inspired by the success of deep neural networks in machine translation [44, 4, 17], researchers have proposed to use the encoder-decoder framework for image caption generation [21, 35, 19, 46, 6, 3, 26]. Instead of translating sentences between two languages, the goal of image captioning is to \u201ctranslate\u201d a query image into a sentence that describes the image. The earliest approach using neural network for image captioning is proposed by Vinyals et al. [46] which is an encoder-decoder system trained to maximize the log-likelihood of the target image descriptions. Similarly, Mao et al. [35] and Donahue et al. [6] use the multimodal fusion layer to fuse the image features and word representation at each time step. In both cases, i.e., the models in [35] and [6], the captions are generated from the full images, while the image captioning model proposed by Karpathy et al. [19] generates descriptions based on regions. This work is later followed by Johnson et al. [16] whose method is designed to jointly localize regions and describe each with captions. Rather than representing an image as a single feature vector from the top-layer of CNNs, some researchers have explored the structure of networks to explicitly or implicitly model the correlation between images and descriptions [51, 34, 30]. Xu et al. [51] incorporate the spatial attention on convolutional features of an image into the encoder-decoder framework through the \u201chard\u201d and \u201csoft\u201d attention mechanisms. Their work is followed by Yang et al. [52] whose method introduces a review network to improve the attention mechanism and Liu et al. [30] whose approach is designed to improve the correctness of visual attention. 
Moreover, a variational autoencoder for image captioning is developed by Pu et al. [40]. They use a CNN as the image encoder and use a deep generative deconvolutional network as the decoder together with a Gated Recurrent Unit (GRU) [4] to generate image descriptions. More recently, high-level attributes have been shown to obtain clear improvements on the image captioning task when injected into existing encoder-decoder based models [50, 53, 8]. Speci\ufb01cally, Jia et al. [15] use the semantic information as the extra input to guide the model in generating captions. In addition, Fang et al. [7] learn a visual attributes detector based on multi-instance learning (MIL) \ufb01rst and then learn a statistical language model for caption generation. Likewise, Wu et al. [50] train several visual attribute classi\ufb01ers and take the outputs of those classi\ufb01ers as inputs for the LSTM network to predict words. In general, current recurrent neural network based approaches have shown their powerful capability on modeling word sequences [46, 19]. However, the historysummarizing hidden states of RNNs are updated at each time, which render the long-term memory rather dif\ufb01cult [25, 36]. Besides, we argue that current recurrent networks like LSTM are not ef\ufb01cient on modeling the hierarchical structure in word sequences. All of these prompt us to explore a new language model to extract better sentence representation. Considering ConvNets can be stacked to extract hierarchical features over long-range contexts and have received a lot of attention in many tasks [10], in this paper, we design a language CNN to model words with long-term dependencies through multilayer ConvNets and to model the hierarchical representation through the bottom-up and convolutional architecture. 3. Model Architecture 3.1. Overall Framework We study the effect of language CNN by combining it with Recurrent Networks. Figure 1 shows a recursive framework. It consists of one deep CNN for image encoding, one CNN for sentence modeling, and a recurrent network for sequence prediction. In order to distinguish these two CNN networks, we name the \ufb01rst CNN for image feature extraction as CNNI, and the second CNN for language modeling as CNNL. Given an image I, we take the widely-used CNN architecture VGGNet (16-layer) [42] pre-trained on ImageNet [22] to extract the image features V \u2208RK. The CNNL is designed to represent words and their hierarchical structure in word sequences. It takes a sequence of t generated words (each word is encoded as a one-hot representation) as inputs and generates a bottom-up representation of these words. The outputs of CNNI and CNNL will be fed into a multimodal fusion layer, and use the recurrent network frecurrent(\u00b7) to predict the next word. 
The following equations show the main working \ufb02ow of our model: V = CNNI(I) (1) y[t] = CNNL(S[0], S[1], \u00b7 \u00b7 \u00b7 , S[t\u22121]) (2) m[t] = fmultimodal(y[t], V) (3) r[t] = frecurrent(r[t\u22121], x[t\u22121], m[t]) (4) S[t] \u223c arg max S Softmax(Wor[t] + bo) (5) where t \u2208[0, N \u22121] is the time step, y[t] is the output vector of CNNL, r[t] is the activation output of recurrent network, S[t] is the t-th word drawn from the dictionary S according \fCNNI CNNI Recurrent Network a young girl skiing through a snow covered hill CNNL CNNL CNNL CNNL V V V V V Recurrent Network Recurrent Network Recurrent Network Recurrent Network \u21e0 V Recurrent Network a young girl skiing through a snow covered hill S[t\u22121] y[t] r[t\u22121] r[t] S[t] \u21e0Softmax(Wor[t] + bo) M M M M M CNNL M V \u0001\u0005\u0006\u0003\u0004\u0006\u0002 CNNL m[t] Figure 1. An overview of our framework. The input of our model is a query image. Our model estimates the probability distribution of the next word given previous words and image. It consists of four parts: a CNNI for image feature extraction, a deep CNNL for language modeling, a multimodal layer (M) that connects the CNNI and CNNL, and a Recurrent Network (e.g., RNN, LSTM, etc.) for word prediction. The weights are shared among all time frames. to the maximum Softmax probability controlled by r[t], Wo and bo are weights and biases used for calculating the distribution over words. Equation 2, 3, 4 and 5 are recursively applied, the design of each function is discussed below. 3.2. CNNL Layer Models based on RNNs have dominated recent sequence modeling tasks [23, 31, 32, 44], and most of the recent image captioning models are based on LSTM networks [6, 19, 34]. However, LSTM networks cannot explicitly model the hierarchical representation of words. Even with multi-layer LSTM networks, such hierarchical structure is still hard to be captured due to the more complex model and higher risk of over-\ufb01tting. Inspired by the recent success of CNNs in computer vision [10, 14], we adopt a language CNN with a hierarchical structure to capture the long-range dependencies between the input words, called CNNL. The \ufb01rst layer of CNNL is a word embedding layer. It embeds the one-hot word encoding from the dictionary into word representation through a lookup table. Suppose we have t input words S = {S[0], S[1], \u00b7 \u00b7 \u00b7 , S[t\u22121]}, and S[i] is the one-of-V (onehot) encoding, with V as the size of the vocabulary. We \ufb01rst map each word S[t] in the sentence into a K-dimensional vector x[t] = WeS[t], where We \u2208RK\u00d7V is a word embedding matrix (to be learned). Next, those embeddings are concatenated to produce a matrix as follows: x = h x[0], x[1], \u00b7 \u00b7 \u00b7 , x[t\u22121]iT , x \u2208Rt\u00d7K (6) The concatenated matrix x is fed to the convolutional layer. Just like the normal CNN, CNNL has a \ufb01xed architecture with prede\ufb01ned maximum number of input words (denoted as LL). Unlike the toy example in Figure 2, in practice we use a larger and deeper CNNL with LL = 16. We use the temporal convolution [21] to model the sentence. 
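Before the temporal convolution is defined in detail, it may help to see how Equations 1-5 compose at a single decoding step. The sketch below treats CNN_I, CNN_L, the multimodal layer, and the recurrent cell as opaque callables; it is purely illustrative and assumes nothing about their internals.

```python
import torch

def decode_step(cnn_i, cnn_l, multimodal, recurrent, W_o, b_o,
                image, prev_words, r_prev, x_prev):
    """One step of the working flow in Eqs. (1)-(5); all modules are placeholders."""
    V = cnn_i(image)                        # Eq. (1): image features
    y_t = cnn_l(prev_words)                 # Eq. (2): bottom-up representation of history words
    m_t = multimodal(y_t, V)                # Eq. (3): fuse words and image
    r_t = recurrent(r_prev, x_prev, m_t)    # Eq. (4): recurrent state update
    probs = torch.softmax(r_t @ W_o.T + b_o, dim=-1)   # Eq. (5)
    next_word = probs.argmax(dim=-1)        # greedy choice here; beam search is used at test time
    return next_word, r_t
```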
Given an input feature map y(\u2113\u22121) \u2208RM\u2113\u22121\u00d7K of Layer-\u2113\u22121, the output feature map y(\u2113) \u2208RM\u2113\u00d7K of the temporal convolution layer-\u2113will be: y(\u2113) i (x) = \u03c3(w(l) L y(\u2113\u22121) i + b(\u2113) L ) (7) here y(\u2113) i (x) gives the output of feature map for location i in Layer-\u2113, w(l) L denotes the parameters on Layer-\u2113, \u03c3(\u00b7) is the activation function, e.g., Sigmoid, or ReLU. The input feature map y(l\u22121) i is the segment of Layer-\u2113\u22121 for the convolution at location i, while y(0) is the concatenation of t word embeddings from the sequence input S[0:t\u22121]. The de\ufb01nition of y(0) is as follows: y(0) def = (\u0002 x[t\u2212LL], \u00b7 \u00b7 \u00b7 , x[t\u22121]\u0003T , if t \u2265LL \u0002 x[0], \u00b7 \u00b7 \u00b7 , x[t\u22121], \u02dc x[t], \u00b7 \u00b7 \u00b7 , \u02dc x[LL\u22121]\u0003T otherwise (8) Specially, when t \u2265LL, the input sentence will be truncated, we only use LL words before the current time step t. When t < LL, the input sentence will be padded with \u02dc x[:]. Note that if t = 0, \u02dc x[:] are the image features V, otherwise \u02dc x[:] are the zero vectors that have the same dimension as x[:]. Previous CNNs, including those adopted for NLP tasks [13, 18], take the classic convolution-pooling strategy, which uses max-pooling to pick the highest response feature across time. This strategy works well for tasks like text classi\ufb01cation [18] and matching [13], but is undesirable for modeling the composition functionality, because it ignores the temporal information in sequence. In our network, we discard the pooling operations. We consider words as the smallest linguistic unit and apply a straightforward stack of convolution layers on top of each other. In practice, we \ufb01nd that deeper CNNL works better than shallow CNNL, which is consistent with the tradition of CNNs in computer vision [10], where using very deep CNNs is key to having better feature representation. The output features of the \ufb01nal convolution layer are fed into a fully connected layer that projects the extracted words features into a low-dimensional representation. Next, the projected features will be fed to a highway connection [43] which controls \ufb02ows of information in the layer and im\fproves the gradient \ufb02ow. The \ufb01nal output of the highway connection is a K-dimensional vector y[t]. a young girl skiing through a / / + \u21e5 \u21e5 Transform gate Carry gate Highway Connection Fully-connected Layer Temporal Convolution n Temporal Convolution 1 Word Embedding S[0] S[1] S[2] S[3] S[4] S[5] x[0] x[1] x[2] x[3] x[4] x[5] \u02dc x[6] \u02dc x[7] y[t] \u2026 \u2026 y(`\u22121) y(`) Figure 2. The architecture of language CNN for sentence modeling. Here \u201c/\u201d stands for a zero padding. The CNNL builds a hierarchical representation of history words which contains the useful information for next word prediction. 3.3. Multimodal Fusion Layer Next, we add a multimodal fusion layer after CNNL, which fuses words representation and image features. This layer has two inputs: the bottom-up words representation y[t] extracted from CNNL and the image representation V extracted from CNNI. 
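Before moving on to the fusion layer, here is a minimal sketch of the CNN_L just described: word embedding, a stack of temporal convolutions without pooling (Equation 7), a fully connected projection, and a highway output y[t]. The kernel sizes follow the five-layer configuration reported later in the experiments (two layers with kernel size 5, three with kernel size 3); the vocabulary size and the exact highway formulation are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LanguageCNN(nn.Module):
    """Illustrative CNN_L: temporal convolutions over L_L history-word embeddings."""
    def __init__(self, vocab=10000, K=512, L_L=16, kernels=(5, 5, 3, 3, 3)):
        super().__init__()
        self.L_L = L_L
        self.embed = nn.Embedding(vocab, K)
        # Temporal (1-D) convolutions over the word axis; no pooling, per the text.
        self.convs = nn.ModuleList(
            nn.Conv1d(K, K, ks, padding=ks // 2) for ks in kernels)
        self.fc = nn.Linear(K * L_L, K)
        # Highway connection: gated mixture of the transformed and carried representation.
        self.transform = nn.Linear(K, K)
        self.gate = nn.Linear(K, K)

    def forward(self, word_ids):
        # word_ids: (B, L_L) history words, already truncated or padded to L_L per Eq. (8).
        x = self.embed(word_ids).transpose(1, 2)          # (B, K, L_L)
        for conv in self.convs:
            x = torch.relu(conv(x))                       # Eq. (7), ReLU as the nonlinearity
        h = self.fc(x.flatten(1))                         # project to a K-dimensional vector
        t = torch.sigmoid(self.gate(h))                   # transform gate
        return t * torch.tanh(self.transform(h)) + (1 - t) * h   # highway output y[t]
```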
We map these two inputs to the same multimodal feature space and combine them together to obtain the activation of multimodal features: m[t] = fmultimodal(y[t], V) (9) = \u03c3 \u0010 fy(y[t]; WY, bY) + gv(V; WV, bV) \u0011 (10) where \u201c+\u201d denotes element-wise addition, fy(\u00b7) and gv(\u00b7) are linear mapping functions, m[t] is the multimodal layer output feature vector. \u03c3(\u00b7) is the activation function, here we use the scaled tanh function [27] which leads to a faster training process than the basic tanh function. 3.4. Recurrent Networks Our CNNL may miss the important temporal information because it extracts the holistic features from the whole sequence of words. To overcome this limitation, we combine it with recurrent networks. In our model, the transition equations of the recurrent network can be formulated as: r[t] = frecurrent(r[t\u22121], x[t\u22121], m[t]) (11) S[t] \u223carg max S Softmax(Wor[t] + bo) (12) where r[t] denotes the recurrent state, x[t\u22121] = WeS[t\u22121] is the previous word embedding, m[t] is the multimodal fusion output, and frecurrent(\u00b7) is the transition function of recurrent network. Softmax(r[t]) is the probability of word S[t] given by the Softmax layer, and S[t] is the t-th decoded word. In our study, we combine our language CNN with four types of recurrent networks: Simple RNN, LSTM network, GRU [4], and Recurrent Highway Network (RHN) [54]. Traditionally, the simple RNN updates the recurrent state r[t] of Equation 11 as follows: r[t] = tanh(Wrr[t\u22121] + Wzz[t] + b) (13) where z[t] is the input. However, this type of simple RNN is hard to deal with long-term dependencies [2]. As the vanishing gradient will make gradients in directions that shortterm dependencies are large, while the gradients in directions that correspond to long-term dependencies are small. LSTM network extents the simple RNN with the gating mechanism (input gate, forget gate, and output gate) to control information \ufb02ow and a memory cell to store the history information, thus it can better model the long-term dependencies than simple RNN. GRU is an architecture similar to the LSTM, but it has a simpli\ufb01ed structure. GRU does not has a separate memory cell and exposes its hidden state r[t] without any control. Thus, it is computationally more ef\ufb01cient and outperforms the LSTM network on many tasks due to its simple structure. Besides, we also consider a fourth type of recurrent network: RHN, which introduces the highway connection to simple RNN. RHN has directly gated connections between previous state r[t\u22121] and current input z[t] to modulate the \ufb02ow of information. The transition equations of RHN can be formulated as follows: \uf8eb \uf8ed t[t] c[t] h[t] \uf8f6 \uf8f8 = \uf8eb \uf8ed \u03c3 \u03c3 tanh \uf8f6 \uf8f8 \u0012 M \u0012 r[t\u22121] z[t] \u0013\u0013 (14) r[t] = h[t] \u2299t[t] + c[t] \u2299r[t\u22121] (15) where c[t] is the carry gate, t[t] is the transform gate, h[t] denotes the modulated input, M : R2K+d \u2192R3d is an af\ufb01ne transformation. z[t] \u2208R2K denotes the concatenation of two vectors: m[t] and x[t\u22121]. According to Equation 3 and 2, z[t] can be expressed as follows: z[t] = [fmultimodal(CNNL(x[0,\u00b7\u00b7\u00b7 ,t\u22121]), V); x[t\u22121]] (16) Like GRU, RHN does not have output gate to control the exposure of the recurrent state r[t], but exposes the whole state each time. The RHN, however, does not have reset gate to drop information that is irrelevant in the future. 
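The fusion of Equations 9-10 and the RHN transition of Equations 14-16 can be sketched as follows. The scaled-tanh constants shown are the common LeCun form and are an assumption, as are the layer sizes; only the structure mirrors the equations above.

```python
import torch
import torch.nn as nn

def scaled_tanh(x):
    # Common form of the scaled tanh; the exact constants are an assumption.
    return 1.7159 * torch.tanh(2.0 / 3.0 * x)

class MultimodalFusion(nn.Module):
    """Eqs. (9)-(10): map y[t] and V to a shared space and add element-wise."""
    def __init__(self, K=512, d_m=512):
        super().__init__()
        self.f_y = nn.Linear(K, d_m)   # f_y(.; W_Y, b_Y)
        self.g_v = nn.Linear(K, d_m)   # g_v(.; W_V, b_V)

    def forward(self, y_t, V):
        return scaled_tanh(self.f_y(y_t) + self.g_v(V))   # m[t]

class RHNCell(nn.Module):
    """Eqs. (14)-(15): transform gate t, carry gate c, and modulated input h."""
    def __init__(self, d_in, d_h):
        super().__init__()
        self.affine = nn.Linear(d_in + d_h, 3 * d_h)       # M: R^{d_in + d_h} -> R^{3 d_h}

    def forward(self, r_prev, z_t):
        t, c, h = self.affine(torch.cat([r_prev, z_t], dim=-1)).chunk(3, dim=-1)
        t, c, h = torch.sigmoid(t), torch.sigmoid(c), torch.tanh(h)
        return h * t + c * r_prev                           # Eq. (15): new state r[t]

# Eq. (16): z[t] concatenates the multimodal feature and the previous word embedding,
#   z_t = torch.cat([m_t, x_prev], dim=-1),
# so d_in = d_m + K for the RHN cell above.
```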
As our CNNL can extract the relevant information from the sequence of history words at each time step, to some extent, the CNNL allows the model to add information that is useful in making a prediction. 3.5. Training During training, given the ground truth words S and corresponding image I, the loss function for a single training \finstance (S, I) is de\ufb01ned as a sum of the negative log likelihood of the words. The loss can be written as: L(S, I) = \u2212 N\u22121 X t=0 log P(S[t]|S[0], \u00b7 \u00b7 \u00b7 , S[t\u22121], I) (17) where N is the sequence length, and S[t] denotes a word in the sentence S. The training objective is to minimize the cost function, which is equivalent to maximizing the probability of the ground truth context words given the image by using: arg max\u03b8 PN\u22121 t=0 log P(S[t]|S[0:t\u22121], I), where \u03b8 are the parameters of our model, and P(S[t]|S[0:t\u22121], I) corresponds to the activation of Softmax layer. 3.6. Implementation Details In the following experiments, we use the 16-layer VGGNet [42] model to compute CNN features and map the last fully-connected layer\u2019s output features to an embedding space via a linear transformation. As for preprocessing of captions, we transform all letters in the captions to lowercase and remove all the nonalphabetic characters. Words occur less than \ufb01ve times are replaced with an unknown token . We truncate all the captions longer than 16 tokens and set the maximum number of input words for CNNL to be 16. 3.6.1 Training Details In the training process, each image I has \ufb01ve corresponding annotations. We \ufb01rst extract the image features V with CNNI. The image features V are used in each time step. We map each word representation S[t] with: x[t] = WeS[t], t \u2208[0, N \u22121]. After that, our network is trained to predict the words after it has seen the image and preceding words. Please note that we denote by S[0] a special token and by S[N\u22121] a special token which designate the start and end of the sentence. For Flickr30K [39] and MS COCO [29] we set the dimensionality of the image features and word embeddings as 512. All the models are trained with Adam [20], which is a stochastic gradient descent method that computes adaptive learning rate for each parameter. The learning rate is initialized with 2e-4 for Flickr30K and 4e-4 for MS COCO, and the restart technique mentioned in [33] is adopted to improve the convergence of training. Dropout and early stopping are used to avoid over\ufb01tting. All weights are randomly initialized except for the CNN weights. More speci\ufb01cally, we \ufb01ne-tune the VGGNet when the validation loss stops decreasing. The termination of training is determined by evaluating the CIDEr [45] score for the validation split after each training epoch. 3.6.2 Testing During testing, the previous output S[t\u22121] is used as input in lieu of S[t]. The sentence generation process is straightforward. Our model starts from the token and calculates the probability distribution of the next word : P(S[t]|S[0:t\u22121], I). Here we use Beam Search technology proposed in [15], which is a fast and ef\ufb01cient decoding method for recurrent network models. We set a \ufb01xed beam search size (k=2) for all models (with RNNs) in our tests. 4. Experiments 4.1. Datasets and Evaluation Metrics We perform experiments on two popular datasets that are used for image caption generation: MS COCO and Flickr30k. 
These two datasets contain 123,000 and 31,000 images respectively, and each image has \ufb01ve reference captions. For MS COCO, we reserve 5,000 images for validation and 5,000 images for testing. For Flickr30k, we use 29,000 images for training, 1,000 images for validation, and 1,000 images for testing. We choose four metrics for evaluating the quality of the generated sentences: BLEU-n [38] is a precision-based metric. It measures how many words are shared by the generated captions and ground truth captions. METEOR [5] is based on the explicit word to word matches between generated captions and ground-truth captions. CIDEr [45] is a metric developed speci\ufb01cally for evaluating image captions. It measures consensus in image caption by performing a Term Frequency-Inverse Document Frequency weighting for each n-gram. SPICE [1] is a more recent metric which has been shown to correlate better with the human judgment of semantic quality than previous metrics. 4.2. Models To gain insight into the effectiveness of CNNL, we compare CNNL-based models with methods using the recurrent network only. For a fair comparison, the output dimensions of all gates are \ufb01xed to 512. Recurrent Network-based Models. We implement Recurrent Network-based Models based on the framework proposed by Vinyals et al. [46], it takes an image as input and predicts words with one-layer Recurrent Network. Here we use the publicly available implementation Neuraltalk2 1. We evaluate four baseline models: Simple RNN, RHN, LSTM, and GRU. CNNL-based Models. As can be seen in Figure 1. The CNNL-based models employ a CNNL to obtain the bottomup representation from the sequence of words and cooperate with the Recurrent Network to predict the next word. Image features and words representation learned from CNNI and 1https://github.com/karpathy/neuraltalk2 \fCNNL respectively are fused with the multimodal function. We implement four CNNL-based models: CNNL+Simple RNN, CNNL+RHN, CNNL+LSTM, and CNNL+GRU. 4.3. Quantitative Results We \ufb01rst evaluate the importance of language CNN for image captioning, then evaluate the effects of CNNL on two datasets (Flickr30K and MS COCO), and also compare with the state-of-the-art methods. 4.3.1 Analysis of CNNL It is known that CNNL-based models have larger capacity than RNNs. To verify that the improved performance is from the developed CNNL rather than due to more layers/parameters, we set the hidden and output sizes of RNNs to 512 and 9568 (vocabulary size), and list the parameters of each model as well as their results in Table 1. Approach Params B@4 C Approach Params B@4 C Simple RNN 5.4M 27.0 87.0 LSTM 7.0M 29.2 92.6 CNNL 6.3M 18.4 56.8 LSTM2 9.1M 29.7 93.2 CNNL+RNN 11.7M 29.5 95.2 LSTM3 11.2M 29.3 92.9 Table 1. Results on MS COCO, where B@n are short for BLEUn, C is short for CIDEr. All values are reported as percentage (Bold numbers are the best results). CNNL contains \ufb01ve temporal convolutional layers, the kernel size of the \ufb01rst two convolutional layers is 5, and the rest kernel size of convolutional layers is 3. As seen in Table 1, the parameter size of the 3-layer LSTM (LSTM3) is close to that of the CNNL+RNN. Adding the 2nd LSTM layer (LSTM2) improves the performance of LSTM, but it is still lower than CNNL+RNN. Meanwhile, LSTM3 does not show improvements as the model experiences over\ufb01tting. This issue is even worse on Flickr30K which has relatively small number of training data. Note that CNNL (without RNNs) achieves lower performance than CNNL+RNN. 
We \ufb01nd that those predicted captions of CNNL (without RNNs) only are short, but contain primary attributes, e.g., CNNL model generates: \u201ca person on a wave\u201d, while CNNL+RNN provides: \u201ca young man sur\ufb01ng a wave\u201d. This \ufb01nding shows that the temporal recurrence of RNNs is still crucial for modeling the shortterm contextual information across words in the sentence. We further compare language CNNs with different input words and with max-pooling operations, where those language CNNs are combined with RHN instead of RNN. Table 2 shows that larger context windows achieve better performance. This is likely because CNNL with larger window size can better utilize contextual information and learn better word embedding representation. In addition, the performance of CNNL\u2217 16 words+RHN is inferior to CNNL+RHN, which experimentally supports our opinion that max-pooling operations lose information about the local order of words. Approach B@4 C Approach B@4 C Avghistory+RHN 30.1 95.8 CNNL2 words+RHN 29.2 93.8 CNNL\u2217 16 words+RHN 28.9 91.9 CNNL4 words+RHN 29.5 95.8 CNNL+RHN 30.6 98.9 CNNL8 words+RHN 30.0 95.9 Table 2. Results of different history information encoding approaches on MS COCO. CNNLNwords takes N previous words as inputs, where we set N to 2, 4, and 8. Avghistory computes an average over history word embeddings. CNNL\u2217 16 words replaces the 2nd and 4th convolutional layers in CNNL with the max-pooling layer. 4.3.2 Results Using CNNL on MS COCO Table 3 shows the generation performance on MS COCO. By combine CNNL, our methods clearly outperforms the recurrent network counterpart in all metrics. Approach B@1 B@2 B@3 B@4 M C S Simple RNN 70.1 52.1 37.6 27.0 23.2 87.0 16.0 CNNL+RNN 72.2 55.0 40.7 29.5 24.5 95.2 17.6 RHN 70.5 52.7 37.8 27.0 24.0 90.6 17.2 CNNL+RHN 72.3 55.3 41.3 30.6 25.2 98.9 18.3 LSTM 70.8 53.6 39.5 29.2 24.5 92.6 17.1 CNNL+LSTM 72.1 54.6 40.9 30.4 25.1 99.1 18.0 GRU 71.6 54.1 39.7 28.9 24.3 93.3 17.2 CNNL+GRU 72.6 55.4 41.1 30.3 24.6 96.1 17.6 Table 3. Performance comparison on MS COCO, where M is short for METEOR, and S is short for SPICE. Approach B@1 B@2 B@3 B@4 M C S Simple RNN 60.5 41.3 28.0 19.1 17.1 32.5 10.5 CNNL+RNN 71.3 53.8 39.6 28.7 22.6 65.4 15.6 RHN 62.1 43.1 29.4 20.0 17.7 38.4 11.4 CNNL+RHN 73.8 56.3 41.9 30.7 21.6 61.8 15.0 LSTM 60.9 41.8 28.3 19.3 17.6 35.0 11.1 CNNL+LSTM 64.5 45.8 32.2 22.4 19.0 45.0 12.5 GRU 61.4 42.5 29.1 20.0 18.1 39.5 11.4 CNNL+GRU 71.4 54.0 39.5 28.2 21.1 57.9 14.5 Table 4. Performance comparison on Flickr30k. Among these models, CNNL+RHN achieves the best performances in terms of B@(3,4), METEOR, and SPICE metrics, CNNL+LSTM achieves the best performance in CIDEr metric (99.1), and CNNL+GRU achieves the best performance in B@(1,2) metrics. Although the absolute gains across different B@n metrics are similar, the percentage of the relative performance improvement is increasing from B@1 to B@4. It does show the advantage of our method in terms of better capturing long-term dependency. Note that the CNNL+RNN model achieves better performance than simple RNN model and outperforms LSTM model. As mentioned in Section 3.4, LSTM networks model the word dependencies with multi-gates and the internal memory cell. However, our CNNL+RNN without memory cell works better than LSTM model. We think the reason is that our language CNN takes all history words as input and explicitly model the long-term dependencies in history words, this could be regarded as an external \u201cmemory cell\u201d. 
Thus, the CNNL\u2019s ability to model long-term de\fFlickr30k MS COCO Approach BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR CIDEr BRNN [19] 57.3 36.9 24.0 15.7 \u2014 62.5 45.0 32.1 23.0 19.5 66.0 Google NIC [46] \u2014 \u2014 \u2014 \u2014 \u2014 \u2014 \u2014 \u2014 27.7 23.7 85.5 LRCN [6] 58.8 39.1 25.1 16.5 \u2014 66.9 48.9 34.9 24.9 \u2014 \u2014 MSR [7] \u2014 \u2014 \u2014 \u2014 \u2014 \u2014 \u2014 \u2014 25.7 23.6 \u2014 m-RNN [35] 60.0 41.0 28.0 19.0 \u2014 67.0 49.0 35.0 25.0 \u2014 \u2014 Hard-Attention [51] 66.9 43.9 29.6 19.9 18.5 70.7 49.2 34.4 24.3 23.9 \u2014 Soft-Attention [51] 66.7 43.4 28.8 19.1 18.5 71.8 50.4 35.7 25.0 23.0 \u2014 ATT-FCN [53] 64.7 46.0 32.4 23.0 18.9 70.9 53.7 40.2 30.4 24.3 \u2014 ERD+GoogLeNet [52] \u2014 \u2014 \u2014 \u2014 \u2014 \u2014 \u2014 \u2014 29.8 24.0 88.6 emb-gLSTM [15] 64.6 44.6 30.5 20.6 17.9 67.0 49.1 35.8 26.4 22.7 81.3 VAE [40] 72.0 53.0 38.0 25.0 \u2014 72.0 52.0 37.0 28.0 24.0 90.0 State-of-the-art results using model assembling or extra information Google NICv2 [47] \u2014 \u2014 \u2014 \u2014 \u2014 \u2014 \u2014 \u2014 32.1 25.7 99.8 Attributes-CNN+RNN [50] 73.0 55.0 40.0 28.0 \u2014 74.0 56.0 42.0 31.0 26.0 94.0 Our results CNNL+RNN 71.3 53.8 39.6 28.7 22.6 72.2 55.0 40.7 29.5 24.5 95.2 CNNL+RHN 73.8 56.3 41.9 30.7 21.6 72.3 55.3 41.3 30.6 25.2 98.9 CNNL+LSTM 64.5 45.8 32.2 22.4 19.0 72.1 54.6 40.9 30.4 25.1 99.1 CNNL+GRU 71.4 54.0 39.5 28.2 21.1 72.6 55.4 41.1 30.3 24.6 96.1 Table 5. Performance in terms of BLEU-n, METEOR, and CIDEr compared with other state-of-the-art methods on the MS COCO and Flickr30k datasets. For those competing methods, we extract their performance from their latest version of papers. pendencies can be taken as enhancement of simple RNNs, which can solve the dif\ufb01culty of learning long-term dependencies. 4.3.3 Results Using CNNL on Flickr30K We also evaluate the effectiveness of language CNN on the smaller dataset Flickr30K. The results in Table 4 clearly indicate the advantage of exploiting the language CNN to model the long-term dependencies in words for image captioning. Among all models, CNNL+RHN achieves the best performances in B@(1,2,3,4) metrics, and CNNL+RNN achieves the best performances in METEOR, CIDEr, and SPICE metrics. As for the low results (without CNNL) on Flickr30k, we think that it is due to lack of enough training data to avoid over\ufb01tting. In contrast, our CNNL can help learn better word embedding and better representation of history words for word prediction, and it is much easier to be trained compared with LSTM due to its simplicity and ef\ufb01ciency. Note that the performance of LSTM and CNNL+LSTM models are lower than RHN/GRU and CNNL+RHN/GRU. This illustrates that the LSTM networks are easily over\ufb01tting on this smaller dataset. 4.3.4 Comparison with State-of-the-art Methods To empirically verify the merit of our models, we compare our methods with other state-of-the-art approaches. Performance on MS COCO. The right-hand side of Table 5 shows the results of different models on MS COCO dataset. CNNL-based models perform better than most image captioning models. The only two methods with better performance (for some metrics) than ours are AttributesCNN+RNN [50] and Google NICv2 [47]. However, Wu et al. [50] employ an attribute prediction layer, which requires determining an extra attribute vocabulary. While we generate the image descriptions only based on the image features. 
Google NICv2 [47] is based on Google NIC [46], the results of Google NICv2 are achieved by model ensembling. All our models are based on VGG-16 for a fair comparison with [6, 7, 15, 35, 50, 51]. Indeed, better image CNN (e.g. Resnet [11]) leads to higher performance2. Despite all this, the CIDEr score of our CNNL+LSTM model can still achieve 99.1, which is comparable to their best performance even with a single VGG-16 model. Performance on Flickr30K. The results on Flickr30K are reported on the left-hand side of Table 5. Interestingly, CNNL+RHN performs the best on this smaller dataset and even outperforms the AttributesCNN+RNN [50]. Obviously, there is a signi\ufb01cant performance gap between CNNL+RNN/RHN/GRU and RNN/RHN/GRU/LSTM models. This demonstrates the effectiveness of our language CNN on the one hand, and also shows that our CNNL+RNN/RHN/GRU models are more robust and easier to train than LSTM networks when less training data is available. 4.4. Qualitative Results Figure 3 shows some examples generated by our models. It is easy to see that all of these caption generation models can generate somewhat relevant sentences, while 2We uploaded the results based on Resnet-101+CNNL+LSTM (named jxgu LCNN NTU) to the of\ufb01cial MS COCO evaluation server (https: //competitions.codalab.org/competitions/3221), and achieved competitive ranking across different metrics. \fthere is a black tuxedo cat looking in the mirror two cats sitting on top of a wooden floor a cat looking at itself in the mirror next to a tripod a cat and a tripod sitting in front of a mirror a close up of a cat in a mirror a woman and child in ski gear next to a lodge a man and a child are smiling while standing on skiis a young man poses with a little kid in the snow an adult and a small child dressed for skiing a man and a little girl in skis stand in front of a mountain lodge CNNL +RHN : a man standing next to a child on a snow covered slope CNNL +RNN : a man and a woman standing on a snow covered slope GRU : a man and a child standing on a snow covered slope LSTM : a man and a child are standing in the snow RNN : a man and a woman are skiing on the snow CNNL+RHN : a black and white cat looking at itself in a mirror CNNL+RNN : a black and white cat sitting in front of a mirror GRU : a black and white cat standing next to a mirror LSTM : a black and white cat sitting in a bathroom sink RNN : a cat sitting on the floor in a bathroom a dog looking at a cat through a glass window a cat is outside looking through in at a dog the dog wants to go outside with the cat a cat sitting outside of a door next to a dog a cat sitting at a sliding glass door CNNL +RHN : a cat looking at a dog in a door CNNL +RNN : a cat is looking at a dog in front of a window GRU : a cat standing next to a door looking out a window LSTM : a dog and a cat are standing in front of a window RNN : a cat sitting on the side of the road CNNL +RHN : a man talking on a cell phone while walking down a street CNNL +RNN : a man is talking on a cell phone GRU : a man is talking on a cell phone in the street LSTM : a man is talking on his cell phone RNN : a man standing next to a woman talking on a cell phone a man talking on the phone in front of a blue car a man on a telephone holds his hand up to his other ear as he walks a man standing next to a car with a cellphone a man is talking on a cell phone next to a city street a man standing on the side of the street with a cell phone up to his Figure 3. Qualitative results for images on MS COCO. 
Ground-truth annotations (under each dashed line) and the generated descriptions are shown for each image. a child is looking a white bear in a water aquarium child stands viewing a polar bear as it dives under water to retrieve a bone a boy reaching towards an aquarium in which a polar bear chews on a bone a boy watches a polar bear chew on a bone a young boy touching the glass of a polar bear CNNL+GRU : a polar bear in the water with a ball in its mouth a couple that is eating some food together the groom is feeding the bride a slice of cake a man feeding a piece of cake to his bride a husband feeds his wife a piece of cake a groom feeding wedding cake to his bride CNNL+LSTM : a man and a woman holding a glass of wine a bear that is hanging in a tree a young bear holding onto a pine tree a bear cub in the branches of a pine tree a black bear cub climbing a pine tree the bear cub UNK high up into the tree CNNL+RHN : a large bird perched on top of a tree a tan dog standing on a sidewalk next to a UNK and grass the dog is standing outside all alone in the backyard a dog standing on a brick walk way a brown dog is standing on the side of a walk way a brown dog standing on a brick path CNNL+RNN : a black and white dog standing on a sidewalk Figure 4. Some failure descriptions for images on MS COCO. Ground-truth descriptions are under each dashed line. the CNNL-based models can predict more high-level words by jointly exploiting history words and image representations. Take the last image as an example, compared with the sentences generated by RNN/LSTM/GRU model, \u201ca cat is looking at a dog in front of a window\u201d generated by CNNL+RNN is more precise to describe their relationship in the image. Besides, our CNNL-based models can generate more descriptive sentences. For instance, with the detected object \u201ccat\u201d in the \ufb01rst image, the generated sentence \u201ca black and white cat looking at itself in a mirror\u201d by CNNL+RHN depicts the image content more comprehensively. The results demonstrate that our model with language CNN can generate more humanlike sentences by modeling the hierarchical structure and long-term information of words. Figure 4 shows some failure samples of our CNNLbased models. Although most of the generated captions are complete sentences. However, the biggest problem is that those predicted visual attributes are wrong. For example, \u201cbear\u201d in the \ufb01rst image is detected as \u201cbird\u201d, and \u201cbrown\u201d in the second image is detected as \u201cblack and white\u201d. This will decrease the precision-based evaluation score (e.g., B@n). We can improve our model by further taking high-level attributes into account. 5. Conclusion In this work, we present an image captioning model with language CNN to explore both hierarchical and temporal information in sequence for image caption generation. Experiments conducted on MS COCO and Flickr30K image captioning datasets validate our proposal and analysis. Performance improvements are clearly observed when compared with other image captioning methods. Future research directions will go towards integrating extra attributes learning into image captioning, and how to apply a single language CNN for image caption generation is worth trying. Acknowledgements This work is supported by the National Research Foundation, Prime Ministers Of\ufb01ce, Singapore, under its IDM Futures Funding Initiative, and NTU CoE Grant. 
This research was carried out at ROSE Lab at Nanyang Technological University, Singapore. ROSE Lab is supported by the National Research Foundation, Prime Ministers Of\ufb01ce, Singapore, under its IDM Futures Funding Initiative and administered by the Interactive and Digital Media Programme Of\ufb01ce. We gratefully acknowledge the support of NVAITC (NVIDIA AI Tech Centre) for our research at NTU ROSE Lab, Singapore." + }, + { + "url": "http://arxiv.org/abs/1512.07108v6", + "title": "Recent Advances in Convolutional Neural Networks", + "abstract": "In the last few years, deep learning has led to very good performance on a\nvariety of problems, such as visual recognition, speech recognition and natural\nlanguage processing. Among different types of deep neural networks,\nconvolutional neural networks have been most extensively studied. Leveraging on\nthe rapid growth in the amount of the annotated data and the great improvements\nin the strengths of graphics processor units, the research on convolutional\nneural networks has been emerged swiftly and achieved state-of-the-art results\non various tasks. In this paper, we provide a broad survey of the recent\nadvances in convolutional neural networks. We detailize the improvements of CNN\non different aspects, including layer design, activation function, loss\nfunction, regularization, optimization and fast computation. Besides, we also\nintroduce various applications of convolutional neural networks in computer\nvision, speech and natural language processing.", + "authors": "Jiuxiang Gu, Zhenhua Wang, Jason Kuen, Lianyang Ma, Amir Shahroudy, Bing Shuai, Ting Liu, Xingxing Wang, Li Wang, Gang Wang, Jianfei Cai, Tsuhan Chen", + "published": "2015-12-22", + "updated": "2017-10-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "cs.NE" + ], + "main_content": "Introduction Convolutional Neural Network (CNN) is a well-known deep learning architecture inspired by the natural visual perception mechanism of the living creatures. In 1959, Hubel & Wiesel [1] found that cells in animal visual cortex are responsible for detecting light in receptive \ufb01elds. Inspired by this discovery, Kunihiko Fukushima proposed the neocognitron in 1980 [2], which could be regarded as the predecessor of CNN. In 1990, LeCun et al. [3] published the seminal paper establishing the modern framework of CNN, and later improved it in [4]. They developed a multi-layer arti\ufb01cial neural network called LeNet-5 which could classify handwritten digits. Like other neural networks, LeNet-5 has multiple layers and can be trained with the backpropagation algorithm [5]. It can obtain e\ufb00ective representations of the original image, which makes it possible to recognize visual patterns directly from raw pixels with little-to-none preprocessing. A parallel study of Zhang et al. [6] used a shift-invariant arti\ufb01cial neural network (SIANN) to recognize characters from an image. However, due to the lack of large training data and computing power at that time, their networks can not perform well on more complex problems, e.g., large-scale image and video classi\ufb01cation. Since 2006, many methods have been developed to overcome the di\ufb03culties encountered in training deep CNNs [7\u201310]. Most notably, Krizhevsky et al.proposed a classic CNN architecture and showed signi\ufb01cant improvements upon previous methods on the image classi\ufb01cation task. 
The overall architecture of their method, i.e., AlexNet [8], is similar to LeNet-5 but with a deeper structure. With the success of AlexNet, many works have been proposed to improve its performance. Among them, four representative works are ZFNet [11], VGGNet [9], GoogleNet [10] and ResNet [12]. From the evolution of the architectures, a typical trend is that the networks are getting deeper, e.g., ResNet, which won the championship of ILSVRC 2015, is about 20 times deeper than AlexNet and 8 times deeper than VGGNet. By increasing depth, the network can better approximate the target function with increased nonlinearity and get better feature representations. However, it also increases the complexity of the network, which makes the network more difficult to optimize and more prone to overfitting. Along this way, various methods have been proposed to deal with these problems in various aspects. In this paper, we try to give a comprehensive review of recent advances and give some thorough discussions. In the following sections, we identify broad categories of works related to CNN. Figure 1 shows the hierarchically-structured taxonomy of this paper. We first give an overview of the basic components of CNN in Section 2. Then, we introduce some recent improvements on different aspects of CNN, including convolutional layer, pooling layer, activation function, loss function, regularization and optimization, in Section 3 and introduce the fast computing techniques in Section 4. Next, we discuss some typical applications of CNN, including image classification, object detection, object tracking, pose estimation, text detection and recognition, visual saliency detection, action recognition, scene labeling, speech and natural language processing, in Section 5. Finally, we conclude this paper in Section 6. 2. Basic CNN Components There are numerous variants of CNN architectures in the literature. However, their basic components are very similar. Taking the famous LeNet-5 as an example, it consists of three types of layers, namely convolutional, pooling, and fully-connected layers. The convolutional layer aims to learn feature representations of the inputs. As shown in Figure 2(a), the convolution layer is composed of several convolution kernels which are used to compute different feature maps. Specifically, each neuron of a feature map is connected to a region of neighbouring neurons in the previous layer. Such a neighbourhood is referred to as the neuron's receptive field in the previous layer. The new feature map can be obtained by first convolving the input with a learned kernel and then applying an element-wise nonlinear activation function on the convolved results. Note that, to generate each feature map, the kernel is shared by all spatial locations of the input. The complete feature maps are obtained by using several different kernels.
Mathematically, the feature value at location (i, j) in the k-th feature map of l-th layer, zl i,j,k, is calculated by: zl i,j,k = wl k T xl i,j + bl k (1) 2 \f(a) LeNet-5 network (b) Learned features Figure 2: (a) The architecture of the LeNet-5 network, which works well on digit classi\ufb01cation task. (b) Visualization of features in the LeNet-5 network. Each layer\u2019s feature maps are displayed in a di\ufb00erent block. where wl k and bl k are the weight vector and bias term of the k-th \ufb01lter of the l-th layer respectively, and xl i,j is the input patch centered at location (i, j) of the l-th layer. Note that the kernel wl k that generates the feature map zl :,:,k is shared. Such a weight sharing mechanism has several advantages such as it can reduce the model complexity and make the network easier to train. The activation function introduces nonlinearities to CNN, which are desirable for multi-layer networks to detect nonlinear features. Let a(\u00b7) denote the nonlinear activation function. The activation value al i,j,k of convolutional feature zl i,j,k can be computed as: al i,j,k = a(zl i,j,k) (2) Typical activation functions are sigmoid, tanh [13] and ReLU [14]. The pooling layer aims to achieve shift-invariance by reducing the resolution of the feature maps. It is usually placed between two convolutional layers. Each feature map of a pooling layer is connected to its corresponding feature map of the preceding convolutional layer. Denoting the pooling function as pool(\u00b7), for each feature map al :,:,k we have: yl i,j,k = pool(al m,n,k), \u2200(m, n) \u2208Rij (3) where Rij is a local neighbourhood around location (i, j). The typical pooling operations are average pooling [15] and max pooling [16]. Figure 2(b) shows the feature maps of digit 7 learned by the \ufb01rst two convolutional layers. The kernels in the 1st convolutional layer are designed to detect low-level features such as edges and curves, while the kernels in higher layers are learned to encode more abstract features. By stacking several convolutional and pooling layers, we could gradually extract higher-level feature representations. After several convolutional and pooling layers, there may be one or more fully-connected layers which aim to perform high-level reasoning [9, 11, 17]. They take all neurons in the previous layer and connect them to every single neuron of current layer to generate global semantic information. Note that fully-connected layer not always necessary as it can be replaced by a 1 \u00d7 1 convolution layer [18]. The last layer of CNNs is an output layer. For classi\ufb01cation tasks, the softmax operator is commonly used [8]. Another commonly used method is SVM, which can be combined with CNN features to solve di\ufb00erent classi\ufb01cation tasks [19, 20]. Let \u03b8 \u03b8 \u03b8 denote all the parameters of a CNN (e.g., the weight vectors and bias terms). The optimum parameters for a speci\ufb01c task can be obtained by minimizing an appropriate loss function de\ufb01ned on that task. Suppose we have N desired input-output relations {(x x x(n),y y y(n)); n \u2208 [1, \u00b7 \u00b7 \u00b7 , N]}, where x x x(n) is the n-th input data, y y y(n) is its corresponding target label and o o o(n) is the output of CNN. The loss of CNN can be calculated as follows: L = 1 N N X n=1 \u2113(\u03b8 \u03b8 \u03b8;y y y(n),o o o(n)) (4) Training CNN is a problem of global optimization. By minimizing the loss function, we can \ufb01nd the best \ufb01tting set of parameters. 
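To make the basic layer operations of Eqs. (1)-(3) concrete, the following is a minimal single-channel, single-kernel NumPy sketch of one convolution, activation and pooling stage; it is an illustrative toy, not the implementation of any particular framework, and all function names are ours.

import numpy as np

def conv2d_single(x, w, b):
    # Valid convolution of one input map x (H x W) with one kernel w (k x k),
    # following z_{i,j} = w^T x_{i,j} + b in Eq. (1).
    H, W = x.shape
    k = w.shape[0]
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k]        # receptive field x_{i,j}
            out[i, j] = np.sum(w * patch) + b  # dot product plus bias
    return out

def relu(z):                                   # Eq. (2) with a(.) = ReLU
    return np.maximum(z, 0.0)

def max_pool2d(a, s=2):                        # Eq. (3) with pool(.) = max
    H, W = a.shape
    a = a[:H - H % s, :W - W % s]
    return a.reshape(H // s, s, W // s, s).max(axis=(1, 3))

x = np.random.randn(8, 8)                      # toy single-channel input
w, b = np.random.randn(3, 3), 0.1
y = max_pool2d(relu(conv2d_single(x, w, b)))   # one conv -> ReLU -> pool stage
print(y.shape)                                 # (3, 3)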
Stochastic gradient descent is a common solution for optimizing CNN network [21, 22]. 3 \f(a) Convolution (b) Tiled Convolution (c) Dilated Convolution (d) Deconvolution Figure 3: Illustration of (a) Convolution, (b) Tiled Convolution, (c) Dilated Convolution, and (d) Deconvolution 3. Improvements on CNNs There have been various improvements on CNNs since the success of AlexNet in 2012. In this section, we describe the major improvements on CNNs from six aspects: convolutional layer, pooling layer, activation function, loss function, regularization, and optimization. 3.1. Convolutional Layer Convolution \ufb01lter in basic CNNs is a generalized linear model (GLM) for the underlying local image patch. It works well for abstraction when instances of latent concepts are linearly separable. Here we introduce some works which aim to enhance its representation ability. 3.1.1. Tiled Convolution Weight sharing mechanism in CNNs can drastically decrease the number of parameters. However, it may also restrict the models from learning other kinds of invariance. Tiled CNN [23] is a variation of CNN that tiles and multiples feature maps to learn rotational and scale invariant features. Separate kernels are learned within the same layer, and the complex invariances can be learned implicitly by square-root pooling over neighbouring units. As illustrated in Figure 3(b), the convolution operations are applied every k unit, where k is the tile size to control the distance over which weights are shared. When the tile size k is 1, the units within each map will have the same weights, and tiled CNN becomes identical to the traditional CNN. In [23], their experiments on the NORB and CIFAR-10 datasets show that k = 2 achieves the best results. Wang et al. [24] \ufb01nd that Tiled CNN performs better than traditional CNN [25] on small time series datasets. 3.1.2. Transposed Convolution Transposed convolution can be seen as the backward pass of a corresponding traditional convolution. It is also known as deconvolution [11, 26\u201328] and fractionally strided convolution [29]. To stay consistent with most literature [11, 30], we use the term \u201cdeconvolution\u201d. Contrary to the traditional convolution that connects multiple input activations to a single activation, deconvolution associates a single activation with multiple output activations. Figure 3(d) shows a deconvolution operation of 3 \u00d7 3 kernel over a 4 \u00d7 4 input using unit stride and zero padding. The stride of deconvolution gives the dilation factor for the input feature map. Speci\ufb01cally, the deconvolution will \ufb01rst upsample the input by a factor of the stride value with padding, then perform convolution operation on the upsampled input. Recently, deconvolution has been widely used for visualization [11], recognition [31\u201333], localization [34], semantic segmentation [30], visual question answering [35], and super-resolution [36]. 3.1.3. Dilated Convolution Dilated CNN [37] is a recent development of CNN that introduces one more hyper-parameter to the convolutional layer. By inserting zeros between \ufb01lter elements, Dilated CNN can increase the network\u2019s 4 \f(a) Linear convolution layer (b) Mlpconv layer Figure 4: The comparison of linear convolution layer and mlpconv layer. receptive \ufb01eld size and let the network cover more relevant information. This is very important for tasks which need a large receptive \ufb01eld when making the prediction. 
Formally, a 1-D dilated convolution with dilation l that convolves signal F with kernel k of size r is de\ufb01ned as (F \u2217l k)t = P \u03c4 k\u03c4Ft\u2212l\u03c4, where \u2217l denotes l-dilated convolution. This formula can be straightforwardly extended to 2-D dilated convolution. Figure 3(c) shows an example of three dilated convolution layers where the dilation factor l grows up exponentially at each layer. The middle feature map F2 is produced from the bottom feature map F1 by applying a 1-dilated convolution, where each element in F2 has a receptive \ufb01eld of 3\u00d73. F3 is produced from F2 by applying a 2-dilated convolution, where each element in F3 has a receptive \ufb01eld of (23 \u22121) \u00d7 (23 \u22121). The top feature map F4 is produced from F3 by applying a 4-dilated convolution, where each element in F4 has a receptive \ufb01eld of (24 \u22121) \u00d7 (24 \u22121). As can be seen, the size of receptive \ufb01eld of each element in Fi+1 is (2i+2 \u22121) \u00d7 (2(i+2) \u22121). Dilated CNNs have achieved impressive performance in tasks such as scene segmentation [37], machine translation [38], speech synthesis [39], and speech recognition [40]. 3.1.4. Network in Network Network In Network (NIN) is a general network structure proposed by Lin et al. [18]. It replaces the linear \ufb01lter of the convolutional layer by a micro network, e.g., multilayer perceptron convolution (mlpconv) layer in the paper, which makes it capable of approximating more abstract representations of the latent concepts. The overall structure of NIN is the stacking of such micro networks. Figure 4 shows the di\ufb00erence between the linear convolutional layer and the mlpconv layer. Formally, the feature map of convolution layer (with nonlinear activation function, e.g., ReLU [14]) is computed as: ai,j,k = max(wT k xi,j + bk, 0) (5) where ai,j,k is the activation value of k-th feature map at location (i, j), xi,j is the input patch centered at location (i, j), wk and bk are weight vector and bias term of the k-th \ufb01lter. As a comparison, the computation performed by mlpconv layer is formulated as: an i,j,kn = max(wT knan\u22121 i,j,: + bkn, 0) (6) where n \u2208[1, N], N is the number of layers in the mlpconv layer, a0 i,j,: is equal to xi,j. In mlpconv layer, 1\u00d71 convolutions are placed after the traditional convolutional layer. The 1\u00d71 convolution is equivalent to the cross-channel parametric pooling operation which is succeeded by ReLU [14]. Therefore, the mlpconv layer can also be regarded as the cascaded cross-channel parametric pooling on the normal convolutional layer. In the end, they also apply a global average pooling which spatially averages the feature maps of the \ufb01nal layer, and directly feed the output vector into softmax layer. Compared with the fully-connected layer, global average pooling has fewer parameters and thus reduces the over\ufb01tting risk and computational load. 3.1.5. Inception Module Inception module is introduced by Szegedy et al. [10] which can be seen as a logical culmination of NIN. They use variable \ufb01lter sizes to capture di\ufb00erent visual patterns of di\ufb00erent sizes, and approximate 5 \f(a) (b) (c) (d) Figure 5: (a) Inception module, naive version. (b) The inception module used in [10]. (c) The improved inception module used in [41] where each 5 \u00d7 5 convolution is replaced by two 3 \u00d7 3 convolutions. (d) The Inception-ResNet-A module used in [42]. the optimal sparse structure by the inception module. 
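Stepping back from the inception module for a moment, the convolution variants of Sections 3.1.2-3.1.3 can be illustrated in one dimension: the zero-stuffing "deconvolution" (fractionally strided convolution) and the l-dilated convolution (F *_l k)_t = sum_tau k_tau F_{t - l*tau}. The sketch below is a minimal toy version with names of our own choosing, not the exact operators of [11, 29, 37].

import numpy as np

def transposed_conv1d(x, k, stride=2):
    # Toy 1-D "deconvolution": insert (stride - 1) zeros between the inputs,
    # then apply an ordinary convolution, so one input drives several outputs.
    up = np.zeros(stride * (len(x) - 1) + 1)
    up[::stride] = x
    return np.convolve(up, k, mode="full")

def dilated_conv1d(F, k, l):
    # 1-D l-dilated convolution; only positions where every tapped index is valid are kept.
    r, T = len(k), len(F)
    start = l * (r - 1)
    return np.array([sum(k[tau] * F[t - l * tau] for tau in range(r))
                     for t in range(start, T)])

x = np.array([1.0, 2.0, 3.0])
k = np.array([0.5, 1.0, 0.5])
print(transposed_conv1d(x, k))       # length-7 output from a length-3 input
F = np.arange(10.0)
d = np.array([1.0, 0.0, -1.0])
print(dilated_conv1d(F, d, l=1))     # receptive field 3: F[t] - F[t-2]
print(dilated_conv1d(F, d, l=2))     # same kernel, receptive field 5: F[t] - F[t-4]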
Speci\ufb01cally, inception module consists of one pooling operation and three types of convolution operations (see Figure 5(b)), and 1 \u00d7 1 convolutions are placed before 3\u00d73 and 5\u00d75 convolutions as dimension reduction modules, which allow for increasing the depth and width of CNN without increasing the computational complexity. With the help of inception module, the network parameters can be dramatically reduced to 5 millions which are much less than those of AlexNet (60 millions) and ZFNet (75 millions). In their later paper [41], to \ufb01nd high-performance networks with a relatively modest computation cost, they suggest the representation size should gently decrease from inputs to outputs as well as spatial aggregation can be done over lower dimensional embeddings without much loss in representational power. The optimal performance of the network can be reached by balancing the number of \ufb01lters per layer and the depth of the network. Inspired by the ResNet [12], their latest Inception-V4 [42] combines the inception architecture with shortcut connections (see Figure 5(d)). They \ufb01nd that shortcut connections can signi\ufb01cantly accelerate the training of inception networks. Their Inception-v4 model architecture (with 75 trainable layers) that ensembles three residual and one Inception-v4 can achieve 3.08% top-5 error rate on the validation dataset of ILSVRC 2012. 3.2. Pooling Layer Pooling is an important concept of CNN. It lowers the computational burden by reducing the number of connections between convolutional layers. In this section, we introduce some recent pooling methods used in CNNs. 3.2.1. Lp Pooling Lp pooling is a biologically inspired pooling process modelled on complex cells [43]. It has been theoretically analyzed in [44], which suggest that Lp pooling provides better generalization than max pooling. Lp pooling can be represented as: yi,j,k = [ X (m,n)\u2208Rij (am,n,k)p]1/p (7) where yi,j,k is the output of the pooling operator at location (i, j) in k-th feature map, and am,n,k is the feature value at location (m, n) within the pooling region Rij in k-th feature map. Specially, when p = 1, Lp corresponds to average pooling, and when p = \u221e, Lp reduces to max pooling. 3.2.2. Mixed Pooling Inspired by random Dropout [17] and DropConnect [45], Yu et al. [46] propose a mixed pooling method which is the combination of max pooling and average pooling. The function of mixed pooling can be 6 \fformulated as follows: yi,j,k = \u03bb max (m,n)\u2208Rij am,n,k + (1 \u2212\u03bb) 1 |Rij| X (m,n)\u2208Rij am,n,k (8) where \u03bb is a random value being either 0 or 1 which indicates the choice of either using average pooling or max pooling. During forward propagation process, \u03bb is recorded and will be used for the backpropagation operation. Experiments in [46] show that mixed pooling can better address the over\ufb01tting problems and it performs better than max pooling and average pooling. 3.2.3. Stochastic Pooling Stochastic pooling [47] is a dropout-inspired pooling method. Instead of picking the maximum value within each pooling region as max pooling does, stochastic pooling randomly picks the activations according to a multinomial distribution, which ensures that the non-maximal activations of feature maps are also possible to be utilized. Speci\ufb01cally, stochastic pooling \ufb01rst computes the probabilities p for each region Rj by normalizing the activations within the region, i.e., pi = ai/ P k\u2208Rj(ak). 
After obtaining the distribution P(p1, ..., p|Rj|), we can sample from the multinomial distribution based on p to pick a location l within the region, and then set the pooled activation as yj = al, where l \u223cP(p1, ..., p|Rj|). Compared with max pooling, stochastic pooling can avoid over\ufb01tting due to the stochastic component. 3.2.4. Spectral Pooling Spectral pooling [48] performs dimensionality reduction by cropping the representation of input in frequency domain. Given an input feature map x \u2208Rm\u00d7m, suppose the dimension of desired output feature map is h \u00d7 w, spectral pooling \ufb01rst computes the discrete Fourier transform (DFT) of the input feature map, then crops the frequency representation by maintaining only the central h \u00d7 w submatrix of the frequencies, and \ufb01nally uses inverse DFT to map the approximation back into spatial domain. Compared with max pooling, the linear low-pass \ufb01ltering operation of spectral pooling can preserve more information for the same output dimensionality. Meanwhile, it also does not su\ufb00er from the sharp reduction in output map dimensionality exhibited by other pooling methods. What is more, the process of spectral pooling is achieved by matrix truncation, which makes it capable of being implemented with little computational cost in CNNs (e.g., [49]) that employ FFT for convolution kernels. 3.2.5. Spatial Pyramid Pooling Spatial pyramid pooling (SPP) is introduced by He et al. [50]. The key advantage of SPP is that it can generate a \ufb01xed-length representation regardless of the input sizes. SPP pools input feature map in local spatial bins with sizes proportional to the image size, resulting in a \ufb01xed number of bins. This is di\ufb00erent from the sliding window pooling in the previous deep networks, where the number of sliding windows depends on the input size. By replacing the last pooling layer with SPP, they propose a new SPP-net which is able to deal with images with di\ufb00erent sizes. 3.2.6. Multi-scale Orderless Pooling Inspired by [51], Gong et al. [52] use multi-scale orderless pooling (MOP) to improve the invariance of CNNs without degrading their discriminative power. They extract deep activation features for both the whole image and local patches of several scales. The activations of the whole image are the same as those of previous CNNs, which aim to capture the global spatial layout information. The activations of local patches are aggregated by VLAD encoding [53], which aim to capture more local, \ufb01ne-grained details of the image as well as enhancing invariance. The new image representation is obtained by concatenating the global activations and the VLAD features of the local patch activations. 3.3. Activation Function A proper activation function signi\ufb01cantly improves the performance of a CNN for a certain task. In this section, we introduce the recently used activation functions in CNNs. 7 \f3.3.1. ReLU Recti\ufb01ed linear unit (ReLU) [14] is one of the most notable non-saturated activation functions. The ReLU activation function is de\ufb01ned as: ai,j,k = max(zi,j,k, 0) (9) where zi,j,k is the input of the activation function at location (i, j) on the k-th channel. ReLU is a piecewise linear function which prunes the negative part to zero and retains the positive part (see Figure 6(a)). 
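Before continuing with the rectifier family, the pooling rules above can be compared on a single region; the snippet below is a toy per-region sketch of Eq. (7), Eq. (8), and the stochastic sampling step, not a full pooling layer, and the example region values are arbitrary.

import numpy as np

def lp_pool(region, p):
    # Lp pooling over one region R_ij, Eq. (7); large p approaches max pooling.
    return np.sum(region ** p) ** (1.0 / p)

def mixed_pool(region, lam):
    # Mixed pooling, Eq. (8): lam in {0, 1} selects max or average pooling.
    return lam * region.max() + (1 - lam) * region.mean()

def stochastic_pool(region, rng):
    # Stochastic pooling: pick activation a_l with probability p_l = a_l / sum_k a_k.
    a = region.ravel()
    return rng.choice(a, p=a / a.sum())

rng = np.random.default_rng(0)
region = np.array([[0.1, 0.4], [0.3, 0.2]])
print(lp_pool(region, p=1), lp_pool(region, p=64), region.max())
print(mixed_pool(region, lam=int(rng.integers(0, 2))))
print([float(stochastic_pool(region, rng)) for _ in range(5)])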
The simple max(\u00b7) operation of ReLU allows it to compute much faster than sigmoid or tanh activation functions, and it also induces the sparsity in the hidden units and allows the network to easily obtain sparse representations. It has been shown that deep networks can be trained e\ufb03ciently using ReLU even without pre-training [8]. Even though the discontinuity of ReLU at 0 may hurt the performance of backpropagation, many works have shown that ReLU works better than sigmoid and tanh activation functions empirically [54, 55]. 3.3.2. Leaky ReLU A potential disadvantage of ReLU unit is that it has zero gradient whenever the unit is not active. This may cause units that do not active initially never active as the gradient-based optimization will not adjust their weights. Also, it may slow down the training process due to the constant zero gradients. To alleviate this problem, Mass et al. introduce Leaky ReLU (LReLU) [54] which is de\ufb01ned as: ai,j,k = max(zi,j,k, 0) + \u03bb min(zi,j,k, 0) (10) where \u03bb is a prede\ufb01ned parameter in range (0, 1). Compared with ReLU, Leaky ReLU compresses the negative part rather than mapping it to constant zero, which makes it allow for a small, non-zero gradient when the unit is not active. 3.3.3. Parametric ReLU Rather than using a prede\ufb01ned parameter in Leaky ReLU, e.g., \u03bb in Eq.(10), He et al. [56] propose Parametric Recti\ufb01ed Linear Unit (PReLU) which adaptively learns the parameters of the recti\ufb01ers in order to improve accuracy. Mathematically, PReLU function is de\ufb01ned as: ai,j,k = max(zi,j,k, 0) + \u03bbk min(zi,j,k, 0) (11) where \u03bbk is the learned parameter for the k-th channel. As PReLU only introduces a very small number of extra parameters, e.g., the number of extra parameters is the same as the number of channels of the whole network, there is no extra risk of over\ufb01tting and the extra computational cost is negligible. It also can be simultaneously trained with other parameters by backpropagation. 3.3.4. Randomized ReLU Another variant of Leaky ReLU is Randomized Leaky Recti\ufb01ed Linear Unit (RReLU) [57]. In RReLU, the parameters of negative parts are randomly sampled from a uniform distribution in training, and then \ufb01xed in testing (see Figure 6(c)). Formally, RReLU function is de\ufb01ned as: a(n) i,j,k = max(z(n) i,j,k, 0) + \u03bb(n) k min(z(n) i,j,k, 0) (12) where z(n) i,j,k denotes the input of activation function at location (i, j) on the k-th channel of n-th example, \u03bb(n) k denotes its corresponding sampled parameter, and a(n) i,j,k denotes its corresponding output. It could reduce over\ufb01tting due to its randomized nature. Xu et al. [57] also evaluate ReLU, LReLU, PReLU and RReLU on standard image classi\ufb01cation task, and concludes that incorporating a non-zero slop for negative part in recti\ufb01ed activation units could consistently improve the performance. 8 \f(a) ReLU (b) LReLU/PReLU (c) RReLU (d) ELU Figure 6: The comparison among ReLU, LReLU, PReLU, RReLU and ELU. For Leaky ReLU, \u03bb is empirically prede\ufb01ned. For PReLU, \u03bbk is learned from training data. For RReLU, \u03bb(n) k is a random variable which is sampled from a given uniform distribution in training and keeps \ufb01xed in testing. For ELU, \u03bb is empirically prede\ufb01ned. 3.3.5. ELU Clevert et al. [58] introduce Exponential Linear Unit (ELU) which enables faster learning of deep neural networks and leads to higher classi\ufb01cation accuracies. 
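Before turning to ELU in detail, note that the rectifier variants introduced so far differ only in how the negative half-axis is scaled; the sketch below evaluates Eqs. (9)-(12) on the same inputs. The RReLU sampling range is only an assumed example, and the PReLU slope would in practice be learned rather than passed in.

import numpy as np

def relu(z):                                   # Eq. (9)
    return np.maximum(z, 0.0)

def leaky_relu(z, lam=0.01):                   # Eq. (10): lam is predefined
    return np.maximum(z, 0.0) + lam * np.minimum(z, 0.0)

def prelu(z, lam_k):                           # Eq. (11): lam_k is learned per channel
    return np.maximum(z, 0.0) + lam_k * np.minimum(z, 0.0)

def rrelu(z, rng, lo=0.125, hi=0.333, train=True):
    # Eq. (12): slope sampled uniformly during training, fixed to its mean at test time.
    lam = rng.uniform(lo, hi) if train else 0.5 * (lo + hi)
    return np.maximum(z, 0.0) + lam * np.minimum(z, 0.0)

z = np.linspace(-2.0, 2.0, 5)
rng = np.random.default_rng(0)
print(relu(z))
print(leaky_relu(z))
print(prelu(z, lam_k=0.25))
print(rrelu(z, rng))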
Like ReLU, LReLU, PReLU and RReLU, ELU avoids the vanishing gradient problem by setting the positive part to identity. In contrast to ReLU, ELU has a negative part which is bene\ufb01cial for fast learning. Compared with LReLU, PReLU, and RReLU which also have unsaturated negative parts, ELU employs a saturation function as negative part. As the saturation function will decrease the variation of the units if deactivated, it makes ELU more robust to noise. The function of ELU is de\ufb01ned as: ai,j,k = max(zi,j,k, 0) + min(\u03bb(ezi,j,k \u22121), 0) (13) where \u03bb is a prede\ufb01ned parameter for controlling the value to which an ELU saturate for negative inputs. 3.3.6. Maxout Maxout [59] is an alternative nonlinear function that takes the maximum response across multiple channels at each spatial position. As stated in [59], the maxout function is de\ufb01ned as: ai,j,k = maxk\u2208[1,K] zi,j,k, where zi,j,k is the k-th channel of the feature map. It is worth noting that maxout enjoys all the bene\ufb01ts of ReLU since ReLU is actually a special case of maxout, e.g., max(wT 1 x + b1, wT 2 x + b2) where w1 is a zero vector and b1 is zero. Besides, maxout is particularly well suited for training with Dropout. 3.3.7. Probout Springenberg et al. [60] propose a probabilistic variant of maxout called probout. They replace the maximum operation in maxout with a probabilistic sampling procedure. Speci\ufb01cally, they \ufb01rst de\ufb01ne a probability for each of the k linear units as: pi = e\u03bbzi/ Pk j=1 e\u03bbzj, where \u03bb is a hyperparameter for controlling the variance of the distribution. Then, they pick one of the k units according to a multinomial distribution {p1, ..., pk} and set the activation value to be the value of the picked unit. In order to incorporate with dropout, they actually re-de\ufb01ne the probabilities as: \u02c6 p0 = 0.5, \u02c6 pi = e\u03bbzi/(2. k X j=1 e\u03bbzj) (14) The activation function is then sampled as: ai = ( 0 if i = 0 zi else (15) where i \u223cmultinomial{\u02c6 p0, ..., \u02c6 pk}. Probout can achieve the balance between preserving the desirable properties of maxout units and improving their invariance properties. However, in testing process, probout is computationally expensive than maxout due to the additional probability calculations. 9 \f3.4. Loss Function It is important to choose an appropriate loss function for a speci\ufb01c task. We introduce four representative ones in this subsection: Hinge loss, Softmax loss, Contrastive loss, Triplet loss. 3.4.1. Hinge Loss Hinge loss is usually used to train large margin classi\ufb01ers such as Support Vector Machine (SVM). The hinge loss function of a multi-class SVM is de\ufb01ned in Eq.(16), where w is the weight vector of classi\ufb01er and y y y(i) \u2208[1, . . . , K] indicates its correct class label among the K classes. Lhinge = 1 N N X i=1 K X j=1 [max(0, 1 \u2212\u03b4(y y y(i), j)wT xi)]p (16) where \u03b4(y y y(i), j) = 1 if y y y(i) = j, otherwise \u03b4(y y y(i), j) = \u22121. Note that if p = 1, Eq.(16) is Hinge-Loss (L1Loss), while if p = 2, it is the Squared Hinge-Loss (L2-Loss) [61]. The L2-Loss is di\ufb00erentiable and imposes a larger loss for point which violates the margin comparing with L1-Loss. [19] investigates and compares the performance of softmax with L2-SVMs in deep networks. The results on MNIST [62] demonstrate the superiority of L2-SVM over softmax. 3.4.2. 
Softmax Loss Softmax loss is a commonly used loss function which is essentially a combination of multinomial logistic loss and softmax. Given a training set {(x x x(i),y y y(i)); i \u22081, . . . , N,y y y(i) \u22081, . . . , K}, where x x x(i) is the i-th input image patch, and y y y(i) is its target class label among the K classes. The prediction of j-th class for i-th input is transformed with the softmax function: p(i) j = ez(i) j /PK l=1 ez(i) l , where z(i) j is usually the activations of a densely connected layer, so z(i) j can be written as z(i) j = wT j a(i) + bj. Softmax turns the predictions into non-negative values and normalizes them to get a probability distribution over classes. Such probabilistic predictions are used to compute the multinomial logistic loss, i.e., the softmax loss, as follows: Lsoftmax = \u22121 N [ N X i=1 K X j=1 1{y y y(i) = j}logp(i) j ] (17) Recently, Liu et al. [63] propose the Large-Margin Softmax (L-Softmax) loss, which introduces an angular margin to the angle \u03b8j between input feature vector a(i) and the j-th column wj of weight matrix. The prediction p(i) j for L-Softmax loss is de\ufb01ned as: p(i) j = e\u2225wj\u2225\u2225a(i)\u2225\u03c8(\u03b8j) e\u2225wj\u2225\u2225a(i)\u2225\u03c8(\u03b8j) + P l\u0338=j e\u2225wl\u2225\u2225a(i)\u2225cos(\u03b8l) (18) \u03c8(\u03b8j) = (\u22121)k cos(m\u03b8j) \u22122k, \u03b8j \u2208[k\u03c0/m, (k + 1)\u03c0/m] (19) where k \u2208[0, m \u22121] is an integer, m controls the margin among classes. When m = 1, the L-Softmax loss reduces to the original softmax loss. By adjusting the margin m between classes, a relatively di\ufb03cult learning objective will be de\ufb01ned, which can e\ufb00ectively avoid over\ufb01tting. They verify the e\ufb00ective of LSoftmax on MNIST, CIFAR-10, and CIFAR-100, and \ufb01nd that the L-Softmax loss performs better than the original softmax. 3.4.3. Contrastive Loss Contrastive loss is commonly used to train Siamese network [64\u201367] which is a weakly-supervised scheme for learning a similarity measure from pairs of data instances labelled as matching or non-matching. Given the i-th pair of data (x x x(i) \u03b1 ,x x x(i) \u03b2 ), let (z z z(i,l) \u03b1 ,z z z(i,l) \u03b2 ) denotes its corresponding output pair of the l-th (l \u2208 [1, \u00b7 \u00b7 \u00b7 , L]) layer. In [65] and [66], they pass the image pairs through two identical CNNs, and feed the 10 \ffeature vectors of the \ufb01nal layer to the cost module. The contrastive loss function that they use for training samples is: Lcontrastive = 1 2N N X i=1 (y)d(i,L) + (1 \u2212y) max(m \u2212d(i,L), 0) (20) where d(i,L) = ||z z z(i,L) \u03b1 \u2212z z z(i,L) \u03b2 ||2 2, and m is a margin parameter a\ufb00ecting non-matching pairs. If (x x x(i) \u03b1 ,x x x(i) \u03b2 ) is a matching pair, then y = 1. Otherwise, y = 0. Lin et al. [68] \ufb01nd that such a single margin loss function causes a dramatic drop in retrieval results when \ufb01ne-tuning the network on all pairs. Meanwhile, the performance is better retained when \ufb01ne-tuning only on non-matching pairs. This indicates that the handling of matching pairs in the loss function is responsible for the drop. While the recall rate on non-matching pairs alone is stable, handling the matching pairs is the main reason for the drop in recall rate. To solve this problem, they propose a double margin loss function which adds another margin parameter to a\ufb00ect the matching pairs. 
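Stepping back to the classification losses of Sections 3.4.1-3.4.2, both can be written in a few lines of NumPy given a matrix of per-class scores; the sketch below interprets the score of class j for example i as w_j^T x_i, which is our own simplification of the interface in Eqs. (16)-(17).

import numpy as np

def hinge_loss(z, y, p=1):
    # One-vs-all hinge loss in the spirit of Eq. (16): delta is +1 for the true
    # class and -1 otherwise; z: (N, K) scores, y: (N,) integer labels.
    N, K = z.shape
    delta = -np.ones((N, K))
    delta[np.arange(N), y] = 1.0
    return np.mean(np.sum(np.maximum(0.0, 1.0 - delta * z) ** p, axis=1))

def softmax_loss(z, y):
    # Softmax loss, Eq. (17), computed with the usual max-shift for numerical stability.
    z = z - z.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(y)), y].mean()

z = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 3.0]])
y = np.array([0, 2])
print(hinge_loss(z, y), softmax_loss(z, y))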
Instead of calculating the loss of the \ufb01nal layer, their contrastive loss is de\ufb01ned for every layer l and the backpropagations for the loss of individual layers are performed at the same time. It is de\ufb01ned as: Ld\u2212contrastive = 1 2N N X i=1 L X l=1 (y)max(d(i,l) \u2212m1, 0) + (1 \u2212y)max(m2 \u2212d(i,l), 0) (21) In practice, they \ufb01nd that these two margin parameters can set to be equal (m1 = m2 = m) and be learned from the distribution of the sampled matching and non-matching image pairs. 3.4.4. Triplet Loss Triplet loss [69] considers three instances per loss function. The triplet units (x x x(i) a ,x x x(i) p ,x x x(i) n ) usually contain an anchor instance x x x(i) a as well as a positive instance x x x(i) p from the same class of x x x(i) a and a negative instance x x x(i) n . Let (z z z(i) a ,z z z(i) p ,z z z(i) n ) denote the feature representation of the triplet units, the triplet loss is de\ufb01ned as: Ltriplet = 1 N N X i=1 max{d(i) (a,p) \u2212d(i) (a,n) + m, 0} (22) where d(i) (a,p) = \u2225z z z(i) a \u2212z z z(i) p \u22252 2 and d(i) (a,n) = \u2225z z z(i) a \u2212z z z(i) n \u22252 2. The objective of triplet loss is to minimize the distance between the anchor and positive, and maximize the distance between the negative and the anchor. However, randomly selected anchor samples may judge falsely in some special cases. For example, when d(i) (n,p) < d(i) (a,p) < d(i) (a,n), the triplet loss may still be zero. Thus the triplet units will be neglected during the backward propagation. Liu et al. [70] propose the Coupled Clusters (CC) loss to solve this problem. Instead of using the triplet units, the coupled clusters loss function is de\ufb01ned over the positive set and the negative set. By replacing the randomly picked anchor with the cluster center, it makes the samples in the positive set cluster together and samples in the negative set stay relatively far away, which is more reliable than the original triplet loss. The coupled clusters loss function is de\ufb01ned as: Lcc = 1 N p N p X i=1 1 2 max{\u2225z z z(i) p \u2212cp\u22252 2 \u2212\u2225z z z(\u2217) n \u2212cp\u22252 2 + m, 0} (23) where N p is the number of samples per set, z z z(\u2217) n is the feature representation of x x x(\u2217) n which is the nearest negative sample to the estimated center point cp = (PN p i z z z(i) p )/N p. Triplet loss and its variants have been widely used in various tasks, including re-identi\ufb01cation [71], veri\ufb01cation [70], and image retrieval [72]. 3.4.5. Kullback-Leibler Divergence Kullback-Leibler Divergence (KLD) is a non-symmetric measure of the di\ufb00erence between two probability distributions p(x) and q(x) over the same discrete variable x (see Figure 7(a)). The KLD from q(x) to p(x) 11 \f(a) KL divergence (b) Autoencoders variants and GAN variants Figure 7: The illustration of (a) the KullbackLeibler divergence for two normal Gaussian distributions, (b) AE variants (AE, VAE [73], DVAE [74], and CVAE [75]) and GAN variants (GAN [76], CGAN [77]). is de\ufb01ned as: DKL(p||q) = \u2212H(p(x)) \u2212Ep[log q(x)] (24) = X x p(x) log p(x) \u2212 X x p(x) log q(x) = X x p(x) log p(x) q(x) (25) where H(p(x)) is the Shannon entropy of p(x), Ep(log q(x)) is the cross entropy between p(x) and q(x). KLD has been widely used as a measure of information loss in the objective function of various Autoencoders (AEs) [78\u201380]. Famous variants of AE include sparse AE [81, 82], Denoising AE [78] and Variational AE (VAE) [73]. 
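Returning to the margin-based embedding losses, Eq. (20) and Eq. (22) translate almost literally into NumPy; the batch of embeddings below is synthetic and the margin values are arbitrary examples, so this is only an illustrative sketch rather than the training code of [65, 69].

import numpy as np

def contrastive_loss(za, zb, y, m=1.0):
    # Single-margin contrastive loss, Eq. (20): y = 1 for matching pairs, 0 otherwise.
    d = np.sum((za - zb) ** 2, axis=1)
    return 0.5 * np.mean(y * d + (1 - y) * np.maximum(m - d, 0.0))

def triplet_loss(za, zp, zn, m=0.2):
    # Triplet loss, Eq. (22): the anchor-positive distance should be smaller than
    # the anchor-negative distance by at least the margin m.
    d_ap = np.sum((za - zp) ** 2, axis=1)
    d_an = np.sum((za - zn) ** 2, axis=1)
    return np.mean(np.maximum(d_ap - d_an + m, 0.0))

rng = np.random.default_rng(0)
za, zp, zn = rng.normal(size=(3, 4, 8))   # a batch of 4 triplets, embedding dim 8
y = np.array([1, 0, 1, 0])
print(contrastive_loss(za, zp, y), triplet_loss(za, zp, zn))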
VAE interprets the latent representation through Bayesian inference. It consists of two parts: an encoder which \u201ccompresses\u201d the data sample x to the latent representation z \u223cq\u03c6(z|x); and a decoder, which maps such representation back to data space \u02dc x \u223cp\u03b8(x|z) which as close to the input as possible, where \u03c6 and \u03b8 are the parameters of encoder and decoder respectively. As proposed in [73], VAEs try to maximize the variational lower bound of the log-likelihood of log p(x|\u03c6, \u03b8): Lvae = Ez\u223cq\u03c6(z|x)[log p\u03b8(x|z)] \u2212DKL(q\u03c6(z|x)\u2225p(z)) (26) where the \ufb01rst term is the reconstruction cost, and the KLD term enforces prior p(z) on the proposal distribution q\u03c6(z|x). Usually p(z) is the standard normal distribution [73], discrete distribution [75], or some distributions with geometric interpretation [83]. Following the original VAE, many variants have been proposed [74, 75, 84]. Conditional VAE (CVAE) [75, 84] generates samples from the conditional distribution with \u02dc x \u223cp\u03b8(x|y, z). Denoising VAE (DVAE) [74] recovers the original input x from the corrupted input \u02c6 x [78]. Jensen-Shannon Divergence (JSD) is a symmetrical form of KLD. It measures the similarity between p(x) and q(x): DJS(p||q) = 1 2DKL \u0012 p(x) \r \r \r \r p(x) + q(x) 2 \u0013 + 1 2DKL \u0012 q(x) \r \r \r \r p(x) + q(x) 2 \u0013 (27) By minimizing the JSD, we can make the two distributions p(x) and q(x) as close as possible. JSD has been successfully used in the Generative Adversarial Networks (GANs) [76, 85, 86]. In contrast to VAEs that model the relationship between x and z directly, GANs are explicitly set up to optimize for generative tasks [85]. The objective of GANs is to \ufb01nd the discriminator D that gives the best discrimination between the real and generated data, and simultaneously encourage the generator G to \ufb01t the real data distribution. The min-max game played between the discriminator D and the generator G is formalized by the following objective function: min G max D Lgan(D, G) = Ex\u223cp(x)[log D(x)] + Ez\u223cq(z)[log(1 \u2212D(G(z)))] (28) 12 \fThe original GAN paper [76] shows that for a \ufb01xed generator G\u2217, we have the optimal discriminator D\u2217 G(x) = p(x) p(x)+q(x). Then the Equation 28 is equivalent to minimize the JSD between p(x) and q(x). If G and D have enough capacity, the distribution q(x) converges to p(x). Like Conditional VAE, the Conditional GAN (CGAN) [77] also receives an additional information y as input to generate samples conditioning on y. In practice, GANs are notoriously unstable to train [87, 88]. 3.5. Regularization Over\ufb01tting is an unneglectable problem in deep CNNs, which can be e\ufb00ectively reduced by regularization. In the following subsection, we introduce some e\ufb00ective regularization techniques: \u2113p-norm, Dropout, and DropConnect. 3.5.1. \u2113p-norm Regularization Regularization modi\ufb01es the objective function by adding additional terms that penalize the model complexity. Formally, if the loss function is L(\u03b8, x, y), then the regularized loss will be: E(\u03b8, x, y) = L(\u03b8, x, y) + \u03bbR(\u03b8) (29) where R(\u03b8) is the regularization term, and \u03bb is the regularization strength. \u2113p-norm regularization function is usually employed as R(\u03b8) = P j \u2225\u03b8j\u2225p p. When p \u22651, the \u2113p-norm is convex, which makes the optimization easier and renders this function attractive [17]. 
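As a quick numerical illustration of the divergences of Eqs. (25) and (27) introduced above, the sketch below evaluates both for small discrete distributions; the epsilon smoothing is only for numerical safety in this toy example.

import numpy as np

def kld(p, q, eps=1e-12):
    # Discrete Kullback-Leibler divergence D_KL(p || q), Eq. (25).
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    return float(np.sum(p * np.log(p / q)))

def jsd(p, q):
    # Jensen-Shannon divergence, Eq. (27): symmetrized KLD against the mixture.
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * kld(p, m) + 0.5 * kld(q, m)

p, q = np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.3, 0.6])
print(kld(p, q), kld(q, p))   # asymmetric
print(jsd(p, q), jsd(q, p))   # symmetric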
For p = 2, the \u21132-norm regularization is commonly referred to as weight decay. A more principled alternative of \u21132-norm regularization is Tikhonov regularization [89], which rewards invariance to noise in the inputs. When p < 1, the \u2113p-norm regularization more exploits the sparsity e\ufb00ect of the weights but conducts to non-convex function. 3.5.2. Dropout Dropout is \ufb01rst introduced by Hinton et al. [17], and it has been proven to be very e\ufb00ective in reducing over\ufb01tting. In [17], they apply Dropout to fully-connected layers. The output of Dropout is y = r\u2217a(WT x), where x = [x1, x2, . . . , xn]T is the input to fully-connected layer, W \u2208Rn\u00d7d is a weight matrix, and r is a binary vector of size d whose elements are independently drawn from a Bernoulli distribution with parameter p, i.e. ri \u223cBernoulli(p). Dropout can prevent the network from becoming too dependent on any one (or any small combination) of neurons, and can force the network to be accurate even in the absence of certain information. Several methods have been proposed to improve Dropout. Wang et al. [90] propose a fast Dropout method which can perform fast Dropout training by sampling from or integrating a Gaussian approximation. Ba et al. [91] propose an adaptive Dropout method, where the Dropout probability for each hidden variable is computed using a binary belief network that shares parameters with the deep network. In [92], they \ufb01nd that applying standard Dropout before 1 \u00d7 1 convolutional layer generally increases training time but does not prevent over\ufb01tting. Therefore, they propose a new Dropout method called SpatialDropout, which extends the Dropout value across the entire feature map. This new Dropout method works well especially when the training data size is small. 3.5.3. DropConnect DropConnect [45] takes the idea of Dropout a step further. Instead of randomly setting the outputs of neurons to zero, DropConnect randomly sets the elements of weight matrix W to zero. The output of DropConnect is given by y = a((R \u2217W)x), where Rij \u223cBernoulli(p). Additionally, the biases are also masked out during the training process. Figure 8 illustrates the di\ufb00erences among No-Drop, Dropout and DropConnect networks. 3.6. Optimization In this subsection, we discuss some key techniques for optimizing CNNs. 13 \f(a) No-Drop (b) DropOut (c) DropConnect Figure 8: The illustration of No-Drop network, DropOut network and DropConnect network. 3.6.1. Data Augmentation Deep CNNs are particularly dependent on the availability of large quantities of training data. An elegant solution to alleviate the relative scarcity of the data compared to the number of parameters involved in CNNs is data augmentation [8]. Data augmentation consists in transforming the available data into new data without altering their natures. Popular augmentation methods include simple geometric transformations such as sampling [8], mirroring [93], rotating [94], shifting [95], and various photometric transformations [96]. Paulin et al. [97] propose a greedy strategy that selects the best transformation from a set of candidate transformations. However, their strategy involves a large number of model re-training steps, which can be computationally expensive when the number of candidate transformations is large. Hauberg et al. [98] propose an elegant way for data augmentation by randomly generating di\ufb00eomorphisms. Xie et al. [99] and Xu et al. 
[100] o\ufb00er additional means of collecting images from the Internet to improve learning in \ufb01ne-grained recognition tasks. 3.6.2. Weight Initialization Deep CNN has a huge amount of parameters and its loss function is non-convex [101], which makes it very di\ufb03cult to train. To achieve a fast convergence in training and avoid the vanishing gradient problem, a proper network initialization is one of the most important prerequisites [102, 103]. The bias parameters can be initialized to zero, while the weight parameters should be initialized carefully to break the symmetry among hidden units of the same layer. If the network is not properly initialized, e.g., each layer scales its input by k, the \ufb01nal output will scale the original input by kL where L is the number of layers. In this case, the value of k > 1 leads to extremely large values of output layers while the value of k < 1 leads a diminishing output value and gradients. Krizhevsky et al. [8] initialize the weights of their network from a zero-mean Gaussian distribution with standard deviation 0.01 and set the bias terms of the second, fourth and \ufb01fth convolutional layers as well as all the fully-connected layers to constant one. Another famous random initialization method is \u201cXavier\u201d, which is proposed in [104]. They pick the weights from a Gaussian distribution with zero mean and a variance of 2/(nin +nout), where nin is the number of neurons feeding into it, and nout is the number of neurons the result is fed to. Thus \u201cXavier\u201d can automatically determine the scale of initialization based on the number of input and output neurons, and keep the signal in a reasonable range of values through many layers. One of its variants in Ca\ufb00e 1 uses the nin-only variant, which makes it much easier to implement. \u201cXavier\u201d initialization method is later extended by [56] to account for the rectifying nonlinearities, where they derive a robust initialization method that particularly considers the ReLU nonlinearity. Their method , allows for the training of extremely deep models (e.g., [10]) to converge while the \u201cXavier\u201d method [104] cannot. 1https://github.com/BVLC/caffe 14 \fIndependently, Saxe et al. [105] show that orthonormal matrix initialization works much better for linear networks than Gaussian initialization, and it also works for networks with nonlinearities. Mishkin et al. [102] extend [105] to an iterative procedure. Speci\ufb01cally, it proposes a layer-sequential unit-variance process scheme which can be viewed as an orthonormal initialization combined with batch normalization (see Section 3.6.4) performed only on the \ufb01rst mini-batch. It is similar to batch normalization as both of them take a unit variance normalization procedure. Di\ufb00erently, it uses ortho-normalization to initialize the weights which helps to e\ufb03ciently de-correlate layer activities. Such an initialization technique has been applied to [106, 107] with a remarkable increase in performance. 3.6.3. Stochastic Gradient Descent The backpropagation algorithm is the standard training method which uses gradient descent to update the parameters. Many gradient descent optimization algorithms have been proposed [108, 109]. 
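Before the update rules themselves, the sketch below summarizes the two Gaussian initialization schemes just described and the Dropout/DropConnect masks of Section 3.5 in NumPy; the ReLU choice for a(.) and the transposes used to keep shapes consistent are our own assumptions made only for this toy example.

import numpy as np

def xavier_init(n_in, n_out, rng):
    # "Xavier" [104]: zero-mean Gaussian with variance 2 / (n_in + n_out).
    return rng.normal(0.0, np.sqrt(2.0 / (n_in + n_out)), size=(n_in, n_out))

def he_init(n_in, n_out, rng):
    # He et al. [56]: variance 2 / n_in, derived for ReLU nonlinearities.
    return rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))

def dropout(x, W, keep, rng):
    # Dropout on the outputs: y = r * a(W^T x), r_i ~ Bernoulli(keep).
    a = np.maximum(W.T @ x, 0.0)
    return rng.binomial(1, keep, size=a.shape) * a

def dropconnect(x, W, keep, rng):
    # DropConnect on the weights: y = a((R * W)^T x), R_ij ~ Bernoulli(keep).
    R = rng.binomial(1, keep, size=W.shape)
    return np.maximum((R * W).T @ x, 0.0)

rng = np.random.default_rng(0)
print(xavier_init(512, 256, rng).std(), he_init(512, 256, rng).std())  # ~0.051, ~0.0625
W = xavier_init(5, 3, rng)
x = rng.normal(size=5)
print(dropout(x, W, keep=0.5, rng=rng), dropconnect(x, W, keep=0.5, rng=rng))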
Standard gradient descent algorithm updates the parameters \u03b8 \u03b8 \u03b8 of the objective L(\u03b8 \u03b8 \u03b8) as \u03b8 \u03b8 \u03b8t+1 = \u03b8 \u03b8 \u03b8t \u2212\u03b7\u2207\u03b8 \u03b8 \u03b8E[L(\u03b8 \u03b8 \u03b8t)], where E[L(\u03b8 \u03b8 \u03b8t)] is the expectation of L(\u03b8 \u03b8 \u03b8) over the full training set and \u03b7 is the learning rate. Instead of computing E[L(\u03b8 \u03b8 \u03b8t)], Stochastic Gradient Descent (SGD) [21] estimates the gradients on the basis of a single randomly picked example (x x x(t),y y y(t)) from the training set: \u03b8 \u03b8 \u03b8t+1 = \u03b8 \u03b8 \u03b8t \u2212\u03b7t\u2207\u03b8 \u03b8 \u03b8L(\u03b8 \u03b8 \u03b8t;x x x(t),y y y(t)) (30) In practice, each parameter update in SGD is computed with respect to a mini-batch as opposed to a single example. This could help to reduce the variance in the parameter update and can lead to more stable convergence. The convergence speed is controlled by the learning rate \u03b7t. However, mini-batch SGD does not guarantee good convergence, and there are still some challenges that need to be addressed. Firstly, it is not easy to choose a proper learning rate. One common method is to use a constant learning rate that gives stable convergence in the initial stage, and then reduce the learning rate as the convergence slows down. Additionally, learning rate schedules [110, 111] have been proposed to adjust the learning rate during the training. To make the current gradient update depend on historical batches and accelerate training, momentum [108] is proposed to accumulate a velocity vector in the relevant direction. The classical momentum update is given by: v v vt+1 = \u03b3v v vt \u2212\u03b7t\u2207\u03b8 \u03b8 \u03b8L(\u03b8 \u03b8 \u03b8t;x x x(t),y y y(t)) (31) \u03b8 \u03b8 \u03b8t+1 = \u03b8 \u03b8 \u03b8t + v v vt+1 (32) where v v vt+1 is the current velocity vector, \u03b3 is the momentum term which is usually set to 0.9. Nesterov momentum [103] is another way of using momentum in gradient descent optimization: v v vt+1 = \u03b3v v vt \u2212\u03b7t\u2207\u03b8 \u03b8 \u03b8L(\u03b8 \u03b8 \u03b8t + \u03b3v v vt;x x x(t),y y y(t)) (33) Compared with the classical momentum [108] which \ufb01rst computes the current gradient and then moves in the direction of the updated accumulated gradient, Nesterov momentum \ufb01rst moves in the direction of the previous accumulated gradient \u03b3v v vt, calculates the gradient and then makes a gradient update. This anticipatory update prevents the optimization from moving too fast and achieves better performance [112]. Parallelized SGD methods [22, 113] improve SGD to be suitable for parallel, large-scale machine learning. Unlike standard (synchronous) SGD in which the training will be delayed if one of the machines is slow, these parallelized methods use the asynchronous mechanism so that no other optimizations will be delayed except for the one on the slowest machine. Je\ufb00rey Dean et al. [114] use another asynchronous SGD procedure called Downpour SGD to speed up the large-scale distributed training process on clusters with many CPUs. There are also some works that use asynchronous SGD with multiple GPUs. Paine et al. [115] basically combine asynchronous SGD with GPUs to accelerate the training time by several times compared to training on a single machine. Zhuang et al. 
[116] also use multiple GPUs to asynchronously calculate gradients and update the global model parameters, which achieves 3.2 times of speedup on 4 GPUs compared to training on a single GPU. 15 \fNote that SGD methods may not result in convergence. The training process can be terminated when the performance stops improving. A popular remedy to over-training is to use early stopping [117] in which optimization is halted based on the performance on a validation set during training. To control the duration of the training process, various stopping criteria can be considered. For example, the training might be performed for a \ufb01xed number of epochs, or until a prede\ufb01ned training error is reached [118]. The stopping strategy should be done carefully [119], a proper stopping strategy should let the training process continue as long as the network generalization ability is improved and the over\ufb01tting is avoided. 3.6.4. Batch Normalization Data normalization is usually the \ufb01rst step of data preprocessing. Global data normalization transforms all the data to have zero-mean and unit variance. However, as the data \ufb02ows through a deep network, the distribution of input to internal layers will be changed, which will lose the learning capacity and accuracy of the network. Io\ufb00e et al. [120] propose an e\ufb03cient method called Batch Normalization (BN) to partially alleviate this phenomenon. It accomplishes the so-called covariate shift problem by a normalization step that \ufb01xes the means and variances of layer inputs where the estimations of mean and variance are computed after each mini-batch rather than the entire training set. Suppose the layer to normalize has a d dimensional input, i.e., x = [x1, x2, ..., xd]T . We \ufb01rst normalize the k-th dimension as follows: \u02c6 xk = (xk \u2212\u00b5B)/ q \u03b42 B + \u03f5 (34) where \u00b5B and \u03b42 B are the mean and variance of mini-batch respectively, and \u03f5 is a constant value. To enhance the representation ability, the normalized input \u02c6 xk is further transformed into: yk = BN\u03b3,\u03b2(xk) = \u03b3\u02c6 xk + \u03b2 (35) where \u03b3 and \u03b2 are learned parameters. Batch normalization has many advantages compared with global data normalization. Firstly, it reduces internal covariant shift. Secondly, BN reduces the dependence of gradients on the scale of the parameters or of their initial values, which gives a bene\ufb01cial e\ufb00ect on the gradient \ufb02ow through the network. This enables the use of higher learning rate without the risk of divergence. Furthermore, BN regularizes the model, and thus reduces the need for Dropout. Finally, BN makes it possible to use saturating nonlinear activation functions without getting stuck in the saturated model. 3.6.5. Shortcut Connections As mentioned above, the vanishing gradient problem of deep CNNs can be alleviated by normalized initialization [8] and BN [120]. Although these methods successfully prevent deep neural networks from over\ufb01tting, they also introduce di\ufb03culties in optimizing the networks, resulting in worse performances than shallower networks [56, 104, 105]. Such an optimization problem su\ufb00ered by deeper CNNs is regarded as the degradation problem. Inspired by Long Short Term Memory (LSTM) networks [121] which use gate functions to determine how much of a neuron\u2019s activation value to transform or just pass through. Srivastava et al. 
[122] propose highway networks which enable the optimization of networks with virtually arbitrary depth. The output of their network is given by: xl+1 = \u03c6l+1(xl, WH) \u00b7 \u03c4l+1(xl, WT ) + xl \u00b7 (1 \u2212\u03c4l+1(xl, WT )) (36) where xl and xl+1 correspond to the input and output of lth highway block, \u03c4(\u00b7) is the transform gate and \u03c6(\u00b7) is usually an a\ufb03ne transformation followed by a non-linear activation function (in general it may take other forms). This gating mechanism forces the layer\u2019s inputs and outputs to be of the same size and allows highway networks with tens or hundreds of layers to be trained e\ufb03ciently. The outputs of gates vary signi\ufb01cantly with the input examples, demonstrating that the network does not just learn a \ufb01xed structure, but dynamically routes data based on speci\ufb01c examples. Independently, Residual Nets (ResNets) [12] share the same core idea that works in LSTM units. Instead of employing learnable weights for neuron-speci\ufb01c gating, the shortcut connections in ResNets are not gated 16 \fand untransformed input is directly propagated to the output which brings fewer parameters. The output of ResNets can be represented as follows: xl+1 = xl + fl+1(xl, WF ) (37) where fl is the weight layer, it can be a composite function of operations such as Convolution, BN, ReLU, or Pooling. With residual block, activation of any deeper unit can be written as the sum of the activation of a shallower unit and a residual function. This also implies that gradients can be directly propagated to shallower units, which makes deep ResNets much easier to be optimized than the original mapping function and more e\ufb03cient to train very deep nets. This is in contrast to usual feedforward networks, where gradients are essentially a series of matrix-vector products, that may vanish as networks grow deeper. After the original ResNets, He et al. [123] follow up with another preactivation variant of ResNets, where they conduct a set of experiments to show that identity shortcut connections are the easiest for networks to learn. They also \ufb01nd that bringing BN forward performs considerably better than using BN after addition. In their comparisons, the residual net with BN + ReLU pre-activation gets higher accuracies than their previous ResNets [12]. Inspired by [123], Shen et al. [124] introduce a weighting factor for the output from the convolutional layer, which gradually introduces the trainable layers. The latest Inception-v4 paper [42] also reports that training is accelerated and performance is improved by using identity skip connections across Inception modules. The original ResNets and preactivation ResNets are very deep but also very thin. By contrast, Wide ResNets [125] proposes to decrease the depth and increase the width, which achieves impressive results on CIFAR-10, CIFAR-100, and SVHN. However, their claims are not validated on the large-scale image classi\ufb01cation task on Imagenet dataset2. Stochastic Depth ResNets randomly drop a subset of layers and bypass them with the identity mapping for every mini-batch. By combining Stochastic Depth ResNets and Dropout, Singh et al. [126] generalize dropout and networks with stochastic depth, which can be viewed as an ensemble of ResNets, Dropout ResNets, and Stochastic Depth ResNets. 
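The building blocks discussed in the last few subsections, the batch normalization step of Eqs. (34)-(35), the gated highway layer of Eq. (36), and the ungated residual unit of Eq. (37), can be contrasted in a small fully-connected setting as below; the tanh/ReLU choices for the transform functions and the layer sizes are assumptions made only for this sketch.

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Eqs. (34)-(35): normalize each dimension over the mini-batch, then scale and shift.
    mu, var = x.mean(axis=0), x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_block(x, W_H, W_T):
    # Eq. (36): the transform gate tau mixes the transformed and untransformed input.
    h = np.tanh(W_H @ x)          # phi(x, W_H)
    t = sigmoid(W_T @ x)          # tau(x, W_T)
    return h * t + x * (1.0 - t)

def residual_block(x, W):
    # Eq. (37): the untransformed input is added to the residual function f(x, W).
    return x + np.maximum(W @ x, 0.0)

rng = np.random.default_rng(0)
batch = 3.0 + 2.0 * rng.normal(size=(32, 8))
normed = batch_norm(batch, gamma=np.ones(8), beta=np.zeros(8))
print(normed.mean(axis=0).round(6), normed.std(axis=0).round(3))  # ~0 mean, ~1 std per dim
x = normed[0]
W_H, W_T, W = (0.1 * rng.normal(size=(8, 8)) for _ in range(3))
print(highway_block(x, W_H, W_T).shape, residual_block(x, W).shape)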
The ResNets in ResNets (RiR) paper [127] describes an architecture that merges classical convolutional networks and residual networks, where each block of RiR contains residual units and non-residual blocks. The RiR can learn how many convolutional layers it should use per residual block. ResNets of ResNets (RoR) [128] is a modi\ufb01cation to the ResNets architecture which proposes to use multi-level shortcut connections as opposed to single-level shortcut connections in the prior work on ResNets [12]. DenseNet [129] can be seen as an architecture takes the insights of the skip connection to the extreme, in which the output of a layer is connected to all the subsequent layer in the module. In all of the ResNets [12, 123], Highway [122] and Inception networks [42], we can see a pretty clear trend of using shortcut connections to help train very deep networks. 4. Fast Processing of CNNs With the increasing challenges in the computer vision and machine learning tasks, the models of deep neural networks get more and more complex. These powerful models require more data for training in order to avoid over\ufb01tting. Meanwhile, the big training data also brings new challenges such as how to train the networks in a feasible amount of time. In this section, we introduce some fast processing methods of CNNs. 4.1. FFT Mathieu et al. [49] carry out the convolutional operation in the Fourier domain with FFTs. Using FFT-based methods has many advantages. Firstly, the Fourier transformations of \ufb01lters can be reused as the \ufb01lters are convolved with multiple images in a mini-batch. Secondly, the Fourier transformations of the output gradients can be reused when backpropagating gradients to both \ufb01lters and input images. Finally, the summation over input channels can be performed in the Fourier domain, so that inverse Fourier transformations are only required once per output channel per image. There have already been some GPUbased libraries developed to speed up the training and testing process, such as cuDNN [130] and fb\ufb00t [131]. 2http://www.image-net.org 17 \fHowever, using FFT to perform convolution needs additional memory to store the feature maps in the Fourier domain, since the \ufb01lters must be padded to be the same size as the inputs. This is especially costly when the striding parameter is larger than 1, which is common in many state-of-art networks, such as the early layers in [132] and [10]. While FFT can achieve faster training and testing process, the rising prominence of small size convolutional \ufb01lters have become an important component in CNNs such as ResNet [12] and GoogleNet [10], which makes a new approach specialized for small \ufb01lter sizes: Winograd\u2019s minimal \ufb01ltering algorithms [133]. The insight of Winograd is like FFT, and the Winograd convolutions can be reduced across channels in transform space before applying the inverse transform and thus makes the inference more e\ufb03cient. 4.2. Structured Transforms Low-rank matrix factorization has been exploited in a variety of contexts to improve the optimization problems. Given an m\u00d7n matrix C of rank r, there exists a factorization C = AB where A is an m\u00d7r full column rank matrix and B is an r \u00d7n full row rank matrix. Thus, we can replace C by A and B. To reduce the parameters of C by a fraction p, it is essential to ensure that mr + rn < pmn, i.e., the rank of C should satisfy that r < pmn/(m + n). 
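As a rough illustration of this parameter saving (a toy sketch of our own, not the exact procedure of any cited work), a dense weight matrix can be replaced by truncated-SVD factors:

```python
import numpy as np

def low_rank_factorize(C, r):
    """Approximate an m x n weight matrix C by A (m x r) and B (r x n)
    via truncated SVD, so the layer stores r*(m+n) instead of m*n values."""
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    A = U[:, :r] * s[:r]   # m x r, singular values folded into A
    B = Vt[:r, :]          # r x n
    return A, B

m, n, r = 512, 1024, 32
C = np.random.randn(m, n)
A, B = low_rank_factorize(C, r)
x = np.random.randn(n)
y_full = C @ x         # original layer: O(m*n) multiply-adds
y_low  = A @ (B @ x)   # factorized layer: O(r*(m+n)) multiply-adds
```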
By applying this factorization, the space complexity reduces from $O(mn)$ to $O(r(m+n))$, and the time complexity reduces from $O(mn)$ to $O(r(m+n))$. To this end, Sainath et al. [134] apply the low-rank matrix factorization to the final weight layer in a deep CNN, resulting in about a 30-50% speedup in training time with little loss in accuracy. Similarly, Xue et al. [135] apply singular value decomposition on each layer of a deep CNN to reduce the model size by 71% with less than 1% relative accuracy loss. Inspired by [136], which demonstrates the redundancy in the parameters of deep neural networks, Denton et al. [137] and Jaderberg et al. [138] independently investigate the redundancy within the convolutional filters and develop approximations to reduce the required computations. Novikov et al. [139] generalize the low-rank ideas, where they treat the weight matrix as a multi-dimensional tensor and apply a Tensor-Train decomposition [140] to reduce the number of parameters of the fully-connected layers. The Adaptive Fastfood transform is a generalization of the Fastfood [141] transform for approximating matrices. It reparameterizes the weight matrix $C \in \mathbb{R}^{n \times n}$ in fully-connected layers with an Adaptive Fastfood transformation: $Cx = (\tilde{D}_1 H \tilde{D}_2 \Pi H \tilde{D}_3)x$, where $\tilde{D}_1$, $\tilde{D}_2$ and $\tilde{D}_3$ are diagonal matrices of parameters, $\Pi$ is a random permutation matrix, and $H$ denotes the Walsh-Hadamard matrix. The space complexity of the Adaptive Fastfood transform is $O(n)$, and the time complexity is $O(n \log n)$. Motivated by the great advantages of circulant matrices in both space and computation efficiency [142, 143], Cheng et al. [144] explore the redundancy in the parametrization of fully-connected layers by imposing the circulant structure on the weight matrix to speed up the computation, and further allow the use of FFT for faster computation. With a circulant matrix $C \in \mathbb{R}^{n \times n}$ as the matrix of parameters in a fully-connected layer, for an input vector $x \in \mathbb{R}^n$, the layer output can be calculated efficiently using the FFT and its inverse: $CDx = \mathrm{ifft}(\mathrm{fft}(v) \circ \mathrm{fft}(Dx))$, where $\circ$ denotes elementwise multiplication, $v \in \mathbb{R}^n$ is the vector defining $C$, and $D$ is a random sign-flipping matrix for improving the capacity of the model. This method reduces the time complexity from $O(n^2)$ to $O(n \log n)$, and the space complexity from $O(n^2)$ to $O(n)$. Moczulski et al. [145] further generalize the circulant structures by interleaving diagonal matrices with the orthogonal Discrete Cosine Transform (DCT). The resulting transform, ACDC$^{-1}$, has $O(n)$ space complexity and $O(n \log n)$ time complexity. 4.3. Low Precision Floating point numbers are a natural choice for handling the small updates of the parameters of CNNs. However, the resulting parameters may contain a lot of redundant information [146]. To reduce redundancy, Binarized Neural Networks (BNNs) restrict some or all of the arithmetic involved in computing the outputs to binary values. There are three aspects of binarization for neural network layers: binary input activations, binary synapse weights, and binary output activations. Full binarization requires that all three components be binarized, and the cases with one or two components are considered partial binarization. Kim et al. [147] consider full binarization with a predetermined portion of the synapses having zero weight, and all other synapses with a weight of one.
Their network only needs XNOR and bit count operations, and they report 98.7% accuracy on the MNIST dataset. XNOR-Net [148] applies convolutional BNNs on the ImageNet dataset with topologies inspired by AlexNet, ResNet and GoogLeNet, reporting top-1 accuracies of up to 51.2% for full binarization and 65.5% for partial binarization. DoReFa-Net [149] explores reducing precision during the forward pass as well as the backward pass. Both partial and full binarization are explored in their experiments and the corresponding top-1 accuracies on ImageNet are 43% and 53%. The work by Courbariaux et al. [150] describes how to train fully-connected networks and CNNs with full binarization and batch normalization layers, reporting competitive accuracies on the MNIST, SVHN, and CIFAR-10 datasets. 4.4. Weight Compression Many attempts have been made to reduce the number of parameters in the convolution layers and fullyconnected layers. Here, we brie\ufb02y introduce some methods under these topics: vector quantization, pruning, and hashing. Vector Quantization (VQ) is a method for compressing densely connected layers to make CNN models smaller. Similar to scalar quantization where a large set of numbers is mapped to a smaller set [151], VQ quantizes groups of numbers together rather than addressing them one at a time. In 2013, Denil et al. [136] demonstrate the presence of redundancy in neural network parameters, and use VQ to signi\ufb01cantly reduce the number of dynamic parameters in deep models. Gong et al. [152] investigate the information theoretical vector quantization methods for compressing the parameters of CNNs, and they obtain parameter prediction results similar to those of [136]. They also \ufb01nd that VQ methods have a clear gain over existing matrix factorization methods, and among the VQ methods, structured quantization methods such as product quantization work signi\ufb01cantly better than other methods (e.g., residual quantization [153], scalar quantization [154]). An alternative approach to weight compression is pruning. It reduces the number of parameters and operations in CNNs by permanently dropping less important connections [155], which enables smaller networks to inherit knowledge from the large predecessor networks and maintains comparable of performance. Han et al. [146, 156] introduce \ufb01ne-grained sparsity in a network by a magnitude-based pruning approach. If the absolute magnitude of any weight is less than a scalar threshold, the weight is pruned. Gao et al. [157] extend the magnitude-based approach to allow restoration of the pruned weights in the previous iterations, with tightly coupled pruning and retraining stages, for greater model compression. Yang et al. [158] take the correlation between weights into consideration and propose an energy-aware pruning algorithm that directly uses energy consumption estimation of a CNN to guide the pruning process. Rather than \ufb01ne-grained pruning, there are also works that investigate coarse-grained pruning. Hu et al. [159] propose removing \ufb01lters that frequently generate zero output activations on the validation set. Srinivas et al. [160] merge similar \ufb01lters into one, while Mariet et al. [161] merge \ufb01lters with similar output activations into one. Designing a proper hashing technique to accelerate the training of CNNs or save memory space also an interesting problem. 
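Before turning to hashing-based approaches, here is a toy NumPy sketch in the spirit of the magnitude-based pruning described above. The threshold choice and the retraining of the surviving weights used in the cited works are omitted; the names are our own.

```python
import numpy as np

def magnitude_prune(W, threshold):
    """Zero out weights whose absolute value falls below a scalar threshold;
    the binary mask records which connections survive for later retraining."""
    mask = np.abs(W) >= threshold
    return W * mask, mask

W = np.random.randn(256, 256)
W_pruned, mask = magnitude_prune(W, threshold=0.5)
sparsity = 1.0 - mask.mean()   # fraction of pruned connections
```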
HashedNets [162] is a recent technique to reduce model sizes by using a hash function to group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. Their network shrinks the storage costs of neural networks signi\ufb01cantly while mostly preserves the generalization performance in image classi\ufb01cation. As pointed out in Shi et al. [163] and Weinberger et al. [164], sparsity will minimize hash collision making feature hashing even more e\ufb00ective. HashNets may be used together with pruning to give even better parameter savings. 4.5. Sparse Convolution Recently, several attempts have been made to sparsify the weights of convolutional layers [165, 166]. Liu et al. [165] consider sparse representations of the basis \ufb01lters, and achieve 90% sparsifying by exploiting both inter-channel and intra-channel redundancy of convolutional kernels. Instead of sparsifying the weights of convolution layers, Wen et al. [166] propose a Structured Sparsity Learning (SSL) approach to simultaneously optimize their hyperparameters (\ufb01lter size, depth, and local connectivity). Bagherinezhad et al. [167] propose a lookup-based convolutional neural network (LCNN) that encodes convolutions by few lookups to a rich set of dictionary that is trained to cover the space of weights in CNNs. They decode the weights of the 19 \fconvolutional layer with a dictionary and two tensors. The dictionary is shared among all weight \ufb01lters in a layer, which allows a CNN to learn from very few training examples. LCNN can achieve a higher accuracy in a small number of iterations compared to standard CNN. 5. Applications of CNNs In this section, we introduce some recent works that apply CNNs to achieve state-of-the-art performance, including image classi\ufb01cation, object tracking, pose estimation, text detection, visual saliency detection, action recognition, scene labeling, speech and natural language processing. 5.1. Image Classi\ufb01cation CNNs have been applied in image classi\ufb01cation for a long time [168\u2013171]. Compared with other methods, CNNs can achieve better classi\ufb01cation accuracy on large scale datasets [8, 9, 172] due to their capability of joint feature and classi\ufb01er learning. The breakthrough of large scale image classi\ufb01cation comes in 2012. Krizhevsky et al. [8] develop the AlexNet and achieve the best performance in ILSVRC 2012. After the success of AlexNet, several works have made signi\ufb01cant improvements in classi\ufb01cation accuracy by either reducing \ufb01lter size [11] or expanding the network depth [9, 10]. Building a hierarchy of classi\ufb01ers is a common strategy for image classi\ufb01cation with a large number of classes [173]. The work of [174] is one of the earliest attempts to introduce category hierarchy in CNN, in which a discriminative transfer learning with tree-based priors is proposed. They use a hierarchy of classes for sharing information among related classes in order to improve performance for classes with very few training examples. Similarly, Wang et al. [175] build a tree structure to learn \ufb01ne-grained features for subcategory recognition. Xiao et al. [176] propose a training method that grows a network not only incrementally but also hierarchically. In their method, classes are grouped according to similarities and are self-organized into di\ufb00erent levels. Yan et al. [177] introduce a hierarchical deep CNNs (HD-CNNs) by embedding deep CNNs into a category hierarchy. 
They decompose the classi\ufb01cation task into two steps. The coarse category CNN classi\ufb01er is \ufb01rst used to separate easy classes from each other, and then those more challenging classes are routed downstream to \ufb01ne category classi\ufb01ers for further prediction. This architecture follows the coarse-to-\ufb01ne classi\ufb01cation paradigm and can achieve lower error at the cost of an a\ufb00ordable increase of complexity. Subcategory classi\ufb01cation is another rapidly growing sub\ufb01eld of image classi\ufb01cation. There are already some \ufb01ne-grained image datasets (such as Birds [178], Dogs [179], Cars [180], and Plants [181]). Using object part information is bene\ufb01cial for \ufb01ne-grained classi\ufb01cation [182]. Generally, the accuracy can be improved by localizing important parts of objects and representing their appearances discriminatively. Along this way, Branson et al. [183] propose a method which detects parts and extracts CNN features from multiple pose-normalized regions. Part annotation information is used to learn a compact pose normalization space. They also build a model that integrates lower-level feature layers with pose-normalized extraction routines and higher-level feature layers with unaligned image features to improve the classi\ufb01cation accuracy. Zhang et al. [184] propose a part-based R-CNN which can learn whole-object and part detectors. They use selective search [185] to generate the part proposals, and apply non-parametric geometric constraints to more accurately localize parts. Lin et al. [186] incorporate part localization, alignment, and classi\ufb01cation into one recognition system which is called Deep LAC. Their system is composed of three sub-networks: localization sub-network is used to estimate the part location, alignment sub-network receives the location as input and performs template alignment [187], and classi\ufb01cation sub-network takes pose aligned part images as input to predict the category label. They also propose a value linkage function to link the sub-networks and make them work as a whole in training and testing. As can be noted, all the above-mentioned methods make use of part annotation information for supervised training. However, these annotations are not easy to collect and these systems have di\ufb03culty in scaling up and to handle many types of \ufb01ne-grained classes. To avoid this problem, some researchers propose to \ufb01nd localized parts or regions in an unsupervised manner. Krause et al. [188] use the ensemble of localized learned feature representations for \ufb01ne-grained classi\ufb01cation, they use co-segmentation and alignment to 20 \fgenerate parts, and then compare the appearance of each part and aggregate the similarities together. In their latest paper [189], they combine co-segmentation and alignment in a discriminative mixture to generate parts for facilitating \ufb01ne-grained classi\ufb01cation. Zhang et al. [190] use the unsupervised selective search to generate object proposals, and then select the useful parts from the multi-scale generated part proposals. Xiao et al. [191] apply visual attention in CNN for \ufb01ne-grained classi\ufb01cation. Their classi\ufb01cation pipeline is composed of three types of attentions: the bottom-up attention proposes candidate patches, the objectlevel top-down attention selects relevant patches of a certain object, and the part-level top-down attention localizes discriminative parts. 
These attentions are combined to train domain-speci\ufb01c networks which can help to \ufb01nd foreground object or object parts and extract discriminative features. Lin et al. [192] propose a bilinear model for \ufb01ne-grained image classi\ufb01cation. The recognition architecture consists of two feature extractors. The outputs of two feature extractors are multiplied using the outer product at each location of the image, and are pooled to obtain an image descriptor. 5.2. Object Detection Object detection has been a long-standing and important problem in computer vision [193\u2013195]. Generally, the di\ufb03culties mainly lie in how to accurately and e\ufb03ciently localize objects in images or video frames. The use of CNNs for detection and localization can be traced back to 1990s [196] . However, due to the lack of training data and limited processing resources, the progress of CNN-based object detection is slow before 2012. Since 2012, the huge success of CNNs in ImageNet challenge [8] rekindles interest in CNN-based object detection [197]. In some early works [196, 198], they use the sliding window based approaches to densely evaluate the CNN classi\ufb01er on windows sampled at each location and scale. Since there are usually hundreds of thousands of candidate windows in a image, these methods su\ufb00er from highly computational cost, which makes them unsuitable to be applied on the large-scale dataset, e.g., Pascal VOC [172], ImageNet [8] and MSCOCO [199]. Recently, object proposal based methods attract a lot of interests and are widely studied in the literature [185, 193, 200, 201]. These methods usually exploit fast and generic measurements to test whether a sampled window is a potential object or not, and further pass the output object proposals to more sophisticated detectors to determine whether they are background or belong to a speci\ufb01c object class. One of the most famous object proposal based CNN detector is Region-based CNN (R-CNN) [202]. R-CNN uses Selective Search (SS) [185] to extract around 2000 bottom-up region proposals that are likely to contain objects. Then, these region proposals are warped to a \ufb01xed size (227 \u00d7 227), and a pre-trained CNN is used to extract features from them. Finally, a binary SVM classi\ufb01er is used for detection. R-CNN yields a signi\ufb01cant performance boost. However, its computational cost is still high since the time-consuming CNN feature extractor will be performed for each region separately. To deal with this problem, some recent works propose to share the computation in feature extraction [9, 28, 132, 202]. OverFeat [132] computes CNN features from an image pyramid for localization and detection. Hence the computation can be easily shared between overlapping windows. Spatial pyramid pooling network (SPP net) [203] is a pyramid-based version of R-CNN [202], which introduces an SPP layer to relax the constraint that input images must have a \ufb01xed size. Unlike R-CNN, SPP net extracts the feature maps from the entire image only once, and then applies spatial pyramid pooling on each candidate window to get a \ufb01xed-length representation. A drawback of SPP net is that its training procedure is a multi-stage pipeline, which makes it impossible to train the CNN feature extractor and SVM classi\ufb01er jointly to further improve the accuracy. Fast RCNN [204] improves SPP net by using an end-to-end training method. 
All network layers can be updated during \ufb01ne-tuning, which simpli\ufb01es the learning process and improves detection accuracy. Later, Faster R-CNN [204] introduces a region proposal network (RPN) for object proposals generation and achieves further speed-up. Beside R-CNN based methods, Gidaris et al. [205] propose a multi-region and semantic segmentation-aware model for object detection. They integrate the combined features on an iterative localization mechanism as well as a box-voting scheme after non-max suppression. Yoo et al. [206] treat the object detection problem as an iterative classi\ufb01cation problem. It predicts an accurate object boundary box by aggregating quantized weak directions from their detection network. Another important issue of object detection is how to explore e\ufb00ective training sets as the performance is somehow largely depends on quantity and quality of both positive and negative samples. Online bootstrap21 \fping (or hard negative mining [207]) for CNN training has recently gained interest due to its importance for intelligent cognitive systems interacting with dynamically changing environments [208]. [209] proposes a novel bootstrapping technique called online hard example mining (OHEM) for training detection models based on CNNs. It simpli\ufb01es the training process by automatically selecting the hard examples. Meanwhile, it only computes the feature maps of an image once, and then forwards all region-of-interests (RoIs) of the image on top of these feature maps. Thus it is able to \ufb01nd the hard examples with a small extra computational cost. More recently, YOLO [210] and SSD [211] allow single pipeline detection that directly predicts class labels. YOLO [210] treats object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. The whole detection pipeline is a single network which predicts bounding boxes and class probabilities from the full image in one evaluation, and can be optimized end-to-end directly on detection performance. SSD [211] discretizes the output space of bounding boxes into a set of default boxes over di\ufb00erent aspect ratios and scales per feature map location. With this multiple scales setting and their matching strategy, SSD is signi\ufb01cantly more accurate than YOLO. With the bene\ufb01ts from super-resolution, Lu et al. [212] propose a top-down search strategy to divide a window into sub-windows recursively, in which an additional network is trained to account for such division decisions. 5.3. Object Tracking The success in object tracking relies heavily on how robust the representation of target appearance is against several challenges such as view point changes, illumination changes, and occlusions [213\u2013215]. There are several attempts to employ CNNs for visual tracking. Fan et al. [216] use CNN as a base learner. It learns a separate class-speci\ufb01c network to track objects. In [216], the authors design a CNN tracker with a shift-variant architecture. Such an architecture plays a key role so that it turns the CNN model from a detector into a tracker. The features are learned during o\ufb04ine training. Di\ufb00erent from traditional trackers which only extract local spatial structures, this CNN based tracking method extracts both spatial and temporal structures by considering the images of two consecutive frames. 
Because the large signals in the temporal information tend to occur near objects that are moving, the temporal structures provide a crude velocity signal to tracking. Li et al. [217] propose a target-speci\ufb01c CNN for object tracking, where the CNN is trained incrementally during tracking with new examples obtained online. They employ a candidate pool of multiple CNNs as a data-driven model of di\ufb00erent instances of the target object. Individually, each CNN maintains a speci\ufb01c set of kernels that favourably discriminate object patches from their surrounding background using all available low-level cues. These kernels are updated in an online manner at each frame after being trained with just one instance at the initialization of the corresponding CNN. Instead of learning one complicated and powerful CNN model for all the appearance observations in the past, Li et al. [217] use a relatively small number of \ufb01lters in the CNN within a framework equipped with a temporal adaptation mechanism. Given a frame, the most promising CNNs in the pool are selected to evaluate the hypothesises for the target object. The hypothesis with the highest score is assigned as the current detection window and the selected models are retrained using a warm-start backpropagation which optimizes a structural loss function. In [218], a CNN object tracking method is proposed to address limitations of handcrafted features and shallow classi\ufb01er structures in object tracking problem. The discriminative features are \ufb01rst automatically learned via a CNN. To alleviate the tracker drifting problem caused by model update, the tracker exploits the ground truth appearance information of the object labeled in the initial frames and the image observations obtained online. A heuristic schema is used to judge whether updating the object appearance models or not. Hong et al. [219] propose a visual tracking algorithm based on a pre-trained CNN, where the network is trained originally for large-scale image classi\ufb01cation and the learned representation is transferred to describe target. On top of the hidden layers in the CNN, they put an additional layer of an online SVM to learn a target appearance discriminatively against background. The model learned by SVM is used to compute a target-speci\ufb01c saliency map by back-projecting the information relevant to target to input image space. And they exploit the target-speci\ufb01c saliency map to obtain generative target appearance models and perform tracking with understanding of spatial con\ufb01guration of target. 22 \f5.4. Pose Estimation Since the breakthrough in deep structure learning, many recent works pay more attention to learn multiple levels of representations and abstractions for human-body pose estimation task with CNNs [220, 221]. DeepPose [222] is the \ufb01rst application of CNNs to human pose estimation problem. In this work, pose estimation is formulated as a CNN-based regression problem to body joint coordinates. A cascade of 7-layered CNNs are presented to reason about pose in a holistic manner. Unlike the previous works that usually explicitly design graphical model and part detectors, the DeepPose captures the full context of each body joint by taking the whole image as the input. Meanwhile, some works exploit CNN to learn representation of local body parts. Ajrun et al. 
[223] present a CNN based end-to-end learning approach for full-body human pose estimation, in which CNN part detectors and an Markov Random Field (MRF)-like spatial model are jointly trained, and pair-wise potentials in the graph are computed using convolutional priors. In a series of papers, Tompson et al. [224] use a multi-resolution CNN to compute heat-map for each body part. Di\ufb00erent from [223], Tompson et al. [224] learn the body part prior model and implicitly the structure of the spatial model. Speci\ufb01cally, they start by connecting every body part to itself and to every other body part in a pair-wise fashion, and use a fully-connected graph to model the spatial prior. As an extension of [224], Tompson et al. [92] propose a CNN architecture which includes a position re\ufb01nement model after a rough pose estimation CNN. This re\ufb01nement model, which is a Siamese network [64], is jointly trained in cascade with the o\ufb00-the-shelf model [224]. In a similar work with [224], Chen et al. [225, 226] also combine graphical model with CNN. They exploit a CNN to learn conditional probabilities for the presence of parts and their spatial relationships, which are used in unary and pairwise terms of the graphical model. The learned conditional probabilities can be regarded as low-dimensional representations of the body pose. There is also a pose estimation method called dual-source CNN [227] that integrates graphical models and holistic style. It takes the full body image and the holistic view of the local parts as inputs to combine both local and contextual information. In addition to still image pose estimation with CNN, recently researchers also apply CNN to human pose estimation in videos. Based on the work [224], Jain et al. [228] also incorporate RGB features and motion features to a multi-resolution CNN architecture to further improve accuracy. Speci\ufb01cally, The CNN works in a sliding-window manner to perform pose estimation. The input of the CNN is a 3D tensor which consists of an RGB image and its corresponding motion features, and the output is a 3D tensor containing response-maps of the joints. In each response map, the value of each location denote the energy for presence the corresponding joint at that pixel location. The multi-resolution processing is achieved by simply down sampling the inputs and feeding them to the network. 5.5. Text Detection and Recognition The task of recognizing text in image has been widely studied for a long time [229\u2013232]. Traditionally, optical character recognition (OCR) is the major focus. OCR techniques mainly perform text recognition on images in rather constrained visual environments (e.g., clean background, well-aligned text). Recently, the focus has been shifted to text recognition on scene images due to the growing trend of high-level visual understanding in computer vision research [233, 234]. The scene images are captured in unconstrained environments where there exists a large amount of appearance variations which poses great di\ufb03culties to existing OCR techniques. Such a concern can be mitigated by using stronger and richer feature representations such as those learned by CNN models. Along the line of improving the performance of scene text recognition with CNN, a few works have been proposed. 
The works can be coarsely categorized into three types: (1) text detection and localization without recognition, (2) text recognition on cropped text images, and (3) end-to-end text spotting that integrates both text detection and recognition: 5.5.1. Text Detection One of the pioneering works to apply CNN for scene text detection is [235]. The CNN model employed by [235] learns on cropped text patches and non-text scene patches to discriminate between the two. The text are then detected on the response maps generated by the CNN \ufb01lters given the multiscale image pyramid of the input. To reduce the search space for text detection, Xu et al. [236] propose to obtain a set of character 23 \fcandidates via Maximally Stable Extremal Regions (MSER) and \ufb01lter the candidates by CNN classi\ufb01cation. Another work that combines MSER and CNN for text detection is [237]. In [237], CNN is used to distinguish text-like MSER components from non-text components, and cluttered text components are split by applying CNN in a sliding window manner followed by Non-Maximal Suppression (NMS). Other than localization of text, there is an interesting work [238] that makes use of CNN to determine whether the input image contains text, without telling where the text is exactly located. In [238], text candidates are obtained using MSER which are then passed into a CNN to generate visual features, and lastly the global features of the images are constructed by aggregating the CNN features in a Bag-of-Words (BoW) framework. 5.5.2. Text Recognition Goodfellow et al. [239] propose a CNN model with multiple softmax classi\ufb01ers in its \ufb01nal layer, which is formulated in such a way that each classi\ufb01er is responsible for character prediction at each sequential location in the multi-digit input image. As an attempt to recognize text without using lexicon and dictionary, Jaderberg et al. [240] introduce a novel Conditional Random Fields (CRF)-like CNN model to jointly learn character sequence prediction and bigram generation for scene text recognition. The more recent text recognition methods supplement conventional CNN models with variants of recurrent neural networks (RNN) to better model the sequence dependencies between characters in text. In [241], CNN extracts rich visual features from character-level image patches obtained via sliding window, and the sequence labelling is carried out by LSTM [242]. The method presented in [243] is very similar to [241], except that in [243], lexicon can be taken into consideration to enhance text recognition performance. 5.5.3. End-to-end Text Spotting For end-to-end text spotting, Wang et al. [15] apply a CNN model originally trained for character classi\ufb01cation to perform text detection. Going in a similar direction as [15], the CNN model proposed in [244] enables feature sharing across the four di\ufb00erent subtasks of an end-to-end text spotting system: text detection, character case-sensitive and insensitive classi\ufb01cation, and bigram classi\ufb01cation. Jaderberg et al. [245] make use of CNNs in a very comprehensive way to perform end-to-end text spotting. In [245], the major subtasks of its proposed system, namely text bounding box \ufb01ltering, text bounding box regression, and text recognition are each tackled by a separate CNN model. 5.6. Visual Saliency Detection The technique to locate important regions in imagery is referred to as visual saliency prediction. 
It is a challenging research topic, with a vast number of computer vision and image processing applications facilitated by it. Recently, a couple of works have been proposed to harness the strong visual modeling capability of CNNs for visual saliency prediction. Multi-contextual information is a crucial prior in visual saliency prediction, and it has been used concurrently with CNN in most of the considered works [246\u2013250]. Wang et al. [246] introduce a novel saliency detection algorithm which sequentially exploits local context and global context. The local context is handled by a CNN model which assigns a local saliency value to each pixel given the input of local image patches, while the global context (object-level information) is handled by a deep fully-connected feedforward network. In [247], the CNN parameters are shared between the global-context and local-context models, for predicting the saliency of superpixels found within object proposals. The CNN model adopted in [248] is pre-trained on large-scale image classi\ufb01cation dataset and then shared among di\ufb00erent contextual levels for feature extraction. The outputs of the CNN at di\ufb00erent contextual levels are then concatenated as input to be passed into a trainable fully-connected feedforward network for saliency prediction. Similar to [247, 248], the CNN model used in [249] for saliency prediction are shared across three CNN streams, with each stream taking input of a di\ufb00erent contextual scale. He et al. [250] derive a spatial kernel and a range kernel to produce two meaningful sequences as 1-D CNN inputs, to describe color uniqueness and color distribution respectively. The proposed sequences are advantageous over inputs of raw image pixels because they can reduce the training complexity of CNN, while being able to encode the contextual information among superpixels. There are also CNN-based saliency prediction approaches [251\u2013253] that do not consider multi-contextual information. Instead, they rely very much on the powerful representation capability of CNN. In [251], an 24 \fensemble of CNNs is derived from a large number of randomly instantiated CNN models, to generate good features for saliency detection. The CNN models instantiated in [251] are however not deep enough because the maximum number of layers is capped at three. By using a pre-trained and deeper CNN model with 5 convolutional layers, [252] (Deep Gaze) learns a separate saliency model to jointly combine the responses from every CNN layer and predict saliency values. [253] is the only work making use of CNN to perform visual saliency prediction in an end-to-end manner, which means the CNN model accepts raw pixels as input and produces saliency map as output. Pan et al. [253] argue that the success of the proposed end-to-end method is attributed to its not-so-deep CNN architecture which attempts to prevent over\ufb01tting. 5.7. Action Recognition Action recognition, the behaviour analysis of human subjects and classifying their activities based on their visual appearance and motion dynamics, is one of the challenging problems in computer vision [254\u2013 256]. Generally, this problem can be divided to two major groups: action analysis in still images and in videos. For both of these two groups, e\ufb00ective CNN based methods have been proposed. In this subsection we brie\ufb02y introduce the latest advances on these two groups. 5.7.1. 
Action Recognition in Still Images The work of [257] has shown the output of last few layers of a trained CNN can be used as a general visual feature descriptor for a variety of tasks. The same intuition is utilized for action recognition by [9, 258], in which they use the outputs of the penultimate layer of a pre-trained CNN to represent full images of actions as well as the human bounding boxes inside them, and achieve a high level of performance in action classi\ufb01cation. Gkioxari et al. [259] add a part detection to this framework. Their part detector is a CNN based extension to the original Poselet [260] method. CNN based representation of contextual information is utilized for action recognition in [261]. They search for the most representative secondary region within a large number of object proposal regions in the image and add contextual features to the description of the primary region (ground truth bounding box of human subject) in a bottom-up manner. They utilize a CNN to represent and \ufb01ne-tune the representations of the primary and the contextual regions. After that, they move a step forward and show that it is possible to locate and recognize human actions in images without using human bounding boxes [262]. However, they need to train human detectors to guide their recognition at test time. In [263], they propose a method that segments out the action mask of underlying human-object interactions with minimum annotation e\ufb00orts. 5.7.2. Action Recognition in Video Sequences Applying CNNs on videos is challenging because traditional CNNs are designed to represent two dimensional pure spatial signals but in videos a new temporal axis is added which is essentially di\ufb00erent from the spatial variations in images [256, 264]. The sizes of the video signals are also in higher orders in comparison to those of images which makes it more di\ufb03cult to apply convolutional networks on. Ji et al. [265] propose to consider the temporal axis in a similar manner as other spatial axes and introduce a network of 3D convolutional layers to be applied on video inputs. Recently Tran et al. [266] study the performance, e\ufb03ciency, and e\ufb00ectiveness of this approach and show its strengths compared to other approaches. Another approach to apply CNNs on videos is to keep the convolutions in 2D and fuse the feature maps of consecutive frames, as proposed by [267]. They evaluate three di\ufb00erent fusion policies: late fusion, early fusion, and slow fusion, and compare them with applying the CNN on individual single frames. One more step forward for better action recognition via CNNs is to separate the representation to spatial and temporal variations and train individual CNNs for each of them, as proposed by Simonyan and Zisserman [268]. First stream of this framework is a traditional CNN applied on all the frames and the second receives the dense optical \ufb02ow of the input videos and trains another CNN which is identical to the spatial stream in size and structure. The output of the two streams are combined in a class score fusion step. Ch\u00b4 eron et al. [269] utilize the two stream CNN on the localized parts of the human body and show the aggregation of part-based local CNN descriptors can e\ufb00ectively improve the performance of action recognition. Another approach to model the dynamics of videos di\ufb00erently from spatial variations, is to feed the CNN based features of individual 25 \fframes to a sequence learning module e.g., a recurrent neural network. 
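This CNN-plus-sequence-learner pipeline can be sketched as follows. The sketch is purely illustrative: the feature extractor is a placeholder, and a vanilla RNN cell stands in for the LSTM units used in practice; none of the names come from the cited papers.

```python
import numpy as np

def cnn_features(frame):
    """Placeholder for a per-frame CNN feature extractor; in practice this
    would be the penultimate layer of a pretrained 2D CNN."""
    return frame.reshape(-1)[:256]   # toy: flatten and crop to 256 values

def rnn_over_frames(frames, Wx, Wh, b):
    """Feed per-frame CNN features to a simple recurrent sequence learner
    (a vanilla RNN cell here; LSTM/GRU cells are used in practice)."""
    h = np.zeros(Wh.shape[0])
    for frame in frames:
        f = cnn_features(frame)
        h = np.tanh(Wx @ f + Wh @ h + b)   # recurrent update over time
    return h   # video-level representation for action classification

frames = np.random.randn(16, 32, 32)       # 16 toy frames of 32x32 pixels
hidden = 64
Wx = np.random.randn(hidden, 256) * 0.01
Wh = np.random.randn(hidden, hidden) * 0.01
video_repr = rnn_over_frames(frames, Wx, Wh, np.zeros(hidden))
```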
Donahue et al. [270] study di\ufb00erent con\ufb01gurations of applying LSTM units as the sequence learner in this framework. 5.8. Scene Labeling Scene labeling aims to relate one semantic class (road, water, sea etc.) to each pixel of the input image [271\u2013275]. CNNs are used to model the class likelihood of pixels directly from local image patches. They are able to learn strong features and classi\ufb01ers to discriminate the local visual subtleties. Farabet et al. [276] have pioneered to apply CNNs to scene labeling tasks. They feed their Multi-scale ConvNet with di\ufb00erent scale image patches, and they show that the learned network is able to perform much better than systems with hand-crafted features. Besides, this network is also successfully applied to RGB-D scene labeling [277]. To enable the CNNs to have a large \ufb01eld of view over pixels, Pinheiro et al. [278] develop the recurrent CNNs. More speci\ufb01cally, the identical CNNs are applied recurrently to the output maps of CNNs in the previous iterations. By doing this, they can achieve slightly better labeling results while signi\ufb01cantly reduces the inference times. Shuai et al. [279\u2013281] train the parametric CNNs by sampling image patches, which speeds up the training time dramatically. They \ufb01nd that patch-based CNNs su\ufb00er from local ambiguity problems, and [279] solve it by integrating global beliefs. [280] and [281] use the recurrent neural networks to model the contextual dependencies among image features from CNNs, and dramatically boost the labeling performance. Meanwhile, researchers are exploiting to use the pre-trained deep CNNs for object semantic segmentation. Mostajabi et al. [282] apply the local and proximal features from a ConvNet and apply the Alex-net [8] to obtain the distant and global features, and their concatenation gives rise to the zoom-out features. They achieve very competitive results on the semantic segmentation tasks. Long et al. [28] train a fully convolutional Network to directly predict the input images to dense label maps. The convolution layers of the FCNs are initialized from the model pre-trained on ImageNet classi\ufb01cation dataset, and the deconvolution layers are learned to upsample the resolution of label maps. Chen et al. [283] also apply the pre-trained deep CNNs to emit the labels of pixels. Considering that the imperfectness of boundary alignment, they further use fully connected CRF to boost the labeling performance. 5.9. Speech Processing 5.9.1. Automatic Speech Recognition Automatic Speech Recognition (ASR) is the technology that converts human speech into spoken words [284]. Before applying CNN to ASR, this domain has long been dominated by the Hidden Markov Model and Gaussian Mixture Model (GMM-HMM) methods [285] , which usually require extracting hand-craft features on speech signals, e.g., the most popular Mel Frequency Cepstral Coe\ufb03cients (MFCC) features. Meanwhile, some researchers have applied Deep Neural Networks (DNNs) in large vocabulary continuous speech recognition (LVCSR) and obtained encouraging results [286, 287], however, their networks are susceptible to performance degradations under mismatched condition [288], such as di\ufb00erent recording conditions etc. CNNs have shown better performance over GMM-HMMs and general DNNs [289, 290], since they are well suited to exploit the correlations in both time and frequency domains through the local connectivity and are capable of capturing frequency shift in human speech signals. 
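As a toy illustration of this local connectivity across both time and frequency (a generic sketch of our own, not the exact front end of any cited system), a single 2D convolutional filter applied to a log-Mel time-frequency map looks like:

```python
import numpy as np

def conv2d_valid(spec, kernel):
    """Single-channel 2D convolution (valid mode) over a time-frequency map,
    so each output unit sees a local patch in both time and frequency."""
    T, F = spec.shape
    kt, kf = kernel.shape
    out = np.zeros((T - kt + 1, F - kf + 1))
    for t in range(out.shape[0]):
        for f in range(out.shape[1]):
            out[t, f] = np.sum(spec[t:t + kt, f:f + kf] * kernel)
    return out

spec = np.random.randn(100, 40)            # toy log-Mel map: 100 frames x 40 bands
kernel = np.random.randn(9, 9) * 0.1       # filter local in time and frequency
feature_map = np.maximum(conv2d_valid(spec, kernel), 0.0)   # ReLU
```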
In [289], they achieve lower speech recognition errors by applying CNN on Mel \ufb01lter bank features. Some attempts use the raw waveform with CNNs, and to learn \ufb01lters to process the raw waveform jointly with the rest of the network [291, 292]. Most of the early applications of CNN in ASR only use fewer convolution layers. For example, Abdel-Hamid et al. [290] use one convolutional layer in their network, and Amodei et al. [293] use three convolutional layers as the feature preprocessing layer. Recently, very deep CNNs have shown impressive performance in ASR [294, 295]. Besides, small \ufb01lters have successfully applied in acoustic modeling in hybrid NN-HMM ASR system, and pooling operations are replaced by densely connected layers for ASR tasks [296]. Yu et al. [297] propose a layer-wise context expansion with attention model for ASR. It is a variant of time-delay neural network [298] in which lower layers focus on extracting simple local patterns while higher layers exploit broader context and extract complex patterns than the lower layers. A similar idea can be found in [40]. 26 \f5.9.2. Statistical Parametric Speech Synthesis In addition to speech recognition, the impact of CNNs has also spread to Statistical Parametric Speech Synthesis (SPSS). The goal of speech synthesis is to generate speech sounds directly from the text and possibly with additional information. It has been known for many years that the speech sounds generated by shallow structured HMM networks are often mu\ufb04ed compared with natural speech. Many studies have adopted deep learning to overcome such de\ufb01ciency [299\u2013301]. One advantage of these methods is their strong ability to represent the intrinsic correlation by using a generative modeling framework. Inspired by the recent advances in neural autoregressive generative models that model complex distributions such as images [302] and text [303], WaveNet [39] makes use of the generative model of the CNN to represent the conditional distribution of the acoustic features given the linguistic features, which can be seen as a milestone in SPSS. In order to deal with long-range temporal dependencies, they develop a new architecture based on dilated causal convolutions to capture very large receptive \ufb01elds. By conditioning linguistic features on text, it can be used to directly synthesize speech from text. 5.10. Natural Language Processing 5.10.1. Statistical Language Modeling For statistical language modeling, the input typically consists of incomplete sequences of words rather than complete sentences [304, 305]. Kim et al. [304] use the output of character-level CNN as the input to an LSTM at each time step. genCNN [306] is a convolutional architecture for sequence prediction, which uses separate gating networks to replace the max-pooling operations. Recently, Kalchbrenner et al. [38] propose a CNN-based architecture for sequence processing called ByteNet, which is a stack of two dilated CNNs. Like WaveNet [39], ByteNet also bene\ufb01ts from convolutions with dilations to increase the receptive \ufb01eld size, thus can model sequential data with long-term dependencies. It also has the advantage that the computational time only linearly depends on the length of the sequences. Compared with recurrent neural networks, CNNs not only can get long-range information but also get a hierarchical representation of the input words. Gu et al. [307] and Yann et al. 
[308] share a similar idea that both of them use CNN without pooling to model the input words. Gu et al. [307] combine the language CNN with recurrent highway networks and achieve a huge improvement compared to LSTM-based methods. Inspired by the gating mechanism in LSTM networks, the gated CNN in [308] uses a gating mechanism to control the path through which information \ufb02ows in the network, and achieves the state-of-the-art on WiKiText-103. However, the frameworks in [308] and [307] are still under the recurrent framework, and the input window size of their network are of limited size. How to capture the speci\ufb01c long-term dependencies as well as hierarchical representation of history words is still an open problem. 5.10.2. Text Classi\ufb01cation Text classi\ufb01cation is a crucial task for Natural Language Processing (NLP). Natural language sentences have complicated structures, both sequential and hierarchical, that are essential for understanding them. Owing to the powerful capability of capturing local relations of temporal or hierarchical structures, CNNs have achieved top performance in sentence modeling. A proper CNN architecture is important for text classi\ufb01cation. Collobert et al. [309] and Yu et al. [310] apply one convolutional layer to model the sentence, while Kalchbrenner et al. [311] stack multiple layers of convolution to model sentences. In [312], they use multichannel convolution and variable kernels for sentence classi\ufb01cation. It is shown that multiple convolutional layers help to extract high-level abstract features, and multiple linear \ufb01lters can e\ufb00ectively consider di\ufb00erent n-gram features. Recently, Yin et al. [313] extend the network in [312] by hierarchical convolution architecture and further exploration of multichannel and variable size feature detectors. The pooling operation can help the network deal with variable sentence lengths. In [312, 314], they use maxpooling to keep the most important information to represent the sentence. However, max-pooling cannot distinguish whether a relevant feature in one of the rows occurs just one or multiple times and it ignores the order in which the features occur. In [311], they propose the k-max pooling which returns the top k activations in the original order in the input sequence. Dynamic k-max pooling is a generalization of the k-max pooling operator where the k value is depended on the input feature map size. The CNN 27 \farchitectures mentioned above are rather shallow compared with the deep CNNs which are very successful in computer vision. Recently, Conneau et al. [315] implement a deep convolutional architecture which is up to 29 convolutional layers. They \ufb01nd that shortcut connections give better results when the network is very deep (49 layers). However, they do not achieve state-of-the-art under this setting. 6. Conclusions and Outlook Deep CNNs have made breakthroughs in processing image, video, speech and text. In this paper, we have given an extensive survey on recent advances of CNNs. We have discussed the improvements of CNN on di\ufb00erent aspects, namely, layer design, activation function, loss function, regularization, optimization and fast computation. 
Beyond surveying the advances of each aspect of CNN, we have also introduced the application of CNN on many tasks, including image classi\ufb01cation, object detection, object tracking, pose estimation, text detection, visual saliency detection, action recognition, scene labeling, speech and natural language processing. Although CNNs have achieved great success in experimental evaluations, there are still lots of issues that deserve further investigation. Firstly, since the recent CNNs are becoming deeper and deeper, they require large-scale dataset and massive computing power for training. Manually collecting labeled dataset requires huge amounts of human e\ufb00orts. Thus, it is desired to explore unsupervised learning of CNNs. Meanwhile, to speed up training procedure, although there are already some asynchronous SGD algorithms which have shown promising result by using CPU and GPU clusters, it is still worth to develop e\ufb00ective and scalable parallel training algorithms. At testing time, these deep models are highly memory demanding and timeconsuming, which makes them not suitable to be deployed on mobile platforms that have limited resources. It is important to investigate how to reduce the complexity and obtain fast-to-execute models without loss of accuracy. Furthermore, one major barrier for applying CNN on a new task is that it requires considerable skill and experience to select suitable hyperparameters such as the learning rate, kernel sizes of convolutional \ufb01lters, the number of layers etc. These hyper-parameters have internal dependencies which make them particularly expensive for tuning. Recent works have shown that there exists a big room to improve current optimization techniques for learning deep CNN architectures [12, 42, 316]. Finally, the solid theory of CNNs is still lacking. Current CNN model works very well for various applications. However, we do not even know why and how it works essentially. It is desirable to make more e\ufb00orts on investigating the fundamental principles of CNNs. Meanwhile, it is also worth exploring how to leverage natural visual perception mechanism to further improve the design of CNN. We hope that this paper not only provides a better understanding of CNNs but also facilitates future research activities and application developments in the \ufb01eld of CNNs. Acknowledgment This research was carried out at the Rapid-Rich Object Search (ROSE) Lab at the Nanyang Technological University, Singapore. The ROSE Lab is supported by the Infocomm Media Development Authority, Singapore." + } + ], + "Yingyu Liang": [ + { + "url": "http://arxiv.org/abs/2007.05557v1", + "title": "Learning Entangled Single-Sample Gaussians in the Subset-of-Signals Model", + "abstract": "In the setting of entangled single-sample distributions, the goal is to\nestimate some common parameter shared by a family of $n$ distributions, given\none single sample from each distribution. This paper studies mean estimation\nfor entangled single-sample Gaussians that have a common mean but different\nunknown variances. We propose the subset-of-signals model where an unknown\nsubset of $m$ variances are bounded by 1 while there are no assumptions on the\nother variances. In this model, we analyze a simple and natural method based on\niteratively averaging the truncated samples, and show that the method achieves\nerror $O \\left(\\frac{\\sqrt{n\\ln n}}{m}\\right)$ with high probability when\n$m=\\Omega(\\sqrt{n\\ln n})$, matching existing bounds for this range of $m$. 
We\nfurther prove lower bounds, showing that the error is\n$\\Omega\\left(\\left(\\frac{n}{m^4}\\right)^{1/2}\\right)$ when $m$ is between\n$\\Omega(\\ln n)$ and $O(n^{1/4})$, and the error is\n$\\Omega\\left(\\left(\\frac{n}{m^4}\\right)^{1/6}\\right)$ when $m$ is between\n$\\Omega(n^{1/4})$ and $O(n^{1 - \\epsilon})$ for an arbitrarily small\n$\\epsilon>0$, improving existing lower bounds and extending to a wider range of\n$m$.", + "authors": "Yingyu Liang, Hui Yuan", + "published": "2020-07-10", + "updated": "2020-07-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DS", + "stat.ML" + ], + "main_content": "Introduction This work considers the novel parameter estimation setting called entangled single-sample distributions. In this setting, distributions are entangled in the sense that they share some common parameter and our goal is to estimate the common parameter based on one sample from each distributions obtained. We focus on the mean estimation problem in the subset-of-signals model when the distributions are Gaussians. In this problem, we have n independent Gaussians with a common mean with different unknown variances. Given one sample from each of the Gaussians, our goal is to estimate the mean. There can be different con\ufb01gurations of the unknown variances. In this work, we propose a basic model called subset-of-signals, which assumes that an unknown subset of m variances are bounded by 1 while there are no assumptions on the other variances. Equivalently, \u03c3(m) \u22641 where \u03c3(m) is the m-th smallest value in {\u03c3i}n i=1. The subset-of-signals model gives a simple setting specifying the possible con\ufb01gurations of n unknown variances {\u03c3i}n i=1 for analysis. While even in this simple setting, the optimal rates of mean estimation for entangled single-sample Gaussians are still unknown (for most values of m). \u2217The work was done during the summer internship of H. Yuan at the University of Wisconsin-Madison. c \u20dd2020 Y. Liang & H.Y. . arXiv:2007.05557v1 [cs.LG] 10 Jul 2020 \fLEARNING ENTANGLED SINGLE-SAMPLE GAUSSIANS IN THE SOS MODEL The setting of entangled single-sample distributions is motivated for both theoretical and practical reasons. From the theoretical perspective, it goes beyond the typical i.i.d. setting and raises many interesting open questions in the most fundamental topics like mean estimation of Gaussians. It can also be viewed as a generalization of the traditional mixture modeling, since the number of distinct mixture components could grow with the number of samples and even be as large as the number of samples. From the practical perspective, traditional i.i.d. assumption can lead to a bad modeling of data in modern applications, where various forms of heterogeneity occur. In particular, entangled Gaussians capture heteroscedastic noises in various applications and thus can be a natural model for studying robustness. Though theoretically interesting and practically important, few studies exist in this setting. Chierichetti et al. (2014) considered the mean estimation for entangled Gaussians and showed the existence of a gap between estimation error rates of the best possible estimator in this setting and the maximum likelihood estimator when the variances are known. It focused on the case where most samples are \u201chigh-noised\u201d (i.e., most variances are large), and provided bounds in terms of \u03c3(m) with small m like \u0398(ln n). Pensia et al. 
(2019) considered means estimation for symmetric, unimodal distributions with sharpened bounds, and provided extensive discussion on the performance of their estimators in different con\ufb01gurations of the variances. Many questions are still largely open. In particular, when instantiated in the subset-of-signals model, existing studies provide interesting upper bounds and lower bounds but a large gap remains. See the related work section and remarks after our theorems for more details. This work thus proposes the subset-of-signals model and attempts to gain better understanding on the problem. For the upper bound, we aim to achieve a vanishing error bound (i.e., the error bound tends to 0 when n \u2192+\u221e). We analyze a simple algorithm based on iteratively averaging the truncated samples: it keeps an iterate and each time it truncates the samples in an interval around the current iterate and then averages the truncated samples to compute the next iterate. We also prove lower bounds for a wide range of m, improving known bounds. Our main results are summarized below. 1.1. Main Results Problem Setup. Suppose we have n independent samples xi \u223cN(\u00b5\u22c6, \u03c32 i ), where the distributions have a common mean \u00b5\u22c6but different variances \u03c32 i . The mean and variances are all unknown. We consider the subset-of-signal model, where an unknown subset of m variances are bounded by 1 while there are no assumptions on the other variances. That is, \u03c3(m) \u22641 where \u03c3(m) is the m-th smallest value in {\u03c3i}n i=1. Our goal is to estimate the common mean \u00b5\u22c6from the samples {xi}n i=1. As usual, we use f(n, m) = O(g(n, m)) (or f(n, m) \u2272g(n, m)) if there exist N, M and C > 0 such that when n > N and m > M, f(n, m) \u2264Cg(n, m). f = \u02dc O(g) hides logarithmic terms. f = \u2126(g) (or f \u2273g), f = \u0398(g) (or f \u2243g), f = o(g), and f = \u03c9(g) are de\ufb01ned as usual. Upper bound. We obtain the following result for an algorithm based on iteratively averaging truncated samples(see Algorithm 1 for the details). Theorem 1 If \u03c3(m) \u22641 for m = \u2126( \u221a n ln n), then with probability at least 1 \u22121/n, the output \u02c6 \u00b5 of Algorithm 1 satis\ufb01es |\u02c6 \u00b5 \u2212\u00b5\u22c6| \u2272 \u221a n ln n m . 2 \fLEARNING ENTANGLED SINGLE-SAMPLE GAUSSIANS IN THE SOS MODEL Figure 1: Our bounds and those from previous works for mean estimation of entangled Gaussians in the subset-of-signals model. x-axis is the number of Gaussians with variances 1, y-axis is the error. See the text for the details of the bounds. The result shows that the algorithm can achieve a vanishing error when m = \u03c9( \u221a n ln n). Therefore, we can achieve vanishing error with only an \u03c9( p ln n/n) fraction of samples with bounded variances. This means even when the noisy samples dominates the data and the fraction of signals diminishes when n \u2192+\u221e, we can still obtain accurate estimation. The result also shows that when there are only a constant fraction of \u201cheavy-noised\u201d data (i.e., m = \u0398(n)), the error rate is O( p ln n/n), which matches the optimal error rate O(1/\u221an) up to a logarithmic factor. Our result matches the best bound known: the hybrid estimator proposed in Pensia et al. (2019) achieved O(\u221an ln n/m) in the subset-of-signals model but for essentially all values of m (Theorem 6 in their paper). (One should be able to tighten their analysis to get O( \u221a n ln n/m) with high probability.) 
Furthermore, median estimators can already achieve such a bound for the range m = \u2126( \u221a n ln n) (e.g., Lemma 5 in their paper). Our contribution is to show that iterative truncation can also achieve such a guarantee. The iterative truncation is natural and widely used in practice, so our analysis can be viewed as a justi\ufb01cation for this heuristic. Our upper bound is in sharp contrast to the robust mean estimation in the commonly studied adversarial contamination model (Valiant, 1985; Huber, 2011; Diakonikolas et al., 2019), where an \u03f5 fraction of the data are adversarially modi\ufb01ed and it has been shown that vanishing error is impossible when \u03f5 = \u2126(1). This means that the entangled distributions setting can be much more benign than the adversarial contamination model. For mean estimation for entangled Gaussians in the subset-of-signals model, one can view it as an adversary picking n \u2212m variances but having no control over the sampling process after specifying those variances. That is, it is a semi-adversarial model and can be much more benign than the fully adversarial contamination model. Lower bound. We now turn to the lower bound. Note that an instance of our problem is speci\ufb01ed by \u00b5\u22c6and {\u03c3i}n i=1. Theorem 2 Suppose \u03c3(m) \u22641. \u2022 If m = \u2126(ln n) and m = O(n1/4), then there exist a family of instances and a distribution over these instances such that any estimator has expected error \u2126 \u0010\u0000 n m4 \u00011/2\u0011 . 3 \fLEARNING ENTANGLED SINGLE-SAMPLE GAUSSIANS IN THE SOS MODEL \u2022 For any arbitrarily small \u03f5 > 0, if m is between \u2126(n1/4) and O(n1\u2212\u03f5), then there exist a family of instances and a distribution over these instances such that any estimator has expected error \u2126 \u0010\u0000 n m4 \u00011/6\u0011 . The bound is for a distribution over the instances, which then implies the typical minimax bound. The result shows that when m = O(n1/4), it is impossible to obtain vanishing error. When m is as small as \u0398(ln n), the error is \u02dc \u2126(\u221an), paying a factor of \u02dc \u2126(\u221an) compared to the oracle bound O(1/\u221am) when the m bounded variance samples are known. When m = \u2126(n1/4), the lower bound does not exclude the possibility of vanishing error. On the other hand, it shows that one needs to pay a factor of \u2126 \u0010\u0000 n m \u00011/6\u0011 , compared to the oracle bound O(1/\u221am) when the m bounded variance samples are known. It also shows that one needs to pay a factor of \u2126 \u0010\u0000 n m \u00012/3\u0011 , compared to the bound O(1/\u221an) when all samples have bounded variance 1. Our result extends and improves the lower bound in Chierichetti et al. (2014). Their bound is \u2126 \u0010\u0000 n m4 \u00011/2\u0011 for m between \u2126(ln n) and o(\u221an). Our result extends the range of m by including the values between \u2126(n1/2) and O(n1\u2212\u03f5) (for any arbitrarily small \u03f5 > 0). It also improves their bound in the range between \u2126(n1/4) and o(n1/2), by a factor of \u2126 \u0012\u0010 m4 n \u00111/3\u0013 . Figure 1 provides an illustration summarizing the known upper and lower bounds for mean estimation of entangled single-sample Gaussians in the subset-of-signals model. There is still a gap between the known upper and lower bounds. A natural direction is to close the gap and obtain the optimal rates, which we left as future work. 2. Related Work Entangled distributions. 
This setting is first studied by Chierichetti et al. (2014), which considered mean estimation for entangled Gaussians and presented an algorithm combining the k-median and the k-shortest gap algorithms. It also showed the existence of a gap between the error rates of the best possible estimator in this setting and the maximum likelihood estimator when the variances are known. Pensia et al. (2019) considered a more general class of distributions (unimodal and symmetric) and provided analysis of both individual estimators (r-modal interval, k-shortest gap, k-median) and hybrid estimators, which combine the median estimator with the shortest gap or modal interval estimator. They also discussed slight relaxations of the symmetry assumption and provided extensions to linear regression. Our work focuses on the subset-of-signals model, which allows us to study the minimax rate and gives a clearer understanding of the problem (but our results can also be used for some other configurations). The algorithm we analyze is based on the natural iterative truncation heuristic frequently used in practice to handle heteroscedastic noises, and our bound for it matches the best known rates (obtained by the hybrid estimator in Pensia et al. (2019)) in the range $m = \Omega(\sqrt{n \ln n})$. We also extend (to a wider range of m) and improve the lower bound in Chierichetti et al. (2014). Yuan and Liang (2020) considered mean estimation for entangled distributions, but the distributions are not assumed to be Gaussians (it is only assumed that the distributions have the same mean and that their variances exist). Due to this generality, their upper bound is significantly worse than ours: it holds only for $m \ge 4n/5$ (i.e., only a constant fraction of high-noise points), and it does not achieve a vanishing error when n tends to infinity. The paper does not provide lower bounds. Their algorithm is also based on iterative truncation, but has the following important difference: it removes a fixed fraction of data points in each iteration, rather than doing adaptive truncation. In contrast, our algorithm uses adaptive truncation interval lengths. This is crucial to obtain our results, since intuitively the best bias-variance trade-off introduced by the truncation can only be achieved with adaptive truncation. The entangled distributions setting is also closely related to robust estimation, which has been extensively studied in both the classical statistics and machine learning theory literatures. Robust mean estimation. There are several classes of data distribution models for robust mean estimators. The most commonly addressed is the adversarial contamination model, whose origin can be traced back to the malicious noise model of Valiant (1985) and the contamination model of Huber (2011). Under contamination, mean estimation has been investigated in Diakonikolas et al. (2017, 2019); Cheng et al. (2019). Another related model is the mixture of distributions. There has been steady progress in algorithms for learning mixtures, in particular, learning Gaussian mixtures. Starting from Dasgupta (1999), a rich collection of results is provided in many studies, such as Sanjeev and Kannan (2001); Achlioptas and McSherry (2005); Kannan et al. (2005); Belkin and Sinha (2010a,b); Kalai et al. (2010); Moitra and Valiant (2010); Diakonikolas et al. (2018). Heteroscedastic models. 
The setting of entangled distributions is also closely related to heteroscedastic models, which have been a classic topic in statistics. For example, in heterogeneous linear regression (Munoz et al., 1986; Vicari and Vichi, 2013), the errors for different response variables may have different variances, and weighted least squares has been used for estimating the parameters in this setting. Another example is Principal Component Analysis for heteroscedastic data (Hong et al., 2018a,b; Zhang et al., 2018). The entangled Gaussians can be viewed as a model of mean estimation in the presence of heteroscedastic noises. 3. Upper Bound The naïve method of averaging all samples cannot achieve a small error when some distributions have large variances. A natural idea is then to reduce the variances. Truncation is a frequently used heuristic, i.e., projecting the samples onto an interval (around a current estimate) to get controlled variances. However, while averaging the original samples is consistent, truncation can lead to bias. So truncation introduces some form of bias-variance tradeoff, and the width of the interval controls the tradeoff. Intuitively, the best width will depend on how well aligned the interval is with the true mean; for intervals around estimates with different errors, the width achieving the best tradeoff can be different. Therefore, we consider iterative truncation using adaptive widths for the interval. Algorithm 1 describes the details of our method. Given an initial estimate $\mu_0$, it iteratively averages the truncated data in an interval around the current estimate. In particular, the algorithm has K stages, and each stage has T steps. In step t of stage k, given a current estimate $\mu^{(k)}_t$ and a width parameter $\delta^{(k)}_t$, the algorithm computes the new estimate $\mu^{(k)}_{t+1}$ by averaging the truncated data $\phi(x_i; \Delta^{(k)}_t)$, where $\Delta^{(k)}_t$ is the interval around $\mu^{(k)}_t$ with radius $\delta^{(k)}_t$, and $\phi$ is defined as: $\phi(x; [a, b]) = a$ if $x < a$, $= x$ if $a \le x \le b$, and $= b$ if $x > b$. (1) For this algorithm, we prove the following guarantee. Algorithm 1: Mean Estimation via Iterative Truncation. Input: $\{x_i\}_{i=1}^n$, initialization $\mu_0$, and parameters $B, m$ such that $B \ge 2|\mu_0 - \mu^\star|$ and $\sigma_{(m)} \le 1$. Set $\delta^{(0)} = B$, $\mu^{(0)}_0 = \mu_0$, $K = \lfloor \log_2 \delta^{(0)} \rfloor$, $T = \lceil 64 n \ln n / m \rceil$. For $k = 0, 1, \ldots, K$: for $t = 0, 1, \ldots, T$: set $\Delta^{(k)}_t = [\mu^{(k)}_t - \delta^{(k)}, \mu^{(k)}_t + \delta^{(k)}]$ and $\mu^{(k)}_{t+1} = \frac{1}{n} \sum_{i=1}^n \phi(x_i; \Delta^{(k)}_t)$ (with $\phi$ defined in Eqn (1)); at the end of the stage, set $\mu^{(k+1)}_0 = \mu^{(k)}_{T+1}$ and $\delta^{(k+1)} = \delta^{(k)}/2$. Output: $\hat{\mu} \leftarrow \mu^{(K)}_{T+1}$. Theorem 1 If $\sigma_{(m)} \le 1$ for $m = \Omega(\sqrt{n \ln n})$, then with probability at least $1 - 1/n$, the output $\hat{\mu}$ of Algorithm 1 satisfies $|\hat{\mu} - \mu^\star| \lesssim \frac{\sqrt{n \ln n}}{m}$. Remark. The algorithm needs an initialization $\mu_0$ and a parameter $B$. There exist methods to achieve this, e.g., set $\mu_0$ as the sample mean and $B$ as two times the diameter of the sample points. Remark. Our proof actually gives more general results. Let $m(\delta) = \max\{i : \sigma_{(i)} \le \delta\}$ and let $H^\sigma_\delta$ be the harmonic mean of $\{\max(\sigma_i, \delta)\}_{i=1}^n$, i.e., $H^\sigma_\delta = n / \big(\sum_{i=1}^n \frac{1}{\max(\sigma_i, \delta)}\big)$. 
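Before stating the more general guarantee, here is a minimal Python sketch of Algorithm 1 as just described. It is only an illustration under the stated assumptions: the function and variable names are ours, and the constants follow the pseudocode rather than any released implementation.

import numpy as np

def truncate(x, lo, hi):
    # phi(x; [lo, hi]) from Eqn (1): project every sample onto the interval [lo, hi]
    return np.clip(x, lo, hi)

def iterative_truncation_mean(x, mu0, B, m):
    # x: the n samples; mu0: initial estimate; B >= 2|mu0 - mu*| (and B >= 1);
    # m: a lower bound on the number of samples with variance at most 1
    n = len(x)
    delta = float(B)
    K = int(np.floor(np.log2(B)))
    T = int(np.ceil(64.0 * n * np.log(n) / m))
    mu = float(mu0)
    for _ in range(K + 1):          # stages k = 0, ..., K
        for _ in range(T + 1):      # steps t = 0, ..., T within a stage
            mu = float(np.mean(truncate(x, mu - delta, mu + delta)))
        delta /= 2.0                # halve the truncation radius between stages
    return mu

# Toy run: m = 400 unit-variance samples among n = 2000, common mean 3.0
rng = np.random.default_rng(0)
sigma = np.concatenate([np.ones(400), 100.0 * np.ones(1600)])
x = rng.normal(3.0, sigma)
mu_hat = iterative_truncation_mean(x, mu0=np.mean(x), B=2.0 * (x.max() - x.min()), m=400)

The initialization in the toy run follows the first remark above: the sample mean for $\mu_0$ and twice the sample diameter for $B$.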
Then our analysis shows that for any k in the algorithm, the estimation at the end of the k-th iteration satis\ufb01es |\u00b5(k) T+1 \u2212\u00b5\u22c6| \u2272H\u03c3 \u03b4(k) q ln n n . That is, with probability at least 1 \u22121/n, the algorithm can output an estimation \u02c6 \u00b5 (by setting proper K and T) for any \u03b4 with m(\u03b4) = \u2126( \u221a n ln n), such that |\u02c6 \u00b5 \u2212\u00b5\u22c6| \u2272H\u03c3 \u03b4 r ln n n . (2) Since H\u03c3 \u03b4 \u2264n\u03b4/m(\u03b4), the error is \u2272\u03b4 \u221a n ln n m(\u03b4) . So for any t \u2265m = \u2126( \u221a n ln n), by setting \u03b4 = \u03c3(t) (the t-th smallest variance), we can get with probability 1 \u22121/n, |\u02c6 \u00b5 \u2212\u00b5\u22c6| \u2272\u03c3(t) \u221a n ln n t (3) When t = m, we recover the bound in the theorem. The more general results are more adaptive. First, they can be applied to more general threshold values \u03b4. For example, for the con\ufb01guration of variances where \u03c3(t) can increase with n, one can still get vanishing error when \u03c3(t) = o(t/ \u221a n ln n). Second, (2) can be applied to different con\ufb01gurations of \u03c3i\u2019s and obtain better bounds. When \u03c3(i)\u2019s for i > m are benign, (2) shows that they can help the estimation and quanti\ufb01es the provided information with the notion H\u03c3 \u03b4 . 6 \fLEARNING ENTANGLED SINGLE-SAMPLE GAUSSIANS IN THE SOS MODEL Remark. We would also like to point out, the hybrid estimator proposed in Pensia et al. (2019) also achieved almost the same upper bound O(\u221an ln n/m) as ours in the subset-of-signals model, but for essentially all values of m. (Their analysis can be tightened to get O( \u221a n ln n/m)). Their bound is obtained by combining two estimators, and depends on a notion rk, the length of the smallest interval containing k samples. Furthermore, the k-median estimator (with proper k) can also achieve the bound for the range m = \u2126( \u221a n ln n). In comparison, our bound is for the iterative truncation heuristic frequently used in practice, and depends on the notion H\u03c3 \u03b4 . More details of the existing bounds are as follows. Chierichetti et al. (2014) achieved an error bound min2\u2264k\u2264log n \u02dc O \u0000n1/2(1+1/(k\u22121))\u03c3k \u0001 . Among all estimators studied in Pensia et al. (2019), the superior performance is obtained by the hybrid estimators, which includes version (1): combining k1-median with k2-shorth and version (2): combining k1-median with modal interval estimator. These two versions achieve similar guarantees. Version 1 of the hybrid estimator outputs \u02c6 \u00b5k1,k2 such that |\u02c6 \u00b5k1,k2 \u2212\u00b5| \u22644\u221an log n k2 r2k2 with probability 1\u22122 exp(\u2212c\u2032k2)\u22122 exp(\u2212c log2 n), where k1 = \u221an log n and k2 \u2265C log n. Here rk is de\ufb01ned as inf \b r : 1 n Pn i=1 P(|xi \u2212\u00b5\u22c6| \u2264r) \u2265k n \t . So the error bound varies with speci\ufb01c con\ufb01gurations of the variances. Furthermore, the modal interval estimator or the shorth estimator still work for small m\u2019s, so their bound holds also for m = \u02dc O(n1/2). 3.1. Proof of Theorem 1 To prove the theorem, we \ufb01rst focus on one stage and omit the superscript (k). De\ufb01ne et := |\u00b5t \u2212\u00b5\u22c6|, (4) zi := \u03c6(xi; \u2206t) \u2212\u00b5\u22c6, (5) \u00af zi := zi \u2212Ezi. (6) We have \u00b5t+1 \u2212\u00b5\u22c6= Pn i=1(\u03c6(xi; \u2206t) \u2212\u00b5\u22c6) n = 1 n n X i=1 zi. 
(7) To bound | Pn i=1 zi|, we need to bound \u00af zi\u2019s and |Ezi|\u2019s. Since zi is 1-Lipschitz w.r.t. \u00b5t, a standard \u03f5net argument gives a uniform concentration bound of \u00af zi\u2019s in Lemma 3. |Ezi| is bounded in Lemma 4. See Appendix A for their proofs. Lemma 3 Let zi(\u00b5) = \u03c6(xi; [\u00b5 \u2212\u03b4, \u00b5 + \u03b4]) \u2212\u00b5\u22c6, \u00af zi(\u00b5) = zi(\u00b5) \u2212Ezi(\u00b5). With probability at least 1 \u22121/n3, for any \u00b5 satisfying |\u00b5 \u2212\u00b5\u22c6| \u2264\u03b4e, we have \f \f \f \f \f n X i=1 \u00af zi(\u00b5) \f \f \f \f \f \u2272\u03b4 \u221a n ln n + \u03b4e n . Lemma 4 Let zi(\u00b5) = \u03c6(xi; [\u00b5 \u2212\u03b4, \u00b5 + \u03b4]) \u2212\u00b5\u22c6and \u03b4e = |\u00b5 \u2212\u00b5\u22c6|. Then |Ezi| \u2264\u03b4e \u0012 1 \u22121 5 \u03b4 max{\u03b4e, \u03b4} \u03b4 max{\u03c3i, \u03b4} \u0013 . Using these two lemmas, we can analyze one iteration of the algorithm. 7 \fLEARNING ENTANGLED SINGLE-SAMPLE GAUSSIANS IN THE SOS MODEL Lemma 5 If \u03b4 \u2265et, then with probability at least 1 \u22121/n3, et+1 \u2264C\u03b4 r ln n n + et \u0012 1 \u2212 \u03b4 5H\u03c3 \u03b4 \u0013 where H\u03c3 \u03b4 is the harmonic mean of {max(\u03c3i, \u03b4)}n i=1:H\u03c3 \u03b4 = n/ Pn i=1 1 max(\u03c3i,\u03b4). Proof By Lemma 3 and Lemma 4, with probability at least 1 \u22121/n3, \f \f \f \f \f n X i=1 zi \f \f \f \f \f \u2264 \f \f \f \f \f n X i=1 \u00af zi \f \f \f \f \f + n X i=1 |Ezi| \u2264C\u03b4 \u221a n ln n + et n + et n \u2212\u03b4 5 n X i=1 1 max(\u03c3i, \u03b4) ! . This leads to the \ufb01nal bound. Now we are ready to prove Theorem 1. At stage k = 0, we have \u03b4(k) \u22652|\u00b5(k) 0 \u2212\u00b5\u22c6|. Suppose this is true for stage k < K, we show that it is true for k + 1. In stage k, we have \u03b4(k) \u22652et for t = 0. Suppose this is true for a step t \u2264T, we show that it is true for t + 1. Let m(\u03b4) = max{i : \u03c3(i) \u2264\u03b4}. We have H\u03c3 \u03b4 = n Pn i=1 1 max(\u03c3i,\u03b4) \u2264 n m(\u03b4) 1 \u03b4 = n\u03b4 m(\u03b4). Then by Lemma 5, et+1 \u2264C\u03b4(k) r ln n n + et 1 \u2212m(\u03b4(k)) 5n ! . If et \u2273\u03b4(k)\u221a n ln n m(\u03b4(k)) , et+1 \u2264et \u2264\u03b4(k)/2. If et \u2272\u03b4(k)\u221a n ln n m(\u03b4(k)) , we have et+1 \u2264et + C\u03b4(k) q ln n n \u2272 \u03b4(k) \u221a n ln n m(\u03b4(k)) + \u03b4(k) q ln n n \u2272\u03b4(k) \u221a n ln n m(\u03b4(k)) \u2264\u03b4(k)/4. Therefore, we can always guarantee et \u2264\u03b4(k)/2 for t \u2264T. Then Lemma 5 can be applied for all t \u2264T, and thus after T iterations, eT \u2264C\u03b4(k) r ln n n T\u22121 X i=0 1 \u2212m(\u03b4(k)) 5n !i + 1 \u2212m(\u03b4(k)) 5n !T e0 \u2272\u03b4(k)\u221a n ln n m(\u03b4(k)) . Since m(\u03b4(k)) \u2265m, eT < \u03b4(k)/4, so \u03b4(k+1) = \u03b4(k)/2 > 2et = 2|\u00b5(k+1) 0 \u2212\u00b5\u22c6|. Therefore, \u03b4(k) \u22652|\u00b5(k) 0 \u2212\u00b5\u22c6| for all k \u2264K. Since 1 \u2264\u03b4(K), at the end of stage K: eT \u2272\u03b4(K)\u221a n ln n m(\u03b4(K)) \u2272 \u221a n ln n m . This is |\u02c6 \u00b5 \u2212\u00b5\u22c6| \u2272 \u221a n ln n m . 8 \fLEARNING ENTANGLED SINGLE-SAMPLE GAUSSIANS IN THE SOS MODEL 4. Lower Bound To complement the upper bound, we also provide the following lower bound. Theorem 2 Suppose \u03c3(m) \u22641. \u2022 If m = \u2126(ln n) and m = O(n1/4), then there exist a family of instances and a distribution over these instances such that any estimator has expected error \u2126 \u0010\u0000 n m4 \u00011/2\u0011 . 
\u2022 For any arbitrarily small \u03f5 > 0, if m is between \u2126(n1/4) and O(n1\u2212\u03f5), then there exist a family of instances and a distribution over these instances such that any estimator has expected error \u2126 \u0010\u0000 n m4 \u00011/6\u0011 . Remark. The lower bound considers two ranges of m. In the \ufb01rst range, the bound is \u02dc \u2126(\u221an) at one end point m = \u0398(ln n), and is \u2126(1) at the other end point m = \u0398(n1/4). It decreases at a rate of 1/m2 as m increases in this range. In the second range, the bound is \u2126(1) at one end point m = \u0398(n1/4), and is \u2126(1/n1/2\u22122\u03f5/3) at the other end point m = \u0398(n1\u2212\u03f5) (for any arbitrarily small \u03f5 > 0). It decreases at a rate of 1/m2/3 as m increases, which is slower than that in the \ufb01rst range. Roughly speaking, the bound excludes the possibility of vanishing error in the \ufb01rst range while still allows that in the second range, and the transition point is m = \u0398(n1/4). Our result extends and improves the lower bound in Chierichetti et al. (2014). Their bound is \u2126 \u0010\u0000 n m4 \u00011/2\u0011 for m between \u2126(ln n) and o(\u221an). Our result extends the range of m by including the values between \u2126(n1/2) and O(n1\u2212\u03f5) (for any arbitrarily small \u03f5 > 0). It also improves their bound in the range between \u2126(n1/4) and o(n1/2), by a factor of \u2126 \u0012\u0010 m4 n \u00111/3\u0013 . The improvement is obtained by a tighten analysis in the second range of m, which is discussed below. 4.1. Proof of Theorem 2 Our proof follows the high-level idea of Chierichetti et al. (2014) but with a tightened analysis. We also consider the following distribution over a family of instances: \u03c3i\u2019s are i.i.d. sampled; with probability p, \u03c3i = \u03c3p, and with probability q = 1 \u2212p, \u03c3i = \u03c3q; \u00b5\u22c6is uniform over {+L, \u2212L}. Here, p = m/n, \u03c3p = 1, while \u03c3q, L are parameters to be chosen. The goal is then to choose \u03c3q, L (based on n, m), such that conditioned on \u00b5\u22c6= +L or \u00b5\u22c6= \u2212L, the other choice of mean has a higher likelihood with a constant probability. If this is true, then any estimator has an expected error \u2126(L) over the above distribution on the instances and the randomness of the sample points. When m large enough, the probability that \u03c3(m/2) > 1 is exponentially small. Then on the distribution over the instances conditioned on \u03c3(m/2) \u22641, the lower bound holds under the assumption \u03c3(m/2) \u22641. By changing the variable m to 2m, the theorem follows. We improve over Chierichetti et al. (2014) by noting that, roughly speaking, the requirement on \u03c3q when m = \u2126(n1/4) is more relaxed compared to that when m = O(n1/4). This allows us to set \u03c3q differently to get improved results and also over a more general range of m, as detailed below. Following the idea above, denote the likelihood of the mean being L as L+, and the likelihood of the mean being \u2212L as L\u2212. We will show that the log-likelihood ratio has suf\ufb01ciently large variances so can be negative or positive with constant probabilities. From now on, we condition on 9 \fLEARNING ENTANGLED SINGLE-SAMPLE GAUSSIANS IN THE SOS MODEL the true mean is L (the proof for the case with \u2212L is symmetric). Let Sp = {i : \u03c3i = \u03c3p} and Sq = {i : \u03c3i = \u03c3q}. 
De\ufb01ne Ni = p/\u03c3p q/\u03c3q exp \u0012 \u2212(xi \u2212L)2 2 \u0012 1 \u03c32 p \u22121 \u03c32 q \u0013\u0013 (8) Di = p/\u03c3p q/\u03c3q exp \u0012 \u2212(xi + L)2 2 \u0012 1 \u03c32 p \u22121 \u03c32 q \u0013\u0013 . (9) Then we have ln L+ L\u2212 = n X i=1 \u0012 ln 1 + Ni 1 + Di + 2L \u03c32 q xi \u0013 = X i\u2208Sp \u0012 ln 1 + Ni 1 + Di + 2L \u03c32 q xi \u0013 | {z } Xp + X i\u2208Sq \u0012 ln 1 + Ni 1 + Di + 2L \u03c32 q xi \u0013 | {z } Xq . We next bound Xq and Xp respectively. The road map is to show that Xq has suf\ufb01ciently large variances so can make the log-likelihood ratio negative with constant probability, shown via the Berry-Essen Theorem. This requires computing the moments, so we \ufb01rst approximate ln 1+Ni 1+Di via the Taylor expansion of the function ln(1+x), and then compute the moments of the approximation. When m = \u2126(n1/4), the likelihood of xi \u2208Sq is comparable to that of xi \u2208Sq, so their ratio (as in (8) or (9)) is in the same order as a constant. We thus use a tighter approximation for ln(1 + Ni) and ln(1 + Di) in the log-likelihood ratio, and improve over Chierichetti et al. (2014). Lemma 6 Suppose the mean is L, and q > Cqp, \u03c3q > C\u03c3\u03c3p, L < cL\u03c3q for suf\ufb01ciently large absolute constants Cq, C\u03c3 and a suf\ufb01ciently small absolute constant cL.1 Suppose p/\u03c3p q/\u03c3q < c\u03b1 for a suf\ufb01ciently small absolute constant c\u03b1 < 1. Let t be a positive integer. Let Vi = P2t\u22121 j=1 (\u22121)j+1(Nj i \u2212 Dj i )/j and Yi = 2L \u03c32 q xi + Vi. Then for i \u2208Sq, E[Yi] \u2272L2 \u03c32 q , E[Y 2 i ] \u2243p2/\u03c3p q2/\u03c3q min \u001a 1, L2 \u03c32 p \u001b + L2 \u03c32 q . And with probability at least 1\u2212n\u2212\u0398(1) \u2212exp (\u0398 (Uq)), \f \f \fXq \u2212P i\u2208Sq Yi \f \f \f \u2272Uq := \u0010 p/\u03c3p q/\u03c3q \u00112t \u03c3p \u03c3q n. Also, with probability at least 1 \u2212c for a suf\ufb01ciently small absolute constant c, \f \f \fXq \u2212P i\u2208Sq Yi \f \f \f \u2272 U \u2032 q := \u0010 p/\u03c3p q/\u03c3q \u00112t \u0010 \u03c3p \u03c3q n + \u221an \u0011 . Lemma 7 Under the same conditions as in Lemma 6, for i \u2208Sp, E[Yi] \u2272p/\u03c3p q/\u03c3q min \u001a 1, L2 \u03c32 p \u001b , E[Y 2 i ] \u2243L2\u03c32 p \u03c34 q + L4 \u03c34 q + \u0012p/\u03c3p q/\u03c3q \u00132 min \u001a 1, L2 \u03c32 p \u001b + p/\u03c3p q/\u03c3q L2 \u03c32 q . And with probability at least 1 \u2212c for a suf\ufb01ciently small absolute constant c, \f \f \fXp \u2212P i\u2208Sp Yi \f \f \f \u2272 Up := \u0010 p/\u03c3p q/\u03c3q \u00112t pn. 1. Cq is a constant chosen for the inequality q > Cqp. It doesn\u2019t depend on the value of q. Similar for C\u03c3, cL etc. 10 \fLEARNING ENTANGLED SINGLE-SAMPLE GAUSSIANS IN THE SOS MODEL Now de\ufb01ne Zi = Yi \u2212E[Yi], Z = 1 p M2|Sq| X i\u2208Sq Zi. To apply the Berry-Essen Theorem, we bound the \ufb01rst three moments of Zi. Clearly, E[Zi] = 0. Lemma 8 Under the same conditions as in Lemma 6, for i \u2208Sq, M2 := E[Z2 i ] \u2243p2/\u03c3p q2/\u03c3q min \u001a 1, L2 \u03c32 p \u001b + L2 \u03c32 q , M3 := E[|Zi|3] \u2272p3/\u03c32 p q3/\u03c32 q min \u001a 1, L2 \u03c32 p \u001b + p2/\u03c32 p q2\u03c3q L2 + p/\u03c3p q\u03c34 q L2(\u03c33 p + \u03c32 pL + \u03c3pL2) + L3 \u03c33 q . By the Berry-Essen Theorem, conditioned on Sq, the CDF F(t) of Z satis\ufb01es |F(t) \u2212\u03a6(t)| \u2272 M3 \u221a M3 2 |Sq| where \u03a6(t) is the CDF of a standard normal distribution. 
By the Chernoff\u2019s bound, with probability 1 \u2212n\u0398(1), |Sp| \u2243pn, |Sq| \u2243qn \u2243n. Assume this is true in the rest of the proof. Now we consider different cases for p and set \u03c3p, \u03c3q and L accordingly. Case 1. Suppose p \u2265\u2126(ln n/n) and p \u2264 cp n3/4 for some suf\ufb01ciently small constant cp > 0. Then set \u03c3p = 1, \u03c3q = C\u03c3/(p2n) and L = cL/(p2n3/2) \u2243\u03c3q/\u221an for some suf\ufb01ciently large constant C\u03c3 > 0 and some suf\ufb01ciently small constant cL > 0. Set t = 1. Then M2 \u2243p2\u03c3q + L2 \u03c32 q \u22431 n. M3 \u2272p3\u03c32 q + p2 L2 \u03c3q + p \u03c34 q L2(1 + L + L2) + L3 \u03c33 q \u2243 1 pn2 + 1 n3/2 . Then conditioned on |Sq| > qn/2 > n/4, we have M3 \u221a M3 2 |Sq| \u2272 1 pn = o(1). Then we have for constants CZ > 0 and cz > 0, Pr[Z \u2264\u2212CZ] = Pr hP i\u2208Sq Zi \u2264\u2212CZ p M2|Sq| i \u2265cz. So with a constant probability, \u2212P i\u2208Sq Zi \u2265CZ \u221aM2n \u2243CZ. We also have with probability 1 \u2212c for a suf\ufb01ciently small absolute constant c, X i\u2208Sq E[Yi] \u2272L2 \u03c32 q qn \u22431, Uq = \u0012p/\u03c3p q/\u03c3q \u00132 \u03c3p \u03c3q n \u2243p2/\u03c3p q2/\u03c3q n \u22431, Up = \u0012p/\u03c3p q/\u03c3q \u00132 pn \u22431 pn = o(1), X i\u2208Sp E[Yi] \u2272pnp/\u03c3p q/\u03c3q \u2243p2\u03c3qn \u22431, X i\u2208Sp E[Y 2 i ] \u2272pn L2\u03c32 p \u03c34 q + L4 \u03c34 q + \u0012p/\u03c3p q/\u03c3q \u00132 + p/\u03c3p q/\u03c3q L2 \u03c32 q ! 11 \fLEARNING ENTANGLED SINGLE-SAMPLE GAUSSIANS IN THE SOS MODEL \u2272pn \u0012 1 \u03c32 qn + 1 n2 + p2\u03c32 q + p\u03c3q n \u0013 \u22721 pn = o(1). Therefore, with a constant probability, ln L+ L\u2212is negative. The expected error E|\u02c6 \u00b5 \u2212\u00b5\u2217| of any estimator \u02c6 \u00b5 is \u2126(L) = \u2126(1/(p2n3/2)) = \u2126(\u221an/m2). Case 2. Suppose p \u2265 Cp n3/4 and p < cp n2/t for some suf\ufb01ciently large absolute constant Cp and suf\ufb01ciently small absolute constant cp. Then set \u03c3p = 1, \u03c3q = C\u03c3/p2/3 and L = cL/(p2/3n1/2) \u2243 \u03c3q/\u221an for some suf\ufb01ciently large constant C\u03c3 > 0 and some suf\ufb01ciently small constant cL > 0. Then M2 \u2243p2\u03c3qL2 + L2 \u03c32 q \u22431 n. M3 \u2272p3\u03c32 qL2 + p2 L2 \u03c3q + p \u03c34 q L2 + L3 \u03c33 q \u2243p1/3 n + 1 n3/2 . Then conditioned on |Sq| > qn/2 > n/4, we have M3 \u221a M3 2 |Sq| = o(1). Then we have for constants CZ > 0 and cz > 0, Pr[Z \u2264\u2212CZ] = Pr hP i\u2208Sq Zi \u2264\u2212CZ p M2|Sq| i \u2265cz. So with a constant probability, \u2212P i\u2208Sq Zi \u2265CZ \u221aM2n \u2243CZ. We also have with probability 1 \u2212c for a suf\ufb01ciently small absolute constant c, X i\u2208Sq E[Yi] \u2272L2 \u03c32 q qn \u22431, U \u2032 q = \u0012p/\u03c3p q/\u03c3q \u00132t \u0012\u03c3p \u03c3q n + \u221an \u0013 \u22721, Up = \u0012p/\u03c3p q/\u03c3q \u00132t pn \u22721, X i\u2208Sp E[Yi] \u2272pnp/\u03c3p q/\u03c3q L2 \u22431, X i\u2208Sp E[Y 2 i ] \u2272pn L2\u03c32 p \u03c34 q + L4 \u03c34 q + \u0012p/\u03c3p q/\u03c3q \u00132 L2 + p/\u03c3p q/\u03c3q L2 \u03c32 q ! \u2272pn \u0012 1 \u03c32 qn + 1 n2 + p2\u03c32 qL2 + p\u03c3q n \u0013 \u2272p1/3 = o(1). Therefore, with a constant probability, ln L+ L\u2212is negative. The expected error E|\u02c6 \u00b5 \u2212\u00b5\u2217| of any estimator \u02c6 \u00b5 is \u2126(L) = \u2126(1/(p2/3n1/2)) = \u2126(n1/6/m2/3). 5. 
Conclusion This work considered mean estimation in the setting of entangled single-sampled Gaussians where given one sample from each of n Gaussians with a common mean but different variances, the goal is to learn the mean. It studied the subset-of-signals model where an unknown subset of m variances are bounded, and proved upper and lower bounds, which are summarized in Figure 1. A natural future direction is to close the gap between the upper bound and the lower bound. 12 \fLEARNING ENTANGLED SINGLE-SAMPLE GAUSSIANS IN THE SOS MODEL Acknowledgement This work was supported in part by FA9550-18-1-0166. The authors would also like to acknowledge the support provided by the University of Wisconsin-Madison Of\ufb01ce of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation." + }, + { + "url": "http://arxiv.org/abs/2002.09812v1", + "title": "Sketching Transformed Matrices with Applications to Natural Language Processing", + "abstract": "Suppose we are given a large matrix $A=(a_{i,j})$ that cannot be stored in\nmemory but is in a disk or is presented in a data stream. However, we need to\ncompute a matrix decomposition of the entry-wisely transformed matrix,\n$f(A):=(f(a_{i,j}))$ for some function $f$. Is it possible to do it in a space\nefficient way? Many machine learning applications indeed need to deal with such\nlarge transformed matrices, for example word embedding method in NLP needs to\nwork with the pointwise mutual information (PMI) matrix, while the entrywise\ntransformation makes it difficult to apply known linear algebraic tools.\nExisting approaches for this problem either need to store the whole matrix and\nperform the entry-wise transformation afterwards, which is space consuming or\ninfeasible, or need to redesign the learning method, which is application\nspecific and requires substantial remodeling.\n In this paper, we first propose a space-efficient sketching algorithm for\ncomputing the product of a given small matrix with the transformed matrix. It\nworks for a general family of transformations with provable small error bounds\nand thus can be used as a primitive in downstream learning tasks. We then apply\nthis primitive to a concrete application: low-rank approximation. We show that\nour approach obtains small error and is efficient in both space and time. We\ncomplement our theoretical results with experiments on synthetic and real data.", + "authors": "Yingyu Liang, Zhao Song, Mengdi Wang, Lin F. Yang, Xin Yang", + "published": "2020-02-23", + "updated": "2020-02-23", + "primary_cat": "cs.DS", + "cats": [ + "cs.DS", + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction 3 2 Related Work 4 3 Preliminaries 5 4 Sketch for f-Matrix Product 6 4.1 Sketch for f-Vector Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 4.2 Sketch log(| \u00b7 | + 1)-Vector Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 4.3 From Vector Product Sketch to Matrix Product Sketch . . . . . . . . . . . . . . . . . 8 5 Application to Low Rank Approximation 10 6 Experiments 10 6.1 Synthetic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 6.2 Real Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 7 Conclusions 13 A Preliminaries 21 A.1 CountSketch and Gaussian Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . 21 A.2 Pythagorean Theorem, matrix form . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
21 A.3 Adaptive Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 B Additional Results for Sketching f-Matrix Product 22 B.1 Proofs of Sketch log(| \u00b7 | + 1)-Vector Product . . . . . . . . . . . . . . . . . . . . . . . 22 B.2 Sketch p | \u00b7 |-Vector Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 B.3 More General Functions f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 B.4 From Vector Product Sketch to Matrix Product Sketch . . . . . . . . . . . . . . . . . 24 C Application in Low Rank Approximations 25 C.1 Leverage score and its application on samping . . . . . . . . . . . . . . . . . . . . . . 25 C.2 Proof of Theorem 5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 C.3 Sampling by Generalized Leverage Scores . . . . . . . . . . . . . . . . . . . . . . . . 26 C.4 Adaptive Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 C.5 Computing Approximation Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 C.6 Main result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 D Examples Demonstrating the Di\ufb00erences Between A and log(A) 33 D.1 rank(A) \u226brank(log A) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 D.2 rank(A) \u226arank(log A) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 E Application of f-Matrix Product Sketch in Linear Regression 34 F Tools 36 1 \fG Complete Experimental Results 37 G.1 Synthetic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 G.2 Real Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 G.3 The E\ufb00ect of the Sample Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 2 \f1 Introduction Matrix datasets are ubiquitous in machine learning. However, many matrix datasets are usually too large to \ufb01t in the computer memory in large scale applications, e.g., image clustering [PPP06], natural language processing [MSA+11], network analysis [MS04, GL16], and recommendation systems [KBV09]. Many techniques have been proposed to perform the learning tasks on these data in an e\ufb03cient way; see, e.g., [Mah11, Woo14, ZWSP08, GNHS11] and the references therein. However, challenges arise when the learning task is performed on an entrywise transformation of the matrix, which prevents applying many linear algebraic techniques. Furthermore, due to large sizes, these matrices are often constructed by entrywise updates, i.e., the entries of the matrix are constructed from a stream of updates where each update adds some value on some entry. More speci\ufb01cally, there is a very large underlying matrix A (that cannot be stored in memory easily) whose entries are constructed by a data stream where each item in the stream is of the form (i, j, \u2206) with \u2206\u2208{\u00b11} representing the update Ai,j \u2190Ai,j + \u2206. The downstream learning task (e.g., low rank approximation), however, needs to take input as matrix M where Mi,j = f(Ai,j) for some transformation function f (e.g., f(x) = log(|x| + 1)). A concrete example is word embedding in natural language processing (NLP). Word embedding methods aim to embed each word to a vector space. It becomes a basic building block in many modern NLP systems. 
Many of these systems achieve the state of the art performance on various tasks via word embedding [PSM14, MSC+13, WSC+16]. A basic routine in word embedding is to explicitly or implicitly perform low rank approximation of an entry-wise transformed matrix [LG14, LZM15]. For instance, the transformation is to apply a log likelihood function on each entry. The matrix itself is the so-called co-occurrence count matrix, which can be constructed by scanning the text corpus, e.g., the entire Wikipedia database. This matrix is usually of size millions by millions. Similar examples include regressions on huge accumulated datasets in economics [DVF13, Var14], where di\ufb00erent transformations on covariates are often used to reduce biases. Other examples include visual feature extraction [BPL10], kernel methods [RR08], and M-estimators [Zha97]. These large scale applications make it impractical or hard to implement existing methods, which keep the matrix in memory. Some other approaches exploit the problem structure to get around the huge space requirement. For instance, some of them propose sequential models of the data, and design online algorithms for computing the embeddings (e.g.,[MSC+13, BGJM16]). These methods, however, are more task-speci\ufb01c and cannot be applied to other tasks involving more general entrywise matrix transformations. In this paper, we show that learning based on transformed large matrices is possible even when storing such a matrix is not feasible. Our main contributions are: \u2022 For a general class of transformation function f, we provide an e\ufb03cient one-pass matrix-product sketch for computing the product of a given small matrix B with the transformed matrix f(A) with provable error bounds. This algorithm uses space at most the size of the output. The method assumes no statistical model about the updates and can handle a general family of transformations. In particular, these transformations include logarithmic functions and small degree polynomials. This method can also be used as building blocks for downstream tasks: any algorithm requires access to the transformed matrix via a matrix product can apply our algorithm to obtain space saving. \u2022 We demonstrate the application of our algorithm in a concrete task: low rank approximation. To the best of our knowledge, our algorithm is the \ufb01rst one that is able to compute low rank approximation of large matrices under entrywise transformations. We plug in our matrix product sketch into known algorithms as black boxes. We provide theoretical analysis on the 3 \ftradeo\ufb00between the space and the accuracy of these algorithms. We show that our algorithms are space e\ufb03cient and almost match the accuracy of using the full matrix. These theoretical guarantees are complemented by experiments for low rank approximation on synthetic and real data. The empirical results show that our algorithm can reduce the space usage by orders of magnitude while the error is almost the same as the optimum. We show that our algorithms beat the baseline of using uniform sampling on columns of the transformed matrix by a large margin. We also provide results on linear regression in the appendix. Road Map. We provide de\ufb01nitions and basic concepts in Section 3. In Section 4, we introduce our basic routine called the matrix product sketch. We use our sketching algorithms to compute the low rank approximation of a transformed matrix in Section 5, and the application on linear regression is in Appendix E. 
In Section 6, we use numeric experiments to justify our approach. The appendix provides a list of related works, the complete proofs, details of the experiments, and also additional theoretical and empirical results. 2 Related Work There exists a large body of work on fast algorithms for large scale matrices. Some are based on randomized matrix algorithms and use techniques like sampling and sketching; see [Mah11, Woo14] and the reference therein. Some others are based on optimization algorithms like Alternating Least Square and Stochastic Gradient Descent and their variants; see [ZWSP08, GNHS11] for some examples. However, most existing approaches do not apply to the settings considered in this paper. The closest work is [WZ16], which considers low rank approximation of the element-wise transformation of the sum of several matrices located in di\ufb00erent machines. This distributed setting is di\ufb00erent from our setting and na\u00efvely applying their algorithm will lead to a large space cost. Furthermore, our sketching method can be applied to learning tasks beyond low rank approximation. Our work is built on techniques from numerical linear algebra and streaming data analysis in the recent decade. There are numerous research works along this line. Here we list a few but far from exhaustive. Low-rank approximation or matrix factorization of a matrix is an important task in numerical linear algebra. In this problem, we are given a n\u00d7d matrix A and a parameter k, the goal is to \ufb01nd a rank-k matrix b A so as to minimize the residual error \u2225b A\u2212A\u22252 F , where the Frobenius norm is de\ufb01ned as \u2225A\u2225F = (Pn i=1 Pd j=1 A2 i,j) 1 2. Note that an optimal b A provides a good estimation to the leading eigenspace of the matrix A. Classical way of speeding up low-rank approximation via sketching requires showing two properties for sketching matrix: subspace embedding [Sar06, LWW20, WW19] and approximate matrix product [NN13, KN14]. Low-rank approximation algorithm via combining those two properties has been presented in several papers [CW13, MM13, SWZ19b]. The classical sketching idea is easy to be made a streaming algorithm, since we usually use linear sketching matrix, which we don\u2019t need to explicitly write down during the stream. However none of these methods are applicable to our setting, which is much harder than the classical streaming low-rank approximation problem. This is mainly because the transformation f that acts on an the matrix A completely destroyes the linear algebraic property of matrix A; see Appendix D for some discussions. The storage of A can also be indefeasibly large to be stored and apply the above mentioned methods. Streaming algorithms have gained great progress since its \ufb01rst systematic study by [AMS99]. Classic streaming problems ask how to estimate a function over a vector, which is under streaming updates. For instance, [AMS99] approximates \u2225v\u2225p while observing a sequence of updates to the coordinates of v. The usual assumption is that v \u2208Rn and n is so large that v cannot be stored 4 \fin memory easily. Since [AMS99], a line of research works (e.g. [Ind00, IW05, BYKS02, BKSV14, KNW10]) gradually improve the algorithm and obtain nearly optimal upper and lower bounds. Very recently, [BO10b, BO10a, BVWY17] attempts to handle a more general set of functions. [BVWY17] gives a nearly optimal characterization of this problem. 
[BBC+17] studies a more general setting, i.e., functions that do not have a summation structure f : Rn \u2192R. They give optimal characterization for streaming all symmetric norms. Given theses advances, none of them solves our problem directly since a streaming estimation only gives a value of vector, that is unrelated to the matrix formulation of the input. 3 Preliminaries Notation. [n] denotes the set {1, 2, \u00b7 \u00b7 \u00b7 , n}. For a vector x \u2208Rn, |x| \u2208Rn denotes a vector whose i-th entry is |xi|. For a matrix A \u2208Rn\u00d7n, let \u2225A\u2225denote its spectral norm, \u03c3i(A) to denote its i-th largest singular value, and [A]k denote its best rank-k approximation. Also let det(A) denote its determinant when A is square. For a function f, M = f(A) means entrywise transformation Mij = f(Aij). We also denote Ai\u2217as the i-th row of matrix A and A\u2217j as its j-th column. Problem De\ufb01nition. The problem of interests is de\ufb01ned as follows. Suppose we have a underlying large matrix A = (Ai,j) \u2208Rn\u00d7n initialized as a zero matrix.1 Now, we have observed a sequence of updates of the form \u27e8(i1, j1, \u22061), (i2, j2, \u22062), . . . , (im, jm, \u2206m)\u27e9for some m = poly(n), it, jt \u2208[n] and \u2206t \u2208{\u22121, 1}. At the t-th update, we are updating the underlying matrix by ait,jt \u2190ait,jt +\u2206t. We assume that m is bounded by poly(n). Note that the assumptions of integer updates is without loss of generality. For instance, if the updates is not an integer, we can round them to a speci\ufb01ed precision \u01eb > 0 and then scale them to integers. The polynomially bounded length is also a usual and reasonable assumption. At the end of the stream, one would like to perform some learning task (such as low-rank approximation) on the matrix M = f(A) for some \ufb01xed function f : R \u2192R and would like to do so using as small space as possible, in particular, avoid storing the large matrix A. Some examples of the transformation functions are f(x) = log(|x| + 1), or f(x) = |x|\u03b1, \u2200\u03b1 \u22650. (1) Functions of this form are important in machine learning. For example, f(x) = log(|x| + 1) corresponds to the log likelihood function and f(x) = |x|\u03b1 corresponds to a general family of statistic models or feature expansion. In this paper we would like to design a space e\ufb03cient method for approximating Z = f(A)B for a given matrix B, where f(A) \u2208Rn\u00d7n and B \u2208Rn\u00d7k for some integer n and k with k \u226an. We would like to design algorithms that uses space e O(nk) instead of e O(n2). This can then be used as a plug-in primitive and turn learning algorithms into space e\ufb03cient ones if they only access f(A) by matrix product with small B. More formally, Problem 3.1 (approximate transformed matrix and matrix product). Given a \ufb01xed matrix B and function f : R \u2192R, design an algorithm that makes a single pass over an update stream of a matrix A, output an approximated value of f(A)B with high probability. We require the algorithm to use as small space as possible (without counting the space of B). We call our method the sketch for f-matrix product. We then demonstrate its e\ufb00ectiveness in the applications of linear regression and low rank approximation on M = f(A). Linear regression is to minimize \u2225Mx \u2212b\u22252 2, and low rank approximation is de\ufb01ned as follows. 1Our method also applies to non-square A; we consider square matrices for simplicity. 
5 \fProblem 3.2 (low-rank approximation). Given integers k \u2264n, an n\u00d7n matrix M, two parameters \u01eb, \u03b4 > 0, the goal is to output an orthonormal n \u00d7 k matrix L such that \u2225LL\u22a4M \u2212M\u22252 F \u2264(1 + \u01eb)\u2225M \u2212[M]k\u22252 F + \u03b4. where [M]k = arg minrank \u2212k M\u2032 \u2225M \u2212M\u2032\u22252 F . 4 Sketch for f-Matrix Product Our goal in this section is to compute the matrix product f(A)B where B is given and A is under updating or can only be read entry by entry. We observe that each entry of Z = f(A)B can be written as a vector product: Zi,j = \u27e8f(A)i\u2217, B\u2217j\u27e9. Thus, we will \ufb01rst design a primitive to compute each Zi,j using small space. Running a primitive in parallel for each entry Zi,j results in our full algorithm for computing the matrix product. In the following sections, we will \ufb01rst introduce the vector sketch problem and present our vector product primitives for di\ufb00erent functions f. Lastly, we will combine them to form a uni\ufb01ed algorithm for matrix product. 4.1 Sketch for f-Vector Product Recall that for given vectors x, y \u2208Rn, the inner product is de\ufb01ned as \u27e8x, y\u27e9= Pn i=1 xiyi. In our setting, we are also given a function f : R \u2192R and a vector x \u2208Rn where the storage of x is free, but not directly given y. The f-vector product is de\ufb01ned as \u27e8x, f(y)\u27e9, where f is applied to y coordinate-wisely. The updates to y is a stream, i.e., we observe a sequence of integer pairs (zt, \u2206t) for t = 1, 2, . . . , m, where each zt \u2208[n] and \u2206t \u2208{\u22121, 1}. Thus, we initialize y as a y(0) \u21900, a zero-vector, and at time t, the update to y is described by y(t) \u2190y(t\u22121) + \u2206zt \u00b7 ezt where ezt is the standard unit vector with only the zt-th coordinate non-zero. Our goal is to approximate \u27e8x, f(y)\u27e9 without storing y, where x is given to the algorithm without storage cost. Formally, we de\ufb01ne the following problem. Problem 4.1 (approximate transformed vector and vector inner product). Given a \ufb01xed vector x and function f : R \u2192R, design an algorithm that makes a single pass over an update stream of a vector y, output an approximated value of \u27e8f(y), x\u27e9with high probability. We require the algorithm to use as small space as possible (excluding the space of x). We note that a na\u00efve algorithm would be storing the vector y as a whole. However such an algorithm is not feasible when n is large or the demand of computing such inner products is too high (e.g., in our matrix applications for computing Z = f(A)B \u2208Rn\u00d7k, each entry of Z is an inner product. If each inner product requires space n, then \ufb01nal space can be O(n2k) which is prohibitively high.). In Section 4.2 below, we design an algorithm that accomplish this task for function f(y) = log(|y| + 1), which only uses e O(1) bits of memory. In Section B.3, we present a general framework that works for a general family of functions f with nearly optimal space complexity. 4.2 Sketch log(| \u00b7 | + 1)-Vector Product Recall that, when f(\u00b7) = log(|\u00b7|+1), we are designing an algorithm for computing the inner product \u27e8log(|y| + 1), x\u27e9, where x, y \u2208Rn are two vectors, x is given to the algorithm for free and y is under updating. 
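To fix the semantics of the update stream and of the quantity being estimated, the following toy Python snippet computes $\langle x, \log(|y| + 1) \rangle$ exactly by materializing y; this is precisely the O(n)-space baseline that the sketch described next avoids. The names are ours and purely illustrative.

import numpy as np

def exact_log_inner_product(x, stream):
    # Baseline: store y in full, apply the updates (i, delta), then compute <x, log(|y|+1)>.
    # This uses O(n) space per inner product, which the streaming sketch is designed to avoid.
    y = np.zeros(len(x))
    for i, delta in stream:        # each update is (coordinate, +1 or -1)
        y[i] += delta
    return float(np.dot(x, np.log(np.abs(y) + 1.0)))

x = np.array([1.0, 0.0, -1.0, 2.0, 0.0, 0.0, 1.0, 0.5])
stream = [(0, +1), (0, +1), (3, -1), (3, -1), (3, -1), (6, +1)]
value = exact_log_inner_product(x, stream)   # = 1*log 3 + 2*log 4 + 1*log 2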
Our full algorithm is Algorithm 1, which is composed of 3 sub-procedures: procedure Initialize is called on initialization with given vector x, procedure Update is called when we go over the update stream of the vector y, and procedure Query is called at the end to report the 6 \fAlgorithm 1 1: data structure LogSum \u22b2Theorem 4.2 2: procedure Initialize(x) 3: \u03b3 \u2190\u01eb\u22122 poly(log n/\u03b4) 4: t \u2190\u0398(log n), pj \u21902\u2212j \u00b7 \u03b3, \u2200j \u2208[t] 5: for j = 1 \u2192t do 6: Sample a log n-wise independent hash function hj : [n] \u2192{0, 1} such that \u2200i \u2208[n] : Pr[hj(i) = 1] = min(pj, 1). 7: Sample a K-set structure KSetj with error parameter \u0398(\u03b4/t) and memory budget \u01eb\u22122 poly(log n/\u03b4) 8: end for 9: end procedure 10: procedure Update(a) \u22b2a \u2208[n] 11: for j = 1 \u2192t do 12: if hj(a) = 1 and xa \u0338= 0 then 13: KSetj.update(a) 14: end if 15: end for 16: end procedure 17: procedure Query() 18: Pick the largest j such that KSetj does not return \u201cFail\u201d 19: Let v be the output of KSetj, denote Sj = supp(v) 20: return 2j P i\u2208Sj xi log(|vi| + 1) 21: end procedure 22: end data structure answer. The detailed analysis of Algorithm 1, can be found in Appendix B. We here sketch the high level ideas for how it works. For ease of representation, we consider x has no zero coordinates, since otherwise we can simply ignore these coordinates and change our universe [n] to supp(x) accordingly. Our algorithm is originated from [BO10b] but it is much simpli\ufb01ed in this paper. From a high level, our algorithm can be viewed as an \u21130-sampler, namely, sample uniformly at random from the support of an updating vector y. Note that the support of y is changing over time. Thus it is non-trivial to maintain a uniform sample while using only small space. We also note that it is necessary to sample coordinates from the support of y, since otherwise we can always construct worst-case examples for algorithms that sample coordinates uniformly from [n]. We design our algorithm thus by maintaining independently \u0398(log n) many sub-vectors of the vector y. Each sub-vector is generated by sampling a set of coordinates uniformly from [n] with geometrically decreasing probabilities. For instance, in our algorithm, we \ufb01rst generate \u0398(log n) many hash functions, each de\ufb01nes a set Sj \u2282[n]. For each i \u2208[n], we demand that i \u2208Sj with probability 2\u2212j. Thus if the size of the support of y is of order \u0398(2j), then we are expected to sample \u0398(1) samples of y using the set Sj. We now describe how to maintain these sampled coordinates in memory. For convinience we assume \u03b3 = 1 in line 3 in Algorithm 1. For the case of insertion-only stream (once a coordinate of y becomes larger than 0, it stays so), maintaining the sub-vector ySj is a trivial task since the number of coordinates of ySj is expected to be O(1). However, for j\u2032 \u2264j, the sub-vectors ySj\u2032s contain too many coordinates. We handle this quite straightforwardly: if any of them exceeds our memory budget, we just ignore them. For the case of general stream, in which coordinates can be 0 even they were non-zero at some 7 \ftime-point. We will be using the K-set data structure presented in [Gan07]. This data structure supports insertion and deletion of data points and can maintain the samples only if the number of \ufb01nal samples is under the memory budget. 
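As a rough illustration of this sampling-and-rescaling idea (but not of the K-set machinery itself), the following insertion-only Python sketch keeps, at level j, only the coordinates selected with probability 2^{-j} and rescales the retained partial sum by 2^j at query time. The class and variable names are hypothetical; deletions, limited-independence hashing, and overflow handling are deliberately omitted, and the explicit keep matrix (which by itself costs O(n log n) space) merely stands in for the hash functions.

import numpy as np
from collections import defaultdict

class SimpleLogSum:
    # Simplified, insertion-only illustration of the subsampling idea behind LogSum:
    # level j keeps coordinate i with probability 2^{-j}; a query from level j
    # rescales the retained partial sum by 2^j. The actual data structure replaces
    # the plain dictionaries with K-set structures so that deletions and memory
    # overflows are handled.
    def __init__(self, x, budget=64, seed=0):
        self.x = np.asarray(x, dtype=float)
        self.budget = budget
        self.levels = int(np.ceil(np.log2(len(self.x)))) + 1
        rng = np.random.default_rng(seed)
        probs = 2.0 ** (-np.arange(self.levels, dtype=float))
        self.keep = rng.random((self.levels, len(self.x))) < probs[:, None]
        self.counts = [defaultdict(int) for _ in range(self.levels)]

    def update(self, i):
        # One +1 update to coordinate i of the (never stored) vector y.
        if self.x[i] == 0:
            return
        for j in range(self.levels):
            if self.keep[j, i]:
                self.counts[j][i] += 1

    def query(self):
        # Use the smallest level whose retained sample fits the memory budget.
        for j in range(self.levels):
            if len(self.counts[j]) <= self.budget:
                partial = sum(self.x[i] * np.log(c + 1.0) for i, c in self.counts[j].items())
                return (2.0 ** j) * partial
        return 0.0

A query from level j is unbiased in this toy version because each coordinate survives to that level with probability exactly 2^{-j}, and a surviving coordinate accumulates all of its updates.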
The formal guarantee of the K-set data structure presented in Theorem B.1. Suppose now we have collected su\ufb03ciently many samples from the support of the vector y. Suppose the set of samples is collected using set Sj. We can have an empirical estimator for the inner product as 2j P i\u2208Sj xi log(|yi| + 1). Notice that this estimator is unbiased. Also since the variance of the estimator is bounded by X i 2jx2 i log2(|yi| + 1) = O(1) \u00b7 \u2225x\u22252 \u221e\u00b7 X i log2(|yi| + 1) \u00b7 log2 m, where m is the length of the stream and is usually assumed to be of oder poly(n), thus we only need poly log n samples to obtain an accurate estimation. We summarize the main guarantee in the following theorem, while the formal proof can be found in Section B. Theorem 4.2 (approximate inner product of transformed vector and vector). Suppose vector x \u2208 Rn is given without memory cost. There exists a streaming algorithm (data structure LogSum in Algorithm 1) that makes a single pass over the stream updates to a vector y \u2208Rn and outputs Z \u2208R, such that, with probability at least 1 \u2212\u03b4, |Z \u2212\u27e8x, log(|y| + 1)\u27e9| \u2264\u01eb \u00b7 \u2225x\u2225\u221e\u00b7 n X i=1 log(|yi| + 1). The algorithm uses space O(\u01eb\u22122 poly(log(n/\u03b4))) (excluding the space of x) has a poly(log n, 1/\u01eb) query time. Remark 4.3. We also note that our algorithm naturally works for f(y) := logc(|y| + 1) for any constant c. To modify our algorithm, we only need to keep slightly larger space and change the \ufb01nal estimation to be 2j P i\u2208Sj xi logc(|vi| + 1). It also enjoys the same relative error guarantee in Theorem 4.2. 4.3 From Vector Product Sketch to Matrix Product Sketch With the f-inner product sketch tools established, we are now ready to present the result for sketching the matrix product, Z = f(A)B. Notice that each entry Zi,j := \u27e8f(Ai), Bj\u27e9is an inner product. Thus our algorithm for the matrix sketch is simply maintaining an f-inner product sketch for each Zi,j. In our algorithm, we assume that matrix B is given to the algorithm for free. Thus, if B \u2208Rn\u00d7k for some k \u226an, we only need to keep up to e O(nk) vector product sketches, which cost in total e O(nk) words of space. For the ease of representation, we present our guarantee for matrix product for f(z) := logc(|z| + 1) for some c or for f(z) = zp for 0 \u2264p \u22642, and for matrix B \u2208{\u22121, 0, 1}n\u00d7k. Our results can be generalized to a more general set of functions and matrix B using the results presented in Section B.3. The proof of the following theorem is a straightforward application of Theorem 4.2 and B.2. 8 \fAlgorithm 2 Low rank approximation of M = log(|A| + 1) 1: procedure LowRankApprox(A, k, \u01eb) \u22b2Theorem 5.1 2: s \u2190O(k log k) 3: d1 \u2190O(k log2 k) 4: d2 \u2190O(k/\u01eb) 5: \u03b7 \u2190O(\u01eb\u221ad1 + \u01eb2d1) 6: \u22b2Step 1 : Sampling according to generalized leverage scores of M 7: Let S be the CountSketch (SparesJL) matrix of size s \u00d7 n \u22b2Appendix A.1 8: Let S+ and S\u2212be its positive and negative parts of S. 9: R \u2190[S+; S\u2212] 10: e E \u2190LogSum(RM) \u22b2\u2225e Ei\u22252 2 = (1 \u00b1 \u01eb)\u2225(RM)i\u22252, \u2200i 11: Sample a set P of d1 columns of M according to the leverage score of e E. 
\u22b2De\ufb01nition C.2 12: \u22b2Step 2 : Adaptive sampling 13: [Qp, \u00b7] \u2190QRFactorization(P) \u22b2Qp is the basis vectors for P 14: e \u0393 \u2190LogSum(Q\u22a4 p M) \u22b2\u2225e \u0393i\u22252 2 = (1 \u00b1 \u01eb)\u2225(Q\u22a4 p M)i\u22252 2, \u2200i 15: e z \u2190LogSum(M) \u22b2e zi = (1 \u00b1 \u01eb)\u2225Mi\u22252 2, \u2200i 16: e si \u2190e zi \u2212\u2225e \u0393i\u22252 2 17: Sample a set e Y of d2 columns from M according to pi = max(e si, \u03b7e zi) 18: Y \u2190e Y \u222aP 19: \u22b2Step 3 : Computing approximation solutions 20: [Qy, \u00b7] \u2190QRFactorization(Y ) \u22b2Qy is the basis vectors for Y 21: e \u03a0 \u2190LogSum(Q\u22a4 y M) \u22b2\u2225e \u03a0i\u22252 2 = (1 \u00b1 \u01eb2)\u2225(Q\u22a4 y M)i\u22252 2, \u2200i 22: Compute the top k singular vectors f W of e \u03a0 23: L \u2190Qyf W 24: return L 25: end procedure Theorem 4.4 (approximate each coordinate of the transformed matrix). Given a matrix B \u2208 {\u22121, 0, 1}n\u00d7k, and a function f(x) := logc(|x| + 1) for some c or f(x) := |x|p for some 0 \u2264p \u22642, then there exists a one-pass streaming algorithm that makes a single pass over the stream updates to an underlying matrix A \u2208Rn and outputs a matrix b Z, such that, with probability at least 1 \u2212\u03b4, for all i, j, | b Zi,j \u2212Zi,j| \u2264\u01eb n X j\u2032=1 f(|Ai,j\u2032|). The algorithm uses space \u01eb\u22122nk poly(log(n/\u03b4)) and has an nk poly(log n, 1/\u01eb) query time. Remark 4.5. We note that our sketch in the last theorem can be easily used to approximate the 2-norm of each row of the matrix f(A). In this case, we simply choose B \u2208Rn\u00d71 as the all-1 vector and change f(\u00b7) to be f 2(\u00b7). For f(x) = poly log(|x| + 1) or f(x) = |x|p with 0 \u2264p \u22641, it can be easily verify that our output is a (1 \u00b1 \u01eb) approximation to f 2(A) \u00b7 1, hence the approximation of 2-norm squared of each row of f(A). 9 \f5 Application to Low Rank Approximation This section considers the concrete application of rank-k approximation for M where Mi,j = log(|Ai,j| + 1), i.e., \ufb01nding k orthonormal vectors L such that \u2225M \u2212LL\u22a4M\u2225F is minimized. Our algorithm for rank-k approximation is presented in Algorithm 2. Low rank approximation for other functions f follows the same algorithm and similar analysis. There exists a large body of work for low rank approximation (see, e.g., [HMT11, DMIMW12, Woo14, CW13, MM13, NN13, CW15, RSW16, SWZ17, CGK+17, SWZ18, BW18, KPRW19, SWZ19a, SWZ19b, SWZ19c, Son19, BBB+19, DJS+19, BCW19, IVWW19, BWZ19] and references therein) but most of them are designed for the case without transformation and thus cannot be directly applied. As mentioned in previous sections, if an algorithm only accesses the transformed matrix via a matrix product, plugging in our sketching method leads to a suitable algorithm. We design an algorithm that applies generalized leverage score sampling approach [DMIMW12, BLS+16] for low-rank approximation. Leverage score sampling is a non-oblivious sketching technique that is widely used in numerical linear algebra and has been successfully applied to speed up di\ufb00erent problems such as linear regression [CW13, PSW17, AKK+17, SWZ19b, DSWY19], row sampling [SS11, LMP13], spectral approximation [CLM+15], low rank approximation [BW14, SWZ17, SWZ19b], cutting plane methods [Vai89, LSW15, JLSW20], linear programming [BLSS20], computing John Ellipsoid [CCLY19]. 
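Before continuing, the following is an idealized numpy rendering of the three steps of Algorithm 2, in which every LogSum call is replaced by an exact computation; it illustrates only the sampling logic, not the small-space streaming implementation, and the constants and helper names are ours.

import numpy as np

def countsketch(s, n, rng):
    # CountSketch matrix: one random +/-1 entry per column
    S = np.zeros((s, n))
    S[rng.integers(0, s, size=n), np.arange(n)] = rng.choice([-1.0, 1.0], size=n)
    return S

def column_leverage_scores(E):
    # leverage score of column i of E: E_i^T (E E^T)^+ E_i
    G = np.linalg.pinv(E @ E.T)
    return np.clip(np.einsum('ij,ik,kj->j', E, G, E), 0.0, None)

def low_rank_approx_idealized(A, k, eps, rng=np.random.default_rng(0)):
    M = np.log(np.abs(A) + 1.0)
    n = M.shape[1]
    s, d1 = 4 * k, min(n, 8 * k)                          # stand-ins for O(k log k), O(k log^2 k)
    d2 = min(n, int(np.ceil(4 * k / eps)))                # stand-in for O(k / eps)
    eta = eps * np.sqrt(d1) + (eps ** 2) * d1

    # Step 1: sample d1 columns of M according to generalized leverage scores of R M
    S = countsketch(s, M.shape[0], rng)
    R = np.vstack([np.maximum(S, 0.0), np.maximum(-S, 0.0)])   # positive and negative parts of S
    tau = column_leverage_scores(R @ M) + 1e-12
    P_idx = rng.choice(n, size=d1, replace=False, p=tau / tau.sum())

    # Step 2: adaptive sampling by residual column norms after projecting on span(P)
    Qp, _ = np.linalg.qr(M[:, P_idx])
    z = np.sum(M ** 2, axis=0)                            # ||M_i||^2 for each column i
    res = z - np.sum((Qp.T @ M) ** 2, axis=0)
    prob = np.maximum(res, eta * z)
    Y_idx = np.union1d(rng.choice(n, size=d2, replace=False, p=prob / prob.sum()), P_idx)

    # Step 3: project M on span(Y) and keep its top-k singular directions
    Qy, _ = np.linalg.qr(M[:, Y_idx])
    W, _, _ = np.linalg.svd(Qy.T @ M, full_matrices=False)
    return Qy @ W[:, :k]                                  # L with orthonormal columns; L L^T M approximates M

In the actual streaming algorithm, the quantities computed exactly here (the column norms of RM, of Qp^T M, and of M, and the projection Qy^T M) are the ones obtained, up to 1 +/- eps factors, through LogSum sketches, which is what keeps the space near-linear in n.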
From the perspective of graph problems, leverage score is closely related to random spanning trees [Sch18, KS18], graph sparsification, and Laplacian system solvers [ST04, SS11, BSS12]. Readers may refer to Appendix C.1 for a more detailed discussion of leverage score sampling. On a high level, we would like to sample columns of the matrix M in R^{n x n} according to its leverage scores. It turns out that it is sufficient to use the leverage scores of SM, where S is a sketching matrix. We apply Algorithm 1 to do so and obtain the sampled set P (Step 1). We then apply the technique of adaptive sampling to refine the sampling and obtain Y (Step 2), so that we have better control over the rank, and finally compute the solution using Y by taking a projection and computing singular vectors (Step 3). A detailed description and analysis of Algorithm 2 can be found in Appendix C. Overall we have the following guarantee. Theorem 5.1 (low-rank approximation). For any parameter \epsilon \in (0, 1) and integer k \geq 1, there is an algorithm (procedure LowRankApprox in Algorithm 2) that runs in \tilde{O}(n) \cdot k^3 \cdot poly(1/\epsilon) time, takes \tilde{O}(n) \cdot k^3/\epsilon^2 space, and outputs a matrix L \in R^{n \times k} such that \|LL^\top M - M\|_F^2 \leq 10 \cdot \|M - [M]_k\|_F^2 + O(\epsilon^2 / (k^3 \log^5 k)) \cdot \|M\|_{1,2}^2 holds with probability at least 9/10, where \|M\|_{1,2} = (\sum_j \|M_{*,j}\|_1^2)^{1/2}. For a large n and fixed \epsilon, our algorithm uses much less space than storing the full matrix. Note that our algorithm still needs to make several passes over the stream of updates. Whether there exists a one-pass algorithm is still an open problem, and is left for future work. 6 Experiments To demonstrate the advantage of our proposed method, we complement the theoretical analysis with an empirical study on synthetic and real data. We consider the low-rank approximation task with f(x) = log(|x| + 1). We adjust the constant factors in the amount of space used by our method and compare the errors of the obtained solutions. In the appendix, we describe more experimental details. [Figure 1: Error ratios on the synthetic data (top row, panels (a)-(c): LogData with n = 10^4, 3 x 10^4, 5 x 10^4) and the real data (bottom row, panels (d)-(f): n = 10^4, 3 x 10^4, 5 x 10^4), comparing uniform sampling and our method. The x-axis is the ratio between the amount of space used by the algorithms and the total amount of space occupied by the data matrix. The y-axis is the ratio between the error of the solutions output by the algorithms and the optimal error.] We also provide additional experiments in the appendix to show that the method also works for f(x) = \sqrt{|x|}. We further demonstrate the robustness of the parameter selections in the algorithm. Setup.
Given a data stream in the form of (i_t, j_t, \delta_t), we use the algorithm in Section 5 to compute the top k = 10 singular vectors L, and then compare the error of this solution to the error of the optimal solution (i.e., the true top k singular vectors). Let A denote the accumulated matrix, M = f(A) denote the transformed one, and U denote the top k singular vectors of M. Then the evaluation criterion is error-ratio(L) = \|M - LL^\top M\|_F / \|M - UU^\top M\|_F. Clearly, the error ratio is at least 1, and a value closer to 1 means a better solution. Besides demonstrating the effectiveness, we also examine the trade-off between the solution quality and the space used. Recall that there are constant parameters in the sketching methods controlling the amount of space used. We vary their values, and set the parameters in other steps of our algorithm so that the amount of space used is dominated by that of the sketch. We then plot how the error ratios change with the amount of space used. The plotted results are averages of 5 runs; the variances are too small to plot. Finally, we also report the results of a baseline method: uniformly at random sample a subset T of columns from A, and then compute the top k singular vectors of f(T). The space occupied by the sampled columns is similar to the space required by our algorithm, for a fair comparison. We choose uniform sampling as the baseline because, to the best of the authors' knowledge, our algorithm is the first one to deal with low-rank approximation of a transformed matrix in the streaming setting, and we are not aware of any other non-trivial algorithm working in this setting. 6.1 Synthetic Data Data Generation. The data sets LogData are generated as follows. First generate an n x n matrix M where the entries are i.i.d. Gaussians. To break the symmetry of the columns, we scale the norm of the i-th column to 4/i. Finally, we generate the matrix A with A_{ij} = exp(M_{ij}) - 1. Each entry A_{ij} is divided equally into 5 updates (i, j, A_{ij}/5), and all the updates arrive in an arbitrary order. The size n can be 10000, 30000, and 50000. Parameter Setting. In our algorithm for low-rank approximation, an FJLT matrix S is used [Ach03, AC06]. For the sketching subroutine, instead of specifying the desired \epsilon, we directly set the size of the data structure (line 19 in LogSum), so as to examine the trade-off between space and accuracy. We set m_c = m_s = m_a and set their values so that the space used is at most that used by the sketch method. Results. Figure 1 top row shows the results on the synthetic data. In general, the error ratio of our method is much better than that of the uniform sampling baseline: ours is close to 1 while that of uniform sampling is about 4. It also shows that our method can greatly reduce the amount of space needed, e.g., by orders of magnitude, but still preserve a good solution. This advantage is more significant on larger data sets. For example, when n = 50000, to obtain 5% error over the optimum solution, we only need space corresponding to 5% of the size of the matrix. 6.2 Real Data We evaluate our method on real-world data from NLP applications, which are the motivating examples for our approach. Our method with f(x) = log(|x| + 1) is used. The parameters are set in a similar way as for the synthetic data. Data Collection. The data set is the entire Wikipedia corpus [Wik12], consisting of about 3 billion tokens.
Details can be found in the appendix and only a brief description is provided here. The matrix to be factorized is M with Mij = pj log(NijN NiNj + 1) where Nij is the number of times words i and j co-occur in a window of size 10, Ni is the number of times word i appears, N is the total number of words in the corpus, and pj is a weighting factor depending on Nj (putting larger weights on more frequent words). Note that Ni\u2019s and N can be computed easily, so essentially the only dynamically update part is log Nij. The data stream is generated by considering each window of size 10 along the sentences in the corpus and collecting the co-occurrence counts of the word pairs in that window. We consider the matrix for the most frequent n words, where n = 10000, 30000, and 50000. Results. Figure 1 bottom row shows the results on the real data. The observations are similar to those on the synthetic data: the errors of our method are much smaller than the baseline, and are close to the optimum. These results again demonstrate the accuracy and space e\ufb03ciency of our methods. 12 \f7 Conclusions We considered the setting where a large matrix is updated by a data stream and the learning tasks is performed on an element-wise transformation of the matrix. We proposed a method for computing the product of its element-wise transformation with another given matrix. For a large family of transformations, our method only needs a single pass over the data and provable guarantees on the error. Our method uses much smaller space than directly storing the matrix. Our approach can be used as a building block for many learning tasks. We provided a concrete application for low-rank approximation with theoretical analysis and empirical veri\ufb01cation, showing the e\ufb00ectiveness of this approach. 13" + } + ], + "Zhenmei Shi": [ + { + "url": "http://arxiv.org/abs/2310.12408v1", + "title": "Provable Guarantees for Neural Networks via Gradient Feature Learning", + "abstract": "Neural networks have achieved remarkable empirical performance, while the\ncurrent theoretical analysis is not adequate for understanding their success,\ne.g., the Neural Tangent Kernel approach fails to capture their key feature\nlearning ability, while recent analyses on feature learning are typically\nproblem-specific. This work proposes a unified analysis framework for two-layer\nnetworks trained by gradient descent. The framework is centered around the\nprinciple of feature learning from gradients, and its effectiveness is\ndemonstrated by applications in several prototypical problems, such as mixtures\nof Gaussians and parity functions. The framework also sheds light on\ninteresting network learning phenomena such as feature learning beyond kernels\nand the lottery ticket hypothesis.", + "authors": "Zhenmei Shi, Junyi Wei, Yingyu Liang", + "published": "2023-10-19", + "updated": "2023-10-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "main_content": "Introduction Neural network (NN) learning has achieved remarkable empirical success and has been a main driving force for the recent progress in machine learning and artificial intelligence. On the other hand, theoretical understandings significantly lag behind. Traditional analysis approaches are not adequate due to the overparameterization of practical networks and the non-convex optimization in the training via gradient descent. One line of work (e.g. 
[9, 31, 38, 60, 71, 123] and many others) shows under proper conditions, heavily overparameterized networks are approximately linear models over data-independent features, i.e., a linear function on the Neural Tangent Kernel (NTK). While making weak assumptions about the data and thus applicable to various settings, this approach requires the network learning to be approximately using fixed data-independent features (i.e., the kernel regime, or fixed feature methods). It thus fails to capture the feature learning ability of networks (i.e., to learn a feature mapping for the inputs which allow accurate prediction), which is widely believed to be the key factor to their empirical success in many applications (e.g., [54, 77, 117, 119]). To study feature learning in networks, a recent line of work (e.g. [5, 6, 14, 33, 52, 72, 76, 116] and others) shows examples where networks provably enjoy advantages over fixed feature methods (including NTK), under different settings and assumptions. While providing more insights, these studies typically focus on specific problems, and their analyses exploit the specific properties of the problems and appear to be unrelated to each other. Is there a common principle for feature learning in networks via gradient descent? Is there a unified analysis framework that can clarify the principle and also lead to provable error guarantees for prototypical problem settings? In this work, we take a step toward this goal by proposing a gradient feature learning framework for analyzing two-layer network learning by gradient descent. (1) The framework makes essentially no assumption about the data distribution and can be applied to various problems. Furthermore, it is centered around features from gradients, clearly illustrating how gradient descent leads to feature learning in networks and subsequently accurate predictions. (2) It leads to error guarantees competitive with the optimal in a family of networks that use the features induced by gradients on the \u2217Equal contribution. 37th Conference on Neural Information Processing Systems (NeurIPS 2023). arXiv:2310.12408v1 [cs.LG] 19 Oct 2023 \fdata distribution. Then for a specific problem with structured data distributions, if the optimal in the induced family is small, the framework gives a small error guarantee. We then apply the framework to several prototypical problems: mixtures of Gaussians, parity functions, linear data, and multiple-index models. These have been used for studying network learning (in particular, for the feature learning ability), but with different and seemingly unrelated analyses. In contrast, straightforward applications of our framework give small error guarantees, where the main effort is to compute the optimal in the induced family. Furthermore, in some cases, such as parities, we can handle more general data distributions than in the existing work. Finally, we also demonstrate that the framework sheds light on several interesting network learning phenomena or implications such as feature learning beyond the kernel regime, lottery ticket hypothesis (LTH), simplicity bias, learning over different data distributions, and new perspectives about roadmaps forward. Due to space limitations, we present implications about features beyond the kernel regime and LTH in the main body but defer the other implications in Appendix C with a brief here. 
(1) For simplicity bias, it is generally believed that the optimization has some implicit regularization effect that restricts learning dynamics to a low capacity subset of the whole hypothesis class, so can lead to good generalization [53, 90]. Our framework provides an explanation that the learning first learns simpler functions and then more sophisticated ones. (2) For learning over different data distributions, we provide data-dependent non-vacuous guarantees, as our framework can be viewed as using the optimal gradient-induced NN to measure or quantify the \u201ccomplexity\u201d of the problem. For easier problems, this quantity is smaller, and our framework can give a better error bound to derive guarantees. (3) For new perspectives about roadmaps forward, our framework suggests the strong representation power of NN is actually the key to successful learning, while traditional ones suggest strong representation power leads to vacuous generalization bounds [19, 33]. Thus, we suggest a different analysis road. Traditional analysis typically first reasons about the optimal based on the whole function class then analyzes how NN learns proper features and reaches the optimal. In contrast, our framework defines feature family first, and then reasons about the optimal based on it. 2 Related Work Neural Networks Learning Analysis. Recently there has been an increasing interest in the analysis of network learning. One line of work connects the sufficiently over-parameterized neural network to linear methods around its initialization like NTK (e.g. [9, 11, 20, 21, 31, 38, 49, 60, 62, 69, 71, 78, 82, 91, 93, 95, 114, 121, 122] and more), so that the neural network training is a convex problem. The key idea is that it suffices to consider the first-order Tyler expansion of the neural network around the origin when the initialization is large enough. However, NTK lies in the lazy training (kernel) regime that excludes feature learning [29, 50, 68, 113]. Many studies (e.g. [2, 5, 6, 8, 12, 14, 22, 26, 33, 37, 51, 52, 57, 58, 70, 72, 73, 76, 99, 112, 115, 116] and more) show that neural networks take advantage over NTK empirically and theoretically. Another line of work is the mean-field (MF) analysis of neural networks (e.g. [27, 28, 36, 79, 80, 100, 106] and more). The insight is to see the training dynamics of a sufficiently large-width neural network as a PDE. It uses a smaller initialization than the NTK so that the parameters may move away from the initialization. However, the MF does not provide explicit convergence rates and requires an unrealistically large width of the neural network. One more line of work is neural networks max-margin analysis (e.g. [30, 47, 48, 56, 61, 63, 74, 75, 83, 85, 86, 107, 109] and more). They need a strong assumption that the convergence starts from weights having perfect training accuracy, while feature learning happens in the early stage of training. To explain the success of neural networks beyond the limitation mentioned above, some work introduces the low intrinsic dimension of data distributions [17, 18, 23, 24, 25, 44, 67, 104, 108, 124]. Another recent line of work is that a trained network can exactly recover the ground truth or optimal solution or teacher network [3, 4, 10, 39, 84, 87, 94, 96, 120], but they have strong assumptions on data distribution or model structure, e.g., Gaussian marginals. 
[1, 40, 55, 110, 111] show that training dynamics of neural networks have multiple phases, e.g., feature learning at the beginning, and then dynamics in convex optimization which requires proxy convexity [43] or PL condition [65] or special data structure. Feature Learning Based on Gradient Analysis. A recent line of work is studying how features emerge from the gradient. [7, 46] consider linear separable data and show that the first few gradient steps can learn good features, and the later steps learn a good network on neurons with these features. [33, 45, 105] have similar conclusions on non-linear data (e.g., parity functions), while in their problems one feature is sufficient for accurate prediction (i.e., single-index data model). 2 \f[32] considers multiple-index with low-degree polynomials as labeling functions and shows that a one-step gradient update can learn multiple features that lead to accurate prediction. [13, 81] studies one gradient step feature improvements at different learning rates. [97] proposes Recursive Feature Machines to show the mechanism of recursively feature learning but without giving a final loss guarantee. These studies consider specific problems and exploit properties of the data to analyze the gradient delicately, while our work provides a general framework applicable to different problems. 3 Gradient Feature Learning Framework Problem Setup. We denote [n] := {1, 2, . . . , n} and \u02dc O(\u00b7), \u02dc \u0398(\u00b7), \u02dc \u2126(\u00b7) to omit the log term inside. Let X \u2286Rd denote the input space, Y \u2286R the label space. Let D be an arbitrary data distribution over X \u00d7 Y. Denote the class of two-layer networks with m neurons as: Fd,m := \b f(a,W,b) \f \f f(a,W,b)(x) := a\u22a4\u0002 \u03c3(W\u22a4x \u2212b) \u0003 = X i\u2208[m] ai [\u03c3(\u27e8wi, x\u27e9\u2212bi)] \t , (1) where \u03c3(z) = max(z, 0) is the ReLU activation function, a \u2208Rm is the second layer weight, W \u2208Rd\u00d7m is the first layer weight, wi is the i-th column of W (i.e., the weight for the i-th neuron), and b \u2208Rm is the bias for the neurons. For technical simplicity, we only train a, W but not b. Let superscript (t) denote the time step, e.g., f(a(t),W(t),b) denote the network at time step t. Denote \u039e := (a, W, b), \u039e(t) := (a(t), W(t), b). The goal of neural network learning is to minimize the expected risk, i.e., LD(f) := E(x,y)\u223cDL(x,y)(f), where L(x,y)(f) = \u2113(yf(x)) is the loss on an example (x, y) for some loss function \u2113(\u00b7), e.g., the hinge loss \u2113(z) = max{0, 1 \u2212z}, and the logistic loss \u2113(z) = log[1 + exp(\u2212z)]. We also consider \u21132 regularization. The regularized loss with regularization coefficient \u03bb is L\u03bb D(f) := LD(f) + \u03bb 2 (\u2225W\u22252 F + \u2225a\u22252 2). Given a training set with n i.i.d. samples Z = {(x(l), y(l))}l\u2208[n] from D, the empirical risk and its regularized version are: e LZ(f) : = 1 n X l\u2208[n] L(x(l),y(l))(f), e L\u03bb Z(f) := e LZ(f) + \u03bb 2 (\u2225W\u22252 F + \u2225a\u22252 2). (2) Then the training process is summarized in Algorithm 1. Algorithm 1 Network Training via Gradient Descent Initialize (a(0), W(0), b) for t = 1 to T do Sample Z(t\u22121) \u223cDn a(t) = a(t\u22121) \u2212\u03b7(t)\u2207a e L\u03bb(t) Z(t\u22121)(f\u039e(t\u22121)), W(t) = W(t\u22121) \u2212\u03b7(t)\u2207W e L\u03bb(t) Z(t\u22121)(f\u039e(t\u22121)) end for In the whole paper, we need some natural assumptions about the data and the loss. Assumption 3.1. 
We assume E[\u2225x\u22252] \u2264Bx1, E[\u2225x\u22252 2] \u2264Bx2, \u2225x\u22252 \u2264Bx and for any label y, we have |y| \u22641. We assume the loss function \u2113(\u00b7) is a 1-Lipschitz convex decreasing function, normalized \u2113(0) = 1, |\u2113\u2032(0)| = \u0398(1), and \u2113(\u221e) = 0. Remark 3.2. The above are natural assumptions. Most input distributions have the bounded norms required, and the typical binary classification Y = {\u00b11} satisfies the requirement. Also, the most popular loss functions satisfy the assumption, e.g., the hinge loss and logistic loss. 3.1 Warm Up: A Simple Setting with Frozen First Layer To illustrate some high-level intuition, we first consider a simple setting where the first layer is frozen after one gradient update, i.e., no updates to W for t \u22652 in Algorithm 1. The first idea of our framework is to provide guarantees compared to the optimal in a family of networks. Here let us consider networks with specific weights for the first layer: Definition 3.3. For some fixed W \u2208Rd\u00d7m, b \u2208Rd, and a parameter Ba2, consider the following family of networks FW,b,Ba2, and the optimal approximation network loss in this family: FW,b,Ba2 := \b f(a,W,b) \u2208Fd,m \f \f \u2225a\u22252 \u2264Ba2 \t , OPTW,b,Ba2 := min f\u2208FW,b,Ba2 LD(f). (3) 3 \fThe second idea is to compare to networks using features from gradient descent. As an illustrative example, we now provide guarantees compared to networks with first layer weights W(1) (i.e., the weights after the first gradient step): Theorem 3.4 (Simple Setting). Assume e LZ \u0000f(a,W(1),b) \u0001 is L-smooth to a. Let \u03b7(t) = 1 L, \u03bb(t) = 0, for all t \u2208{2, 3, . . . , T}. Training by Algorithm 1 with no updates for the first layer after the first gradient step, w.h.p., there exists t \u2208[T] such that LD(f(a(t),W(1),b)) \u2264OPTW(1),b,Ba2 + O \u0010 L(\u2225a(1)\u22252 2+B2 a2) T + q B2 a2(\u2225W(1)\u22252 F B2 x+\u2225b\u22252 2) n \u0011 . Intuitively, the theorem shows that if the weight W(1) after a one-step gradient gives a good set of neurons in the sense that there exists a classifier on top of these neurons with low loss, then the network will learn to approximate this good classifier and achieve low loss. The proof is based on standard convex optimization and the Rademacher complexity (details in Appendix D.1). Such an approach, while simple, has been used to obtain interesting results on network learning in existing work, which shows that W(1) can indeed give good neurons due to the structure of the special problems considered (e.g., parities on uniform inputs [15], or polynomials on a subspace [32]). However, it is unclear whether such intuition can still yield useful guarantees for other problems. So, for our purpose of building a general framework covering more prototypical problems, the challenge is what features from gradient descent should be considered so that the family of networks for comparison can achieve a low loss on other problems. The other challenge is that we would like to consider the typical case where the first layer weights are not frozen. In the following, we will introduce the core concept of Gradient Features to address the first challenge, and stipulate proper geometric properties of Gradient Features for the second challenge. 
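Before moving to the general framework, the warm-up setting above is easy to state in code. The sketch below (numpy; names, batch sampler, and hyper-parameters are ours) implements the two-layer network of Equation (1), the hinge-loss subgradients used by Algorithm 1, and the frozen-first-layer schedule of Section 3.1; as in the paper, the bias b is never trained.

import numpy as np

def forward(a, W, b, X):
    # f(x) = a^T sigma(W^T x - b); X holds one example per row
    H = np.maximum(X @ W - b, 0.0)
    return H, H @ a

def hinge_grads(a, W, b, X, y, lam):
    # subgradients of the regularized empirical hinge loss in (2)
    n = X.shape[0]
    H, f = forward(a, W, b, X)
    coef = -((y * f < 1.0) * y) / n          # ell'(y f(x)) * y / n, with ell(z) = max(0, 1 - z)
    grad_a = H.T @ coef + lam * a
    # d f / d w_i = a_i * 1[<w_i, x> > b_i] * x, hence:
    grad_W = X.T @ (coef[:, None] * (H > 0.0) * a[None, :]) + lam * W
    return grad_a, grad_W

def train_frozen_first_layer(a, W, b, sampler, T, lr, lam):
    """Warm-up setting of Section 3.1: one gradient step on (a, W), then W is frozen
    and only the second layer a is trained (a convex problem in a)."""
    for t in range(T):
        X, y = sampler()                      # a fresh batch Z^(t-1) ~ D^n, as in Algorithm 1
        ga, gW = hinge_grads(a, W, b, X, y, lam if t == 0 else 0.0)
        a = a - lr * ga
        if t == 0:
            W = W - lr * gW                   # no updates to W for t >= 2
    return a, W

Dropping the "if t == 0" guard recovers the general schedule of Algorithm 1 in which both layers keep moving, which is the setting analyzed by the main framework below.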
3.2 Core Concepts in the Gradient Feature Learning Framework 20 10 0 10 20 30 40 40 30 20 10 0 10 20 30 20 10 0 10 20 30 40 Gradient Feature being cones under Mixture of Gaussians data Figure 1: An illustration of Gradient Feature, i.e., Definition 3.7 with random initialization (Gaussian), under Mixture of three Gaussian clusters in 3-dimension data space with blue/green/orange color. The Gradient Feature stays in three cones, where each center of the cone aligns with the corresponding Gaussian cluster center. Now, we will introduce the core concept in our framework, Gradient Features, and use it to build the family of networks to derive guarantees. As mentioned, we consider the setting where the first layer is not frozen. After the network learns good features, to ensure the updates in later gradient steps of the first layer are still benign for feature learning, we need some geometric conditions about the gradient features, which are measured by parameters in the definition of Gradient Features. The conditions are general enough, so that, as shown in Section 4, many prototypical problems satisfy them and the induced family of networks enjoys low loss, leading to useful guarantees. We begin by considering what features can be learned via gradients. Note that the gradient w.r.t. wi is \u2202LD(f) \u2202wi = aiE(x,y) [\u2113\u2032(yf(x))y [\u03c3\u2032 (\u27e8wi, x\u27e9\u2212bi)] x] = aiE(x,y) [\u2113\u2032(yf(x))yxI[\u27e8wi, x\u27e9> bi]] . Inspired by this, we define the following notion: Definition 3.5 (Simplified Gradient Vector). For any w \u2208 Rd, b \u2208R, a Simplified Gradient Vector is G(w, b) := E(x,y)\u223cD[yxI[w\u22a4x > b]]. (4) Remark 3.6. Note that the definition of G(w, b) ignores the term \u2113\u2032(yf(x)) in the gradient, where f is the model function. In the early stage of training (or the first gradient step), \u2113\u2032(\u00b7) is approximately a constant, i.e., \u2113\u2032(yf(x)) \u2248\u2113\u2032(0) due to the symmetric initialization (see Equation (8)). Definition 3.7 (Gradient Feature). For a unit vector D \u2208Rd with \u2225D\u22252 = 1, and a \u03b3 \u2208(0, 1), a direction neighborhood (cone) CD,\u03b3 is defined as: CD,\u03b3 := {w | | \u27e8w, D\u27e9|/\u2225w\u22252 > (1 \u2212\u03b3)} . (5) 4 \fLet w \u2208Rd, b \u2208R be random variables drawn from some distribution W, B. A Gradient Feature set with parameters p, \u03b3, BG is defined as: Sp,\u03b3,BG(W, B) := \b (D, s) \f \f Pr w,b \u0002 G(w, b) \u2208CD,\u03b3 , \u2225G(w, b)\u22252 \u2265BG , s = b/|b| \u0003 \u2265p \t . (6) Remark 3.8. When clear from context, write it as Sp,\u03b3,BG. Gradient features (see Figure 1 for illustration) are simply normalized vectors D that are given (approximately) by the simplified gradient vectors. (Similarly, the normalized scalar s is given by the bias b.) To be a useful gradient feature, we require the direction to be \u201chit\u201d by sufficiently large simplified gradient vectors with sufficient large probability, so as to be distinguished from noise and remain useful throughout the gradient steps. Later we will use the gradient features when W, B are the initialization distributions. To make use of the gradient features, we consider the following family of networks using these features and with bounded norms, and will provide guarantees compared to the best in this family: Definition 3.9 (Gradient Feature Induced Networks). 
The Gradient Feature Induced Networks are: Fd,m,BF ,S := \b f(a,W,b) \u2208Fd,m \f \f \u2200i \u2208[m], |ai| \u2264Ba1, \u2225a\u22252 \u2264Ba2, (wi, bi/|bi|) \u2208S, |bi| \u2264Bb \t , where S is some Gradient Feature set and BF := (Ba1, Ba2, Bb) are some parameters. Remark 3.10. In above definition, the weight and bias of a neuron are simply the scalings of some item in the feature set S (for simplicity the scaling of wi is absorbed into the scaling of ai and bi). Definition 3.11 (Optimal Approximation via Gradient Features). The optimal approximation network and loss using Gradient Feature Induced Networks Fd,r,BF ,S are defined as: f \u2217:= argmin f\u2208Fd,r,BF ,S LD(f), OPTd,r,BF ,S := min f\u2208Fd,r,BF ,S LD(f). (7) 3.3 Provable Guarantee via Gradient Feature Learning To obtain the guarantees, we first specify the symmetric initialization. It is convenient for the analysis and is typical in existing analysis (e.g., [7, 32, 33, 105]), though some other initialization can also work. Formally, we train a two-layer network with 4m neurons, f(a,W,b) \u2208Fd,4m. We initialize a(0) i , w(0) i from Gaussians and bi from a constant for i \u2208{1, . . . , m}, and initialize the parameters for i \u2208{m + 1, . . . , 4m} accordingly to get a zero output initial network. Specifically: for i \u2208{1, . . . , m} : a(0) i \u223cN(0, \u03c32 a), w(0) i \u223cN(0, \u03c32 wI), bi = \u02dc b, for i \u2208{m + 1, . . . , 2m} : a(0) i = \u2212a(0) i\u2212m, w(0) i = \u2212w(0) i\u2212m, bi = \u2212bi\u2212m, (8) for i \u2208{2m + 1, . . . , 4m} : a(0) i = \u2212a(0) i\u22122m, w(0) i = w(0) i\u22122m, bi = bi\u22122m, where \u03c32 a, \u03c32 w,\u02dc b > 0 are hyper-parameters. After initialization, a, W are updated as in Algorithm 1. We are now ready to present our main result in the framework. Theorem 3.12 (Main Result). Assume Assumption 3.1. For any \u03f5, \u03b4 \u2208(0, 1), if m \u2264ed and m =\u2126 \uf8eb \uf8ed1 p\u03f54 rBa1Bx1 r Bb BG !4 + 1 \u221a \u03b4 + 1 p \u0010 log \u0010r \u03b4 \u0011\u00112 \uf8f6 \uf8f8, T =\u2126 \u00121 \u03f5 \u0012\u221arBa2BbBx1 (mp) 1 4 + m\u02dc b \u0013 \u0012 \u221alog m \u221aBbBG + 1 Bx1(mp) 1 4 \u0013\u0013 , n log n =\u02dc \u2126 m3pB2 xB4 a2Bb \u03f52r2B2 a1BG + (mp) 1 2 Bx2 BbBG + B2 x Bx2 + 1 p + \u0012 1 B2 G + 1 B2 x1 \u0013 Bx2 |\u2113\u2032(0)|2 + Tm \u03b4 ! , then with initialization (8) and proper hyper-parameter values, we have with probability \u22651 \u2212\u03b4 over the initialization and training samples, there exists t \u2208[T] in Algorithm 1 with: Pr[sign(f\u039e(t)(x)) \u0338= y] \u2264LD (f\u039e(t)) \u2264OPTd,r,BF ,Sp,\u03b3,BG + rBa1Bx1 s 2\u03b3 + O \u0012 \u221aBx2 log n BG|\u2113\u2032(0)|n 1 2 \u0013 + \u03f5. 5 \fIntuitively, the theorem shows when a data distribution admits a small approximation error by some \u201cground-truth\u201d network with r neurons using gradient features from Sp,\u03b3,BG (i.e., a small optimal approximate loss OPTd,r,BF ,Sp,\u03b3,BG), the gradient descent training can successfully learn good neural networks with sufficiently many m neurons. Now we discuss the requirements and the error guarantee. Viewing boundedness parameters Ba1, Bx1 etc. as constants, then the number m of neurons learned is roughly \u02dc \u0398 \u0010 r4 p\u03f54 \u0011 , a polynomial overparameterization compared to the \u201cground-truth\u201d network. The proof shows that such an overparameterization is needed such that some neurons can capture the gradient features given by gradient descent. 
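The symmetric initialization in (8) and the simplified gradient vector of Definition 3.5 are both easy to spell out in code; the sketch below (numpy, names ours) builds the 4m-neuron initialization, whose output is identically zero by construction, and estimates G(w, b) from samples.

import numpy as np

def symmetric_init(m, d, sigma_a, sigma_w, b_tilde, rng=np.random.default_rng(0)):
    """Initialization (8): four blocks of m neurons arranged so that f is exactly zero at t = 0."""
    a0 = rng.normal(0.0, sigma_a, size=m)
    W0 = rng.normal(0.0, sigma_w, size=(d, m))
    b0 = np.full(m, b_tilde)
    a = np.concatenate([a0, -a0, -a0, a0])            # blocks [1,m], (m,2m], (2m,3m], (3m,4m]
    W = np.concatenate([W0, -W0, W0, -W0], axis=1)
    b = np.concatenate([b0, -b0, b0, -b0])
    return a, W, b

def simplified_gradient(w, b, X, y):
    """Monte-Carlo estimate of G(w, b) = E[(y x) 1[w^T x > b]] from samples (X, y)."""
    mask = (X @ w > b).astype(float)
    return (X * (y * mask)[:, None]).mean(axis=0)

With these two pieces one can check empirically, for a given data distribution, around which directions D the vectors G(w_i^(0), b_i) concentrate and how large they are, i.e., estimate the Gradient Feature set S_{p, gamma, B_G} of Definition 3.7 that enters the bound of Theorem 3.12.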
This is consistent with existing analysis about overparameterization network learning, and also consistent with existing empirical observations. The error bound consists of three terms. The last term \u03f5 can be made arbitrarily small, while the other two depend on the concrete data distribution. Specifically, with larger r and \u03b3, the second term increases. While the first term (the optimal approximation loss) decreases, since a larger r means a larger \u201cground-truth\u201d network family, and a larger \u03b3 means a larger Gradient Feature set Sp,\u03b3,BG. So, there is a trade-off between these two terms. When we later apply the framework to concrete problems (e.g., mixtures of Gaussians, parity functions), we will show that depending on the specific data distribution, we can choose the proper values for r, \u03b3 to make the error small. This then leads to error guarantees for the concrete problems and demonstrates the unifying power of the framework. Please refer to Appendix D.3 for more discussion about our problem setup and our core concept, e.g., parameter choice, early stopping, the role of s, activation functions, and so on. Proof Sketch. The intuition in the proof of Theorem 3.12 is closely related to the notion of Gradient Features. First, the gradient descent will produce gradients that approximate the features in Sp,\u03b3,BG. Then, the gradient descent update gives a good set of neurons, such that there exists an accurate classifier using these neurons with loss comparable to the optimal approximation loss. Finally, the training will learn to approximate the accurate classifier, resulting in the desired error guarantee. The complete proof is in Appendix D (the population version in Appendix D.2 and the empirical version in Appendix D.4), including the proper values for hyper-parameters such as \u03b7(t) in Theorem D.17. Below, we briefly sketch the key ideas and omit the technical details. We first show that a large subset of neurons has gradients at the first step as good features. (The claim can be extended to multiple steps; for simplicity, we follow existing work (e.g., [33, 105]) and present only the first step.) Let \u2207i denote the gradient of the i-th neuron \u2207wiLD(f\u039e(0)). Denote the subset of neurons with nice gradients approximating feature (D, s) as: G(D,s),Nice := n i \u2208[2m] : s = bi/|bi|, \u27e8\u2207i, D\u27e9> (1 \u2212\u03b3) \u2225\u2207i\u22252 , \u2225\u2207i\u22252 \u2265 \f \f \fa(0) i \f \f \f BG o . (9) Lemma 3.13 (Feature Emergence). For any r size subset {(D1, s1), . . . , (Dr, sr)} \u2286Sp,\u03b3,BG, with probability at least 1 \u2212re\u2212\u0398(mp), for all j \u2208[r], we have |G(Dj,sj),Nice| \u2265mp 4 . This is because \u2207i = \u2113\u2032(0)a(0) i E(x,y) h y\u03c3\u2032 hD w(0) i , x E \u2212bi i x i = \u2113\u2032(0)a(0) i G(w(0) i , bi). Now consider sj = +1 (the case \u22121 is similar). Since wi is initialized by Gaussians, by \u2207i\u2019s connection to Gradient Features, we can see that for all i \u2208[m], Pr \u0002 i \u2208G(Dj,+1),Nice \u0003 \u2265 p 2. The lemma follows from concentration via a large enough m, i.e., sufficient overparameterization. The gradients allow obtaining a set of neurons approximating the \u201cground-truth\u201d network with comparable loss: Lemma 3.14 (Existence of Good Networks). 
For any \u03b4 \u2208(0, 1), with proper hyper-parameter values, with probability at least 1 \u2212\u03b4, there is \u02dc a such that \u2225\u02dc a\u22250 = O \u0000r\u221amp \u0001 and f(\u02dc a,W(1),b)(x) = P4m i=1 \u02dc ai\u03c3 \u0010D w(1) i , x E \u2212bi \u0011 satisfies LD(f(\u02dc a,W(1),b)) \u2264OPTd,r,BF ,Sp,\u03b3,BG + \u221a 2rBa1Bx1 \u221a\u03b3 + s 2Bb \u221ampBG ! . Given the good set of neurons, we finally show that the remaining gradient steps can learn an accurate classifier. Intuitively, with small step sizes \u03b7(t), the weights of the first layer wi do not change too much (stay in a neighborhood) while the second layer weights grow, and thus the learning is similar to convex learning using the good set of neurons. Technically, we adopt the online convex optimization analysis (Theorem D.5) in [33] to get the final loss guarantee in Theorem 3.12. 6 \f4 Applications in Special Cases In this section we will apply the gradient feature learning framework to some specific problems, corresponding to concrete data distributions D. We primarily focus on prototypical problems for analyzing feature learning in networks. We will present here the results for mixtures of Gaussians and parity functions, and include the complete proofs and some other results in Appendix E. 4.1 Mixtures of Gaussians Mixtures of Gaussians are among the most fundamental and widely used statistical models. Recently, it has been used to study neural network learning, in particular, the effect of gradient descent for feature learning of two-layer neural networks and the advantage over fixed feature methods [46, 99]. Data Distributions. We follow notations from [99]. The data are from a mixture of r highdimensional Gaussians, and each Gaussian is assigned to one of two possible labels in Y = {\u00b11}. Let S(y) \u2286[r] denote the set of indices of Gaussians associated with the label y. The data distribution is then: q(x, y) = q(y)q(x|y), q(x|y) = P j\u2208S(y) pjNj(x), where Nj(x) is a multivariate normal distribution with mean \u00b5j, covariance \u03a3j, and pj are chosen such that q(x, y) is correctly normalized. We will make some assumptions about the Gaussians, for which we first introduce some notations. Dj := \u00b5j \u2225\u00b5j\u22252 , \u02dc \u00b5j := \u00b5j/ \u221a d, B\u00b51 := min j\u2208[r] \u2225\u02dc \u00b5j\u22252, B\u00b52 := max j\u2208[r] \u2225\u02dc \u00b5j\u22252, pB := min j\u2208[r] pj. Assumption 4.1. Let 8 \u2264\u03c4 \u2264d be a parameter that will control our final error guarantee. Assume \u2022 Equiprobable labels: q(\u22121) = q(+1) = 1/2. \u2022 For all j \u2208[r], \u03a3j = \u03c3jId\u00d7d. Let \u03c3B := maxj\u2208[r] \u03c3j and \u03c3B+ := max{\u03c3B, B\u00b52}. \u2022 r \u22642d, pB \u2265 1 2d, \u2126 \u0010 1/d + p \u03c4\u03c3B+2 log d/d \u0011 \u2264B\u00b51 \u2264B\u00b52 \u2264d. \u2022 The Gaussians are well-separated: for all i \u0338= j \u2208[r], we have \u22121 \u2264\u27e8Di, Dj\u27e9\u2264\u03b8, where 0 \u2264\u03b8 \u2264min \u001a 1 2r, \u03c3B+ B\u00b52 q \u03c4 log d d \u001b . Remark 4.2. The first two assumptions are for simplicity; they can be relaxed. We can generalize our analysis to the mixture of Gaussians with unbalanced label probabilities and general covariances. The third assumption is to make sure that each Gaussian has a good amount of probability mass to be learned. The remaining assumptions are to make sure that the Gaussians are well-separated and can be distinguished by the learning algorithm. 
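For concreteness, here is a small sampler (numpy, names ours) for the mixture-of-Gaussians model above under the simplifying choices of Assumption 4.1: equiprobable labels, spherical covariances Sigma_j = sigma_j I, and well-separated mean directions; it is only meant to make the data model explicit, and the uniform choice of cluster within S(y) is one particular setting of the weights p_j.

import numpy as np

def sample_gmm(n, d, mu, sigma, labels, rng=np.random.default_rng(0)):
    """mu: (r, d) cluster means; sigma: (r,) scales; labels: (r,) values in {-1, +1} encoding S(y)."""
    y = rng.choice([-1, 1], size=n)                       # equiprobable labels q(+1) = q(-1) = 1/2
    X = np.empty((n, d))
    for s in (-1, 1):                                     # draw x | y from the clusters assigned to y
        idx = np.where(y == s)[0]
        clusters = np.where(labels == s)[0]
        j = rng.choice(clusters, size=len(idx))           # uniform p_j within S(y), for simplicity
        X[idx] = mu[j] + sigma[j, None] * rng.standard_normal((len(idx), d))
    return X, y

# Example: r = 4 clusters with XOR-structured labels (the setting of [99]):
# means +/- sqrt(d) e_1 labeled +1 and +/- sqrt(d) e_2 labeled -1, unit variance.
d, r = 64, 4
mu = np.zeros((r, d))
mu[0, 0], mu[1, 0], mu[2, 1], mu[3, 1] = np.sqrt(d), -np.sqrt(d), np.sqrt(d), -np.sqrt(d)
X, y = sample_gmm(2000, d, mu, np.ones(r), np.array([+1, +1, -1, -1]))

In this example the normalized means mu_tilde_j have unit norm and the clusters for the two labels are not linearly separable, which is the prototypical case where feature learning (rather than a fixed kernel) is needed.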
We are now ready to apply the framework to these data distributions, for which we only need to compute the Gradient Feature set and the corresponding optimal approximation loss. Lemma 4.3 (Mixtures of Gaussians: Gradient Features). (Dj, +1) \u2208Sp,\u03b3,BG for all j \u2208[r], where p = B\u00b51 \u221a\u03c4 log d\u03c3B+ \u00b7 d\u0398(\u03c4\u03c3B+2/B2 \u00b51) , \u03b3 = 1 d0.9\u03c4\u22121.5 , BG = pBB\u00b51 \u221a d \u2212O \u0010 \u03c3B+ d0.9\u03c4 \u0011 . Let f \u2217(x) = Pr j=1 y(j) \u221a\u03c4 log d\u03c3B+ \u0002 \u03c3 \u0000\u27e8Dj, x\u27e9\u22122\u221a\u03c4 log d\u03c3B+ \u0001\u0003 whose hinge loss is at most 3 d\u03c4 + 4 d0.9\u03c4\u22121\u221a\u03c4 log d. Given the values on gradient feature parameters p, \u03b3, BG and the optimal approximation loss OPTd,r,BF ,Sp,\u03b3,BG , the framework immediately leads to the following guarantee: Theorem 4.4 (Mixtures of Gaussians: Main Result). Assume Assumption 4.1. For any \u03f5, \u03b4 \u2208(0, 1), when Algorithm 1 uses hinge loss with m = poly \u00121 \u03b4 , 1 \u03f5 , d\u0398(\u03c4\u03c3B+ 2/B2 \u00b51), r, 1 pB \u0013 \u2264ed, T = poly (m) , n = poly (m) and proper hyper-parameters, then with probability at least 1 \u2212\u03b4, there exists t \u2208[T] such that Pr[sign(f\u039e(t)(x)) \u0338= y] \u2264 \u221a 2r d0.4\u03c4\u22120.8 + \u03f5. 7 \fThe theorem shows that gradient descent can learn to a small error via learning the gradient features, given proper hyper-parameters. In particular, we need sufficient overparameterization (a sufficiently large number m of neurons). When \u03c3B+2/B2 \u00b51 is a constant which is the prototypical interesting case, and we choose a constant \u03c4, then m is polynomial in the key parameters 1 \u03b4 , 1 \u03f5 , d, r, 1 pB , and the error bound is inverse polynomial in d. The complete proof is given in Appendix E.2. [46] studies (almost) linear separable cases while our setting includes non-linear separable cases, e.g., XOR. [99] mainly studies neural network classification on 4 Gaussian clusters with XOR structured labels, while our setting is much more general, e.g., our cluster number can extend up to 2d. 4.1.1 Mixtures of Gaussians: Beyond the Kernel Regime As discussed in the introduction, it is important for the analysis to go beyond fixed feature methods such as NTK (i.e., the kernel regime), so as to capture the feature learning ability which is believed to be the key factor for the empirical success. We first review the fixed feature methods. Following [33], suppose \u03a8 is a data-independent feature mapping of dimension N with bounded features, i.e., \u03a8 : X \u2192[\u22121, 1]N. For B > 0, the family of linear models on \u03a8 with bounded norm B is HB = {h(\u02dc x) : h(\u02dc x) = \u27e8\u03a8(\u02dc x), w\u27e9, \u2225w\u22252 \u2264B}. This can capture linear models on fixed finitedimensional feature maps, e.g., NTK, and also infinite dimensional feature maps, e.g., kernels like RBF, that can be approximated by feature maps of polynomial dimensions [64, 98, 105]. Our framework indeed goes beyond fixed features and shows features from gradients are more powerful than features from random initialization, e.g., NTK. Our framework can show the advantage of network learning over kernel methods under the setting of [99] (4 Gaussian clusters with XOR structured labels). 
For large enough d, our framework only needs roughly \u2126(log d) neurons and \u2126 \u0000(log d)2\u0001 samples to achieve arbitrary small constant error (see Theorem E.18 when \u03c3B = 1), while fixed feature methods need \u2126(d2) features and \u2126(d2) samples to achieve nontrivial errors (as proved in [99]). Moreover, [99] uses ODE to simulate the optimization process for the 2-layer networks learning XOR-shaped Gaussian mixture with \u2126(1) neurons and gives convincing evidence that \u2126(d) samples is enough to learn it, yet they do not give a rigorous convergence guarantee for this problem. We successfully derive a convergence guarantee and we require a much smaller sample size \u2126 \u0000(log d)2\u0001 . For the proof (detailed in Appendix E.3), we only need to calculate the p, \u03b3, BG of the data distribution carefully and then inject these numbers into Theorem 3.12. 4.2 Parity Functions Parity functions are a canonical family of learning problems in computational learning theory, usually for showing theoretical computational barriers [103]. The typical sparse parties over d-dim binary inputs \u03d5 \u2208{\u00b11}d are Q i\u2208A \u03d5i where A \u2286[d] is a subset of dimensions. Recent studies have shown that when the distribution of inputs \u03d5 has structures rather than uniform, neural networks can perform feature learning and finally learn parity functions with a small error, while methods without feature learning, e.g. NTK, cannot achieve as good results [33, 76, 105]. Thus, this has been a prototypical setting for studying feature learning phenomena in networks. Here we consider a generalization of this problem and show that our framework can show successful learning via gradient descent. Data Distributions. Suppose M \u2208Rd\u00d7D is an unknown dictionary with D columns that can be regarded as patterns. For simplicity, assume d = D and M is orthonormal. Let \u03d5 \u2208Rd be a hidden representation vector. Let A \u2286[D] be a subset of size rk corresponding to the class relevant patterns and r is an odd number. Then the input is generated by M\u03d5, and some function on \u03d5A generates the label. WLOG, let A = {1, . . . , rk}, A\u22a5= {rk + 1, . . . , d}. Also, we split A such that for all j \u2208[r], Aj = {(j \u22121)k + 1, . . . , jk}. Then the input x and the class label y are given by: x = M\u03d5, y = g\u2217(\u03d5A) = sign \u0010 X j\u2208[r] XOR(\u03d5Aj) \u0011 , (10) where g\u2217is the ground-truth labeling function mapping from Rrk to Y = {\u00b11}, \u03d5A is the sub-vector of \u03d5 with indices in A, and XOR(\u03d5Aj) = Q l\u2208Aj \u03d5l is the parity function. We still need to specify the distribution X of \u03d5, which determines the structure of the input distribution: X := (1 \u22122rpA)XU + X j\u2208[r] pA(Xj,+ + Xj,\u2212). (11) 8 \fFor all corresponding \u03d5A\u22a5in X, we have \u2200l \u2208A\u22a5, independently: \u03d5l = \uf8f1 \uf8f2 \uf8f3 +1, w.p. po \u22121, w.p. po 0, w.p. 1 \u22122po , where po controls the signal noise ratio: if po is large, then there are many nonzero entries in A\u22a5which are noise interfering with the learning of the ground-truth labeling function on A. For corresponding \u03d5A, any j \u2208[r], we have \u2022 In Xj,+, \u03d5Aj = [+1, +1, . . . , +1]\u22a4and \u03d5A\\Aj only have zero elements. \u2022 In Xj,\u2212, \u03d5Aj = [\u22121, \u22121, . . . , \u22121]\u22a4and \u03d5A\\Aj only have zero elements. \u2022 In XU, we have \u03d5A draw from {+1, \u22121}rk uniformly. 
In short, we have r parity functions each corresponding to a block of k dimensions; Xj,+ and Xj,\u2212 stands for the component providing a strong signal for the j-th parity; XU corresponds to uniform distribution unrelated to any parity and providing weak learning signal; A\u22a5is the noise part. The label depends on the sum of the r parity functions. Assumption 4.5. Let 8 \u2264\u03c4 \u2264d be a parameter that will control our final error guarantee. Assume k is an odd number and: k \u2265\u2126(\u03c4 log d), d \u2265rk + \u2126(\u03c4r log d), po = O \u0010 rk d\u2212rk \u0011 , pA \u22651 d. Remark 4.6. We set up the problem to be more general than the parity function learning in existing work. If r = 1, the labeling function reduces to the traditional k-sparse parties of d bits. The assumptions require k, d, and pA to be sufficiently large so as to provide enough large signals for learning. Note that when k = d 16, r = 1, po = 1 2, our analysis also holds, which shows our framework is beyond the kernel regime (discuss in detail in Section 4.2.1). To apply our framework, again we only need to compute the Gradient Feature set and the corresponding optimal loss. We first define the Gradient Features: For all j \u2208[r], let Dj = P l\u2208Aj Ml \u2225P l\u2208Aj Ml\u22252 . Lemma 4.7 (Parity Functions: Gradient Features). We have (Dj, +1), (Dj, \u22121) \u2208Sp,\u03b3,BG for all j \u2208[r], where p = \u0398 \u0012 1 \u221a\u03c4r log d \u00b7 d\u0398(\u03c4r) \u0013 , \u03b3 = 1 d\u03c4\u22122 , BG = \u221a kpA \u2212O \u221a k d\u03c4 ! . (12) With gradient features from Sp,\u03b3,BG, let f \u2217(x) = Pr j=1 Pk i=0(\u22121)i+1\u221a k h \u03c3 \u0010 \u27e8Dj, x\u27e9\u22122i\u2212k\u22121 \u221a k \u0011 \u2212 2\u03c3 \u0010 \u27e8Dj, x\u27e9\u22122i\u2212k \u221a k \u0011 + \u03c3 \u0010 \u27e8Dj, x\u27e9\u22122i\u2212k+1 \u221a k \u0011 i whose hinge loss is 0. Above, we show that Dj is the \u201cindicator function\u201d for the subset Aj so that we can build the optimal neural network based on such directions. Given the values on gradient feature parameters and the optimal approximation loss, the framework immediately leads to the following guarantee: Theorem 4.8 (Parity Functions: Main Result). Assume Assumption 4.5. For any \u03f5, \u03b4 \u2208(0, 1), when Algorithm 1 uses hinge loss with m = poly \u00121 \u03b4 , 1 \u03f5 , d\u0398(\u03c4r), k, 1 pA \u0013 \u2264ed, T = poly (m) , n = poly (m) and proper hyper-parameters, then with probability at least 1 \u2212\u03b4, there exists t \u2208[T] such that Pr[sign(f\u039e(t)(x)) \u0338= y] \u2264 3r \u221a k d(\u03c4\u22123)/2 + \u03f5. The theorem shows that gradient descent can learn to a small error in this problem. We also need sufficient overparameterization: When r is a constant (e.g., r = 1 in existing work), and we choose a constant \u03c4, m is polynomial in 1 \u03b4 , 1 \u03f5 , d, k, 1 pA , and the error bound is inverse polynomial in d. The proof is in Appendix E.4. Our setting is more general than that in [33, 76] which corresponds to M = I, r = 1, pA = 1 4, po = 1 2. [105] study single index learning, where one feature direction is enough for a two-layer network to recover the label, while our setting considers r directions D1, . . . , Dr, so the network needs to learn multiple directions to get a small error. 9 \f4.2.1 Parity Functions: Beyond the Kernel Regime Again, we show that our framework indeed goes beyond fixed features under parity functions. 
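The following sketch (numpy, names ours) generates data from the distribution in (10)-(11) and forms the feature directions D_j just defined; it assumes 2 r p_A <= 1 and 2 p_o <= 1, takes D = d with M orthonormal as in the setup, and is intended only to make the components X_U, X_{j,+}, X_{j,-} and the noise on A_perp explicit.

import numpy as np

def sample_parity(n, d, r, k, p_A, p_o, rng=np.random.default_rng(0)):
    M = np.linalg.qr(rng.standard_normal((d, d)))[0]       # orthonormal dictionary
    phi = np.zeros((n, d))
    # noise coordinates on A_perp: +1 / -1 each with probability p_o, else 0
    phi[:, r * k:] = rng.choice([1.0, -1.0, 0.0], size=(n, d - r * k), p=[p_o, p_o, 1 - 2 * p_o])
    # mixture over the relevant block A: X_U w.p. 1 - 2 r p_A, each X_{j,+} / X_{j,-} w.p. p_A
    comp = rng.choice(2 * r + 1, size=n, p=[1 - 2 * r * p_A] + [p_A] * (2 * r))
    for i in range(n):
        if comp[i] == 0:                                   # X_U: uniform signs on all of A
            phi[i, : r * k] = rng.choice([1.0, -1.0], size=r * k)
        else:                                              # X_{j,+} or X_{j,-}: one block constant, rest zero
            j = (comp[i] - 1) // 2
            sign = 1.0 if (comp[i] - 1) % 2 == 0 else -1.0
            phi[i, j * k:(j + 1) * k] = sign
    # y = sign( sum_j XOR(phi_{A_j}) ), with XOR(phi_{A_j}) = prod_{l in A_j} phi_l
    y = np.sign(sum(np.prod(phi[:, j * k:(j + 1) * k], axis=1) for j in range(r)))
    X = phi @ M.T                                          # x = M phi
    D = np.stack([M[:, j * k:(j + 1) * k].sum(axis=1) / np.sqrt(k) for j in range(r)])
    return X, y, D

Note that <D_j, x> = (1/sqrt(k)) * sum_{l in A_j} phi_l, so it equals +/- sqrt(k) on X_{j,+} / X_{j,-} and concentrates at constant scale on the uniform component; this gap is what allows the directions D_j to be recovered from the simplified gradient vectors, as stated next.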
Our problem setting in Section 4.2 is general enough to include the problem setting in [33]. Their lower bound for fixed feature methods directly applies to our case and leads to the following: Proposition 4.9. There exists a data distribution in the parity learning setting in Section 4.2 with M = I, r = 1, pA = 1 4, k = d 16, po = 1 2, such that all h \u2208HB have hinge-loss at least 1 2 \u2212 \u221a NB 2k\u221a 2 . This means to get an inverse-polynomially small loss, fixed feature models need to have an exponentially large size, i.e., either the number of features N or the norm B needs to be exponential in k. In contrast, Theorem 4.8 shows our framework guarantees a small loss with a polynomially large model, runtime, and sample complexity. Clearly, our framework is beyond the fixed feature methods. Parities on Uniform Inputs. When r = 1, pA = 0, our problem setting will degenerate to the classic sparse parity function on a uniform input distribution. This has also been used for analyzing network learning [16]. For this case, our framework can get a k2O(k) log(k) network width bound and a O(dk) sample complexity bound, matching those in [16]. This then again confirms the advantage of network learning over kernel methods that requires d\u2126(k) dimensions as shown in [16]. See the full statement in Theorem E.31, details in Appendix E.5, and alternative analysis in Appendix E.6. 5 Further Implications and Conclusion Our general framework sheds light on several interesting phenomena in NN learning observed in practice. Feature learning beyond the kernel regime has been discussed in Section 4.1.1 and Section 4.2.1. Here we discuss the LTH and defer more implications such as simplicity bias, learning over different data distributions, and new perspectives about roadmaps forward in Appendix C. Lottery Ticket Hypothesis (LTH). Another interesting phenomenon is the LTH [41]: randomlyinitialized networks contain subnetworks that when trained in isolation reach test accuracy comparable to the original network in a similar number of iterations. Later studies (e.g., [42]) show that LTH is more stable when subnetworks are found in the network after a few gradient steps. Our framework provides an explanation for two-layer networks: the lottery ticket subnetwork contains exactly those neurons whose gradient feature approximates the weights of the \u201cground-truth\u201d network f \u2217; they may not exist at initialization but can be found after the first gradient step. More precisely, Lemma 3.14 shows that after the first gradient step, there is a sparse second-layer weight \u02dc a with \u2225\u02dc a\u22250 = O \u0000r\u221amp \u0001 , such that using this weight on the hidden neurons gives a network with a small loss. Let U be the support of \u02dc a. Equivalently, there is a small-loss subnetwork f U \u039e with only neurons in U and with second-layer weight \u02dc aU on these neurons. Following the same proof of Theorem 3.12: Proposition 5.1. In the same setting of Theorem 3.12 but only considering the subnetwork supported on U after the first gradient step, with the same requirements on m and T, with proper hyperparameter values, we have the same guarantee: with probability \u22651 \u2212\u03b4, there is t \u2208[T] with Pr[sign(f U \u039e(t))(x) \u0338= y] \u2264OPTd,r,BF ,Sp,\u03b3,BG + rBa1Bx1 r 2\u03b3 + O \u0010 \u221aBx2 log n BG \u221an \u0011 + \u03f5. 
This essentially formally proves LTH for two-layer networks, showing (a) the existence of the winning lottery subnetwork and (b) that gradient descent on the subnetwork can learn to similar loss in similar runtime as on the whole network. In particular, (b) is novel and not analyzed in existing work. We provide our work\u2019s broader impacts and limitations (e.g., statement of recovering existing results and some failure cases beyond our framework) in Appendix A and Appendix B respectively. Conclusion. We propose a general framework for analyzing two-layer neural network learning by gradient descent and show that it can lead to provable guarantees for several prototypical problem settings for analyzing network learning. In particular, our framework goes beyond fixed feature methods, e.g., NTK. It sheds light on several interesting phenomena in NN learning, e.g., the lottery ticket hypothesis and simplicity bias. Future directions include: (1) How to extend the framework to deeper networks? (2) While the current framework focuses on the gradient features in the early gradient steps, whether feature learning also happens in later steps and if so how to formalize that? 10 \fAcknowledgements The work is partially supported by Air Force Grant FA9550-18-1-0166, the National Science Foundation (NSF) Grants 2008559-IIS, 2023239-DMS, and CCF-2046710." + }, + { + "url": "http://arxiv.org/abs/2303.00106v1", + "title": "The Trade-off between Universality and Label Efficiency of Representations from Contrastive Learning", + "abstract": "Pre-training representations (a.k.a. foundation models) has recently become a\nprevalent learning paradigm, where one first pre-trains a representation using\nlarge-scale unlabeled data, and then learns simple predictors on top of the\nrepresentation using small labeled data from the downstream tasks. There are\ntwo key desiderata for the representation: label efficiency (the ability to\nlearn an accurate classifier on top of the representation with a small amount\nof labeled data) and universality (usefulness across a wide range of downstream\ntasks). In this paper, we focus on one of the most popular instantiations of\nthis paradigm: contrastive learning with linear probing, i.e., learning a\nlinear predictor on the representation pre-trained by contrastive learning. We\nshow that there exists a trade-off between the two desiderata so that one may\nnot be able to achieve both simultaneously. Specifically, we provide analysis\nusing a theoretical data model and show that, while more diverse pre-training\ndata result in more diverse features for different tasks (improving\nuniversality), it puts less emphasis on task-specific features, giving rise to\nlarger sample complexity for down-stream supervised tasks, and thus worse\nprediction performance. Guided by this analysis, we propose a contrastive\nregularization method to improve the trade-off. We validate our analysis and\nmethod empirically with systematic experiments using real-world datasets and\nfoundation models.", + "authors": "Zhenmei Shi, Jiefeng Chen, Kunyang Li, Jayaram Raghuram, Xi Wu, Yingyu Liang, Somesh Jha", + "published": "2023-02-28", + "updated": "2023-02-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "INTRODUCTION Representation pre-training is a recent successful approach that utilizes large-scale unlabeled data to address the challenges of scarcity of labeled data and distribution shift. 
Different from the traditional supervised learning approach using a large labeled dataset, representation learning \ufb01rst pre-trains a representation function using large-scale diverse unlabeled datasets by self-supervised learning (e.g., contrastive learning), and then learns predictors on the representation using small labeled datasets for downstream target tasks. The pre-trained model is commonly referred to as a foundation model (Bommasani et al., 2021), and has achieved remarkable performance in many applications, e.g., BERT (Devlin et al., 2019), GPT-3 (Brown et al., 2020), CLIP (Radford et al., 2021), and Flamingo (Alayrac et al., 2022). To this end, we note that there are two properties that are key to their success: (1) label ef\ufb01ciency: with the pre-trained representation, only a small amount of labeled data is needed to learn accurate predictors for downstream target tasks; (2) universality: the pre-trained representation can be used across various downstream tasks. In this work, we focus on contrastive learning with linear probing that learns a linear predictor on the representation pre-trained by contrastive learning, which is an exemplary pre-training approach (e.g., (Arora et al., 2019; Chen et al., 2020)). We highlight and study a fundamental trade-off between label ef\ufb01ciency and universality, though ideally, one would like to have these two key properties simultaneously. Since pre-training with large-scale diverse unlabeled data is widely used in practice, such a trade-off merits deeper investigation. Theoretically, we provide an analysis of the features learned by contrastive learning, and how the learned features determine the downstream prediction performance and lead to the trade-off. We 1 arXiv:2303.00106v1 [cs.LG] 28 Feb 2023 \fPublished as a conference paper at ICLR 2023 propose a hidden representation data model, which \ufb01rst generates a hidden representation containing various features, and then uses it to generate the label and the input. We \ufb01rst show that contrastive learning is essentially generalized nonlinear PCA that can learn hidden features invariant to the transformations used to generate positive pairs. We also point out that additional assumptions on the data and representations are needed to obtain non-vacuous guarantees for prediction performance. We thus consider a setting where the data are generated by linear functions of the hidden representation, and formally prove that the difference in the learned features leads to the trade-off. In particular, pre-training on more diverse data learns more diverse features and is thus useful for prediction on more tasks. But it also down-weights task-speci\ufb01c features, implying larger sample complexity for predictors and thus worse prediction performance on a speci\ufb01c task. This analysis inspires us to propose a general method \u2013 contrastive regularization \u2013 that adds a contrastive loss to the training of predictors to improve the accuracy on downstream tasks. 105 Number of unlabeled data 0.175 0.180 0.185 0.190 0.195 0.200 0.205 0.210 Averaged T est Accuracy average test accuracy on all tasks test accuracy on the target task 0.75 0.80 0.85 0.90 0.95 T arget T ask T est Accuracy C CS CSG CSGI Figure 1: Illustration of the trade-off between universality and label ef\ufb01ciency. x-axis: from left to right, incrementally add CINIC-10 (C), SVHN (S), GTSRB (G), and ImageNet32 (I) for pretraining MoCo v2. For example, \u201cCS\u201d means CINIC-10+SVHN. 
The average test accuracy of prediction on all 4 datasets (red line) increases with more diverse pre-training data, while that on the target task CIFAR-10 (blue line) decreases. (The variance of the blue line is too small to be seen.) Please refer to Section 3.1 for details. Empirically, we \ufb01rst perform controlled experiments to reveal the trade-off. Speci\ufb01cally, we \ufb01rst pre-train on a speci\ufb01c dataset similar to that of the target task, and then incrementally add more datasets into pretraining. In the end, the pre-training data includes both datasets similar to the target task and those not so similar, which mimics the practical scenario that foundation models are pre-trained on diverse data to be widely applicable for various downstream tasks. Fig. 1 gives an example of this experiment: As we increase task diversity for contrastive learning, it increases the average accuracy on all tasks from 18.3% to 20.1%, while it harms the label ef\ufb01ciency of an individual task, on CIFAR-10 the accuracy drops from 88.5% to 76.4%. We also perform experiments on contrastive regularization, and demonstrate that it can consistently improve over the typical \ufb01netuning method across multiple datasets. In several cases, the improvement is signi\ufb01cant: 1.3% test accuracy improvement for CLIP on ImageNet, 4.8% for MoCo v3 on GTSRB (see Table 1 and 2 for details). With these results, we believe that it is of importance to bring the community\u2019s attention to this trade-off and the forward path of foundation models. Our main contributions are summarized as follows: \u2022 We propose a hidden representation data model and prove that contrastive learning is essentially generalized nonlinear PCA, and can encode hidden features invariant to the transformations used in positive pairs (Section 2.1). \u2022 We formally prove the trade-off in a simpli\ufb01ed setting with linear data (Section 2.2). \u2022 We empirically demonstrate the trade-off across different methods and datasets for contrastive learning with linear probing (Section 3.1 and 3.2). \u2022 We propose a contrastive regularization method for training the predictor on a target task (Section 2.2), which achieves consistent improvement in our experiments (Section 3.3). Related Work on Representation Pre-training. This paradigm pre-trains a representation function on a large dataset and then uses it for prediction on various downstream tasks (Devlin et al., 2019; Kolesnikov et al., 2020; Brown et al., 2020; Newell & Deng, 2020). The representations are also called foundation models (Bommasani et al., 2021). There are mainly two kinds of approaches: (1) supervised approaches (e.g., (Kolesnikov et al., 2020)) that pre-train on large labeled datasets; (2) self-supervised approaches (e.g., (Newell & Deng, 2020)) that pre-train on large and diverse unlabeled datasets. Recent self-supervised pre-training can compete with or outperform supervised pre-training on the downstream prediction performance (Ericsson et al., 2021). Practical examples like BERT (Devlin et al., 2019), GPT-3 (Brown et al., 2020), CLIP (Radford et al., 2021), 2 \fPublished as a conference paper at ICLR 2023 DALL\u00b7E (Ramesh et al., 2022), PaLM (Chowdhery et al., 2022) and Flamingo (Alayrac et al., 2022) have obtained effective representations universally useful for a wide range of downstream tasks. 
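Throughout, "linear probing" means fitting only a linear head on frozen pre-trained features. As a point of reference, a minimal sketch of this protocol is below; it is our illustration only, the random ReLU projection merely stands in for a pre-trained encoder such as MoCo v2, the labels are random placeholders, and the ridge head is just one simple way to fit the probe.

```python
# Illustrative linear-probing sketch: the encoder phi is frozen, only a linear head is fit.
import numpy as np

rng = np.random.default_rng(0)
d, k, n_label = 3072, 128, 500                     # raw dim, feature dim, labeled-set size (illustrative)
W_phi = rng.normal(0, 1 / np.sqrt(d), size=(k, d)) # frozen stand-in for a pretrained encoder
phi = lambda X: np.maximum(X @ W_phi.T, 0.0)       # frozen representation

X = rng.normal(size=(n_label, d))                  # placeholder inputs
y = rng.integers(0, 10, size=n_label)              # placeholder 10-way labels

Z = phi(X)                                         # frozen features
Y = np.eye(10)[y]                                  # one-hot targets
lam = 1e-2                                         # ridge strength (illustrative)
U = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ Y)   # k x 10 linear head
pred = (Z @ U).argmax(axis=1)
print("train accuracy of the probe (data here is random, so only the protocol matters):",
      (pred == y).mean())
```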
A popular method is contrastive learning, i.e., to distinguish matching and non-matching pairs of augmented inputs (e.g., (van den Oord et al., 2018; Chen et al., 2020; He et al., 2020; Grill et al., 2020; Chen & He, 2021; Zbontar et al., 2021; Gao et al., 2021)). Some others solve \u201cpretext tasks\u201d like predicting masked parts of the inputs (e.g.,(Doersch et al., 2015; Devlin et al., 2019)). Related Work on Analysis of Self-supervised Pre-training. There exist abundant studies analyzing self-supervised pre-training (Arora et al., 2019; Tsai et al., 2020; Yang et al., 2020; Wang & Isola, 2020; Garg & Liang, 2020; Zimmermann et al., 2021; Tosh et al., 2021; HaoChen et al., 2021; Wen & Li, 2021; Liu et al., 2021; Kotar et al., 2021; Van Gansbeke et al., 2021; Lee et al., 2021; Saunshi et al., 2022a; Shen et al., 2022; Kalibhat et al., 2022). They typically focus on pre-training or assume the same data distribution in pre-training and prediction. Since different distributions are the critical reason for the trade-off we focus on, we provide a new analysis. Some studies have connected contrastive learning to component analysis (Balestriero & LeCun, 2022; Tian, 2022; Ko et al., 2022). Our analysis focuses on the trade-off, while also showing a connection to PCA based on our notion of invariant features and is thus fundamentally different. Recently, Cole et al. have attempted to identify successful conditions for contrastive learning and pointed out that diverse pretraining data can decrease prediction performance compared to pre-training on the speci\ufb01c task data. However, they do not consider universality and provide no systematic study. Similarly, Bommasani et al. call for more research on specialization vs. diversity in pre-training data but provide no study. We aim to provide a better understanding of the trade-off between universality and label ef\ufb01ciency. 2 THEORETICAL ANALYSIS Our experiments in Section 3.1 demonstrate a trade-off between the universality and label ef\ufb01ciency of contrastively pre-trained representations when used for prediction on a distribution different from the pre-training data distribution. See Fig. 1 for an example. Intuitively, from the unlabeled data, pre-training can learn semantic features useful for prediction on even different data distributions. To analyze this, we need to formalize the notion of useful semantic features. So we introduce a hidden representation data model where a hidden representation (i.e., a set of semantic features) is sampled and then used for generating the data. Similar models have been used in some studies (HaoChen et al., 2021; Zimmermann et al., 2021), while we introduce the notion of spurious and invariant features and obtain a novel analysis for contrastive learning. Using this theoretical model of data, Section 2.1 investigates what features are learned by contrastive learning. We show that contrastive learning can be viewed as a generalization of Principal Components Analysis, and it encodes the invariant features not affected by the transformations but removes the others. We also show that further assumptions on the data and the representations are needed necessary for any non-vacuous bounds for downstream prediction. So Section 2.2 considers a simpli\ufb01ed setting with linear data. We show that when pre-trained on diverse datasets (modeled as a mixture of unlabeled data from different tasks), it encodes all invariant features from the different tasks and thus is useful for all tasks. 
On the other hand, it essentially emphasizes those that are shared among the tasks, but down-weights those that are speci\ufb01c to a single task. Compared to pre-training only on unlabeled data from the target task, this then leads to a larger sample complexity and thus worse generalization for prediction on the target task. Therefore, we show that the trade-off between universality and label ef\ufb01ciency occurs due to the fact that when many useful features from diverse data are packed into the representation, those for a speci\ufb01c target task can be down-weighted and thus worsen the prediction performance on it. Based on this insight, we propose a contrastive regularization method for using representations in downstream prediction tasks, which achieves consistent improvement over the typical \ufb01ne-tuning method in our experiments in Section 3.3. Contrastive Learning. Let X \u2286Rd denote the input space, Y the label space, and Z \u2286Rk the output vector space of the learned representation function. Let \u03a6 denote the hypothesis class of representations \u03c6 : X \u2192Z, and F\u03c6 the hypothesis class of predictors on \u03c6. A task is simply a data distribution over X \u00d7Y. In pre-training, using transformations on unlabeled data from the tasks, we have some pre-train distribution Dpre over positive pairs (x, x+) and negative examples x\u2212, where x, x+ are obtained by applying random transformations on the same input (e.g., cropping or color jitter for images), and x\u2212is an independent example. The contrastive loss is \u2113 \u0000\u03c6(x)\u22a4(\u03c6(x+) \u2212\u03c6(x\u2212)) \u0001 3 \fPublished as a conference paper at ICLR 2023 where \u2113(t) is a suitable loss function. Typically, the logistic loss \u2113(t) = log(1 + exp(\u2212t)) is used, while our analysis also holds for other loss functions. A representation \u03c6 is learned by: min \u03c6\u2208\u03a6 E(x,x+,x\u2212)\u223cDpre \u0002 \u2113 \u0000\u03c6(x)\u22a4(\u03c6(x+) \u2212\u03c6(x\u2212)) \u0001\u0003 . (1) (We simply consider the population loss since pre-training data are large-scale.) Then a predictor f is learned on top of \u03c6 using m labeled points {(xi, yi)}m i=1 from a speci\ufb01c target task D: min f\u2208F\u03c6 1 m m X i=1 \u2113c(f(\u03c6(xi)), yi) (2) where \u2113c is a prediction loss (e.g. cross-entropy). Usually, f is a linear classi\ufb01er (Linear Probing) with a bounded norm: F\u03c6 = {f(z) = u\u22a4z : u \u2208Rk, \u2225u\u2225\u2264B}, where \u2225\u00b7 \u2225denotes the \u21132 norm. Hidden Representation Data Model. We now consider the pre-train distribution Dpre over (x, x+, x\u2212). To capture that pre-training can learn useful features, we assume a hidden representation for generating the data: \ufb01rst sample a hidden representation z \u2208Z from a distribution Dz over some hidden representation space Z \u2286Rd, and then generate the input x and the label y from z. (The space Z models semantic features, and can be different from the learned representation space Z.) The dimensions of z are partitioned into two disjoint subsets of [d] := {1, \u00b7 \u00b7 \u00b7 , d}: spurious features U that are affected by the transformations, and invariant features R that are not. Speci\ufb01cally, let DU, DR denote the distributions of zU and zR, respectively, and let x = g(z) denote the generative function for x. Then the positive pairs (x, x+) are generated as follows: z = [zR; zU] \u223cDz, z+ U \u223cDU, z+ = [zR; z+ U ], x = g(z), x+ = g(z+). 
(3) That is, x, x+ are from the same zR but two random copies of zU that model the random transformations. Finally, x\u2212is an i.i.d. sample from the same distribution as x: z\u2212\u223cDz, x\u2212= g(z\u2212). 2.1 WHAT FEATURES ARE LEARNED BY CONTRASTIVE LEARNING? To analyze prediction performance, we \ufb01rst need to analyze what features are learned in pre-training. Contrastive Learning is Generalized Nonlinear PCA. Recall that given data x from a distribution D, Principal Components Analysis (PCA) (Pearson, 1901; Hotelling, 1933) aims to \ufb01nd a linear projection function \u03c6 on some subspace such that the variance of the projected data \u03c6(x) is maximized, i.e., it is minimizing the following PCA objective: \u2212Ex\u223cD[\u2225\u03c6(x) \u2212Ex\u2032\u223cD[\u03c6(x\u2032)]\u22252] = \u2212Ex\u223cD[\u2225\u03c6(x) \u2212\u03c60\u22252] (4) where \u03c60 := E[\u03c6(x\u2032)] is the mean of the projected data. Nonlinear PCA replaces linear representation functions \u03c6 with nonlinear ones. We next show that contrastive learning is a generalization of nonlinear PCA on the smoothed representation after smoothing out the transformations. Theorem 2.1. If \u2113(t) = \u2212t, then the contrastive loss is equivalent to the PCA objective on \u03c6zR: E \u0002 \u2113 \u0000\u03c6(x)\u22a4[\u03c6(x+) \u2212\u03c6(x\u2212)] \u0001\u0003 = \u2212E \u0002 \u2225\u03c6zR \u2212\u03c60\u22252\u0003 (5) where \u03c6zR := E[\u03c6(x) | zR] = E[\u03c6(g(z)) | zR]. If additionally \u03c6(x) is linear in x, then it is equivalent to the linear PCA objective \u2212E \u0002 \u2225\u03c6(\u00af x) \u2212\u03c60\u22252\u0003 on data \u00af x := E[x|zR] = E[g(z)|zR]. So contrastive learning is essentially nonlinear PCA when \u2113(t) = \u2212t, and further specializes to linear PCA when the representation is linear. As PCA \ufb01nds directions with large variances, the analogue is that contrastive learning encodes important invariant features but not spurious ones. Contrastive Learning Encodes Invariant Features and Removes Spurious Features. For a formal statement we need some weak assumptions on the data, the representations, and the loss: (A1) zR can be recovered from x, i.e., the inputs x = g(z) from different zR\u2019s are disjoint. (A2) The representation functions are the regular functions with \u2225\u03c6(x)\u2225= Br (\u2200x) for some Br > 0. Being regular means there are a \ufb01nite L and a partition of Z into a \ufb01nite number of subsets, such that in each subset all \u03c6 \u25e6g have Lipschitz constants bounded by L. (A3) The loss \u2113(t) is convex, decreasing, and lower-bounded. 4 \fPublished as a conference paper at ICLR 2023 The \ufb01rst condition means the invariant features zR can be extracted from x (note that g need not be invertible). The regular condition on the representation is to exclude some pathological cases like the Dirichlet function; essentially reasonable functions relevant for practice satisfy this condition, e.g., when g is Lipschitz and \u03c6 are neural networks with the ReLU activation. Also, note that the logistic loss typically used in practice satis\ufb01es the last condition. We say a function f(z) is independent of a subset of input dimensions zS, if there exists a function f \u2032 such that f(z) = f \u2032(z\u2212S) with probability 1, where z\u2212S denotes the set of all zj with j \u0338\u2208S. 
We say the representation \u03c6 encodes a feature zi, if \u03c6 \u25e6g : Z \u2192Z is not independent of zi as long as the generative function g(z) is not independent of zi. Theorem 2.2. Under Assumptions (A1)(A2)(A3), the optimal representation \u03c6\u2217satis\ufb01es: (1) \u03c6\u2217does not encode the spurious features zU: \u03c6\u2217\u25e6g(z) is independent of zU. (2) For any invariant feature i \u2208R, there exists Bi > 0 such that as long as the representations\u2019 norm Br \u2265Bi, then \u03c6\u2217encodes zi. Furthermore, if Z is \ufb01nite, then Bi is monotonically decreasing in Pr[zR\\{i} = z\u2212 R\\{i}, zi \u0338= z\u2212 i ], the probability that in zR and z\u2212 R, the i-th feature varies while the others remain the same. So contrastive learning aims to remove the spurious features and preserve the invariant features. Then the transformations should be chosen such that they will not affect the useful semantic features, but change those irrelevant to the label. Interestingly, the theorem further suggests that contrastive learning tends to favor the more \u201cspread-out\u201d invariant features zi, as measured by Pr[zR\\{i} = z\u2212 R\\{i}, zi \u0338= z\u2212 i ]. As we increase the representation capacity Br, Br passes the threshold Bi for more features zi, so \u03c6\u2217\ufb01rst encodes the more spread-out invariant features and then the others. This further suggests the following intuition for the trade-off. When pre-trained on diverse data modeled as a mixture from multiple tasks with different invariant features, the representation encodes all the invariant features and thus is useful for prediction on all the tasks. When pre-trained on only a speci\ufb01c task, features speci\ufb01c to this task are favored over those that only show up in other tasks, which leads to smaller sample complexity for learning the predictor and thus better prediction. However, to formalize this, some inductive bias assumptions about the data and the representation are necessary to get any non-vacuous guarantee for the prediction (see discussion in Appendix A.1). Therefore, Section 2.2 introduces additional assumptions and formalizes the trade-off. 2.2 ANALYZING THE TRADE-OFF: LINEAR DATA D1 S D2 DT U R . . . P1 P2 PT Figure 2: Illustration of the features in our data distributions. To analyze the prediction performance, we \ufb01rst need to model the relation between the pre-training data and the target task. We model the diverse pre-training data as a mixture of data from T different tasks Dt\u2019s, while the target task is one of the tasks. All tasks share a public feature set S of size s, and each task Dt additionally owns a private disjoint feature set Pt of size r \u2212s, i.e., Pt \u2229S = \u2205and Pt1 \u2229Pt2 = \u2205for t1 \u0338= t2 (Fig. 2). The invariant features for Dt are then Rt = S \u222aPt. All invariant features are R = \u222aT t=1Rt, and spurious features are U = [d]\\R. In task Dt, the (x, x+) are generated as follows: zRt \u223cN(0, I), zR\\Rt = 0, zU \u223cN(0, I), z = [zR; zU], x = g(z), (6) z+ U \u223cN(0, I), z+ = [zR; z+ U ], x+ = g(z+), (7) and x\u2212is simply an i.i.d. copy from the same distribution as x. In practice, multiple independent negative examples are used, and thus we consider the following contrastive loss min\u03c6\u2208\u03a6 E(x,x+) \u0002 \u2113 \u0000\u03c6(x)\u22a4(\u03c6(x+) \u2212Ex\u2212\u03c6(x\u2212)) \u0001\u0003 for a convex and decreasing \u2113(t) to pre-train a representation \u03c6. 
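To make the generative process in (6)-(7) and the role of Theorem 2.1 concrete, the following numpy sketch samples positive pairs that share the invariant coordinates, resamples the spurious ones, and checks the generalized-PCA identity E[ℓ(φ(x)⊤(φ(x+) − φ(x−)))] = −E[∥φ_zR − φ0∥²] for ℓ(t) = −t with a linear representation. The dimensions, the dictionary M, and the matrix W are arbitrary illustrative choices, and this is a single-task instance in which R_t is the first r coordinates.

```python
# Monte-Carlo check of the generalized-PCA identity (Theorem 2.1) under the linear data model.
import numpy as np

rng = np.random.default_rng(1)
d, r, n = 30, 6, 50_000                 # ambient dim, invariant features, MC samples (illustrative)
M = np.linalg.qr(rng.normal(size=(d, d)))[0]      # orthonormal dictionary
W = rng.normal(0, 1 / np.sqrt(d), size=(8, d))    # an arbitrary linear representation phi(x) = Wx

def sample_batch(n):
    zR = rng.normal(size=(n, r))                                     # invariant part, shared by x and x+
    z  = np.concatenate([zR, rng.normal(size=(n, d - r))], axis=1)   # spurious part for x
    zp = np.concatenate([zR, rng.normal(size=(n, d - r))], axis=1)   # fresh spurious part for x+
    zm = rng.normal(size=(n, d))                                     # independent negative example
    return z @ M.T, zp @ M.T, zm @ M.T, zR

x, xp, xm, zR = sample_batch(n)
phi = lambda u: u @ W.T

# Left-hand side: contrastive loss with l(t) = -t.
lhs = -np.mean(np.sum(phi(x) * (phi(xp) - phi(xm)), axis=1))

# Right-hand side: -E||phi_zR - phi_0||^2, with phi_zR = E[phi(x) | zR] = W M [zR; 0].
phi_zR = np.concatenate([zR, np.zeros((n, d - r))], axis=1) @ M.T @ W.T
phi_0 = phi(x).mean(axis=0)
rhs = -np.mean(np.sum((phi_zR - phi_0) ** 2, axis=1))

# The two numbers should approximately agree, up to Monte-Carlo error.
print("contrastive loss vs. -E||phi_zR - phi_0||^2:", round(lhs, 3), round(rhs, 3))
```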
Then, when using \u03c6 for prediction in the target task Dt, the predictor class should contain a predictor matching the ground-truth label: F\u03c6,t = {f(z) = u\u22a4z : u \u2208Rk, \u2225u\u2225\u2264B\u03c6,t} (8) where B\u03c6,t is the minimum value such that there exists ut \u2208F\u03c6,t with y = u\u22a4 t \u03c6(x) on Dt. 5 \fPublished as a conference paper at ICLR 2023 Now, given the necessity of inductive biases for non-vacuous guarantees (see Appendix A.1), and inspired by classic dictionary learning and recent analysis on such data (e.g., Olshausen & Field (1997); Wen & Li (2021); Shi et al. (2022)), we assume linear data and linear representations: \u2022 x is linear in z: x = g(z) = Mz where M \u2208Rd\u00d7d is an orthonormal dictionary. Since linear probing has strong performance on pre-trained representations, we thus assume that the label in each task t is linear in its invariant features y = (u\u2217 t )\u22a4zRt for some u\u2217 t \u2208Rr. \u2022 The representations are linear functions with weights of bounded spectral/Frobenius norms: \u03a6 = {\u03c6(x) = Wx : W\u2208Rk\u00d7d, \u2225W\u2225\u22641, \u2225W\u2225F \u2264\u221ar}. Here the norm bounds are chosen to be the minimum values to allow recovering the invariant features in the target task, i.e., there exists \u03c6 \u2208\u03a6 such that \u03c6(x) = [zRt; 0]. We compare two representations: a speci\ufb01c one pre-trained on unlabeled data from the target task Dt, and a universal one pre-trained on an even mixture of data from T tasks. (Appendix B provides analysis for more general cases like uneven mixtures.) This captures the situation that the pretraining data contains some data similar to the target task and also other less similar data. Let vt,1 = P j\u2208S(u\u2217 t )2 j and vt,2 = P j\u2208Pt(u\u2217 t )2 j be the weights on the shared and task-speci\ufb01c invariant features, respectively. Also, assume the prediction loss \u2113c is L-Lipschitz. Proposition 2.3. The representation \u03c6\u2217obtained on an even mixture of data from all the tasks {Dt : 1 \u2264t \u2264T} satis\ufb01es \u03c6\u2217\u25e6g(z) = Q \u0010P j\u2208S \u221a\u03b1zjej + P j\u2208R\\S \u221a\u03b2zjej \u0011 for some \u03b1 \u2208[0, 1], \u03b2 = min \u0010 1, r\u2212\u03b1s T (r\u2212s) \u0011 , where ej\u2019s are the basis vectors and Q is any orthonormal matrix. The Empirical Risk Minimizer \u02c6 u \u2208F\u03c6\u2217,t on \u03c6\u2217using m labeled data points from Dt has risk E(x,y)\u223cDt[\u2113c(\u02c6 u\u22a4\u03c6\u2217(x), y)] \u22644L s 1 m \u0012vt,1 \u03b1 + vt,2 \u03b2 \u0013 \u0012p s\u03b1 + (r \u2212s)\u03b2 + O \u0012r r s\u03b1 + (r \u2212s)\u03b2 \u0013\u0013 + 8 r 2 ln(4/\u03b4) m . Proposition 2.4. The representation \u03c6\u2217 t obtained on data from Dt satis\ufb01es \u03c6\u2217 t \u25e6g(z) = Q \u0010P j\u2208Rt zjej \u0011 where ej\u2019s are the basis vectors and Q is any orthonormal matrix. The Empirical Risk Minimizer \u02c6 u \u2208F\u03c6\u2217 t ,t on \u03c6\u2217 t using m labeled data points from Dt has risk E(x,y)\u223cDt[\u2113c(\u02c6 u\u22a4\u03c6\u2217 t (x), y)] \u22644L r r m\u2225u\u2217 t \u2225+ 8 r 2 ln(4/\u03b4) m . While on task Di(i \u0338= t), any linear predictor on \u03c6\u2217 t has error at least minu EDi[\u2113c(u\u22a4zS, y)]. Difference in Learned Features Leads to the Trade-off. The key of the analysis (in Appendix B) is about what features are learned in the representations. 
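Because the displayed risk bounds above did not survive extraction cleanly, here is a LaTeX restatement reconstructed from the statements of Propositions 2.3 and 2.4 (no new content is added; m is the number of labeled points from D_t and δ the confidence parameter appearing in the bounds):

```latex
% Proposition 2.3 (representation pre-trained on the even mixture of T tasks):
\mathbb{E}_{(x,y)\sim\mathcal{D}_t}\big[\ell_c(\hat u^\top \phi^*(x), y)\big]
  \le 4L\sqrt{\tfrac{1}{m}\Big(\tfrac{v_{t,1}}{\alpha}+\tfrac{v_{t,2}}{\beta}\Big)}
      \Big(\sqrt{s\alpha+(r-s)\beta}
           + O\Big(\sqrt{\tfrac{r}{s\alpha+(r-s)\beta}}\Big)\Big)
  + 8\sqrt{\tfrac{2\ln(4/\delta)}{m}} .

% Proposition 2.4 (representation pre-trained on the target task D_t only):
\mathbb{E}_{(x,y)\sim\mathcal{D}_t}\big[\ell_c(\hat u^\top \phi_t^*(x), y)\big]
  \le 4L\sqrt{\tfrac{r}{m}}\,\|u_t^*\|
  + 8\sqrt{\tfrac{2\ln(4/\delta)}{m}} .
```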
Pre-trained on all T tasks, \u03c6\u2217is a rotation of the weighted features, where the shared features are weighted by \u221a\u03b1 and task-speci\ufb01c ones are weighted by \u221a\u03b2. Pre-trained on one task Dt, \u03c6\u2217 t is a rotation of the task-speci\ufb01c features Rt. So compared to \u03c6\u2217 t , \u03c6\u2217encodes all invariant features but down-weights the task-speci\ufb01c features Pt. The difference in the learned features then determines the prediction performance and results in a trade-off between universality and label ef\ufb01ciency: compared to \u03c6\u2217 t , \u03c6\u2217is useful for more tasks but has worse performance on the speci\ufb01c task Dt. For illustration, suppose r = 2s, and the shared and task-speci\ufb01c features are equally important for the labels on the target task: vt,1 = vt,2 = \u2225u\u2217 t \u22252/2. In Appendix B.3 we show that \u03c6\u2217has \u03b1 = 1, \u03b2 = 1 T and the error is O \u0010 L q T r m \u2225u\u2217 t \u2225 \u0011 , while the error using \u03c6\u2217 t is O \u0000Lp r m\u2225u\u2217 t \u2225 \u0001 . Therefore, the error when using representations pre-trained on data from T tasks is O( \u221a T) worse than that when just pre-training on data from the target task. On the other hand, the former can be used in all T tasks and the prediction error diminishes with the labeled data number m. While the latter only encodes Rt and the only useful features on the other tasks are zS, then even with in\ufb01nite labeled data the error can be large (\u2265minu E[\u2113c(u\u22a4zS, y)], the approximation error using only the common features zS for prediction). Improving the Trade-off via Contrastive Regularization. The above analysis provides some guidance on improving the trade-off, in particular, improving the target prediction accuracy when 6 \fPublished as a conference paper at ICLR 2023 given a pre-trained representation \u03c6\u2217. It suggests that when \u03c6\u2217is pre-trained on diverse data, one can update it by contrastive learning on some unlabeled data from the target task, which can get better features and better predictions. This is indeed the case for the illustrative example above. We can show that updating \u03c6\u2217by contrastive learning on Dt can increase the weights \u03b2 on the task-speci\ufb01c features zPt, and thus improve the generalization error (formal analysis in Appendix B.4). In practice, typically one will learn the classi\ufb01er and also \ufb01ne-tune the representation with a labeled dataset {(xi, yi)}m i=1 from the target task. We thus propose contrastive regularization for \ufb01ne-tuning: for each data point (x, y), generate contrastive pairs R = {(\u02dc x, \u02dc x+, \u02dc x\u2212)} by applying transformations, and add the contrastive loss on these pairs as a regularization term to the classi\ufb01cation loss: \u2113c(f(\u03c6(x)), y) + \u03bb |R| X (\u02dc x,\u02dc x+,\u02dc x\u2212)\u2208R \u2113 \u0000\u03c6(\u02dc x)\u22a4(\u03c6(\u02dc x+) \u2212\u03c6(\u02dc x\u2212)) \u0001 . (9) This method is simple and generally applicable to different models and algorithms. Similar ideas have been used in graph learning (Ma et al., 2021), domain generalization (Kim et al., 2021) and semi-supervised learning (Lee et al., 2022), while we use it in \ufb01ne-tuning for learning predictors. Our experiments in Section 3.3 show that it can consistently improve the prediction performance compared to the typical \ufb01ne-tuning approach. 
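Before moving to the experiments, the objective in (9) is simple enough to state in a few lines of code. The sketch below is our illustration of the loss only: the encoder, head, and augmentation are linear or Gaussian placeholders, a single negative is used per triple, and λ = 0.1 matches the value later used with MoCo v2; it is not the MoCo v2 / SimCLR training pipeline of Section 3.3.

```python
# Minimal sketch of Eq. (9): classification loss plus a contrastive penalty on augmented triples.
import numpy as np

def logistic(t):
    return np.log1p(np.exp(-t))

def softmax_xent(logits, y):
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean()

def contrastive_reg_loss(X, y, encoder, head, augment, lam=0.1, rng=None):
    """l_c(f(phi(x)), y) + lam * mean over triples of l(phi(x~)^T (phi(x~+) - phi(x~-)))."""
    rng = rng or np.random.default_rng(0)
    Z = encoder(X)                                   # phi(x)
    cls_loss = softmax_xent(head(Z), y)

    Xa, Xb = augment(X, rng), augment(X, rng)        # two transformed views of each example
    Za, Zb = encoder(Xa), encoder(Xb)
    Zneg = encoder(augment(X[rng.permutation(len(X))], rng))   # negatives: shuffled examples
    t = np.sum(Za * (Zb - Zneg), axis=1)
    return cls_loss + lam * logistic(t).mean()

# Toy usage with linear placeholders for the encoder/head and Gaussian jitter as the transformation.
rng = np.random.default_rng(0)
W_enc, W_head = rng.normal(size=(16, 32)) / 6, rng.normal(size=(10, 16)) / 4
encoder = lambda X: np.maximum(X @ W_enc.T, 0.0)
head = lambda Z: Z @ W_head.T
augment = lambda X, rng: X + 0.1 * rng.normal(size=X.shape)
X, y = rng.normal(size=(64, 32)), rng.integers(0, 10, 64)
print("regularized loss on toy data:", round(contrastive_reg_loss(X, y, encoder, head, augment), 3))
```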
3 EXPERIMENTS We conduct experiments to answer the following questions. (Q1) Does the trade-off between universality and label ef\ufb01ciency exist when training on real datasets? (Q2) What factors lead to the trade-off? (Q3) How can we alleviate the trade-off, particularly in large foundation models? Our experiments provide the following answers: (A1) The trade-off widely exists in different models and datasets when pre-training on large-scale unlabeled data and adapting with small labeled data (see Section 3.1). This justi\ufb01es our study and aligns with our analysis. (A2) Different datasets own many private invariant features leading to the trade-off, e.g., FaceScrub and CIFAR-10 do not share many invariant features (see Section 3.2). It supports our analysis in Section 2.2. (A3) Our proposed method, Finetune with Contrastive Regularization, can improve the trade-off consistently (see Section 3.3). Please refer to our released code1 for more details. 3.1 VERIFYING THE EXISTENCE OF THE TRADE-OFF 105 Number of unlabeled data 0.175 0.180 0.185 0.190 0.195 0.200 0.205 0.210 Averaged T est Accuracy average test accuracy on all tasks test accuracy on the target task 0.75 0.80 0.85 0.90 0.95 T arget T ask T est Accuracy C CS CSG CSGI (a) CIFAR-10 106 4 \u00d7 105 6 \u00d7 105 Number of unlabeled data 0.06 0.08 0.10 0.12 0.14 0.16 Averaged T est Accuracy average test accuracy on all tasks test accuracy on the target task 0.89 0.90 0.91 0.92 0.93 0.94 0.95 0.96 0.97 T arget T ask T est Accuracy E EF EFG EFGI (b) MNIST 105 Number of unlabeled data 0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 Averaged T est Accuracy average test accuracy on all tasks test accuracy on the target task 0.40 0.41 0.42 0.43 0.44 0.45 0.46 0.47 T arget T ask T est Accuracy F FC FCS FCSI (c) Fer2013 Figure 3: Trade-off between universality and label ef\ufb01ciency for MoCo v2. Appendix C.5 shows similar results for more methods and datasets. x-axis: incrementally add datasets for pre-training MoCo v2. (a) Pretraining data: CINIC-10 (C), SVHN (S), GTSRB (G), and ImageNet32 (I). E.g., \u201cCS\u201d on the x-axis means CINIC-10+SVHN. Target task: CIFAR-10. Red line: average test accuracy of Linear Probing on all 4 datasets. Blue line: test accuracy on the target task. (b) EMNIST-Digits&Letters (E), Fashion-MNIST (F), GTSRB (G), ImageNet32 (I). Target: MNIST. (c) FaceScrub (F), CIFAR-10 (C), SVHN (S), ImageNet32 (I). Target: Fer2013. Note that training does not follow the online learning fashion, e.g., the model will pre-train from scratch (random initialization) on the CSG datasets, rather than using the model pre-trained on the CS datasets. Evaluation & Methods. 
We \ufb01rst pre-train a ResNet18 backbone (He et al., 2016) with different contrastive learning methods and then do Linear Probing (LP, i.e., train a linear classi\ufb01er on the feature 1https://github.com/zhmeishi/trade-off_contrastive_learning 7 \fPublished as a conference paper at ICLR 2023 105 106 Number of unlabeled data 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Averaged T est Accuracy average test accuracy on all tasks test accuracy on the target task 0.70 0.75 0.80 0.85 0.90 T arget T ask T est Accuracy B BV BV+ ALL (a) MoCo v3 (backbone ViT-S) 105 106 Number of unlabeled data 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Averaged T est Accuracy average test accuracy on all tasks test accuracy on the target task 0.46 0.48 0.50 0.52 0.54 0.56 T arget T ask T est Accuracy B BV BV+ ALL (b) SimSiam (backbone ResNet50) Figure 4: Trade-off between universality and label ef\ufb01ciency on ImageNet. x-axis: from left to right, incrementally add ImageNet-Bird (B), ImageNet-Vehicle (V), ImageNet-Cat/Ball/Shop/Clothing/Fruit (+), and ImageNet (ALL) for pre-training (a) MoCo v3 with backbone ViT-S (b) SimSiam with backbone ResNet50. For example, \u201cBV\u201d means ImageNet-Bird + ImageNet-Vehicle. Target: ImageNet-Bird. extractor) with the labeled data from the target task. We report the test accuracy on a speci\ufb01c target task and the average test accuracy on all pre-training datasets (i.e., using them as the downstream tasks). Appendix C.2 presents full details and additional results, while Fig. 3 shows the results for the method MoCo v2. The size and diversity of pre-training data are increased on the x-axis by incrementally adding unlabeled training data from: (a) CINIC-10, SVHN, GTSRB, ImageNet32 (using only a 500k subset); (b) EMNIST-Digits&Letters, Fashion-MNIST, GTSRB, ImageNet32; (c) FaceScrub, CIFAR-10, SVHN, ImageNet32. We further perform larger-scale experiments: (1) on ImageNet (see Fig. 4); (2) on ImageNet22k and GCC-15M (see Appendix C.2.1). Results. The results show that when the pre-training data becomes more diverse, the average test accuracy on all pre-training datasets increases (i.e., universality improves), while the test accuracy on the speci\ufb01c target task decreases (i.e., label ef\ufb01ciency drops). This shows a clear trade-off between universality and label ef\ufb01ciency. It supports our claim that diverse pre-training data allow learning diverse features for better universality, but can down-weight the features for a speci\ufb01c task resulting in worse prediction. Additional results in the appendix show similar trends (e.g., for methods NNCLR and SimSiam). This validates our theoretical analysis of the trade-off. 3.2 INSPECTING THE TRADE-OFF: FEATURE SIMILARITY Face Scrub CIFAR10 SVHN Image Net32 Union Face Scrub CIFAR10 SVHN Image Net32 Union 1 0.058 0.07 0.15 0.39 1 0.11 0.23 0.11 1 0.19 0.14 1 0.3 1 0.0 0.2 0.4 0.6 0.8 1.0 Face Scrub +CIFAR10 +SVHN +Image Net32 Face Scrub +CIFAR10 +SVHN +Image Net32 1 0.49 0.47 0.39 1 0.64 0.56 1 0.63 1 0.0 0.2 0.4 0.6 0.8 1.0 Figure 5: Linear CKA similarity among Fer2013 features from MoCo v2 pre-trained on different datasets. Left: each representation in the \ufb01rst four columns/rows is pre-trained on a single dataset. \u201cUnion\u201d indicates the model pre-trained on the union of the four disjoint datasets. Right: from left column to right, from top row to bottom, we incrementally add datasets for pre-training. Here we compute the similarity of the features learned from different pre-training datasets for a target task. 
For each pre-trained model, we extract a set of features for the target task Fer2013 using the pre-trained representation function. Then we compute the similarities between the extracted features based on different pre-training dataset pairs using linear Centered Kernel Alignment (CKA) (Kornblith et al., 2019), a widely used tool for high-dimensional feature comparison. Figure 5 reports the results (rows/columns are pretraining data; numbers/colors show the similarity). The left \ufb01gure shows that the features from different pre-training datasets have low similarities. This is consistent with our setup in Section 2.2 that different tasks only share some features and each owns many private ones. The right \ufb01gure shows a decreasing trend of similarity along each row. This indicates that when gradually adding more diverse pre-training data, the learned representation will encode more downstream-task-irrelevant features, and become less similar to that prior to adding more pre-training data. Additional results with similar observations, \ufb01ner-grained investigation into the trade-off, and some ablation studies are provided in Appendix C.3. 8 \fPublished as a conference paper at ICLR 2023 3.3 IMPROVING THE TRADE-OFF: FINETUNE WITH CONTRASTIVE REGULARIZATION Pre-training dataset Method CINIC-10 +SVHN +GTSRB +ImageNet32 LP 88.41\u00b10.01 85.18\u00b10.01 82.07\u00b10.01 75.64\u00b10.03 FT 93.58\u00b10.14 93.35\u00b10.10 93.42\u00b10.13 92.92\u00b10.06 Ours 94.51\u00b10.02 94.26\u00b10.01 94.32\u00b10.13 93.66\u00b10.12 Table 1: Test accuracy on CIFAR-10 with different evaluation methods on MoCo v2 by using all CIFAR-10 training data. From left to right: incrementally add datasets for pre-training. Evaluation & Methods. We pre-train ResNet18 by MoCo v2 as in Section 3.1 and report the test accuracy on CIFAR10 when the predictor is learned by: Linear Probing (LP), Finetune (FT), and Finetune with Contrastive Regularization (Ours). LP follows the training protocol in Section 3.1. FT and Ours learn a linear predictor and update the representation, and use the same data augmentation for a fair comparison. FT follows MAE (He et al., 2022), while Ours uses MoCo v2 contrastive loss and regularization coef\ufb01cient \u03bb = 0.1. More details and results are given in Appendix C.4. Results. Table 1 shows that our method can consistently outperform the other baselines. In particular, it outperforms the typical \ufb01ne-tuning method by about 0.7% \u2013 1%, even when the latter also uses the same amount of data augmentation. This con\ufb01rms the bene\ufb01t of contrastive regularization. To further support our claim, Fig. 13 in Appendix C.4 visualizes the features of different methods by t-SNE, showing that contrastive regularization can highlight the task-speci\ufb01c features and provide cleaner clustering, and thus improve the generalization, as discussed in our theoretical analysis. 
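For reference, the linear CKA score used in the Section 3.2 feature-similarity comparison above can be computed as in the following sketch. It is a standard formulation of linear CKA on feature matrices with examples in rows; the random matrices below merely stand in for Fer2013 features extracted from two differently pre-trained encoders.

```python
# Minimal linear CKA (Kornblith et al., 2019) between two feature matrices of the same inputs.
import numpy as np

def linear_cka(F1, F2):
    """Linear CKA between (n, d1) and (n, d2) feature matrices; examples in rows."""
    F1 = F1 - F1.mean(axis=0, keepdims=True)   # center each feature dimension
    F2 = F2 - F2.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(F2.T @ F1, "fro") ** 2
    self1 = np.linalg.norm(F1.T @ F1, "fro")
    self2 = np.linalg.norm(F2.T @ F2, "fro")
    return cross / (self1 * self2)

rng = np.random.default_rng(0)
feats_a = rng.normal(size=(1000, 512))                        # stand-in: features from encoder A
feats_b = 0.7 * feats_a + 0.3 * rng.normal(size=(1000, 512))  # stand-in: partially shared features
print("CKA(a, a) =", round(linear_cka(feats_a, feats_a), 3))  # identical features give 1.0
print("CKA(a, b) =", round(linear_cka(feats_a, feats_b), 3))  # partial overlap gives a value in (0, 1)
```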
CLIP MoCo v3 SimCSE Method ImageNet SVHN GTSRB CIFAR-10 SVHN GTSRB IMDB AGNews LP 77.84\u00b10.02 63.44\u00b10.01 86.56\u00b10.01 95.82\u00b10.01 61.92\u00b10.01 75.37\u00b10.01 86.49\u00b10.16 87.76\u00b10.66 FT 83.65\u00b10.01 78.22\u00b10.18 90.74\u00b10.06 96.17\u00b10.12 65.36\u00b10.33 76.45\u00b10.29 92.31\u00b10.26 93.57\u00b10.23 Ours 84.94\u00b10.09 78.72\u00b10.37 92.01\u00b10.28 96.71\u00b10.10 66.29\u00b10.20 81.28\u00b10.10 92.85\u00b10.03 93.94\u00b10.02 Table 2: Test accuracy for different evaluation methods on different datasets using all training data and using foundation models from CLIP, MoCo v3, and SimCSE. Data augmentation is not used for LP (Linear Probing). For FT (Finetune) and Ours (our method), 10 augmentations to each training images are used for CLIP, MoCo v3, and unique augmentation in each training step is used for SimCSE. More results are in Appendix C.4.1. Larger Foundation Models. We further evaluate our method on several popular real-world large representation models (foundation models). On some of these models, the user may be able to \ufb01netune the representation when learning predictors. On very large foundation models, the user typically extracts feature embeddings of their data from the models and then trains a small predictor, called adapter (Hu et al., 2021; Sung et al., 2022), on these embeddings. We evaluate CLIP (ViT-L (Dosovitskiy et al., 2020) as the representation backbone), MoCo v3 (ViT-B backbone), and SimCSE (Gao et al., 2021) (BERT backbone). They are trained on (image, text), (image, image), and (text, text) pairs, respectively, so cover a good spectrum of methods. For CLIP and MoCo v3, the backbone is \ufb01xed. LP uses a linear classi\ufb01er, while FT and Ours insert a two-layer ReLU network as an adapter between the backbone and the linear classi\ufb01cation layer. Ours uses the SimCLR contrastive loss on the output of the adapter. For SimCSE, all methods use linear classi\ufb01ers. LP \ufb01xes the backbone, while FT and Ours train the classi\ufb01er and \ufb01ne-tune the backbone simultaneously. Ours uses the SimCSE contrastive loss on the backbone feature. We set the regularization coef\ufb01cient \u03bb = 1.0. Table 2 again shows that our method can consistently improve the downstream prediction performance for all three models by about 0.4% \u2013 4.8%, and quite signi\ufb01cantly in some cases (e.g., 1.3% for CLIP on ImageNet, 4.8% for MoCo v3 on GTSRB). This shows that our method is also useful for large foundation models, even when the foundation models cannot be \ufb01ne-tuned and only the extracted embeddings can be adapted. Full details and more results are provided in Appendix C.4.1. 4 CONCLUSION AND FUTURE WORK In this work, we have shown and analyzed the trade-off between universality and label ef\ufb01ciency of representations in contrastive learning. There are many interesting open questions for future work. (1) What features does the model learn from speci\ufb01c pre-training and diverse pre-training datasets beyond linear data? (2) Do the other self-supervised learning methods have a similar trade-off? (3) Can we address the trade-off better to gain both properties at the same time? 9 \fPublished as a conference paper at ICLR 2023 ACKNOWLEDGMENTS The work is partially supported by Air Force Grant FA9550-18-1-0166, the National Science Foundation (NSF) Grants CCF-FMitF-1836978, IIS-2008559, SaTC-Frontiers-1804648, CCF-2046710 and CCF-1652140, and ARO grant number W911NF-17-1-0405. 
Jiefeng Chen and Somesh Jha are partially supported by the DARPA-GARD problem under agreement number 885000. Jayaram Raghuram was partially supported through the National Science Foundation\u2019s grants CNS-2112562 and CNS-2003129." + }, + { + "url": "http://arxiv.org/abs/2206.01717v1", + "title": "A Theoretical Analysis on Feature Learning in Neural Networks: Emergence from Inputs and Advantage over Fixed Features", + "abstract": "An important characteristic of neural networks is their ability to learn\nrepresentations of the input data with effective features for prediction, which\nis believed to be a key factor to their superior empirical performance. To\nbetter understand the source and benefit of feature learning in neural\nnetworks, we consider learning problems motivated by practical data, where the\nlabels are determined by a set of class relevant patterns and the inputs are\ngenerated from these along with some background patterns. We prove that neural\nnetworks trained by gradient descent can succeed on these problems. The success\nrelies on the emergence and improvement of effective features, which are\nlearned among exponentially many candidates efficiently by exploiting the data\n(in particular, the structure of the input distribution). In contrast, no\nlinear models on data-independent features of polynomial sizes can learn to as\ngood errors. Furthermore, if the specific input structure is removed, then no\npolynomial algorithm in the Statistical Query model can learn even weakly.\nThese results provide theoretical evidence showing that feature learning in\nneural networks depends strongly on the input structure and leads to the\nsuperior performance. Our preliminary experimental results on synthetic and\nreal data also provide positive support.", + "authors": "Zhenmei Shi, Junyi Wei, Yingyu Liang", + "published": "2022-06-03", + "updated": "2022-06-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction 1 2 Related Work 2 3 Problem Setup 3 3.1 Neural Network Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 4 Main Results 5 5 Proof Sketches 6 5.1 Provable Guarantees of Neural Networks . . . . . . . . . . . . . . . . . . . . . . . 6 5.2 Lower Bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 6 Experiments 8 7 Ethics Statement 10 8 Reproducibility Statement 10 9 Acknowledgement 10 A More Technical Discussion on Related Work 16 B Complete Proofs for Provable Guarantees of Neural Networks 19 B.1 Existence of A Good Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 B.2 Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 B.3 Some Auxiliary Lemmas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 B.4 Feature Emergence: First Gradient Step . . . . . . . . . . . . . . . . . . . . . . . 23 B.5 Feature Improvement: Second Gradient Step . . . . . . . . . . . . . . . . . . . . . 27 B.6 Classi\ufb01er Learning Stage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 B.7 Proof of Theorem 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 C Lower Bound for Linear Models on Fixed Feature Mappings 38 D Lower Bound for Learning without Input Structure 39 14 \fPublished as a conference paper at ICLR 2022 E Complete Experimental Results 40 E.1 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 E.1.1 Parity Labeling . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . 40 E.1.2 Interval Labeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 E.2 More Simulation Result in Various Settings . . . . . . . . . . . . . . . . . . . . . 43 E.2.1 Varying Input Data Dimension . . . . . . . . . . . . . . . . . . . . . . . . 43 E.2.2 Varying Class Imbalance Ratio . . . . . . . . . . . . . . . . . . . . . . . . 44 E.2.3 Varying Sample Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 E.3 Experiments on More Data Generation Models . . . . . . . . . . . . . . . . . . . 46 E.3.1 Hidden Representation Labeling . . . . . . . . . . . . . . . . . . . . . . . 46 E.3.2 Two-layer Networks on Mixture of Gaussians . . . . . . . . . . . . . . . . 47 E.4 Real Data: Feature Learning in Networks . . . . . . . . . . . . . . . . . . . . . . 47 E.4.1 CNNs on Binary Cifar10: Feature Learning in Networks . . . . . . . . . . 49 E.5 Real Data: The Effect of Input Structure . . . . . . . . . . . . . . . . . . . . . . . 52 E.5.1 Experimental Methodology . . . . . . . . . . . . . . . . . . . . . . . . . 52 E.5.2 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 E.5.3 Larger Network on MNIST for Checking The Effect of Input Structure . . 55 E.5.4 Empirical Veri\ufb01cation of Our Method . . . . . . . . . . . . . . . . . . . . 55 F Provable Guarantees for Neural Networks in A More General Setting 57 F.1 Problem Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 F.1.1 Neural Network Learning . . . . . . . . . . . . . . . . . . . . . . . . . . 57 F.2 Main Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 F.3 Notations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 F.4 Existence of A Good Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 F.5 Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 F.6 Some Auxiliary Lemmas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 F.7 Feature Emergence: First Gradient Step . . . . . . . . . . . . . . . . . . . . . . . 62 F.8 Feature Improvement: Second Gradient Step . . . . . . . . . . . . . . . . . . . . . 68 F.9 Classi\ufb01er Learning Stage and Main Theorem . . . . . . . . . . . . . . . . . . . . 81 15 \fPublished as a conference paper at ICLR 2022 A MORE TECHNICAL DISCUSSION ON RELATED WORK Advantage of Neural Networks over Linear Models on Fixed Features. A recent line of work has turned to show learning settings where network learning provably has advantage over linear models on \ufb01xed features; see the nice summary in Malach et al. (2021). Here we highlight the results and focuses of the existing related studies and discuss the differences from ours. Yehudai & Shamir (2019) shows that the random feature method fails to learn even a single ReLU neuron on Gaussian inputs unless its size is exponentially large in dimension. This points out the limitation of the random feature method (belonging to the \ufb01xed feature approach) but does not consider feature learning in networks. Some studies show that a single ReLU neuron can be learnt by gradient descent (Yehudai & Ohad, 2020; Diakonikolas et al., 2020; Frei et al., 2020). The analysis typically involves feature learning. However, their focus is different: they do not show the advantage over \ufb01xed feature methods and do not consider the effect of the input structures. Zhou et al. 
(2021) shows that in a special teacher-student setting, the student network will do exact local convergence in a surprising way that all student neurons will converge to one of the teacher neurons. The work does not consider the effect of the input structure nor the advantage over \ufb01xed features. Dou & Liang (2020) explains the advantage of network learning by constructing adaptive Reproducing Kernel Hilbert Space (RKHS) indexed by the training process of the neural network, and shows that adaptive RKHS bene\ufb01ts from a smaller function space containing the residue comparing to RKHS. The work shows the statistical advantage of networks over data-independent kernels, but does not consider the optimization for learning the network. Ghorbani et al. (2020) considers data generated from a hidden vector with two subsets of variables, each uniformly distributed in a high-dimensional sphere (with a different radius), while the label is determined by only the \ufb01rst subset of variables. It shows the existence of good neural networks that can overcome the curse of dimensionality by representing the best low-dimensional hidden structure. However, it studies the approximation power of neural networks rather than the learning, i.e., it does not show how to learn the good network. Fang et al. (2019) argues that in the in\ufb01nite width limit, a two-layer neural network will learn a nearly optimal feature representation in the distribution sense, thanks to the convexity of the limit problem. It is unclear how this result helps to understand the feature learning procedure for practical networks, which is usually a non-convex process. Chen et al. (2020a) considers a \ufb01xed, randomly initialized neural network as a representation function fed into another trainable network which is the quadratic Taylor model of a wide two-layer network. It shows that learning over the random representation can achieve improved sample complexities compared to learning over the raw data. However, the representation considered is not learned, which is different from our focus on feature learning. Allen-Zhu & Li (2020a) considers Gaussian inputs with labels given by a multiple-layer network with quadratic activations and skip connections (with the assumption of information gap on the weights), and studies training a deep network with quadratic activation. It shows that the trained network can learn proper representations and obtain small errors while no polynomial \ufb01xed feature methods can. On the other hand, it does not focus on the in\ufb02uence of input structure on feature learning: note that its input distribution contains no information about the \u201cground-truth\u201d features in the target network. It also points out that the learned features get improved during training: higher-level layers will help lower-level layers to improve by backpropagating correction signals. Our analysis also shows feature improvement which however is by signals from the input distribution. Allen-Zhu & Li (2019) considers PAC learning with labels given by a depth-2 ResNet, and studies training an overparameterized depth-2 ResNet (using uniform inputs over Boolean hypercube as an example). It shows the trained network can obtain small errors while no polynomial kernel methods can obtain as good errors. Similar to Allen-Zhu & Li (2020a), it does not focus on the in\ufb02uence of input structure on feature learning or the advantage of networks. 
16 \fPublished as a conference paper at ICLR 2022 Allen-Zhu & Li (2020c) studies how ensemble of deep learning models can improve test accuracy and how the ensemble can be distilled into a single model. It develops a theory which assumes the data has multi-view structure and shows that the ensemble of independently trained networks can provably improve test accuracy and the ensemble can also be provably distilled into a single model. The analysis also relies on showing that the data structure can help the ensemble and the distillation. On the other hand, their focus is on ensembles and is quite different from ours: the analysis is on showing the multi-view input structure allows the ensembles of networks to improve over single ones and ensembles of \ufb01xed feature mappings do not have improvement. While our focus is on supervisedly learning one single network that outperforms the \ufb01xed feature approaches. Daniely & Malach (2020) considers the task of learning sparse parities with two-layer networks, and the analysis suggests that the ability to learn the label-correlated features also seems to be critical towards the success of neural networks, although the authors did not explore much in this direction. Malach et al. (2021) also considers similar learning problems but with speci\ufb01cally designed models for the problems. The learning problems considered in Daniely & Malach (2020); Malach et al. (2021) have input distributions that leak information about the target labeling function, which is similar to our setting, and their analysis also shows that the \ufb01rst gradient descent can learn a set of good features and later steps can learn an accurate classi\ufb01er on top. Our work is inspired by their studies, while there are some important differences. First, their focuses are different from ours. Daniely & Malach (2020) focuses on showing neural networks can learn targets (i.e., k-parity functions) that are inherently non-linear. Our analysis generalizes to more general distributions, including practically motivated ones. Malach et al. (2021) focuses on strong separations between learning with gradient descent on differentiable models (including typical neural networks) and learning using the corresponding tangent kernels. The analysis is on speci\ufb01c differentiable models, while our work is on two-layer neural networks similar to practical ones. Second, our analysis relies on the feature improvement in the second gradient step. This is not an artifact of the analysis but comes from our problem setup. While in Daniely & Malach (2020) the data distribution allows some neurons to be suf\ufb01ciently good after the \ufb01rst gradient step and needs no feature improvement, our setup is more general where the data distribution may not have a similar strong benign effect and thus needs feature improvement in the second gradient step. Most related to our work is Daniely & Malach (2020). Therefore, we provide a detailed discussion to highlight the connections and differences. 1. Our problem setting is more general than that in Daniely & Malach (2020). To see this, let our dictionary be the identity matrix, the set P to be the odd numbers (i.e., the labeling function is a sparse parity). Furthermore, let the distribution of the hidden representation be an equal mixture of the following two: (a) D1: Uniform distribution over the hypercube. (b) D2: Irrelevant patterns \u02dc \u03c6j(j \u0338\u2208A) have appearance probability p0 = 1/2. 
And the distribution of relevant patterns \u02dc \u03c6j(j \u2208A) is: all 0\u2019s with probability 1/2, and all 1\u2019s with probability 1/2. Then our problem setting reduces to their setting (up to scaling/translation of \u02dc \u03c6j\u2019s). On the other hand, in general our setting allows for more choices for the labeling, the dictionary, and the distributions over \u02dc \u03c6. 2. Upper bound: Because of the more general setting, our upper bound proof requires technical novelty. Recall that in their work, the input distribution is essentially a mixture of D1 and D2 above. In D2, the relevant patterns \u02dc \u03c6j(j \u2208A) have the speci\ufb01c structure of all 0\u2019s or all 1\u2019s with probability 1/2. This allows to show that neurons with weight w satisfying P j\u2208A wj = 0 will have good gradients: small components from irrelevant patterns (their Lemma 7) and large components from relevant patterns (their Lemma 8). However, in our setting, the relevant patterns do not have this speci\ufb01c structure, and thus their proof technique is not applicable (or can be applied only when we have an exponentially large number of hidden neurons so that some hit the good positions at random initialization). What we showed is that the gradient has some correlation with the good feature direction. So after the \ufb01rst gradient step, the neuron weights are not good yet but are in a better position for further improvement (in particular, their setting corresponds to p0 = 1/2 which means large noise in the weights after the \ufb01rst step; see discussion after our Lemma 6 in Section 5). Then 17 \fPublished as a conference paper at ICLR 2022 the latter gradient steps are able to improve the weights to better \u201csignal-to-noise-ratio\u201d. In summary, our proof does not rely on their speci\ufb01c input structure or an exponentially large number of hidden neurons for hitting some good positions. The key is that the good feature will emerge with the help of the input structure, and once in a better position, the neurons\u2019 weights can be improved to the desired quality. 3. Lower bound: On the other hand, our lower bound is proved by a reduction to the lower bound results in Daniely & Malach (2020). They have shown that D1 above can lead to large errors for \ufb01xed feature models of polynomial size. Our proof is essentially constructing a mixture of D1 and D2 with mixture weights p0 and (1 \u2212p0), and applying their lower bound for D1. See our proof in Appendix C. 4. Conceptually, our work belongs to the same line of research as Daniely & Malach (2020), to analyze how feature learning leads to the superior performance of networks. While their analysis also relies on feature learning from good gradients induced by input structure, their focus is more on separating network learning and \ufb01xed feature models and has not explicitly explored the impact of input structures (while we agree that such an explicit study will not be dif\ufb01cult in their setting). More importantly, their input distribution is speci\ufb01c and atypical in practice, which allows a speci\ufb01c type of feature learning (as explained in the above discussion on upper bounds). Our work thus considers a more general setting that is motivated by practical problems. Our results then bring theoretical insights closer for explaining the feature learning in practice and provide some positive evidence for the importance of analysis under proper models of the input distributions. 
Sparse Coding and Subspace Data Models. To analyze neural networks\u2019 performance, various data models have been considered. A practical way to model the underlying structure of data is by assuming that a set of hidden variables exists and the input data is a high dimensional projection of the hidden vector (possibly with noise). Along this line, the classic sparse coding model has been used in existing works for analyzing networks. Koehler & Risteski (2018) considers such a data distribution where the label is given by a linear function on the hidden sparse vector, but studies the approximation power of networks and classic polynomial methods rather than the learning. Allen-Zhu & Li (2020b) considers similar data distributions, but studies the performance of networks under adversarial perturbations. Another type of related data models assumes that the label is determined by a subset of hidden variables. Ghorbani et al. (2020) considers a hidden vector with two subsets of variables, each uniformly distributed in a high-dimensional sphere (with a different radius), while the label is determined by only the \ufb01rst subset of variables. However, Ghorbani et al. (2020) studies the approximation power of neural networks rather than the learning. Compared to these studies, our work assumes the input is given by a dictionary multiplied with a hidden vector (not necessarily sparse) while the label is determined by a subset of the hidden vector, as motivated by pattern recognition applications in practice. Furthermore, we focus on the learning ability of networks instead of approximation. 18 \fPublished as a conference paper at ICLR 2022 B COMPLETE PROOFS FOR PROVABLE GUARANTEES OF NEURAL NETWORKS We \ufb01rst make a few remarks about the proof. Remark. The analysis can be carried out for more gradient steps following similar intuition, while we analyze two steps for simplicity. Remark. Readers may notice that the network can be overparameterized. With suf\ufb01cient overparameterization and proper initialization and step sizes, network learning becomes approximately NTK. However, here our learning scheme allows going beyond this kernel regime: we use aggressive gradient updates \u03bb(t) w = 1/(2\u03b7(t)) in the \ufb01rst two steps, completely forgetting the old weights to learn effective features. Using proper initialization and aggressive updates early to escape the kernel regime has been studied in existing work (e.g., Woodworth et al. (2020); Li et al. (2019)). Our result thus adds another concrete example. Notations. For a vector v and an index set I, let vI denote the vector containing the entries of v indexed by I, and v\u2212I denote the vector containing the entries of v with indices outside I. By initialization, w(0) i for i \u2208[m] are i.i.d. copies of the same random variable w(0) \u223cN(0, \u03c32 wId\u00d7d); similar for a(0) and b(0). Let q\u2113:= \u27e8w(0), M\u2113\u27e9, then \u27e8w(0), x\u27e9= \u27e8\u03c6, q\u27e9. Similarly, de\ufb01ne q(t) i,\u2113:= \u27e8w(t) i , M\u2113\u27e9. Let \u03c32 \u03c6 := po(1 \u2212po)/\u02dc \u03c32 denote the variance of \u03c6\u2113for \u2113\u0338\u2208A. We also de\ufb01ne the following sets to denote typical initialization. 
For a \ufb01xed \u03b4 \u2208(0, 1), de\ufb01ne Gw(\u03b4) := ( w \u2208Rd : q\u2113= \u27e8w, M\u2113\u27e9,\u03c32 w(D \u2212k) 2 \u2264 X \u2113\u0338\u2208A q2 \u2113\u22643\u03c32 w(D \u2212k) 2 , max \u2113 |q\u2113| \u2264\u03c3w p 2 log(Dm/\u03b4) ) , (5) Ga(\u03b4) := {a \u2208R : |a| \u2264\u03c3a p 2 log(m/\u03b4)}. (6) Gb(\u03b4) := {b \u2208R : |b| \u2264\u03c3b p 2 log(m/\u03b4)}. (7) B.1 EXISTENCE OF A GOOD NETWORK we \ufb01rst show that there exists a network that can \ufb01t the data distribution. Lemma 7. For some s, a, b \u2208R with a, b \u22650, de\ufb01ne a function \u03b4s,a,b : R \u2192R as \u03b4s,a,b(z) = a\u03c3r(z \u2212s + b) \u22122a\u03c3r(z \u2212s) + a\u03c3r(z \u2212s \u2212b). (8) where \u03c3r(z) = max{z, 0} is the ReLU activation function. Then \u03b4s,a,b(z) = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 0 when z \u2264s \u2212b, a(z \u2212s) + ab when s \u2212b \u2264z \u2264s, \u2212a(z \u2212s) + ab when s \u2264z \u2264s + b, 0 when s + b \u2264z. (9) That is, \u03b4s,a,b(z) linearly interpolates between (s \u2212b, 0), (s, ab), (s + b, 0) when z \u2208[s \u2212b, s + b], and is 0 elsewhere. Proof of Lemma 7. This can be simply veri\ufb01ed for the four cases of the value of z. Lemma 8 (Restatement of Lemma 5). For any D \u2208F\u039e, there exists a network g\u2217(x) = Pn i=1 a\u2217 i \u03c3(\u27e8w\u2217 i , x\u27e9+ b\u2217 i ) with y = g\u2217(x) for any (x, y) \u223cD. Furthermore, the number of neurons n = 3(k + 1), |a\u2217 i | \u226432k, 1/(32k) \u2264|b\u2217 i | \u22641/2, w\u2217 i = \u02dc \u03c3 P j\u2208A Mj/(4k), and |\u27e8w\u2217 i , x\u27e9+ b\u2217 i | \u22641 for any i \u2208[n] and (x, y) \u223cD. 19 \fPublished as a conference paper at ICLR 2022 Proof of Lemma 5. Let w = \u02dc \u03c3 P j\u2208A Mj and let \u00b5 = P j\u2208A E[\u02dc \u03c6j]. We have \u27e8w, x\u27e9= \u02dc \u03c3 X j\u2208A \u27e8Mj, M\u03c6\u27e9= \u02dc \u03c3 X j\u2208A \u03c6j = X j\u2208A \u02dc \u03c6j \u2212\u00b5. (10) Then by Lemma 7, g\u2217 1(x) := X p\u2208P \u03b4p\u2212\u00b5,2,1/2(\u27e8w, x\u27e9) \u2212 X p\u0338\u2208P,0\u2264p\u2264k \u03b4p\u2212\u00b5,2,1/2(\u27e8w, x\u27e9) (11) = X p\u2208P \u03b4p,2,1/2(\u27e8w, x\u27e9+ \u00b5) \u2212 X p\u0338\u2208P,0\u2264p\u2264k \u03b4p,2,1/2(\u27e8w, x\u27e9+ \u00b5) (12) = X p\u2208P \u03b4p,2,1/2 \uf8eb \uf8edX j\u2208A \u02dc \u03c6j \uf8f6 \uf8f8\u2212 X p\u0338\u2208P,0\u2264p\u2264k \u03b4p,2,1/2 \uf8eb \uf8edX j\u2208A \u02dc \u03c6j \uf8f6 \uf8f8 (13) = y (14) for any (x, y) \u223cD. Similarly, g\u2217 2(x) := X p\u2208P \u03b4p\u2212\u00b5+1/4,4,1/2(\u27e8w, x\u27e9) \u2212 X p\u0338\u2208P,0\u2264p\u2264k \u03b4p\u2212\u00b5+1/4,4,1/2(\u27e8w, x\u27e9) (15) = X p\u2208P \u03b4p+1/4,4,1/2(\u27e8w, x\u27e9+ \u00b5) \u2212 X p\u0338\u2208P,0\u2264p\u2264k \u03b4p+1/4,4,1/2(\u27e8w, x\u27e9+ \u00b5) (16) = X p\u2208P \u03b4p+1/4,4,1/2 \uf8eb \uf8edX j\u2208A \u02dc \u03c6j \uf8f6 \uf8f8\u2212 X p\u0338\u2208P,0\u2264p\u2264k \u03b4p+1/4,4,1/2 \uf8eb \uf8edX j\u2208A \u02dc \u03c6j \uf8f6 \uf8f8 (17) = y (18) for any (x, y) \u223cD. Note that the bias terms in g\u2217 1 and g\u2217 2 have distance at least 1/4, then at least one of them satis\ufb01es that all its bias terms have absolute value \u22651/8. Pick that one and denote it as g(x) = Pn i=1 ai\u03c3r(\u27e8wi, x\u27e9+ bi). By the positive homogeneity of \u03c3r, we have g(x) = n X i=1 4kai\u03c3r(\u27e8wi, x\u27e9/(4k) + bi/(4k)). 
(19) Since for any (x, y) \u223cD, |\u27e8wi, x\u27e9/(4k) + bi/(4k)| \u22641, then g(x) = n X i=1 4kai\u03c3(\u27e8wi, x\u27e9/(4k) + bi/(4k)) (20) where \u03c3 is the truncated ReLU. Now we can set a\u2217 i = 4kai, w\u2217 i = wi/(4k), b\u2217 i = bi/(4k), to get our \ufb01nal g\u2217. B.2 INITIALIZATION We \ufb01rst show that with high probability, the initial weights are in typical positions. Lemma 9. For any \u03b4 \u2208(0, 1), with probability at least 1 \u2212\u03b4 \u22122 exp (\u2212\u0398(D \u2212k)) over w(0), \u03c32 w(D \u2212k)/2 \u2264 X \u2113\u0338\u2208A q2 \u2113\u22643\u03c32 w(D \u2212k)/2, max \u2113 |q\u2113| \u2264\u03c3w p 2 log(D/\u03b4). With probability at least 1 \u2212\u03b4 over b(0), |b(0)| \u2264\u03c3b p 2 log(1/\u03b4). With probability at least 1 \u2212\u03b4 over a(0), |a(0)| \u2264\u03c3a p 2 log(1/\u03b4). 20 \fPublished as a conference paper at ICLR 2022 Proof of Lemma 9. From q \u223cN(0, \u03c32 wId\u00d7d), we have: \u2022 With probability \u22651 \u2212\u03b4/2, max\u2113|q\u2113| \u2264 q 2\u03c32 w log D \u03b4 , and \u2022 For any subset S \u2286[D], with probability \u22651\u22122 exp (\u2212\u0398(|S|)), \u2225qS\u22252 2 \u2208 \u0010 |S|\u03c32 w 2 , 3|S|\u03c32 w 2 \u0011 . Similar for b(0) and a(0). The lemma then follows. Lemma 10. We have: \u2022 With probability \u22651\u2212\u03b4\u22122m exp(\u2212\u0398(D\u2212k)) over w(0) i \u2019s, for all i \u2208[2m], w(0) i \u2208Gw(\u03b4). \u2022 With probability \u22651 \u2212\u03b4 over b(0) i \u2019s, for all i \u2208[2m], b(0) i \u2208Gb(\u03b4). \u2022 With probability \u22651 \u2212\u03b4 over a(0) i \u2019s, for all i \u2208[2m], a(0) i \u2208Ga(\u03b4). Proof of Lemma 10. This follows from Lemma 9 by union bound. The following lemma about the typical w(0) i \u2019s will be useful for later analysis. Lemma 11. Fix \u03b4 \u2208(0, 1). For any w(0) i \u2208Gw(\u03b4), we have Pr \u03c6 \uf8ee \uf8f0X \u2113\u0338\u2208A \u03c6\u2113q(0) i,\u2113\u2265\u0398 \u0010q (D \u2212k)\u03c32 \u03c6\u03c32 w \u0011 \uf8f9 \uf8fb= \u0398(1) \u2212O(log3/2(Dm/\u03b4)) q (D \u2212k)\u03c32 \u03c6\u02dc \u03c32 . (21) Consequently, when po = \u2126(k2/D) and k = \u2126(log2(Dm/\u03b4)), Pr \u03c6 \uf8ee \uf8f0X \u2113\u0338\u2208A \u03c6\u2113q(0) i,\u2113\u2265\u0398 (\u03c3w) \uf8f9 \uf8fb= \u0398(1) \u2212O(1) k1/4 . (22) Proof of Lemma 11. Note that for \u2113\u0338\u2208A, E[\u03c6\u2113] = 0, E[\u03c62 \u2113] = \u03c32 \u03c6, and E[|\u03c6\u2113|3] = \u0398(\u03c32 \u03c6/\u02dc \u03c3). Then the statement follows from Berry-Esseen Theorem. B.3 SOME AUXILIARY LEMMAS The expression of the gradients will be used frequently. Lemma 12. \u2202 \u2202wi LD(g; \u03c3\u03be) = \u2212aiE(x,y)\u223cD {yI[yg(x; \u03be) \u22641]E\u03beiI[\u27e8wi, x\u27e9+ bi + \u03bei \u2208(0, 1)]x} , (23) \u2202 \u2202bi LD(g; \u03c3\u03be) = \u2212aiE(x,y)\u223cD {yI[yg(x; \u03be) \u22641]E\u03beiI[\u27e8wi, x\u27e9+ bi \u2208(0, 1)]} , (24) \u2202 \u2202ai LD(g; \u03c3\u03be) = \u2212E(x,y)\u223cD {yI[yg(x; \u03be) \u22641]E\u03bei\u03c3(\u27e8wi, x\u27e9+ bi + \u03bei)} . (25) Proof of Lemma 12. It follows from straightforward calculation. We now show that a small subset of the entries in \u03c6, q does not affect the probability distribution of \u27e8\u03c6, q\u27e9much. Lemma 13. Suppose \u03bd \u223cN(0, \u03c32). 
For any B \u2287A and any b: \f \f \f \f Pr \u03c6\u2212B,\u03bd {\u27e8\u03c6, q\u27e9+ \u03bd \u2265b} \u2212 Pr \u03c6\u2212B,\u03bd {\u27e8\u03c6\u2212B, q\u2212B\u27e9+ \u03bd \u2265b} \f \f \f \f (26) \u2264O |\u27e8\u03c6B, qB\u27e9| (\u03c32 \u03c6\u2225q\u2212B\u22252 2 + \u03c32)1/2 + \u03c33 + \u03c32 \u03c6\u2225q\u2212B\u22253 3/\u02dc \u03c3 (\u03c32 + \u03c32 \u03c6\u2225q\u2212B\u22252 2)3/2 ! . (27) 21 \fPublished as a conference paper at ICLR 2022 Similarly, \f \f \f \f Pr \u03c6\u2212B {\u27e8\u03c6, q\u27e9\u2265b} \u2212Pr \u03c6\u2212B {\u27e8\u03c6\u2212B, q\u2212B\u27e9\u2265b} \f \f \f \f (28) \u2264O \u0012 |\u27e8\u03c6B, qB\u27e9| \u03c3\u03c6\u2225q\u2212B\u22252 + \u2225q\u2212B\u22253 3 \u02dc \u03c3\u03c3\u03c6\u2225q\u2212B\u22253 2 \u0013 . (29) Proof of Lemma 13. Note that for \u2113\u0338\u2208A, E[\u03c6\u2113] = 0, E[\u03c62 \u2113] = \u03c32 \u03c6, and E[|\u03c6\u2113|3] = \u0398(\u03c32 \u03c6/\u02dc \u03c3). Let t = |\u27e8\u03c6B, qB\u27e9|. Then by the Berry-Esseen Theorem, \f \f \f \f Pr \u03c6\u2212B {\u27e8\u03c6, q\u27e9+ \u03bd \u2265b} \u2212Pr \u03c6\u2212B {\u27e8\u03c6\u2212B, q\u2212B\u27e9+ \u03bd \u2265b} \f \f \f \f (30) \u2264Pr \u03c6\u2212B {\u27e8\u03c6\u2212B, q\u2212B\u27e9+ \u03bd \u2208[\u2212t + b, t + b]} (31) \u2264 2t (\u03c32 \u03c6\u2225q\u2212B\u22252 2 + \u03c32)1/2 + O(\u03c33 + \u03c32 \u03c6\u2225q\u2212B\u22253 3/\u02dc \u03c3) (\u03c32 + \u03c32 \u03c6\u2225q\u2212B\u22252 2)3/2 . (32) The second statement follows from a similar argument. We also have the following auxiliary lemma for later calculations. Lemma 14. E\u03c6A {y} = 0, (33) E\u03c6A {|y|} = 1, (34) E\u03c6j {|\u03c6j|} = 2\u03c32 \u03c6\u02dc \u03c3, for j \u0338\u2208A, (35) E\u03c6A {y\u03c6j} = \u03b3 \u02dc \u03c3 , (36) E\u03c6A {|y\u03c6j|} \u22641 \u02dc \u03c3 , for all j \u2208[D]. (37) Proof of Lemma 14. E\u03c6A {y} = X v\u2208{\u00b11} E\u03c6A {y|y = v} Pr[y = v] (38) = 1 2 X v\u2208{\u00b11} E\u03c6A {y|y = v} (39) = 0. (40) E\u03c6A {|y|} = X v\u2208{\u00b11} E\u03c6A {|y| |y = v} Pr[y = v] (41) = 1 2 X v\u2208{\u00b11} E\u03c6A {|y| |y = v} (42) = 1. (43) E\u03c6j {|\u03c6j|} = | \u2212po|(1 \u2212po) + |1 \u2212po|po \u02dc \u03c3 = 2\u03c32 \u03c6\u02dc \u03c3. (44) E\u03c6A {y\u03c6j} = E\u03c6A ( y \u02dc \u03c6j \u2212E[\u02dc \u03c6j] \u02dc \u03c3 ) (45) = 1 \u02dc \u03c3 E\u03c6A n y \u02dc \u03c6j \u2212yE[\u02dc \u03c6j] o (46) = \u03b3 \u02dc \u03c3 . (47) E\u03c6A {|y\u03c6j|} = E\u03c6A {|\u03c6j|} (48) \u22641 \u02dc \u03c3 . (49) 22 \fPublished as a conference paper at ICLR 2022 B.4 FEATURE EMERGENCE: FIRST GRADIENT STEP We will show that w.h.p. over the initialization, after the \ufb01rst gradient step, there are neurons that represent good features. We begin with analyzing the gradients. Lemma 15 (Full version of Lemma 6). Fix \u03b4 \u2208(0, 1) and suppose w(0) i \u2208Gw(\u03b4), b(0) i \u2208Gb(\u03b4) for all i \u2208[2m]. Let \u03f5e := k log1/2(Dm/\u03b4) + log3/2(Dm/\u03b4) q \u03c32 \u03c6\u02dc \u03c32(D \u2212k) . If po = \u2126(k2/D), k = \u2126(log2(Dm/\u03b4)), and \u03c3(1) \u03be < 1/k, then \u2202 \u2202wi LD(g(0); \u03c3(1) \u03be ) = \u2212a(0) i D X j=1 MjTj (50) where Tj satis\ufb01es: \u2022 if j \u2208A, then |Tj \u2212\u03b2\u03b3/\u02dc \u03c3| \u2264O(\u03f5e/\u02dc \u03c3), where \u03b2 \u2208[\u2126(1), 1] and depends only on w(0) i , b(0) i ; \u2022 if j \u0338\u2208A, then |Tj| \u2264O(\u03c32 \u03c6\u03f5e\u02dc \u03c3). Proof of Lemma 15. 
Consider one neuron index i and omit the subscript i in the parameters. Since the unbiased initialization leads to g(0)(x; \u03be(1)) = 0, we have \u2202 \u2202wLD(g(0); \u03c3(1) \u03be ) (51) = \u2212a(0)E(x,y)\u223cD n yI[yg(0)(x; \u03be(1)) \u22641]E\u03be(1)I[\u27e8w(0), x\u27e9+ b(0) + \u03be(1) \u2208(0, 1)]x o (52) = \u2212a(0)E(x,y)\u223cD,\u03be(1) n yI[\u27e8w(0), x\u27e9+ b(0) + \u03be(1) \u2208(0, 1)]x o (53) = \u2212a(0) D X j=1 Mj E(x,y)\u223cD,\u03be(1) n yI[\u27e8w(0), x\u27e9+ b(0) + \u03be(1) \u2208(0, 1)]\u03c6j o | {z } :=Tj . (54) First, consider j \u2208A. Tj = E(x,y)\u223cD,\u03be(1) n yI[\u27e8w(0), x\u27e9+ b(0) + \u03be(1) \u2208(0, 1)]\u03c6j o (55) = E\u03c6A,\u03be(1) \u001a y\u03c6j Pr \u03c6\u2212A h \u27e8\u03c6, q\u27e9+ b(0) + \u03be(1) \u2208(0, 1) i\u001b . (56) Let Ia := Pr \u03c6\u2212A h \u27e8\u03c6, q\u27e9+ b(0) + \u03be(1) \u2208(0, 1) i , (57) I\u2032 a := Pr \u03c6\u2212A h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(0) + \u03be(1) \u2208(0, 1) i . (58) We have |E\u03be(1)(Ia \u2212I\u2032 a)| (59) \u2264E\u03be(1) \f \f \f \f Pr \u03c6\u2212A h \u27e8\u03c6, q\u27e9+ b(0) + \u03be(1) \u22650 i \u2212Pr \u03c6\u2212A h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(0) + \u03be(1) \u22650 i\f \f \f \f (60) + Pr \u03c6\u2212A,\u03be(1) h \u27e8\u03c6, q\u27e9+ b(0) + \u03be(1) \u22651 i + Pr \u03c6\u2212A,\u03be(1) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(0) + \u03be(1) \u22651 i . (61) 23 \fPublished as a conference paper at ICLR 2022 Then by Lemma 13, \f \f \f \f Pr \u03c6\u2212A h \u27e8\u03c6, q\u27e9+ b(0) + \u03be(1) \u22650 i \u2212Pr \u03c6\u2212A h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(0) + \u03be(1) \u22650 i\f \f \f \f = O(\u03f5e). (62) Note that P \u2113\u0338\u2208A Var(\u03c6\u2113q\u2113) = \u0398(\u03c32 \u03c6\u03c32 w(D \u2212k)) = \u0398(\u03c32 w), and |\u03c6\u2113| \u2264 1 \u02dc \u03c3, max\u2113|q\u2113| \u2264 \u03c3w p 2 log(Dm/\u03b4). Applying Bernstein\u2019s inequality for bounded distributions, we have: Pr \u03c6\u2212A [\u27e8\u03c6\u2212A, q\u2212A\u27e9\u22651/4] = exp(\u2212\u2126(k)) = O(\u03f5e). (63) We also have: Pr \u03be(1) h b(0) + \u03be(1) \u22651/4 i = exp(\u2212\u2126(k)) = O(\u03f5e). (64) Therefore, Pr \u03c6\u2212A,\u03be(1) h \u27e8\u03c6, q\u27e9+ b(0) + \u03be(1) \u22651 i = exp(\u2212\u2126(k)) = O(\u03f5e) (65) where the last step follows from the assumption on \u03c3w and k. A similar argument gives: Pr \u03c6\u2212A,\u03be(1) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(0) + \u03be(1) \u22651 i = exp(\u2212\u2126(k)) = O(\u03f5e). (66) Then we have \f \fTj \u2212E\u03c6A,\u03be(1) {y\u03c6jI\u2032 a} \f \f (67) \u2264E\u03c6A \b |y\u03c6j| \f \fE\u03be(1)(Ia \u2212I\u2032 a) \f \f\t (68) \u2264O(\u03f5e)E\u03c6A {|y\u03c6j|} (69) \u2264O(\u03f5e/\u02dc \u03c3) (70) where the last step is from Lemma 14. Furthermore, E\u03c6A,\u03be(1) {y\u03c6jI\u2032 a} (71) = E\u03c6A {y\u03c6j} E\u03be(1)[I\u2032 a] (72) = E\u03c6A {y\u03c6j} Pr \u03c6\u2212A,\u03be(1) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(0) + \u03be(1) \u2208(0, 1) i (73) By Lemma 11, the assumption on po, and (63), we have Pr \u03c6\u2212A h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(0) \u2208(0, 1/2) i \u2265\u2126(1) \u2212O(1/k1/4), (74) Pr \u03be(1) h \u03be(1) \u2208(0, 1/2) i = 1/2 \u2212exp(\u2212\u2126(k)), (75) This leads to \u03b2 := E\u03be(1)[I\u2032 a] = Pr \u03c6\u2212A,\u03be(1) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(0) + \u03be(1) \u2208(0, 1) i \u2265\u2126(1). (76) By Lemma 14, E\u03c6A {y\u03c6j} = \u03b3/\u02dc \u03c3. 
Therefore, |Tj \u2212\u03b2\u03b3/\u02dc \u03c3| \u2264O(\u03f5e/\u02dc \u03c3). (77) Now, consider j \u0338\u2208A. Let B denote A \u222a{j}. Tj = E(x,y)\u223cD,\u03be(1) n y\u03c6jI h \u27e8\u03c6, q\u27e9+ b(0) + \u03be(1) \u2208(0, 1) io (78) = E\u03c6BE\u03c6\u2212B,\u03be(1) n y\u03c6jI h \u27e8\u03c6, q\u27e9+ b(0) + \u03be(1) \u2208(0, 1) io (79) = E\u03c6B,\u03be(1) \u001a y\u03c6j Pr \u03c6\u2212B h \u27e8\u03c6, q\u27e9+ b(0) + \u03be(1) \u2208(0, 1) i\u001b . (80) 24 \fPublished as a conference paper at ICLR 2022 Let Ib := Pr \u03c6\u2212B h \u27e8\u03c6, q\u27e9+ b(0) + \u03be(1) \u2208(0, 1) i , (81) I\u2032 b := Pr \u03c6\u2212B h \u27e8\u03c6\u2212B, q\u2212B\u27e9+ b(0) + \u03be(1) \u2208(0, 1) i . (82) Similar as above, we have |E\u03be(1)(Ib \u2212I\u2032 b)| \u2264O(\u03f5e) by Lemma 13. Then by Lemma 14, \f \fTj \u2212E\u03c6B,\u03be(1) {y\u03c6jI\u2032 b} \f \f (83) \u2264E\u03c6B \b |y\u03c6j||E\u03be(1)(Ib \u2212I\u2032 b)| \t (84) \u2264O(\u03f5e)E\u03c6A {|y|} E\u03c6j {|\u03c6j|} (85) \u2264O(\u03f5e) \u00d7 O(\u03c32 \u03c6\u02dc \u03c3) (86) = O(\u03c32 \u03c6\u03f5e\u02dc \u03c3). (87) Furthermore, E\u03c6B,\u03be(1) {y\u03c6jI\u2032 b} = E\u03c6A {y} E\u03c6j {\u03c6j} E\u03be(1)[I\u2032 b] = 0. (88) Therefore, |Tj| \u2264O(\u03c32 \u03c6\u03f5e\u02dc \u03c3). (89) Lemma 16. Under the same assumptions as in Lemma 15, \u2202 \u2202bi LD(g(0); \u03c3(1) \u03be ) = \u2212a(0) i Tb (90) where |Tb| \u2264O(\u03f5e). Proof of Lemma 16. Consider one neuron index i and omit the subscript i in the parameters. Since the unbiased initialization leads to g(0)(x; \u03be(1)) = 0, we have \u2202 \u2202bLD(g(0); \u03c3(1) \u03be ) (91) = \u2212a(0)E(x,y)\u223cD n yI[yg(0)(x; \u03be) \u22641]E\u03be(1)I[\u27e8w(0), x\u27e9+ b(0) + \u03be(1) \u2208(0, 1)] o (92) = \u2212a(0) E(x,y)\u223cD,\u03be(1) n yI[\u27e8w(0), x\u27e9+ b(0) + \u03be(1) \u2208(0, 1)] o | {z } :=Tb . (93) Similar to the proof in Lemma 6, \f \f \f \f Pr \u03c6\u2212A[\u27e8\u03c6, q\u27e9+ b(0) + \u03be(1) \u2208(0, 1)] \u2212Pr \u03c6\u2212A[\u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(0) + \u03be(1) \u2208(0, 1)] \f \f \f \f = O(\u03f5e). (94) Then \f \f \f \fTb \u2212E\u03c6A,\u03be(1) \u001a y Pr \u03c6\u2212A[\u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(0) + \u03be(1) \u2208(0, 1)] \u001b\f \f \f \f (95) = E\u03c6A,\u03be(1) \u001a |y| \f \f \f \f Pr \u03c6\u2212A[\u27e8\u03c6, q\u27e9+ b(0) + \u03be(1) \u2208(0, 1)] \u2212Pr \u03c6\u2212A[\u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(0) + \u03be(1) \u2208(0, 1)] \f \f \f \f \u001b (96) \u2264O(\u03f5e)E\u03c6A {|y|} (97) \u2264O(\u03f5e). (98) Also, E\u03c6A,\u03be(1) \u001a y Pr \u03c6\u2212A[\u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(0) + \u03be(1) \u2208(0, 1)] \u001b (99) = E\u03c6A {y} Pr \u03c6\u2212A,\u03be(1)[\u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(0) + \u03be(1) \u2208(0, 1)] (100) = 0. (101) Therefore, |Tb| \u2264O(\u03f5e). 25 \fPublished as a conference paper at ICLR 2022 Lemma 17. We have \u2202 \u2202ai LD(g(0); \u03c3(1) \u03be ) = \u2212Ta (102) where |Ta| \u2264O(max\u2113q(0) i,\u2113). So if w(0) i \u2208G(\u03b4), |Ta| \u2264O(\u03c3w p log(Dm/\u03b4)). Proof of Lemma 17. Consider one neuron index i and omit the subscript i in the parameters. 
Since the unbiased initialization leads to g(0)(x; \u03be(1)) = 0, we have \u2202 \u2202aLD(g(0); \u03c3(1) \u03be ) (103) = \u2212E(x,y)\u223cD n yI[yg(0)(x; \u03be(1)) \u22641]E\u03be(1)\u03c3(\u27e8w(0), x\u27e9+ b(0) + \u03be(1)) o (104) = \u2212E(x,y)\u223cD,\u03be(1) n y\u03c3(\u27e8w(0), x\u27e9+ b(0) + \u03be(1)) o | {z } :=Ta . (105) Let \u03c6\u2032 A be an independent copy of \u03c6A, \u03c6\u2032 be the vector obtained by replacing in \u03c6 the entries \u03c6A with \u03c6\u2032 A, and let x\u2032 = M\u03c6\u2032 and its label is y\u2032. Then |Ta| = \f \f \fE\u03c6A n yE\u03c6\u2212A,\u03be(1)\u03c3(\u27e8w(0), x\u27e9+ b(0) + \u03be(1)) o\f \f \f (106) \u22641 2 \f \f \f \f \fE\u03c6A n E\u03c6\u2212A,\u03be(1)\u03c3(\u27e8w(0), x\u27e9+ b(0) + \u03be(1))|y = 1 o (107) \u2212E\u03c6A n E\u03c6\u2212A,\u03be(1)\u03c3(\u27e8w(0), x\u27e9+ b(0) + \u03be(1))|y = \u22121 o \f \f \f \f \f (108) \u22641 2 \f \f \f \f \fE\u03c6A n E\u03c6\u2212A,\u03be(1)\u03c3(\u27e8w(0), x\u27e9+ b(0) + \u03be(1))|y = 1 o (109) \u2212E\u03c6\u2032 A n E\u03c6\u2212A,\u03be(1)\u03c3(\u27e8w(0), x\u2032\u27e9+ b(0) + \u03be(1))|y\u2032 = \u22121 o \f \f \f \f \f. (110) Since \u03c3 is 1-Lipschitz, |Ta| \u22641 2E\u03c6A,\u03c6\u2032 A n E\u03c6\u2212A \f \f \f\u27e8w(0), x\u27e9\u2212\u27e8w(0), x\u2032\u27e9 \f \f \f |y = 1, y\u2032 = \u22121 o (111) \u22641 2E\u03c6\u2212A \u0010 E\u03c6A n\f \f \f\u27e8w(0), x\u27e9 \f \f \f |y = 1 o + E\u03c6\u2032 A n\f \f \f\u27e8w(0), x\u2032\u27e9 \f \f \f |y\u2032 = \u22121 o\u0011 (112) = E\u03c6\u2212A,\u03c6A \f \f \f\u27e8w(0), x\u27e9 \f \f \f (113) = Ex \f \f \f\u27e8w(0), x\u27e9 \f \f \f (114) \u2264 q Ex\u27e8w(0), x\u27e92 (115) \u2264max \u2113 q(0) i,\u2113 v u u u tEx \uf8eb \uf8edX \u2113\u2208[D] \u03c62 \u2113+ X j\u0338=\u2113:j,\u2113\u2208A |\u03c6j\u03c6\u2113| \uf8f6 \uf8f8 (116) \u2264max \u2113 q(0) i,\u2113 p Ex (1 + O(1)) (117) = \u0398(max \u2113 q(0) i,\u2113). (118) With the bounds on the gradient, we now summarize the results for the weights after the \ufb01rst gradient step. 26 \fPublished as a conference paper at ICLR 2022 Lemma 18. Set \u03bb(1) w = 1/(2\u03b7(1)), \u03bb(1) a = \u03bb(1) b = 0, \u03c3(1) \u03be = 1/k3/2. Fix \u03b4 \u2208(0, 1) and suppose w(0) i \u2208Gw(\u03b4), b(0) i \u2208Gb(\u03b4) for all i \u2208[2m]. If po = \u2126(k2/D), k = \u2126(log2(Dm/\u03b4)), then for all i \u2208[m], w(1) i = PD \u2113=1 q(1) i,\u2113M\u2113satisfying \u2022 if \u2113\u2208A, then |q(1) i,\u2113\u2212\u03b7(1)a(0) i \u03b2\u03b3/\u02dc \u03c3| \u2264O \u0012 |\u03b7(1)a(0) i |\u03f5e \u02dc \u03c3 \u0013 , where \u03b2 \u2208[\u2126(1), 1] and depends only on w(0) i , b(0) i ; \u2022 if \u2113\u0338\u2208A, then |q(1) i,\u2113| \u2264O \u0010 \u03c32 \u03c6|\u03b7(1)a(0) i |\u03f5e\u02dc \u03c3 \u0011 ; and \u2022 b(1) i = b(0) i + \u03b7(1)a(0) i Tb where |Tb| = O (\u03f5e); \u2022 a(1) i = a(0) i + \u03b7(1)Ta where |Ta| = O(\u03c3w p log(Dm/\u03b4)). Proof of Lemma 18. This follows from Lemma 10 and Lemma 15-17. B.5 FEATURE IMPROVEMENT: SECOND GRADIENT STEP We \ufb01rst show that with properly set \u03b7(1), for most x, |g(1)(x; \u03c3(2) \u03be )| < 1 and thus yg(1)(x; \u03c3(2) \u03be ) < 1. Lemma 19. Fix \u03b4 \u2208(0, 1) and suppose w(0) i \u2208Gw(\u03b4), b(0) i \u2208Gb(\u03b4), a(0) i \u2208Ga(\u03b4) for all i \u2208[2m]. 
If po = \u2126(k2/D), k = \u2126(log2(Dm/\u03b4)), \u03c3a \u2264\u02dc \u03c32/(\u03b3k2), \u03b7(1) = O \u0010 \u03b3 km\u03c3a \u0011 , and \u03c3(2) \u03be \u22641/k, then with probability \u22651 \u2212exp(\u2212\u0398(k)) over (x, y), we have yg(1)(x; \u03c3(2) \u03be ) < 1. Furthermore, for any i \u2208[2m], \f \f \f\u27e8w(1) i , x\u27e9 \f \f \f = \f \f \f\u27e8q(1) i , \u03c6\u27e9 \f \f \f = O(\u03b7(1)\u02dc \u03c3/\u03b3), \f \f \f\u27e8(q(1) i )\u2212A, \u03c6\u2212A\u27e9 \f \f \f = O(\u03b7(1)\u02dc \u03c3/\u03b3), and \f \f \fb(1) i \u2212b(1) m+i \f \f \f = O(|\u03b7(1)a(0) i |\u03f5e). Proof of Lemma 19. Note that w(0) i = w(0) m+i, b(0) i = b(0) m+i, and a(0) i = \u2212a(0) m+i. Then the gradient for wi is the negation of that for wm+i, the gradient for bi is the negation of that for bm+i, and the gradient for ai is the same as that for am+i. With probability \u22651\u2212exp(\u2212\u0398(max{2po(D \u2212k), k})), among all j \u0338\u2208A, we have that at most 2po(D \u2212k) + k of \u03c6j are (1 \u2212po)/\u02dc \u03c3, while the others are \u2212po/\u02dc \u03c3. For data points with \u03c6 satisfying this, we have: \f \f \fg(1)(x; \u03c3(2) \u03be ) \f \f \f (119) = \f \f \f \f \f 2m X i=1 a(1) i E\u03be(2)\u03c3(\u27e8w(1) i , x\u27e9+ b(1) i + \u03be(2) i ) \f \f \f \f \f (120) = \f \f \f \f \f m X i=1 \u0010 a(1) i E\u03be(2)\u03c3(\u27e8w(1) i , x\u27e9+ b(1) i + \u03be(2) i ) + a(1) m+iE\u03be(2)\u03c3(\u27e8w(1) m+i, x\u27e9+ b(1) m+i + \u03be(2) m+i) \u0011\f \f \f \f \f (121) \u2264 \f \f \f \f \f m X i=1 \u0010 a(1) i E\u03be(2)\u03c3(\u27e8w(1) i , x\u27e9+ b(1) i + \u03be(2) i ) + a(1) m+iE\u03be(2)\u03c3(\u27e8w(1) i , x\u27e9+ b(1) i + \u03be(2) i ) \u0011\f \f \f \f \f (122) + \f \f \f \f \f m X i=1 \u0010 \u2212a(1) m+iE\u03be(2)\u03c3(\u27e8w(1) i , x\u27e9+ b(1) i + \u03be(2) i ) + a(1) m+iE\u03be(2)\u03c3(\u27e8w(1) m+i, x\u27e9+ b(1) m+i + \u03be(2) i ) \u0011\f \f \f \f \f . (123) 27 \fPublished as a conference paper at ICLR 2022 Then we have \f \f \fg(1)(x; \u03c3(2) \u03be ) \f \f \f \u2264 m X i=1 \f \f \f2\u03b7(1)TaE\u03be(2)\u03c3(\u27e8w(1) i , x\u27e9+ b(1) i + \u03be(2) i ) \f \f \f (124) + m X i=1 \f \f \fa(1) m+i \f \f \f \u0010\f \f \f\u27e8w(1) i \u2212w(1) m+i, x\u27e9 \f \f \f + \f \f \fb(1) i \u2212b(1) m+i \f \f \f \u0011 (125) \u2264 m X i=1 \f \f \f2\u03b7(1)Ta \f \f \f \u0010\f \f \f\u27e8w(1) i , x\u27e9+ b(1) i \f \f \f + E\u03be(2) \f \f \f\u03be(2) i \f \f \f \u0011 (126) + m X i=1 \f \f \fa(1) m+i \f \f \f \u0010\f \f \f\u27e8w(1) i \u2212w(1) m+i, x\u27e9 \f \f \f + \f \f \fb(1) i \u2212b(1) m+i \f \f \f \u0011 . (127) We have |Ta| = O(\u03c3w p log(Dm/\u03b4)), and \f \f \f\u27e8w(1) i , x\u27e9 \f \f \f \u2264O(|\u03b7(1)a(0) i |) (\u03b2\u03b3/\u02dc \u03c3 + \u03f5e/\u02dc \u03c3) k \u02dc \u03c3 (128) + O(|\u03b7(1)a(0) i |\u03c32 \u03c6\u03f5e\u02dc \u03c3) ((2po(D \u2212k) + k)(1 \u2212po)/\u02dc \u03c3 + poD/\u02dc \u03c3) (129) \u2264O(|\u03b7(1)a(0) i |) \u0000k\u03b3/\u02dc \u03c32 + \u03f5ek/\u02dc \u03c32 + (k + poD)\u03c32 \u03c6\u03f5e \u0001 (130) \u2264O(\u03b7(1)(1 + po\u02dc \u03c3)/\u03b3). (131) \f \f \fb(1) i \f \f \f \u2264 \f \f \fb(0) i \f \f \f + \f \f \f\u03b7(1)a(0) i Tb \f \f \f (132) \u2264 p log(m/\u03b4) k2 + \f \f \f\u03b7(1)a(0) i \u03f5e \u02dc \u03c3 \f \f \f . (133) E\u03be(2) \f \f \f\u03be(2) i \f \f \f \u2264O(\u03c3(2) \u03be ). (134) |a(1) m+i| \u2264|a(0) i | + |\u03b7(1)Ta| \u2264|a(0) i | + O(\u03b7(1)\u03c3w p log(Dm/\u03b4)). 
(135) \f \f \f\u27e8w(1) i \u2212w(1) m+i, x\u27e9 \f \f \f = 2 \f \f \f\u27e8w(1) i , x\u27e9 \f \f \f = O(\u03b7(1)(1 + po\u02dc \u03c3)/\u03b3). (136) \f \f \fb(1) i \u2212b(1) m+i \f \f \f = 2|\u03b7(1)a(0) i Tb| = O(|\u03b7(1)a(0) i |\u03f5e). (137) Then we have \f \f \fg(1)(x; \u03c3(2) \u03be ) \f \f \f \u2264O \u0010 m\u03b7(1)\u03c3w p log(Dm/\u03b4) \u0011 \u03b7(1) \u03b3 + p log(m/\u03b4) k2 + \f \f \f\u03b7(1)a(0) i \u03f5e \u02dc \u03c3 \f \f \f + \u03c3(2) \u03be ! (138) + O \u0010 m(|a(0) i | + \u03b7(1)\u03c3w p log(Dm/\u03b4)) \u0011 \u0012\u03b7(1) \u03b3 + \f \f \f\u03b7(1)a(0) i \u03f5e \u02dc \u03c3 \f \f \f \u0013 (139) = O \u0012 m\u03b7(1)\u03c3w log(Dm/\u03b4) k + m|a(0) i | \u0012\u03b7(1) \u03b3 + \f \f \f\u03b7(1)a(0) i \u03f5e \u02dc \u03c3 \f \f \f \u0013\u0013 (140) = O \u0012 m\u03b7(1)\u03c3w log(Dm/\u03b4) k + m|a(0) i |\u03b7(1) \u03b3 + m\u03c3a\u03b7(1) k \u03b3 \u0013 (141) < 1. (142) Then \f \f \fyg(1)(x; \u03c3(2) \u03be ) \f \f \f < 1. Finally, the statement on \f \f \f\u27e8(q(1) i )\u2212A, \u03c6\u2212A\u27e9 \f \f \f follows from a similar calculation on \f \f \f\u27e8w(1) i , x\u27e9 \f \f \f = \f \f \f\u27e8q(1) i , \u03c6\u27e9 \f \f \f. We are now ready to analyze the gradients in the second gradient step. Lemma 20. Fix \u03b4 \u2208(0, 1) and suppose w(0) i \u2208Gw(\u03b4), b(0) i \u2208Gb(\u03b4), a(0) i \u2208Ga(\u03b4) for all i \u2208[2m]. Let \u03f5e2 := O \u0012 \u03b7(1)|a(0) i |k(\u03b3+\u03f5e) \u02dc \u03c32\u03c3(2) \u03be \u0013 + exp(\u2212\u0398(k)). If k = \u2126(log2(Dm/\u03b4)) and k = O(D), \u03c3a \u2264 28 \fPublished as a conference paper at ICLR 2022 \u02dc \u03c32/(\u03b3k2), \u03b7(1) = O \u0010 \u03b3 km\u03c3a \u0011 , and \u03c3(2) \u03be = 1/k3/2, then \u2202 \u2202wi LD(g(1); \u03c3(2) \u03be ) = \u2212a(1) i D X j=1 MjTj (143) where Tj satis\ufb01es: \u2022 if j \u2208A, then |Tj \u2212\u03b2\u03b3/\u02dc \u03c3| \u2264O(\u03f5e2/\u02dc \u03c3 + \u03b7(1)/\u03c3(2) \u03be + \u03b7(1)|a(0) i |\u03f5e/(\u02dc \u03c3\u03c3(2) \u03be )), where \u03b2 \u2208 [\u2126(1), 1] and depends only on w(0) i , b(0) i ; \u2022 if j \u0338\u2208A, then |Tj| \u22641 \u02dc \u03c3 exp(\u2212\u0398(k)) + O(\u03c32 \u03c6\u03f5e2\u02dc \u03c3). Proof of Lemma 20. Consider one neuron index i and omit the subscript i in the parameters. By Lemma 19, Pr[yg(1)(x; \u03be(2)) > 1] \u2264exp(\u2212\u0398(k)). Let Ix = I[yg(1)(x; \u03be(2)) \u22641]. \u2202 \u2202wLD(g(1); \u03c3(2) \u03be ) (144) = \u2212a(1)E(x,y)\u223cD n yIxE\u03be(2)I[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)]x o (145) = \u2212a(1) D X j=1 Mj E(x,y)\u223cD,\u03be(2) n yIxI[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)]\u03c6j o | {z } :=Tj . (146) Let Tj1 := E(x,y)\u223cD,\u03be(2) \b yI[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)]\u03c6j \t . We have |Tj \u2212Tj1| (147) = \f \f \fE(x,y)\u223cD,\u03be(2) n y(1 \u2212Ix)I[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)]\u03c6j o\f \f \f (148) \u22641 \u02dc \u03c3 E(x,y)\u223cD,\u03be(2) |1 \u2212Ix| (149) \u22641 \u02dc \u03c3 exp(\u2212\u0398(k)). (150) So it is suf\ufb01cient to bound Tj1. For simplicity, we use q as a shorthand for q(1) i . First, consider j \u2208A. Tj1 = E(x,y)\u223cD,\u03be(2) n yI[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)]\u03c6j o (151) = E\u03c6A \u001a y\u03c6j Pr \u03c6\u2212A,\u03be(2) h \u27e8\u03c6, q\u27e9+ b(1) + \u03be(2) \u2208(0, 1) i\u001b . 
(152) Let Ia := Pr \u03be(2) h \u27e8\u03c6, q\u27e9+ b(1) + \u03be(2) \u2208(0, 1) i , (153) I\u2032 a := Pr \u03be(2) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(1) + \u03be(2) \u2208(0, 1) i . (154) By the property of the Gaussian \u03be(2), that |\u27e8\u03c6A, qA\u27e9| = O( \u03b7(1)|a(0) i |k(\u03b3+\u03f5e) \u02dc \u03c32 ), and that |\u27e8\u03c6, q\u27e9| = |\u27e8w(1) i , x\u27e9| = O(\u03b7(1)/\u03b3) < O(1/k) and |\u27e8\u03c6\u2212A, q\u2212A\u27e9| = O(\u03b7(1)/\u03b3) < O(1/k), we have |Ia \u2212I\u2032 a| \u2264 \f \f \f \fPr \u03be(2) h \u27e8\u03c6, q\u27e9+ b(1) + \u03be(2) \u22650 i \u2212Pr \u03be(2) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(1) + \u03be(2) \u22650 i\f \f \f \f (155) + Pr \u03be(2) h \u27e8\u03c6, q\u27e9+ b(1) + \u03be(2) \u22651 i + Pr \u03be(2) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(1) + \u03be(2) \u22651 i (156) = O \u03b7(1)|a(0) i |k(\u03b3 + \u03f5e) \u02dc \u03c32\u03c3(2) \u03be ! + exp(\u2212\u0398(k)) = O(\u03f5e2). (157) 29 \fPublished as a conference paper at ICLR 2022 This leads to \f \fTj1 \u2212E\u03c6A,\u03c6\u2212A {y\u03c6jI\u2032 a} \f \f (158) \u2264E\u03c6A \b |y\u03c6j| \f \fE\u03c6\u2212A(Ia \u2212I\u2032 a) \f \f\t (159) \u2264O(\u03f5e2)E\u03c6A {|y\u03c6j|} (160) \u2264O(\u03f5e2/\u02dc \u03c3) (161) where the last step is from Lemma 14. Furthermore, E\u03c6A,\u03c6\u2212A {y\u03c6jI\u2032 a} (162) = E\u03c6A {y\u03c6j} E\u03c6\u2212A[I\u2032 a] (163) = E\u03c6A {y\u03c6j} Pr \u03c6\u2212A,\u03be(2) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(1) + \u03be(2) \u2208(0, 1) i . (164) By Lemma 19, we have |\u27e8\u03c6\u2212A, q\u2212A\u27e9| \u2264O(\u03b7(1)\u02dc \u03c3/\u03b3). Also, |b(1) \u2212b(0)| \u2264O(\u03b7(1)|a(0) i |\u03f5e). By the property of \u03be(2), \f \f \f \fPr \u03be(2) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(1) + \u03be(2) \u2208(0, 1) i \u2212Pr \u03be(2) h b(0) + \u03be(2) \u2208(0, 1) i\f \f \f \f (165) \u2264O(\u03b7(1)\u02dc \u03c3/(\u03b3\u03c3(2) \u03be )) + O(\u03b7(1)|a(0) i |\u03f5e/\u03c3(2) \u03be ). (166) On the other hand, \u03b2 := Pr \u03c6\u2212A,\u03be(2) h b(0) + \u03be(2) \u2208(0, 1) i = Pr \u03be(2) h \u03be(2) \u2208(\u2212b(0), 1 \u2212b(0)) i (167) = \u2126(1) (168) and \u03b2 only depends on b(0). By Lemma 14, E\u03c6A {y\u03c6j} = \u03b3/\u02dc \u03c3. Therefore, |Tj1 \u2212\u03b2\u03b3/\u02dc \u03c3| \u2264O(\u03f5e2/\u02dc \u03c3) + O(\u03b7(1)/\u03c3(2) \u03be ) + O(\u03b7(1)|a(0) i |\u03f5e/(\u02dc \u03c3\u03c3(2) \u03be )). (169) Now, consider j \u0338\u2208A. Let B denote A \u222a{j}. Tj1 = E(x,y)\u223cD,\u03be(2) n y\u03c6jI h \u27e8\u03c6, q\u27e9+ b(1) + \u03be(2) \u2208(0, 1) io (170) = E\u03c6BE\u03c6\u2212B,\u03be(2) n y\u03c6jI h \u27e8\u03c6, q\u27e9+ b(1) + \u03be(2) \u2208(0, 1) io (171) = E\u03c6B \u001a y\u03c6j Pr \u03c6\u2212B,\u03be(2) h \u27e8\u03c6, q\u27e9+ b(1) + \u03be(2) \u2208(0, 1) i\u001b . (172) Let Ib := Pr \u03be(2) h \u27e8\u03c6, q\u27e9+ b(1) + \u03be(2) \u2208(0, 1) i , (173) I\u2032 b := Pr \u03be(2) h \u27e8\u03c6\u2212B, q\u2212B\u27e9+ b(1) + \u03be(2) \u2208(0, 1) i . (174) Similar as above, we have |Ib \u2212I\u2032 b| \u2264\u03f5e2. Then by Lemma 14, \f \fTj1 \u2212E\u03c6B,\u03c6\u2212B {y\u03c6jI\u2032 b} \f \f (175) \u2264E\u03c6B \b |y\u03c6j||E\u03c6\u2212B(Ib \u2212I\u2032 b)| \t (176) \u2264O(\u03f5e2)E\u03c6j {|\u03c6j|} (177) \u2264O(\u03f5e) \u00d7 O(\u03c32 \u03c6\u02dc \u03c3) (178) = O(\u03c32 \u03c6\u03f5e2\u02dc \u03c3). 
(179) Furthermore, E\u03c6B,\u03c6\u2212B {y\u03c6jI\u2032 b} = E\u03c6A {y} E\u03c6j {\u03c6j} E\u03c6\u2212B[I\u2032 b] = 0. (180) Therefore, |Tj1| \u2264O(\u03c32 \u03c6\u03f5e2\u02dc \u03c3). (181) 30 \fPublished as a conference paper at ICLR 2022 Lemma 21. Under the same assumptions as in Lemma 20, \u2202 \u2202bLD(g(1); \u03c3(2) \u03be ) = \u2212a(1) i Tb (182) where |Tb| \u2264exp(\u2212\u2126(k)) + O(\u03f5e2). Proof of Lemma 21. Consider one neuron index i and omit the subscript i in the parameters. By Lemma 19, Pr[yg(1)(x; \u03be(2)) > 1] \u2264exp(\u2212\u2126(k)). Let Ix = I[yg(1)(x; \u03be(2)) \u22641]. \u2202 \u2202bLD(g(1); \u03c3(2) \u03be ) (183) = \u2212a(1) E(x,y)\u223cD n yIxE\u03be(2)I[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)] o | {z } :=Tb . (184) Let Tb1 := E(x,y)\u223cD,\u03be(2) \b yI[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)] \t . We have |Tb \u2212Tb1| (185) = \f \f \fE(x,y)\u223cD,\u03be(2) n y(1 \u2212Ix)I[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)] o\f \f \f (186) \u2264E(x,y)\u223cD,\u03be(2) |1 \u2212Ix| (187) \u2264exp(\u2212\u2126(k)). (188) So it is suf\ufb01cient to bound Tb1. For simplicity, we use q as a shorthand for q(1) i . Tb1 = E(x,y)\u223cD,\u03be(2) n yI h \u27e8\u03c6, q\u27e9+ b(1) + \u03be(2) \u2208(0, 1) io (189) = E\u03c6AE\u03c6\u2212A,\u03be(2) n yI h \u27e8\u03c6, q\u27e9+ b(1) + \u03be(2) \u2208(0, 1) io (190) = E\u03c6A \u001a y Pr \u03c6\u2212A,\u03be(2) h \u27e8\u03c6, q\u27e9+ b(1) + \u03be(2) \u2208(0, 1) i\u001b . (191) Let Ib := Pr \u03be(2) h \u27e8\u03c6, q\u27e9+ b(1) + \u03be(2) \u2208(0, 1) i , (192) I\u2032 b := Pr \u03be(2) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(1) + \u03be(2) \u2208(0, 1) i . (193) Similar as in Lemma 20, we have |Ib \u2212I\u2032 b| \u2264\u03f5e2. Then by Lemma 14, \f \fTb1 \u2212E\u03c6A,\u03c6\u2212A {yI\u2032 b} \f \f (194) \u2264E\u03c6A \b |E\u03c6\u2212A(Ib \u2212I\u2032 b)| \t (195) \u2264O(\u03f5e2). (196) Furthermore, E\u03c6A,\u03c6\u2212A {yI\u2032 b} = E\u03c6A {y} E\u03c6\u2212A[I\u2032 b] = 0. (197) Therefore, |Tb1| \u2264O(\u03f5e2) and the statement follows. Lemma 22. Under the same assumptions as in Lemma 20, \u2202 \u2202ai LD(g(1); \u03c3(2) \u03be ) = \u2212Ta (198) where |Ta| = O(\u03b7(1)\u02dc \u03c3/\u03b3) + exp(\u2212\u2126(k))poly(Dm). 31 \fPublished as a conference paper at ICLR 2022 Proof of Lemma 22. Consider one neuron index i and omit the subscript i in the parameters. By Lemma 19, Pr[yg(1)(x; \u03be(2)) > 1] \u2264exp(\u2212\u2126(k)). Let Ix = I[yg(1)(x; \u03be(2)) \u22641]. \u2202 \u2202aLD(g(1); \u03c3(2) \u03be ) (199) = \u2212E(x,y)\u223cD n yIxE\u03be(2)\u03c3(\u27e8w(1), x\u27e9+ b(1) + \u03be(2)) o | {z } :=Ta . (200) Let Ta1 := E(x,y)\u223cD \b yE\u03be(2)\u03c3(\u27e8w(1), x\u27e9+ b(1) + \u03be(2)) \t . We have |Ta \u2212Ta1| (201) = \f \f \fE(x,y)\u223cD n y(1 \u2212Ix)E\u03be(2)\u03c3(\u27e8w(1), x\u27e9+ b(1) + \u03be(2)) o\f \f \f (202) \u2264E(x,y)\u223cD,\u03be(2) |1 \u2212Ix| (203) \u2264exp(\u2212\u2126(k)). (204) So it is suf\ufb01cient to bound Ta1. For simplicity, we use q as a shorthand for q(1) i . Let \u03c6\u2032 A be an independent copy of \u03c6A, \u03c6\u2032 be the vector obtained by replacing in \u03c6 the entries \u03c6A with \u03c6\u2032 A, and let x\u2032 = M\u03c6\u2032 and its label is y\u2032. 
Then |Ta1| := \f \f \fE\u03c6A n yE\u03c6\u2212A,\u03be(2)\u03c3(\u27e8w(1), x\u27e9+ b(1) + \u03be(2)) o\f \f \f (205) \u22641 2 \f \f \f \f \fE\u03c6A n E\u03c6\u2212A,\u03be(2)\u03c3(\u27e8w(1), x\u27e9+ b(1) + \u03be(1))|y = 1 o (206) \u2212E\u03c6A n E\u03c6\u2212A,\u03be(2)\u03c3(\u27e8w(1), x\u27e9+ b(1) + \u03be(2))|y = \u22121 o \f \f \f \f \f (207) \u22641 2 \f \f \f \f \fE\u03c6A n E\u03c6\u2212A,\u03be(2)\u03c3(\u27e8w(1), x\u27e9+ b(1) + \u03be(2))|y = 1 o (208) \u2212E\u03c6\u2032 A n E\u03c6\u2212A,\u03be(2)\u03c3(\u27e8w(1), x\u2032\u27e9+ b(1) + \u03be(2))|y\u2032 = \u22121 o \f \f \f \f \f (209) \u22641 2E\u03c6A,\u03c6\u2032 A n E\u03c6\u2212A \f \f \f\u27e8w(1), x\u27e9\u2212\u27e8w(1), x\u2032\u27e9 \f \f \f |y = 1, y\u2032 = \u22121 o (210) \u22641 2E\u03c6\u2212A \u0010 E\u03c6A n\f \f \f\u27e8w(1), x\u27e9 \f \f \f |y = 1 o + E\u03c6\u2032 A n\f \f \f\u27e8w(1), x\u2032\u27e9 \f \f \f |y\u2032 = \u22121 o\u0011 (211) \u2264E\u03c6\u2212A,\u03c6A \f \f \f\u27e8w(1), x\u27e9 \f \f \f (212) = Ex \f \f \f\u27e8w(1), x\u27e9 \f \f \f (213) = O(\u03b7(1)\u02dc \u03c3/\u03b3) + exp(\u2212\u2126(k)) \u00d7 D \u00d7 \u2225q(1)\u2225\u221e\u2225\u03c6\u2225\u221e (214) = O(\u03b7(1)\u02dc \u03c3/\u03b3) + exp(\u2212\u2126(k))D|\u03b7(1)a(0)|(\u03b3 + \u03f5e) \u02dc \u03c32 (215) = O(\u03b7(1)\u02dc \u03c3/\u03b3) + exp(\u2212\u2126(k))poly(Dm) (216) where the fourth step follows from that \u03c3 is 1-Lipschitz, the third to the last step from Lemma 19, and the second to the last step from Lemma 18. With the above lemmas about the gradients, we are now ready to show that at the end of the second step, we get a good set of features for accurate prediction. Lemma 23. Set \u03b7(1) = \u03b32\u02dc \u03c3 km3 , \u03bb(1) a = 0, \u03bb(1) w = 1/(2\u03b7(1)), \u03c3(1) \u03be = 1/k3/2, (217) \u03b7(2) = 1, \u03bb(2) a = \u03bb(2) w = 1/(2\u03b7(2)), \u03c3(2) \u03be = 1/k3/2. (218) 32 \fPublished as a conference paper at ICLR 2022 Fix \u03b4 \u2208(0, O(1/k3)). If po = \u2126(k2/D), k = \u2126 \u0010 log2 \u0010 Dm \u03b4\u03b3 \u0011\u0011 , and m \u2265max{\u2126(k4), D}, then with probability at least 1 \u2212\u03b4 over the initialization, there exist \u02dc ai\u2019s such that \u02dc g(x) := P2m i=1 \u02dc ai\u03c3(\u27e8w(2) i , x\u27e9+b(2) i ) satis\ufb01es LD(\u02dc g) = 0. Furthermore, \u2225\u02dc a\u22250 = O(m/k), \u2225\u02dc a\u2225\u221e= O(k5/m), and \u2225\u02dc a\u22252 2 = O(k9/m). Finally, \u2225a(2)\u2225\u221e= O \u00001 km2 \u0001 , \u2225w(2) i \u22252 = O(\u02dc \u03c3/k), and |b(2) i | = O(1/k2) for all i \u2208[2m]. Proof of Lemma 23. By Lemma 5, there exists a network g\u2217(x) = P3(k+1) \u2113=1 a\u2217 \u2113\u03c3(\u27e8w\u2217 \u2113, x\u27e9+ b\u2217 \u2113) satisfying g\u2217(x) for all (x, y) \u223cD. Furthermore, |a\u2217 i | \u226432k, 1/(32k) \u2264|b\u2217 i | \u22641/2, w\u2217 i = \u02dc \u03c3 P j\u2208A Mj/(4k), and |\u27e8w\u2217 i , x\u27e9+ b\u2217 i | \u22641 for any i \u2208[n] and (x, y) \u223cD. Now we \ufb01x an \u2113, and show that with high probability there is a neuron in g(2) that can approximate the \u2113-th neuron in g\u2217. By Lemma 10, with probability 1 \u22122\u03b4 over w(0) i \u2019s, they are all in Gw(\u03b4); with probability 1 \u2212\u03b4 over a(0) i \u2019s, they are all in Ga(\u03b4); with probability 1 \u2212\u03b4 over b(0) i \u2019s, they are all in Gb(\u03b4). 
Under these events, by Lemma 18, Lemma 20 and 21, for any neuron i \u2208[2m], we have w(2) i = a(1) i D X j=1 MjTj, (219) b(2) i = b(1) i + a(1) i Tb. (220) where \u2022 if j \u2208A, then |Tj \u2212\u03b2\u03b3/\u02dc \u03c3| \u2264\u03f5w1 := O(\u03f5e2/\u02dc \u03c3 + \u03b7(1)/\u03c3(2) \u03be + \u03b7(1)|a(0) i |\u03f5e/(\u02dc \u03c3\u03c3(2) \u03be )), where \u03b2 \u2208[\u2126(1), 1] and depends only on w(0) i , b(0) i ; \u2022 if j \u0338\u2208A, then |Tj| \u2264\u03f5w2 := 1 \u02dc \u03c3 exp(\u2212\u0398(k)) + O(\u03c32 \u03c6\u03f5e2\u02dc \u03c3). \u2022 |Tb| \u2264\u03f5b := 1 \u02dc \u03c3 exp(\u2212\u0398(k)) + O(\u03f5e2). Given the initialization, with probability \u2126(1) over b(0) i , we have |b(0) i | \u2208 \u0014 1 2k2 , 2 k2 \u0015 , sign(b(0) i ) = sign(b\u2217 \u2113). (221) Finally, since 4k|b\u2217 \u2113|\u03b2\u03b3 |b(0) i |\u02dc \u03c32 \u2208[\u2126(k2\u03b3/\u02dc \u03c32), O(k3\u03b3/\u02dc \u03c32)] and depends only on w(0) i , b(0) i , we have that for \u03f5a = \u0398(1/k2), with probability \u2126(\u03f5a) > \u03b4 over a(0) i , \f \f \f \f \f 4k|b\u2217 \u2113|\u03b2\u03b3 |b(0) i |\u02dc \u03c32 a(0) i \u22121 \f \f \f \f \f \u2264\u03f5a, |a(0) i | = O \u0012 \u02dc \u03c32 k2\u03b3 \u0013 . (222) Let na = \u03f5am/4. For the given value of m, by (219)-(222) we have with probability \u22651 \u22125\u03b4 over the initialization, for each \u2113there is a different set of neurons I\u2113\u2286[m] with |I\u2113| = na and such that for each i\u2113\u2208I\u2113, |b(0) i\u2113| \u2208 \u0014 1 2k2 , 2 k2 \u0015 , sign(b(0) i\u2113) = sign(b\u2217 \u2113), (223) \f \f \f \f \f 4k|b\u2217 \u2113|\u03b2\u03b3 |b(0) i\u2113|\u02dc \u03c32 a(0) i\u2113\u22121 \f \f \f \f \f \u2264\u03f5a, |a(0) i\u2113| = O \u0012 \u02dc \u03c32 k2\u03b3 \u0013 . (224) 33 \fPublished as a conference paper at ICLR 2022 We also have \f \f \f \f \f \f \u27e8w(2) i\u2113, x\u27e9\u2212a(0) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9 \f \f \f \f \f \f (225) \u2264 \f \f \f \f \f \f \u27e8w(2) i\u2113, x\u27e9\u2212a(1) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9 \f \f \f \f \f \f + \f \f \f \f \f \f a(1) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9\u2212a(0) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9 \f \f \f \f \f \f (226) = \f \f \f \f \f \f a(1) i\u2113 D X j=1 Tj\u03c6j \u2212a(1) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u03c6j \f \f \f \f \f \f + \f \f \fa(1) i\u2113\u2212a(0) i\u2113 \f \f \f \f \f \f \f \f \f \u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u03c6j \f \f \f \f \f \f (227) = \f \f \fa(1) i\u2113 \f \f \f \f \f \f \f \f \f D X j=1 Tj\u03c6j \u2212\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u03c6j \f \f \f \f \f \f + \f \f \fa(1) i\u2113\u2212a(0) i\u2113 \f \f \f \f \f \f \f \f \f \u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u03c6j \f \f \f \f \f \f . (228) We have \f \f \fa(1) i\u2113\u2212a(0) i\u2113 \f \f \f = O(\u03b7(1)\u03c3w p log(Dm/\u03b4)), and \f \f \f \f \f \f D X j=1 Tj\u03c6j \u2212\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u03c6j \f \f \f \f \f \f \u2264 \f \f \f \f \f \f X j\u2208A (Tj \u2212\u03b2\u03b3 \u02dc \u03c3 )\u03c6j \f \f \f \f \f \f + \f \f \f \f \f \f X j\u0338\u2208A Tj\u03c6j \f \f \f \f \f \f (229) \u2264O(k\u03f5w1/\u02dc \u03c3) + O(D\u03f5w2/\u02dc \u03c3) =: \u03f5\u03c6. 
(230) For the given values of parameters, we have \u03f5e2 = O \u0010 \u03b3 m2 \u0011 , (231) \u03f5w1 = O \u0012 k\u03b3 m2\u02dc \u03c3 + \u03b3\u03f5e km2 \u0013 , (232) \u03f5w2 = O \u0010 \u03b3 m2\u02dc \u03c3 \u0011 , (233) \u03f5b = O \u0010 \u03b3 m2 \u0011 , (234) \u03f5\u03c6 = O \u0012 k2\u03b3 m2\u02dc \u03c32 + \u03b3\u03f5e m2\u02dc \u03c3 + \u03b3 m\u02dc \u03c32 \u0013 . (235) Therefore, \f \f \f \f \f \f \u27e8w(2) i\u2113, x\u27e9\u2212a(0) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9 \f \f \f \f \f \f (236) \u2264 \f \f \fa(1) i\u2113 \f \f \f \u03f5\u03c6 + \f \f \fa(1) i\u2113\u2212a(0) i\u2113 \f \f \f k\u03b3 \u02dc \u03c32 (237) \u2264O \u0012 \u02dc \u03c32 k2\u03b3 + \u03b7(1)\u03c3w p log(Dm/\u03b4) \u0013 \u0012 k2\u03b3 m2\u02dc \u03c32 + \u03b3\u03f5e m2\u02dc \u03c3 + \u03b3 m\u02dc \u03c32 \u0013 (238) + O \u0010 \u03b7(1)\u03c3w p log(Dm/\u03b4) \u0011 k\u03b3 \u02dc \u03c32 (239) \u2264O \u0012 1 m \u0013 . (240) We also have by Lemma 18 and 21: |b(2) i\u2113\u2212b(0) i\u2113| \u2264O \u0012 \u03b7(1)|a(0) i\u2113|\u03f5e + |a(1) i\u2113| \u0012 1 \u02dc \u03c3 exp(\u2212\u0398(k)) + \u03f5e2 \u0013\u0013 \u2264O \u0012 1 m \u0013 . (241) 34 \fPublished as a conference paper at ICLR 2022 Now, construct \u02dc a such that \u02dc ai\u2113= 2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113|na for each \u2113and each i\u2113\u2208I\u2113, and \u02dc ai = 0 elsewhere. Then |\u02dc g(x) \u22122g\u2217(x)| (242) = \f \f \f \f \f \f 3(k+1) X \u2113=1 X i\u2113\u2208I\u2113 \u02dc ai\u2113\u03c3 \u0010 \u27e8w(2) i\u2113, x\u27e9+ b(2) i\u2113 \u0011 \u2212 3(k+1) X \u2113=1 2a\u2217 \u2113\u03c3 (\u27e8w\u2217 \u2113, x\u27e9+ b\u2217 \u2113) \f \f \f \f \f \f (243) = \f \f \f \f \f \f 3(k+1) X \u2113=1 X i\u2113\u2208I\u2113 2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113|na \u03c3 \u0010 \u27e8w(2) i\u2113, x\u27e9+ b(2) i\u2113 \u0011 \u2212 3(k+1) X \u2113=1 X i\u2113\u2208I\u2113 2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113|na \u03c3 |b(0) i\u2113| |b\u2217 \u2113| \u27e8w\u2217 \u2113, x\u27e9+ b(0) i\u2113 !\f \f \f \f \f \f (244) \u2264 \f \f \f \f \f \f 3(k+1) X \u2113=1 X i\u2113\u2208I\u2113 1 na \uf8eb \uf8ed2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113| \u03c3 \u0010 \u27e8w(2) i\u2113, x\u27e9+ b(2) i\u2113 \u0011 \u22122a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113| \u03c3 \uf8eb \uf8eda(0) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9+ b(0) i\u2113 \uf8f6 \uf8f8 \uf8f6 \uf8f8 \f \f \f \f \f \f (245) + \f \f \f \f \f \f 3(k+1) X \u2113=1 X i\u2113\u2208I\u2113 1 na \uf8eb \uf8ed2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113| \u03c3 \uf8eb \uf8eda(0) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9+ b(0) i\u2113 \uf8f6 \uf8f8\u22122a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113| \u03c3 |b(0) i\u2113| |b\u2217 \u2113| \u27e8w\u2217 \u2113, x\u27e9+ b(0) i\u2113 !\uf8f6 \uf8f8 \f \f \f \f \f \f (246) \u22643(k + 1) max \u2113 2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113| O \u0012 1 m \u0013 + (247) 3(k + 1) max \u2113 2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113| \u02dc \u03c3|b(0) i\u2113| 4k|b\u2217 \u2113| \f \f \f \f \f 4ka(0) i\u2113\u03b2\u03b3|b\u2217 \u2113| \u02dc \u03c32|b(0) i\u2113| \u22121 \f \f \f \f \f k \u02dc \u03c3 (248) = O \u0012k4 m + k2\u03f5a \u0013 (249) \u22641. (250) Here the second equation follows from that \u03c3 is positive-homogeneous in [0, 1], |\u27e8w\u2217 \u2113, x\u27e9+ b\u2217 \u2113| \u22641, |b(0) i\u2113|/|b\u2217 \u2113| \u22641. 
This guarantees y\u02dc g(x) \u22651. Changing the scaling of \u03b4 leads to the statement. Finally, the bounds on \u02dc a follow from the above calculation. The bound on \u2225a(2)\u22252 follows from Lemma 22, and those on \u2225w(2) i \u22252 and \u2225b(2) i \u22252 follow from (219)(220) and the bounds on a(1) i and b(1) i in Lemma 18. B.6 CLASSIFIER LEARNING STAGE Once we have a set of good features, we are now ready to prove that the later steps will learn an accurate classi\ufb01er. The intuition is that the \ufb01rst layer\u2019s weights do not change much and the second layer\u2019s weights get updated till achieving good accuracy. In particular, we will employ the online optimization technique from Daniely & Malach (2020). We begin by showing that the \ufb01rst layer\u2019s weights do not change too much. Lemma 24. Assume the same conditions as in Lemma 23. Suppose for t > 2, \u03bb(t) a = \u03bb(t) w = \u03bb, \u03b7(t) = \u03b7 for some \u03bb, \u03b7 \u2208(0, 1), and \u03c3(t) \u03be = 0. Then for any t > 2 and i \u2208[2m], |a(t) i | \u2264\u03b7t + O \u0012 1 km2 \u0013 , (251) \u2225w(t) i \u2212w(2) i \u22252 \u2264O \u0012t\u03b7\u03bb\u02dc \u03c3 k \u0013 + \u03b72t2 + O \u0012 t km2 \u0013 , (252) |b(t) i \u2212b(2) i | \u2264O \u0012t\u03b7\u03bb k2 \u0013 + \u03b72t2 + O \u0012 t km2 \u0013 . (253) 35 \fPublished as a conference paper at ICLR 2022 Proof of Lemma 24. First, we bound the size of |a(t) i |: |a(t) i | = \f \f \f \f(1 \u22122\u03b7\u03bb)a(t\u22121) i \u2212\u03b7 \u2202 \u2202ai LD(g(t\u22121)) \f \f \f \f (254) \u2264 \f \f \f(1 \u22122\u03b7\u03bb)a(t\u22121) i \u2212\u03b7E(x,y)\u223cD n yI[yg(t\u22121)(x) \u22641]\u03c3(\u27e8w(t\u22121) i , x\u27e9+ b(t\u22121) i ) o\f \f \f (255) \u2264|a(t\u22121) i | + \u03b7 (256) which leads to |a(t) i | \u2264\u03b7t + |a(2) i | (257) where |a(2) i | = O \u00001 km2 \u0001 . We are now to bound the change of w(t) i and b(t) i . \u2225w(t) i \u2212w(2) i \u22252 (258) = \r \r \r \r(1 \u22122\u03b7\u03bb)w(t\u22121) i \u2212\u03b7 \u2202 \u2202wi LD(g(t\u22121)) \u2212w(2) i \r \r \r \r 2 (259) \u2264 \r \r \r \r \r(1 \u22122\u03b7\u03bb)w(t\u22121) i (260) + \u03b7a(t\u22121) i E(x,y)\u223cD n yI[yg(t\u22121)(x) \u22641]I[\u27e8w(t\u22121) i , x\u27e9+ b(t\u22121) i \u2208(0, 1)]x o \u2212w(2) i \r \r \r \r \r 2 (261) \u2264 \r \r \r(1 \u22122\u03b7\u03bb)w(t\u22121) i \u2212w(2) i \r \r \r 2 (262) + \u03b7 \r \r \ra(t\u22121) i E(x,y)\u223cD n yI[yg(t\u22121)(x) \u22641]I[\u27e8w(t\u22121) i , x\u27e9+ b(t\u22121) i \u2208(0, 1)]x o\r \r \r 2 (263) \u2264(1 \u22122\u03b7\u03bb) \r \r \rw(t\u22121) i \u2212w(2) i \r \r \r 2 + 2\u03b7\u03bb \r \r \rw(2) i \r \r \r 2 + \u03b7 \f \f \fa(t\u22121) i \f \f \f (264) leading to \u2225w(t) i \u2212w(2) i \u22252 \u22642t\u03b7\u03bb \r \r \rw(2) i \r \r \r 2 + \u03b72t2 + t|a(2) i |. (265) Note that \u2225w(2) i \u22252 = O(\u02dc \u03c3/k). |b(t) i \u2212b(2) i | = \f \f \f \fb(t\u22121) i \u2212\u03b7 \u2202 \u2202bi LD(g(t\u22121)) \u2212b(2) i \f \f \f \f (266) \u2264 \f \f \fb(t\u22121) i \u2212b(2) i \f \f \f (267) + \u03b7 \f \f \fa(t\u22121) i E(x,y)\u223cD n yI[yg(t\u22121)(x) \u22641]I[\u27e8w(t\u22121) i , x\u27e9+ b(t\u22121) i \u2208(0, 1)] o\f \f \f (268) \u2264 \f \f \fb(t\u22121) i \u2212b(2) i \f \f \f + \u03b7 \f \f \fa(t\u22121) i \f \f \f (269) leading to |b(t) i \u2212b(2) i | \u2264\u03b72t2 + t|a(2) i |. (270) Note that \f \f \fb(2) i \f \f \f = O(1/k2). Lemma 25. Assume the same conditions as in Lemma 24. 
Let g(t) \u02dc a (x) = Pm i=1 \u02dc ai\u03c3(\u27e8w(t) i , x\u27e9+ b(t) i ). Then |\u2113(g(t) \u02dc a (x), y) \u2212\u2113(g(2) \u02dc a (x), y)| \u2264\u2225\u02dc a\u22252 p \u2225\u02dc a\u22250 \u0012 O \u0012t\u03b7\u03bb\u02dc \u03c3 k \u0013 + \u03b72t2 + O \u0012 t km2 \u0013\u0013 . (271) 36 \fPublished as a conference paper at ICLR 2022 Proof of Lemma 25. It follows from that |\u2113(g(t) \u02dc a (x), y) \u2212\u2113(g(2) \u02dc a (x))| (272) \u2264|g(t) \u02dc a (x) \u2212g(2) \u02dc a (x)| (273) \u2264\u2225\u02dc a\u22252 p \u2225\u02dc a\u22250 max i\u2208[2m] \f \f \f\u03c3(\u27e8w(t) i , x\u27e9+ b(t) i ) \u2212\u03c3(\u27e8w(2) i , x\u27e9+ b(2) i ) \f \f \f (274) \u2264\u2225\u02dc a\u22252 p \u2225\u02dc a\u22250 max i\u2208[2m] \u0010\f \f \f\u27e8w(t) i \u2212w(2) i , x\u27e9 \f \f \f + \f \f \fb(t) i \u2212b(2) i \f \f \f \u0011 . (275) and Lemma 24. B.7 PROOF OF THEOREM 1 Based on the above lemmas, following the same argument as in the proof of Theorem 2 in Daniely & Malach (2020), we get our main theorem. Theorem 26 (Full version of Theorem 1). Set \u03b7(1) = \u03b32\u02dc \u03c32 km3 , \u03bb(1) a = 0, \u03bb(1) w = 1/(2\u03b7(1)), \u03c3(1) \u03be = 1/k2, (276) \u03b7(2) = 1, \u03bb(2) a = \u03bb(2) w = 1/(2\u03b7(2)), \u03c3(2) \u03be = 1/k2, (277) \u03b7(t) = \u03b7 = k2 Tm1/3 , \u03bb(t) a = \u03bb(t) w = \u03bb \u2264 k3 \u02dc \u03c3m1/3 , \u03c3(t) \u03be = 0, for 2 < t \u2264T. (278) For any \u03b4 \u2208(0, 1), if po = \u2126(k2/D), k = \u2126 \u0010 log2 \u0010 D \u03b4\u03b3 \u0011\u0011 , max{\u2126(k4), D} \u2264m \u2264poly(D), then we have for any D \u2208F\u039e, with probability at least 1 \u2212\u03b4, there exists t \u2208[T] such that Pr[sign(g(t)(x)) \u0338= y] \u2264LD(g(t)) = O \u0012 k8 m2/3 + k3T m2 + k2m2/3 T \u0013 . (279) Consequently, for any \u03f5 \u2208(0, 1), if T = m4/3, and max{\u2126(k12/\u03f53/2), D} \u2264m \u2264poly(D), then Pr[sign(g(t)(x)) \u0338= y] \u2264LD(g(t)) \u2264\u03f5. (280) Proof of Theorem 1. Consider \u02dc LD(g(t)) = E[\u2113(g(t), y)] + \u03bb(t) a \u2225a(t)\u22252 2. Note that the gradient update using \u02dc LD(g(t)) is the same as the update in our learning algorithm. Then by Theorem 27, Lemma 23, and Lemma 25, 1 T T X t=3 \u02dc LD(g(t)) \u2264\u2225\u02dc a\u22252 2 2 + \u2225\u02dc a\u22252 p \u2225\u02dc a\u22250 \u0012 O \u0012T\u03b7\u03bb\u02dc \u03c3 k \u0013 + \u03b72T 2 + O \u0012 T km2 \u0013\u0013 (281) + \u2225\u02dc a\u22252 2 2\u03b7T + \u2225a(2)\u22252 \u221am + \u03b7m (282) \u2264O \u0012k9 m + k4\u03b72T 2 + k3T m2 + k9 \u03b7Tm + \u03b7m \u0013 . (283) \u2264O \u0012 k8 m2/3 + k3T m2 + k2m2/3 T \u0013 . (284) The statement follows from that 0-1 classi\ufb01cation error is bounded by the hinge-loss. Theorem 27 (Theorem 13 in Daniely & Malach (2020)). Fix some \u03b7, and let f1, . . . , fT be some sequence of convex functions. Fix some \u03b81, and assume we update \u03b8t+1 = \u03b8t \u2212\u03b7\u2207ft(\u03b8t). Then for every \u03b8\u2217the following holds: 1 T T X t=1 ft(\u03b8t) \u22641 T T X t=1 ft(\u03b8\u2217) + 1 2\u03b7T \u2225\u03b8\u2217\u22252 2 + \u2225\u03b81\u22252 1 T T X t=1 \u2225\u2207ft(\u03b8t)\u22252 + \u03b7 1 T T X t=1 \u2225\u2207ft(\u03b8t)\u22252 2. (285) 37 \fPublished as a conference paper at ICLR 2022 C LOWER BOUND FOR LINEAR MODELS ON FIXED FEATURE MAPPINGS Theorem 28 (Restatement of Theorem 2). Suppose \u03a8 is a data-independent feature mapping of dimension N with bounded features, i.e., \u03a8 : X \u2192[\u22121, 1]N. 
De\ufb01ne for B > 0: HB = {h(\u02dc x) : h(\u02dc x) = \u27e8\u03a8(\u02dc x), w\u27e9, \u2225w\u22252 \u2264B}. (286) Then, if 3 < k \u2264D/16 and k is odd, then there exists D \u2208F\u039e such that all h \u2208HB have hinge-loss at least po \u0010 1 \u2212 \u221a 2NB 2k \u0011 . Proof of Theorem 2. We \ufb01rst show that F\u039e contains some distributions that are essentially sparse parity learning problems, and then we invoke the lower bound result from existing work for such problems. Consider D de\ufb01ned as follows. \u2022 Let P = {i \u2208[k] : i is odd}. That is, if there are odd numbers of 1\u2019s in \u02dc \u03c6A, then y = +1. \u2022 Let D(0) \u02dc \u03c6 be a distribution where all entries \u02dc \u03c6j are i.i.d. with Pr[\u02dc \u03c6j = 0] = Pr[\u02dc \u03c6j = 1] = 1/2. Let D(0) be the distribution over (\u02dc x, y) induced by D(0) \u02dc \u03c6 and the above P. \u2022 Let D(1) \u02dc \u03c6 be a distribution where all entries \u02dc \u03c6j for j \u0338\u2208A are i.i.d. with Pr[\u02dc \u03c6j = 1] = po/(2 \u22122po), while Pr[\u02dc \u03c6A = (0, 0, . . . , 0)] = Pr[\u02dc \u03c6A = (1, 1, . . . , 1)] = 1/2. Let D(1) be the distribution over (\u02dc x, y) induced by D(1) \u02dc \u03c6 and the above P. \u2022 Let Dmix A = poD(0) + (1 \u2212po)D(1). It can be veri\ufb01ed that such distributions are included in F\u039e for \u03b3 = \u0398(1). Assume for contradiction that for all D \u2208F\u039e, there exists h\u2217\u2208HB such that h = \u27e8\u03a8, w\u2217\u27e9loss smaller than po \u0010 1 \u2212 \u221a 2NB 2k \u0011 . Then for all the distributions Dmix A de\ufb01ned above, we have ED(0)[\u2113(h\u2217(\u02dc x), y)] < 1 \u2212 \u221a 2NB 2k . (287) Now let Dz be a distribution over z \u2208{\u22121, +1}D with i.i.d. entries zj and Pr[zj = \u22121] = Pr[zj = +1] = 1/2. Let fA(z) = Q j\u2208A zj be the k-sparse parity functions. Let \u03a8\u2032(z) = \u03a8(M(z + 1)/2). Then we have h\u2032(z) = \u27e8\u03a8\u2032(z), w\u2217\u27e9such that for all A, EDz[\u2113(h\u2032(z), fA(z))] < 1 \u2212 \u221a 2NB 2k . (288) This is contradictory to Theorem 29. The following theorem is implicit in the proof in Theorem 1 in Daniely & Malach (2020). Theorem 29. For a subset A \u2286[D] of size k, let the distribution DA over (z, y) de\ufb01ned as follows: z is uniform over {\u00b11}D and y = Q i\u2208A zi. Fix some \u03a8 : {\u00b11}D \u2192[\u22121, +1]N, and de\ufb01ne: HB \u03a8 = {z \u2192\u27e8\u03a8(z), w\u27e9: \u2225w\u22252 \u2264B}. If k is odd and k \u2264D/16, then there exists some A such that min h\u2208HB \u03a8 EDA[\u2113(h(z), y)] \u22651 \u2212 \u221a 2NB 2k . We now prove the corollary. 38 \fPublished as a conference paper at ICLR 2022 Corollary 30 (Restatement of Corollary 3). For any function f using a shift-invariant kernel K with RKHS norm bounded by L, or f(x) = P i \u03b1iK(zi, x) for some data points zi and ||\u03b1||2 \u2264L. If 3 < k \u2264D/16 and k is odd, then there exists D \u2208F\u039e such that f have hinge-loss at least po(1 \u2212poly(d,L) 2k ) \u2212 1 poly(d,L). Proof. By Claim 1 in Rahimi & Recht (2008), for any \u03bd > 0, there exists N = poly(d, 1/\u03bd) Fourier features \u03a8j that can approximate the shift-invariant kernel up to error \u03bd. For any \u03f5 > 0, consider P i \u03b1i\u27e8\u03a8(zi), \u03a6(x)\u27e9= \u27e8P i \u03b1i\u03a8(zi), \u03a8(x)\u27e9. 
Let w = P i \u03b1i\u03a8(zi) and let \u03bd = O( \u03f5 L), then \u27e8\u03a8(x), w\u27e9approximates f(x) upto error \u03f5 and N = poly(d, L, 1/\u03f5) and the norm of w bounded by B = poly(d, L, 1/\u03f5). The reasoning is the same for f in the RKHS form, replacing sum with integral. By Theorem 2, \u27e8\u03a8(x), w\u27e9has hinge-loss at least p0(1 \u2212 \u221a 2NB 2k ). Thus, the function f has loss at least p0(1 \u2212poly(d,L,1/\u03f5) 2k ) \u2212\u03f5. Choose \u03f5 = 1 poly(d,L), we get the bound. D LOWER BOUND FOR LEARNING WITHOUT INPUT STRUCTURE First recall the Statistical Query model (Kearns, 1998). In this model, the learning algorithm can only receive information about the data through statistical queries. A statistical query is speci\ufb01ed by some property predicate Q of labeled instances, and a tolerance parameter \u03c4 \u2208[0, 1]. When the algorithm asks a statistical query (Q, \u03c4), it receives a response \u02c6 PQ \u2208[PQ \u2212\u03c4, PQ + \u03c4], where PQ = Pr[Q(x, y) is true]. Q is also required to be polynomially computable, i.e., for any (x, y) Q(x, y) can be computed in polynomial time. Notice that a statistical query can be simulated by empirical average of a large random sample of data of size roughly O(1/\u03c4 2) to assure the tolerance \u03c4 with high probability. Blum et al. (1994) introduces the notion of Statistical Query dimension, which is convenient for our purpose. De\ufb01nition 31 (De\ufb01nition 2 in Blum et al. (1994)). For concept class C and distribution D, the statistical query dimension SQ-DIM(C, D) is the largest number d such that C contains d concepts c1, . . . , cd that are nearly pairwise uncorrelated: speci\ufb01cally, for all i \u0338= j, | Pr x\u223cD[ci(x) = cj(x)] \u2212Pr x\u223cD[ci(x) \u0338= cj(x)]| \u22641/d3. (289) Theorem 32 (Theorem 12 in Blum et al. (1994)). In order to learn C to error less than 1/2 \u22121/d3 in the Statistical Query model, where d = SQ-DIM(C, D), either the number of queries or 1/\u03c4 must be at least 1 2d1/3. We now use the above tools to prove our lower bound. Theorem 33 (Restatement of Theorem 4). For any algorithm in the Statistical Query model that can learn over F\u039e0 to error less than 1 2 \u2212 1 ( D k) 3 , either the number of queries or 1/\u03c4 must be at least 1 2 \u0000D k \u00011/3. Proof of Theorem 4. Consider the following concept class and marginal distribution: \u2022 Let D be the distribution over \u02dc x, given by \u02dc x = M \u02dc \u03c6 and \u02dc \u03c6j are i.i.d. with Pr[\u02dc \u03c6j = 0] = Pr[\u02dc \u03c6j = 1] = 1/2. \u2022 Let C be the class of functions y = gA(\u02dc \u03c6) = I[P j(1 \u2212\u02dc \u03c6j) is odd] for different A \u2286[D]. The distributions over (\u02dc x, y) induced by (C, D) are a subset of F\u039e0. It is then suf\ufb01cient to show that SQ-DIM(C, D) \u2265 \u0000D k \u0001 . It is easy to see that C are essentially the sparse parity functions: if zj = 2\u02dc \u03c6j \u22121, then gA(\u02dc \u03c6) = Q j\u2208A zj. This then implies that the gA\u2019s are uncorrelated, so SQ-DIM(C, D) \u2265 \u0000D k \u0001 . 39 \fPublished as a conference paper at ICLR 2022 E COMPLETE EXPERIMENTAL RESULTS Our experiments mainly focus on feature learning and the effect of the input structure. We \ufb01rst perform simulations on our learning problems to (1) verify our main theorems on the bene\ufb01t of feature learning and the effect of input structure (2) verify our analysis of feature learning in networks. 
We then check if our insights carry over to real data: (3) whether similar feature learning is presented in real network/data; (4) whether damaging the input structure lowers the performance. The results are consistent with our analysis and provide positive support for the theory. The experiments were ran 5 times with different random seeds, and the average results (accuracy) are reported. The standard deviations of the results are smaller than 0.5% and thus we do not present them for clarity. The hardware speci\ufb01cations are 4 Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz, 16 GB RAM, and one NVIDIA GPU GTX1080. E.1 SIMULATION We train a two-layer network following our learning process. We use two \ufb01xed feature methods: the NTK (Fang et al., 2021) and random feature (RF) methods based on the same network and random initialization as the network learning. More precisely, in the NTK method, we randomly initialize the network and take its NTK and learn a classi\ufb01er on it. In the RF method, we freeze the \ufb01rst layer of the network, and train the second layer (on the random features given by the frozen neurons). The training step number is the same as that in network learning. We also test these three methods on the data distribution with input structure removed (i.e., F\u039e0 in Theorem 4). For comparison, we take the representation of our two-layer network at step one/step two, named One Step/Two Step (\ufb01x the weight of the 1st layer after the \ufb01rst step/second step to train the weight of the second layer), and train the best classi\ufb01ers on top of them. Recall that our analysis is on the directions of the weights without considering their scaling, and thus it is important to choose cosine similarity rather than the typical \u21132 distance. Thus, we use metric Cos Similarity max{i\u2208[2m]} cos(wi, P j\u2208A Mj) in our tables, and use Multidimensional Scaling to plot the weights distribution. The simulation dataset size is 50000. During training, the batch size is 1000, while for the \ufb01rst two steps we use the approximate full gradient (batch size is 50000). Each step is corresponding to one weights update. E.1.1 PARITY LABELING Setting. We generate data according to the parity function data distributions used in our proof of the lower bound for \ufb01xed features (Theorem 2), with d = 500, D = 100, k = 5, po = 1/2, with a randomly sampled A. More precisely, we consider D de\ufb01ned as follows. \u2022 Let P = {i \u2208[k] : i is odd}. That is, if there are odd numbers of 1\u2019s in \u02dc \u03c6A, then y = +1. \u2022 Let D(0) \u02dc \u03c6 be a distribution where all entries \u02dc \u03c6j are i.i.d. with Pr[\u02dc \u03c6j = 0] = Pr[\u02dc \u03c6j = 1] = 1/2. Let D(0) be the distribution over (\u02dc x, y) induced by D(0) \u02dc \u03c6 and the above P. \u2022 Let D(1) \u02dc \u03c6 be a distribution where all entries \u02dc \u03c6j for j \u0338\u2208A are i.i.d. with Pr[\u02dc \u03c6j = 1] = po/(2 \u22122po), while Pr[\u02dc \u03c6A = (0, 0, . . . , 0)] = Pr[\u02dc \u03c6A = (1, 1, . . . , 1)] = 1/2. Let D(1) be the distribution over (\u02dc x, y) induced by D(1) \u02dc \u03c6 and the above P. \u2022 Let Dmix A = poD(0) + (1 \u2212po)D(1). The network and the training follow Section 3, where the network size is m = 300 and the training time T = 600 steps. Veri\ufb01cation of the Main Results. Figure 5 shows that the results are consistent with our analysis. 
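For concreteness, here is a minimal NumPy sketch of the parity-labeling data generation described above (the mixture Dmix = po D(0) + (1 - po) D(1) with x̃ = M φ̃ and y given by the parity of φ̃_A). The function name sample_dmix and the QR-based construction of a dictionary M with orthonormal columns are our own illustrative choices, not the code used for the reported experiments.

```python
import numpy as np

def sample_dmix(n, M, A, p_o=0.5, rng=None):
    """Sketch of Dmix = p_o * D(0) + (1 - p_o) * D(1) described above.
    M : (d, D) dictionary with orthonormal columns; A : indices of the k relevant patterns.
    Label is +1 iff the number of 1's in phi_A is odd (P = odd counts)."""
    rng = np.random.default_rng(0) if rng is None else rng
    d, D = M.shape
    k = len(A)
    X, Y = [], []
    for _ in range(n):
        if rng.random() < p_o:
            # component D(0): all entries of phi are i.i.d. Bernoulli(1/2)
            phi = rng.integers(0, 2, size=D).astype(float)
        else:
            # component D(1): phi_A is all-ones or all-zeros w.p. 1/2 each,
            # irrelevant entries are i.i.d. Bernoulli(p_o / (2 - 2 p_o))
            phi = np.zeros(D)
            mask = np.ones(D, dtype=bool); mask[A] = False
            phi[mask] = (rng.random(D - k) < p_o / (2 - 2 * p_o)).astype(float)
            phi[A] = float(rng.random() < 0.5)
        y = 1 if int(phi[A].sum()) % 2 == 1 else -1
        X.append(M @ phi); Y.append(y)
    return np.array(X), np.array(Y)

# Example with the setting above: d = 500, D = 100, k = 5, p_o = 1/2.
rng = np.random.default_rng(0)
M = np.linalg.qr(rng.standard_normal((500, 100)))[0]   # orthonormal columns since d >= D
A = rng.choice(100, size=5, replace=False)
X, y = sample_dmix(1000, M, A)
```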
Network learning gets high test accuracy while the two \ufb01xed feature methods get signi\ufb01cantly lower accuracy. Furthermore, when the input structure is removed, all three methods get test accuracy similar to random guessing. 40 \fPublished as a conference paper at ICLR 2022 Figure 5: Test accuracy on simulated data under parity labeling with or without input structure. Figure 6: Visualization of the weights wi\u2019s after initialization/one gradient step/two gradient steps in network learning under parity labeling. The red star denotes the ground-truth P j\u2208A Mj; the orange star is \u2212P j\u2208A Mj. The red dots are the weights closest to the red star after two steps; the orange ones are for the orange star. Model Network NTK RF One Step Two Step Network w/o structure Train Acc (%) 100.0 84.0 74.7 51.3 100.0 100.0 Test Acc (%) 100.0 86.4 76.0 52.2 100.0 52.0 Cos Similarity 0.997 NA 0.114 0.848 0.997 0.253 Table 1: Parity labeling results in six methods. The cosine similarity is computed between the ground-truth P j\u2208A Mj and the closest neuron weight. Feature Learning in Networks. Figure 6 shows that the results are as predicted by our analysis. After the \ufb01rst gradient step, some weights begin to cluster around the ground-truth P j\u2208A Mj (or \u2212P j\u2208A Mj due to we have ai in the gradient update which can be positive or negative). After the second step the weights get improved and well-aligned with the ground-truth (with cosine similarity > 0.99). Table 1 shows the results for different methods. Recall that the Cos Similarity metric is max{i\u2208[2m]} cos(wi, P j\u2208A Mj), which reports the cosine value of the closest one. One Step refers to the method where we take the neurons after one gradient step, freeze their weights, and train a classi\ufb01er on top; similar for Two Step. One Step gets test accuracy about 52%, while Two Step gets accuracy about 100%. This demonstrates that while some effective feature emerge in the \ufb01rst step, they need to be improved in the second step for accurate prediction. NTK, random feature, One Step all failed, while Network and Two Step can achieve 100% test accuracy. Network w/o structure refers to training the network on data without the input structure. It over\ufb01ts the training dataset with 52% test accuracy. 41 \fPublished as a conference paper at ICLR 2022 E.1.2 INTERVAL LABELING Figure 7: Test accuracy on simulated data under interval labeling with or without input structure. Figure 8: Visualization of the weights wi\u2019s after initialization/one gradient step/two gradient steps in network learning under interval labeling. The red star denotes the ground-truth P j\u2208A Mj; the orange star is \u2212P j\u2208A Mj. The red dots are the weights closest to the red star after two steps; the orange ones are for the orange star. Model Network NTK RF One Step Two Step Network w/o structure Train Acc (%) 100.0 100.0 76.4 44.1 100.0 100.0 Test Acc (%) 100.0 100.0 73.2 41.0 100.0 100.0 Cos Similarity 1.00 NA 0.153 0.901 0.994 0.965 Table 2: Interval labeling results in six methods. Setting. We also tried interval function, where y = 1 if P i\u2208A \u02dc \u03c6i is in the range [t1, t2] with t1 = 20 and t2 = 30, otherwise y = \u22121. We use d = 500, D = 100, k = 30. The \u02dc \u03c6i\u2019s are independent, and Pr[\u02dc \u03c6i = 1] = 2/3 for any i \u2208A, and Pr[\u02dc \u03c6i = 1] = 1/2 otherwise. When the input structure is removed, we set Pr[\u02dc \u03c6i = 1] = 1/2 for all i\u2019s. 
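As a side note, the Cos Similarity rows reported in the tables of this appendix are max over i in [2m] of cos(w_i, sum_{j in A} M_j). A minimal sketch of this metric is given below; the function name is ours, and W is assumed to hold the neuron weights as rows.

```python
import numpy as np

def feature_cos_similarity(W, M, A):
    """Cosine similarity between each neuron weight (rows of W, shape (2m, d))
    and the ground-truth direction sum_{j in A} M_j; the tables report the max over neurons."""
    target = M[:, A].sum(axis=1)
    target = target / np.linalg.norm(target)
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    return float(np.max(Wn @ target))
```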
The network and training again follows Section 3 with a network size m = 100 and the training time T = 200 steps. Veri\ufb01cation of the Main Results. Figure 7 shows that network learning learns the fastest, NTK learns slower but reaches similar test accuracy, while random feature can only reach a decent but lower accuracy. This is because for such simpler labeling functions, \ufb01xed feature methods can still achieve good performance (note that the lower bound does not hold for such a case), while the performance depends on what \ufb01xed features to use. Furthermore, when the input structure is removed, the methods still get similar (or only slightly worse) performance as with input structure. This shows that when the labeling function is simple, the 42 \fPublished as a conference paper at ICLR 2022 help of the input structure for learning may not be needed. In the experiments on real data, we will show that when the input structure is changed, it indeed leads to lower performance which suggests that the labeling function in practice is typically more complicated than this interval labeling setting, and the help of the input structure is signi\ufb01cant for learning. Feature Learning in Networks. Figure 8 shows the phenomenon of feature learning similar to that in the parity labeling setting. Table 2 shows the test accuracy of six different methods. Random feature and One Step failed, while Network, NTK and Two Step succeed showing that interval labeling setting is a simpler case than parity labeling setting. E.2 MORE SIMULATION RESULT IN VARIOUS SETTINGS We show the robustness of our simulation results by studying the learning behaviors in a variety of settings including different sample size, input data dimension and class imbalance. We reuse the same setting as the simulation in the main text (details in E.1.1), vary different parameters, and report the accuracy, the cosine similarities between the learned weights, and the visualization of the neuron weights. E.2.1 VARYING INPUT DATA DIMENSION In the simulation experiments in the main text, the input data dimension d is 500. Here we change the input data dimension to 100 and 2000. All other con\ufb01gurations follow E.1.1. Veri\ufb01cation of the Main Results. Figure 9 shows that our claim is robust under different input data dimensions. The performance of network learning is superior over NTK and random feature approaches on inputs with structure, and on inputs without structure, all three methods fail. (a) d = 100 (b) d = 2000 Figure 9: Test accuracy on simulated data under different input data dimensions. d = 100 Network NTK RF One Step Two Step Network w/o structure Train Acc 100.0 83.1 78.9 53.0 100.0 100.0 Test Acc 100.0 81.5 78.3 51.1 100.0 51.0 Cos Similarity 1.000 NA 0.354 0.967 1.000 0.331 d = 2000 Network NTK RF One Step Two Step Network w/o structure Train Acc 100.0 75.6 80.0 50.22 100.0 100.0 Test Acc 100.0 75.4 77.0 50.01 100.0 52.5 Cos Similarity 0.998 NA 0.056 0.560 0.998 0.309 Table 3: Results of six methods for different input data dimensions. The cosine similarity is computed between the ground-truth P j\u2208A Mj and the closest neuron weight. Feature Learning in Networks. Figure 10 visualizes the neuron weights. It shows similar results to that in E.1.1: the weights gets updated to to the effective feature in the \ufb01rst two steps, forming clusters. 43 \fPublished as a conference paper at ICLR 2022 Figure 10: Visualization of the weights wi\u2019s in early steps under different input data dimensions. 
Upper row: input data dimension d = 100; lower row: d = 2000.

Table 3 shows some quantitative results. In particular, the average cosine similarities between the neuron weights and the effective features after two steps are close to 1, showing that the neurons match the effective features.

E.2.2 VARYING CLASS IMBALANCE RATIO

The experiments in the main text have 25000 training samples per class. Here we keep the total sample size at 50000 but use different class imbalance ratios, defined as the class -1 sample size divided by the total sample size.

Verification of the Main Results. Figure 11 shows that our claim is robust under different class imbalance ratios. The results are similar to those for balanced classes, except that NTK becomes less stable.

(a) Negative class ratio = 0.8 (b) Negative class ratio = 0.9
Figure 11: Test accuracy on simulated data under different negative class ratios.

Feature Learning in Networks. Figure 12 visualizes the neurons' weights. Again, the observation is similar to that for balanced classes. Table 4 shows some quantitative results, which are also similar to those for balanced classes.

Figure 12: Visualization of the weights wi's in early steps under different class imbalance ratios. Upper row: negative class ratio 0.8; lower row: 0.9.

ratio = 0.8      Network  NTK   RF     One Step  Two Step  Network w/o structure
Train Acc (%)    100.0    62.9  72.7   78.3      100.0     100.0
Test Acc (%)     100.0    82.7  70.4   75.7      100.0     61.7
Cos Similarity   0.999    NA    0.293  0.950     0.999     0.218

ratio = 0.9      Network  NTK   RF     One Step  Two Step  Network w/o structure
Train Acc (%)    100.0    84.0  73.6   92.3      100.0     100.0
Test Acc (%)     100.0    81.7  72.4   89.2      100.0     71.8
Cos Similarity   0.997    NA    0.296  0.956     0.997     0.286

Table 4: Results of the six methods under different negative class ratios.

E.2.3 VARYING SAMPLE SIZE

Here we change the sample size of 50000 in Section E.1.1 to 25000 and 10000. For sample size 25000, we observe similar results. For sample size 10000, we observe over-fitting (test accuracy much lower than train accuracy); therefore, for sample size 10000 we reduce the size of the network (i.e., the number of hidden neurons) from m = 300 to m = 50.

Verification of the Main Results. Figure 13 shows that our claim is robust under different sample sizes. In particular, network learning still outperforms the NTK and random feature approaches on structured inputs.

n = 25000        Network  NTK   RF     One Step  Two Step  Network w/o structure
Train Acc (%)    100.0    84.0  78.6   50.6      100.0     100.0
Test Acc (%)     100.0    84.1  74.7   50.0      100.0     50.2
Cos Similarity   0.997    NA    0.105  0.851     0.997     0.230

n = 10000        Network  NTK   RF     One Step  Two Step  Network w/o structure
Train Acc (%)    100.0    73.9  71.6   50.7      100.0     100.0
Test Acc (%)     100.0    75.0  74.3   50.3      100.0     52.2
Cos Similarity   0.995    NA    0.096  0.974     0.994     0.176

Table 5: Results of the six methods for different sample sizes.

Feature Learning in Networks. Figure 14 and Table 5 show that the feature learning phenomenon for the different sample sizes is similar to that in E.1.1.

(a) n = 25000 (b) n = 10000
Figure 13: Test accuracy on simulated data under different sample sizes n.
Figure 14: Visualization of the weights wi's in early steps under different sample sizes. Upper row: sample size 25000; lower row: 10000.

E.3 EXPERIMENTS ON MORE DATA GENERATION MODELS

In this section we consider some additional data distributions and run the simulation experiments, focusing in particular on the feature learning phenomenon.
Note that our analysis is for the setting where the input distributions have structure revealing some information about the labeling function. (More precisely, the labeling function is speci\ufb01ed by A and P, while the input distribution also depends on them.) We therefore consider two other data generation mechanisms where the labeling function also has connections to the input distributions. E.3.1 HIDDEN REPRESENTATION LABELING Here we consider the following data model: \ufb01rst uniformly at random select \u02dc \u03c6A from a set of binary vectors, and assign label 1 to some and -1 to others; sample irrelevant patterns \u02dc \u03c6\u2212A uniformly at random; generate the input x = M \u02dc \u03c6. We randomly select 50 binary vectors for each label, with d = 500, D = 250, k = 50, po = 1/2. This is a generalization of the distribution D(1), a component in the distribution of our simulation experiments (see the proof of Theorem 2 for details). Recall the de\ufb01nition of D(1): \u02dc \u03c6A is uniform on only two values [+1, . . . , +1] and [0, . . . , 0], and uniform over irrelevant patterns; the value 46 \fPublished as a conference paper at ICLR 2022 [+1, . . . , +1] corresponds to one class and [0, . . . , 0] correspond to another class. Our data model here generalizes D(1) to more than 2 values. The visualization is shown in Figure 15. We can observe similar feature learning phenomena, and the neuron weights are updated to form clusters. Figure 15: Visualization of the weights wi\u2019s after initialization/one gradient step/two gradient steps in network learning under hidden representation labeling. E.3.2 TWO-LAYER NETWORKS ON MIXTURE OF GAUSSIANS To further support our intuition of feature learning, we run experiments on mixture of Gaussians. Data. Let X = Rd be the input space, and Y = {\u00b11} be the label space. Suppose M \u2208Rd\u00d7k is an dictionary with k orthonormal columns. Let \u03b5i, i = 1, . . . , k be i.i.d symmetric Bernoulli random variables, and g \u223cN(0, \u03c32 r k dId). Then we generate the input x and class label y by: x = k X i=1 \u03b5iM:i + g, y = k Y i=1 \u03b5i (290) In this case, 2k Gaussian clusters will be created. The centers of the Gaussian clusters Pk i=1 \u00b1M:i lie on the vertices of a hyper cube, and the label of each Gaussian cluster is determined by the parity function on the vertices of the hyper cube. Note that the labeling function is roughly equivalent to a network: y = Pn i=1 aiReLU(\u27e8ci, x\u27e9) where ci\u2019s are the Gaussian centers, and ai \u221d1 for Gaussian components with label 1 and ai \u221d\u22121 for those with label -1. Setting. We then train a two-layer network with m = 800 hidden neurons on data sets generated as above with different chosen k\u2019s and d\u2019s. The training follows typical practice (not the hyperparameters in our analysis). In this setting, we expect the neural network to learn the effective features: the directions of Gaussian cluster centers. Result. We run experiments with different settings. The parameters are shown in Table 6. From Figure 16 we can see that some neurons learn the directions of Gaussian centers, and each Gaussian center is covered by some neurons, which matches our expectation. Parameters d k Number of Clusters \u03c3r Experiment 1 100 4 16 1 Experiment 2 25 4 16 0.7 Experiment 3 100 5 32 1 Table 6: Gaussian mixture setting. 
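As an illustration of the data model in Eq. (290), a minimal NumPy sketch of the generation process is given below. The function name sample_gaussian_mixture and the QR construction of M are our own choices for the sketch, not the exact experiment code.

```python
import numpy as np

def sample_gaussian_mixture(n, M, sigma_r, rng=None):
    """Sketch of Eq. (290): x = sum_i eps_i * M[:, i] + g, y = prod_i eps_i,
    with g ~ N(0, sigma_r^2 * (k / d) * I). M has k orthonormal columns."""
    rng = np.random.default_rng(0) if rng is None else rng
    d, k = M.shape
    eps = rng.choice([-1.0, 1.0], size=(n, k))            # i.i.d. symmetric Bernoulli signs
    g = rng.standard_normal((n, d)) * sigma_r * np.sqrt(k / d)
    X = eps @ M.T + g                                     # cluster center plus Gaussian noise
    y = np.prod(eps, axis=1)                              # parity of the signs labels the cluster
    return X, y

# Experiment 1 setting from Table 6: d = 100, k = 4, sigma_r = 1 (2^4 = 16 clusters).
rng = np.random.default_rng(0)
M = np.linalg.qr(rng.standard_normal((100, 4)))[0]
X, y = sample_gaussian_mixture(5000, M, sigma_r=1.0)
```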
(a) Experiment 1 with epoch 0/50/80 (b) Experiment 2 with epoch 0/30/50 (c) Experiment 3 with epoch 0/50/80
Figure 16: Visualization of the weights wi's (blue dots) and Gaussian centers (red for positively labeled clusters and orange for negatively labeled clusters).

E.4 REAL DATA: FEATURE LEARNING IN NETWORKS

We take the subsets of MNIST (Deng, 2012) with labels 0/1, CIFAR10 (Krizhevsky, 2012) with labels airplane/automobile, and SVHN (Netzer et al., 2011) with labels 0/1, and train a two-layer network with m = 50. We use the traditional weight initialization method (random Gaussian) and training method (SGD with momentum 0.95, without regularization) in this section, since our purpose is to investigate the training dynamics in practice. We then visualize the neurons' weights following the same method as in the simulation.

Figure 17, Figure 18 and Figure 19 show a similar feature learning phenomenon: effective features emerge after a few steps and then get improved to form clusters. This shows that the insights obtained on our learning problems are also applicable to real data.

Figure 17: Visualization of the neurons' weights in a two-layer network trained on the subset of MNIST data with labels 0/1. The weights gradually form two clusters.
Figure 18: Visualization of the neurons' weights in a two-layer network trained on the subset of CIFAR10 data with labels airplane/automobile. The weights gradually form two clusters.
Figure 19: Visualization of the neurons' weights in a two-layer network trained on the subset of SVHN data with labels 0/1. The weights gradually form four clusters.

              cos(v1, v̄)  cos(v2, v̄)  cos(v3, v̄)  cos(v1, v2)  cos(v1, v3)  cos(v2, v3)
ResNet(128)   0.9727       0.8655       0.6549       0.7454       0.5083       0.6533
ResNet(256)   0.8646       0.9665       0.9121       0.7087       0.6919       0.9135

Table 7: Cosine similarities between the gradients in the early steps. We choose the filter weight closest to the average weight of the green cluster at the end of training (in Figure 20 for ResNet(128) and Figure 21 for ResNet(256)). We record the gradients of the first 30 steps and divide them evenly and sequentially into three chunks of 10 steps. For the three chunks, we compute the average gradients v1, v2, v3, and we calculate their cosine similarities to their average v̄ = (v1 + v2 + v3)/3 as well as the pairwise similarities between them.

E.4.1 CNNS ON BINARY CIFAR10: FEATURE LEARNING IN NETWORKS

Setting. We use ResNet(m), a ResNet-18 convolutional neural network (He et al., 2016) with m filters in the first residual block, obtained by scaling the number of filters in each block proportionally from the standard ResNet-18 network, which is ResNet(64). We use ResNet(128) and ResNet(256) in this experiment. We train our models on binary CIFAR10 (Krizhevsky, 2012) with labels airplane/automobile for 20 epochs. The final test accuracy of ResNet(128) is 95.75% and that of ResNet(256) is 93.8%.

Results. Figure 20 visualizes the filters' weights of different residual blocks in ResNet(128) at Epochs 0, 3, and 20, and Figure 21 shows those of ResNet(256). They show that feature learning happens in the early stage, and that there are some clusters of weights (e.g., the red and green points). These colored points are selected at Epoch 20.
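The gradient measurement reported in Table 7 can be made concrete with the following sketch; the helper name is ours, and it assumes the per-step gradients of the selected filter have been recorded as rows of an array.

```python
import numpy as np

def gradient_chunk_similarities(grads, chunk=10):
    """Average the per-step gradients of one filter over three consecutive chunks of
    `chunk` steps (v1, v2, v3 for the first 30 steps), then return the cosine similarities
    to their mean v_bar and the pairwise similarities, as reported in Table 7."""
    cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    v = [np.mean(grads[i * chunk:(i + 1) * chunk], axis=0) for i in range(3)]
    v_bar = np.mean(v, axis=0)
    to_mean = [cos(vi, v_bar) for vi in v]
    pairwise = [cos(v[0], v[1]), cos(v[0], v[2]), cos(v[1], v[2])]
    return to_mean, pairwise
```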
We \ufb01rst visualize the weights at Epoch 20, and then hand pick the points that roughly form two clusters (i.e., the points in the same cluster are close to each other while those in different clusters are far away). We assign red and green colors to the two clusters at Epoch 20, and then assign these weights with the same color in Epoch 0 and 3. Finally, we compute the cosine similarities and show that the hand picked points are indeed roughly clusters in the high-dimension. In particular, we have the following three observations. 49 \fPublished as a conference paper at ICLR 2022 (a) Residual block 1: (Green: 0.6152, Red: 0.6973, Two Centers: -0.7245) (b) Residual block 2: (Green: 0.5528, Red: 0.6000, Two Centers: -0.7509) (c) Residual block 3: (Green: 0.4260, Red: 0.5006, Two Centers: -0.5099) (d) Residual block 4: (Green: 0.5584, Red: 0.5697, Two Centers: -0.9074) Figure 20: Visualization of the normalized convolution weights in all Residual block of ResNet(128) trained on the subset of CIFAR10 data with labels airplane/automobile. We show the weights after 0/3/20 epochs in network learning. The weights gradually form two clusters in all Residual blocks. We also report average cosine similarity between the green/red points in the clusters to their centers and cosine similarity between two cluster centers as (Green, Red, Two Centers). First, we can see that the \ufb01lter weights change signi\ufb01cantly during the early stage of the training, indicating feature learning happens in the early stage: the change between Epoch 0 and Epoch 3 is much more signi\ufb01cant than that between Epoch 3 and Epoch 20. Second, we can also verify that the feature learning is guided by the gradients: the gradients of a \ufb01lter in the early gradient steps point to similar directions (and thus the updated \ufb01lter will learn this direction). More precisely, for a selected \ufb01lter, we average the gradients every 10 gradient steps (so to reduce the variance due to mini-batch), and get v1, v2 and v3 for the \ufb01rst 30 steps and compute their cosine similarities and those to their average. Table 7 shows the results. In general the similarities 50 \fPublished as a conference paper at ICLR 2022 (a) Residual block 1: (Green: 0.7065, Red: 0.6551, Two Centers: -0.7875) (b) Residual block 2: (Green: 0.6599, Red: 0.5299, Two Centers: -0.9004) (c) Residual block 3: (Green: 0.5193, Red: 0.6267, Two Centers: -0.9258) (d) Residual block 4: (Green: 0.7386, Red: 0.6839, Two Centers: -0.9740) Figure 21: Visualization of the normalized convolution weights in all Residual block of ResNet(256) trained on the subset of CIFAR10 data with labels airplane/automobile. We show the weights after 0/3/20 epochs in network learning. The weights gradually form two clusters in all Residual blocks. We also report average cosine similarity between the green/red points in the clusters to their centers and cosine similarity between two cluster centers as (Green, Red, Two Centers). are high indicating they point to similar directions. (Note that a similarity of 0.6 is regarded as very signi\ufb01cant as the \ufb01lters are in a high dimension of 3 \u00d7 3 \u00d7 1024 = 9216). Third, we also observe some clustering effect of the \ufb01lter weights, though not as signi\ufb01cant as in our simulations. 
For example, for the red and green clusters in Figure 20(a) for the first residual block, the average cosine similarity of the filter weights in the red cluster is about 0.62 and that in the green cluster is about 0.7, while the cosine similarity between the two cluster centers is about -0.72. This shows significant similarity within each cluster and a clear difference between the clusters.

Note that the clustering is less significant than in our simulation experiments. This is because practical data have more patterns (i.e., effective feature directions) to be learned than our synthetic data, and the practical network is not as overparameterized as in our simulation. Filters are therefore likely to learn different patterns (or mixtures of them) without forming significant clusters. The results of ResNet(256) show more significant clustering than those of ResNet(128), which supports this explanation. On the other hand, we emphasize that the key insight of our analysis is that the gradient guides the learning of effective features in the early stage of training (rather than the clustering), which is verified as discussed above.

E.5 REAL DATA: THE EFFECT OF INPUT STRUCTURE

To study the influence of the input structure, we propose to keep the labeling function unchanged, vary the input distributions, and examine the resulting changes in the loss surface and the training dynamics. We first describe the detailed experimental methodology, which allows us to generate data with a similar labeling function but different input distributions. Then we perform experiments on the generated datasets to investigate how the learning changes with the input distributions, and present the experimental results. Finally, we also perform experiments to verify the intuition behind our experimental method.

E.5.1 EXPERIMENTAL METHODOLOGY

We consider the following experimental method. Given an original dataset L = {(x_i, y_i)}_{i=1}^n (e.g., CIFAR10) and an unlabeled dataset U = {x̃_i}_{i=1}^m from a proposed distribution PU (e.g., Gaussians), first extend the labeling function of L to U, giving synthetic labels ỹ_i to x̃_i. Then train a neural network on the union of L and the synthetic data LU = {(x̃_i, ỹ_i)}_{i=1}^m. By investigating the new training dynamics, in particular the difference between the original part L and the synthetic part LU, we can see the effect of the input structure. The original dataset should come from real-world data, since one of our goals is to compare real-world data with synthetic data and identify the properties of real-world data important for the success of learning.

A natural idea is to first learn a powerful network f(x) (called the teacher) on L to approximate the true labeling function, then apply f on U to generate synthetic labels, and finally train another network (called the student) on the synthetic data and the original data. However, we found that a naïve implementation of this idea fails miserably: the supports of L and U are typically different, and a powerful network learned over L can have entirely different behavior on U. Therefore, we need to control the size of the teacher f so that the labeling on U has complexity similar to that on L. For our purpose, we can define the complexity of the labeling on L as the minimum size of the teacher achieving an approximation error ε for a chosen ε, if the ground-truth data distribution of L is known.
However, given only limited data, we cannot faithfully estimate the needed size of the teacher, and need to take into account the variance introduced by the \ufb01nite data. Our key idea is to use the U-shaped curve of the bias-variance trade-off and select the size of the teacher at the minimum of the U-shaped curve. Since recent works (Belkin et al., 2019; Nakkiran et al., 2020) show that neural networks can have a double descent curve for the error v.s. model complexity, we thus plot the double descent curve, and \ufb01nd the minimum in the classical regime (corresponding to the traditional U-shape curve). Our method is designed based on the following two reasons. First, on the U-shaped curve, the complexity of the network is still roughly controlled by that of the number of parameters. The local minimum of the U-shaped curve is a good measurement of the complexity of the data. If the ground-truth is much more complicated than the teacher, then increasing the teacher\u2019s size leads to a signi\ufb01cant decrease in the approximation error (bias) compared to a small increase in the variance, that is, we will be on the left-hand side of the U-shaped. In contrast, on the right-hand side of the U-shaped, increasing the teacher\u2019s size leads to a small decrease in the bias compared to a signi\ufb01cant increase in the variance. That is, the complexity of the ground-truth is comparable to or lower than the teacher. So the local minimum approximates the complexity of the ground-truth labeling function. Second, the local minimum point is chosen to get the best approximation of the true labels. This helps to maintain the labeling from the real-world data and thus helps our investigation on the input, since too drastic change in the labeling can affect the training. 52 \fPublished as a conference paper at ICLR 2022 We note that the method is not perfect. First, the teacher at the local minimum of U-shape may not have very high accuracy, especially on more complicated data. To alleviate this, we also use the teacher to give synthetic labels y\u2032 i to xi in L, and train the student network on L\u2032 = {(xi, y\u2032 i)}n i=1. Though this introduces some differences from the original labels, it is acceptable for our purpose of studying the inputs. Furthermore, ensuring the consistency of the labels on the original input in L and U is important in our experiments. Second, the measurement is an approximation due to variance. Since only limited labeled data is available, it\u2019s important and necessary to calibrate the measurement w.r.t. the level of variance on the given dataset. Method Description. Algorithm 1 presents the details. For a \ufb01xed network architecture for the teacher f, it \ufb01rst varies the network size and plots the double descent curve. Then it selects the local minimum in the classic regime of U-shape and trains the teacher with the corresponding size. In practice, we observed that the teacher might have unbalanced probabilities for different classes on U if its training does not take into account U. Therefore, we propose the following heuristic regularization using x \u2208U, where \u03bb is a regularization weight, and f(x) is the probabilities over classes given by the teacher: R(x) = R1(x) + \u03bbR2(x) (291) R1(x) = X j \u0012P i f(x)j m ln P i f(x)j m \u0013 (292) R2(x) = \u22121 m X i X j (f(x)j ln(f(x)j)). 
(293) Here, R1(x) guarantees that each kind of label has the same average probability to be generated, and R2(x) pushes the probability away from uniform to avoid the case that the class probabilities for each data point converge to uniform. Algorithm 1 Learning the teacher network to generate synthetic labels for studying the effect of the input structure Input: teacher architecture f, labeled dataset L = {(xi, yi)}n i=1, unlabeled dataset U = {\u02dc xi}m i=1. Let i to be the size of f, fi to be the teacher of size i. for i = 1 to n do Train fi on L and let li denote the test loss end for Plot li v.s. i, identify the classical regime, and the size it corresponding to the local minimum in classical regime. Train fit on L with a regularizer R(x) on U de\ufb01ned in (291). Output: fit E.5.2 EXPERIMENTAL RESULTS Network models. Here we use one-hidden-layer fully-connected networks with m hidden units and quadratic activation functions. The network is denoted as FC(m). We use ResNet(m), which is a ResNet-18 convolutional neural network (He et al., 2016) with m \ufb01lters in the \ufb01rst residual block. It is obtained by scaling the number of \ufb01lters in each block proportionally from the standard ResNet-18 network which is ResNet(64). Datasets. We use MNIST (Deng, 2012), CIFAR10 (Krizhevsky, 2012) and SVHN (Netzer et al., 2011) as L, and use Gaussian and images in Tiny ImageNet (Le & Yang, 2015) as U. We generate the mixture data, where the fraction of the unlabeled data is denoted as \u03b1. Setup. We \ufb01rst use Algorithm 1 on the labeled data L and the unlabeled data U to get a synthetic labeling function (the teacher network) and then use it to give synthetic labels on a mixture of inputs from L and U. For MNIST, the teacher network learned is FC(9), where the number of the hidden units is determined by Algorithm 1. See empirical veri\ufb01cation in Figure 26. For CIFAR10 and SVHN, the teacher networks are ResNet(5) and ResNet(2), respectively, as determined by our method. The student network for MNIST is FC(9), and those for CIFAR10 and SVHN are ResNet(9) and ResNet(8), respectively. Finally, we train the student networks on these new datasets with perturbed input distributions. 53 \fPublished as a conference paper at ICLR 2022 (a) (b) (c) Figure 22: Test accuracy at different steps for an equal mixture \u03b1 = 0.5 of Gaussian inputs with data: (a) MNIST, (b) CIFAR10, (c) SVHN. (a) (b) Figure 23: Test accuracy at different steps for an equal mixture \u03b1 = 0.5 of Tiny ImageNet inputs with data: (a) CIFAR10, (b) SVHN. (a) \u03b1 = 0.25 (b) \u03b1 = 0.50 (c) \u03b1 = 0.75 Figure 24: Test accuracy at different steps for varying mixture \u03b1 of Gaussian inputs with CIFAR10. Figure 22 shows the results on an equal mixture of data and Gaussian. It presents the test accuracy of the student on the original data part, the Gaussian part, and the whole mixture. For example, for CIFAR10, the test accuracy on the whole mixture is lower than that of training on the original CIFAR10, showing that the input structure indeed has a signi\ufb01cant impact on the learning. Furthermore, the network learns well over the CIFAR10 part (with accuracy similar to that on the original data) but learns slower with worse accuracy on the Gaussian part. This suggests that the CIFAR10 input structure is still helping the network to learn effective features. 
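For completeness, the regularizer R = R1 + λR2 used when training the teachers above (Eqs. (291)-(293)) can be written compactly as below. This is a sketch operating on a precomputed matrix of teacher class probabilities over U; the function name is ours rather than the actual experiment code.

```python
import numpy as np

def teacher_regularizer(probs, lam=1.0, eps=1e-12):
    """Sketch of R = R1 + lam * R2 in Eqs. (291)-(293).

    probs : (m, C) array of teacher class probabilities f(x) on the unlabeled set U.
    R1 is the negative entropy of the average class distribution over U; minimizing it
    balances the synthetic labels. R2 is the mean per-example entropy; minimizing it
    pushes individual predictions away from uniform."""
    avg = probs.mean(axis=0)                                  # average probability per class over U
    r1 = float(np.sum(avg * np.log(avg + eps)))
    r2 = float(-np.mean(np.sum(probs * np.log(probs + eps), axis=1)))
    return r1 + lam * r2
```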
While the results on MNIST+Gaussian do not show a signi\ufb01cant trend (possibly because the tasks there are simpler), the results on SVHN+Gaussian show similar signi\ufb01cant trends as CIFAR10+Gaussian. Figure 24 shows the results when we vary the fraction of the Gaussian data \u03b1. We observe that the test accuracy curve on the original part and that on the synthetic part have roughly the same trend for different \u03b1 as before, further verifying our insights. 54 \fPublished as a conference paper at ICLR 2022 Figure 23 shows the results when mixed with Tiny ImageNet data instead of Gaussians. It shows a similar trend, while the performance on the Tiny ImageNet part is higher than that on the Gaussian part. This suggests that compared to Gaussians, the Tiny ImageNet data has helpful input structures, though not as helpful as that on the original data for learning the particular labeling. E.5.3 LARGER NETWORK ON MNIST FOR CHECKING THE EFFECT OF INPUT STRUCTURE Here we perform the experiment on MNIST as in E.5.2, but for a network with m = 50 hidden neurons rather than m = 9. Figure 25 shows similar results as those for m = 9: the learning on the MNIST input part is faster and better than that on the Gaussian input part. The separation between the two is actually more signi\ufb01cant than that for m = 9. This then also supports our insight about the effect of input structures. Figure 25: Test accuracy at different steps for an equal mixture \u03b1 = 0.5 of Gaussian inputs with MNIST, where m = 50. E.5.4 EMPIRICAL VERIFICATION OF OUR METHOD (a) Teacher\u2019s double descent curve (b) Student\u2019s curve when Teacher=FC(9) (c) Student\u2019s curve when Teacher=FC(50) (d) Student\u2019s curve when Teacher=FC(500) Figure 26: Double descent curves of the students trained on data with synthetic labels (Loss v.s. Parameter number). 55 \fPublished as a conference paper at ICLR 2022 We also perform experiments to verify the intuition behind our methodology, i.e., the method gives a synthetic labeling function with roughly the same complexity on the original inputs and the injected inputs. We \ufb01rst use our method on MNIST and samples (of the same size as MNIST) from a Gaussian to get the teacher FC(9); the double descent curve is in Figure 26(a). Then we train students on the Gaussian data with synthetic labels from the teacher, and plot the double descent curve for the students in Figure 26(b). The local minimums of the two U-shapes are roughly the same, matching our reasoning. Then we also train larger teachers and plot the double descent curve for students on Gaussian data. Figure 26(c) Teacher size 50. Figure 26(d) Teacher size 500. The local minimum of the U-shape becomes larger when the teacher gets larger, again matching our reasoning. 56 \fPublished as a conference paper at ICLR 2022 F PROVABLE GUARANTEES FOR NEURAL NETWORKS IN A MORE GENERAL SETTING This section provides the analysis in a more general setting. We \ufb01rst describe the learning problems, and then provide the proofs following similar intuitions as for the simpler settings in the main text. F.1 PROBLEM SETUP Let X = Rd be the input space, and Y = {\u00b11} be the label space. Suppose M \u2208Rd\u00d7D is a dictionary with D elements, where each element Mj can be regarded as a pattern. We assume quite general incoherent dictionary: (D) M is \u00b5-incoherent, i.e., the columns of M are unit vectors, and for any i \u0338= j, |\u27e8Mi, Mj\u27e9| \u2264 \u00b5/ \u221a d. 
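As a quick numerical sanity check of assumption (D), one can compute the coherence max_{i ≠ j} |⟨M_i, M_j⟩| of a candidate dictionary. The sketch below uses names of our own choosing; a random Gaussian dictionary with normalized columns typically has small coherence and thus satisfies (D) with a modest μ.

```python
import numpy as np

def coherence(M):
    """Return max_{i != j} |<M_i, M_j>| for a dictionary M after normalizing its columns;
    assumption (D) asks this quantity to be at most mu / sqrt(d)."""
    M = M / np.linalg.norm(M, axis=0, keepdims=True)
    G = np.abs(M.T @ M)
    np.fill_diagonal(G, 0.0)
    return float(G.max())

# Example: a random 500 x 100 Gaussian dictionary with normalized columns.
rng = np.random.default_rng(0)
M = rng.standard_normal((500, 100))
print(coherence(M) * np.sqrt(500))   # empirical mu such that coherence = mu / sqrt(d)
```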
Note that the setting in the main text corresponds to \u00b5 = 0. Let \u02dc \u03c6 \u2208{0, 1}D be a hidden vector that indicates the presence of each pattern, and D \u02dc \u03c6 a distribution for \u02dc \u03c6. Let A \u2286[D] be a subset of size k corresponding to the class relevant patterns. Let P \u2286[k]. We \ufb01rst sample \u02dc \u03c6 from D \u02dc \u03c6, and then generate the input \u02dc x and the class label y from \u02dc \u03c6, A, P by: \u02dc x = M \u02dc \u03c6 + \u03b6, y = \u001a+1, if P i\u2208A \u02dc \u03c6i \u2208P, \u22121, otherwise (294) where the Gaussian noise \u03b6 \u223cN(0, \u03c32 \u03b6Id\u00d7d) is independent from \u02dc \u03c6. Note that the setting in the main text corresponds to \u03c3\u03b6 = 0. We allow general D \u02dc \u03c6 with the following assumptions: (A1) The patterns in A are correlated with the labels: for any i \u2208A, for v \u2208{\u00b11} let \u03b3v = E[y \u02dc \u03c6i|y = v], then \u03b3 := (\u03b3+1 + \u03b3\u22121)/2 > 0. (A2) The patterns outside A are independent of the patterns in A. Note that we allow imbalanced classes. Let pmin := min(Pr[y = \u22121], Pr[y = +1]). If the classes are balanced, then the assumption (A1) implies the assumption (A1) in the main text, so the setting here is more general. (A2) is also more general, in particular, allowing dependence between irrelevant patterns and non-identical distributions for them. Let D(A, P, D \u02dc \u03c6) denote the distribution on (\u02dc x, y) corresponding to some A, P, and D \u02dc \u03c6. Given parameters \u039e = (d, D, k, \u03b3, po, \u00b5, \u03c3\u03b6), the family F\u039e of distributions for learning is the set of all D(A, P, D \u02dc \u03c6) with A \u2286[D], P \u2286[k], and D \u02dc \u03c6 satisfying the above assumptions. One special case is the mixture of two Gaussians. Example. Suppose M has one single column v, and y = +1 if \u02dc \u03c6 = 1 and y = \u22121 otherwise. Then the data distribution is simply a mixture of two Gaussians: \u02dc x \u223cv 2 + N(y v 2, \u03c32 \u03b6Id\u00d7d). F.1.1 NEURAL NETWORK LEARNING Again, we will normalize the data for learning: we \ufb01rst compute x = (\u02dc x \u2212E[\u02dc x])/\u02dc \u03c3 where \u02dc \u03c32 := Pd i=1(\u02dc xi \u2212E[\u02dc xi])2 = P j\u2208[D] Var(\u02dc \u03c6j) + d\u03c32 \u03b6 is the variance of the data, and then train on (x, y). This is equivalent to setting \u03c6 = (\u02dc \u03c6 \u2212E[\u02dc \u03c6])/\u02dc \u03c3 and generating x = M\u03c6 + \u03b6/\u03c3\u03b6. For (\u02dc x, y) from D and the normalized (x, y), we will simply say (x, y) \u223cD. The learning will be the same as that in the main text, except the following. We will use a small \u03c32 w = \u02dc \u03c32/poly(Dm). And we will use a weighted loss to handle the imbalanced classes in the \ufb01rst two steps for feature learning, and then use the unweighted loss in the remaining steps. Formally, the weighted loss is: L\u03b1 D(g; \u03c3\u03be) = E(x,y)[\u03b1y\u2113(y, g(x; \u03be))], (295) where the class weights \u03b1v = 1 2 Pr[y=v] for v \u2208{\u00b11}. 57 \fPublished as a conference paper at ICLR 2022 F.2 MAIN RESULT In this setting, we have the following theorem: Theorem 34. 
Set \u03b7(1) = \u03b32pmin\u02dc \u03c3 km3 , \u03bb(1) a = 0, \u03bb(1) w = 1/(2\u03b7(1)), \u03c3(1) \u03be = 1/k3/2, (296) \u03b7(2) = 1, \u03bb(2) a = \u03bb(2) w = 1/(2\u03b7(2)), \u03c3(2) \u03be = 1/k3/2, (297) \u03b7(t) = \u03b7 = k2 Tm1/3 , \u03bb(t) a = \u03bb(t) w = \u03bb \u2264 k3 \u02dc \u03c3m1/3 , \u03c3(t) \u03be = 0, for 2 < t \u2264T. (298) For any \u03b4 \u2208(0, O(1/k3)), if \u00b5 \u2264O( \u221a d/D), \u03c3\u03b6 \u2264O(min{1/\u02dc \u03c3, \u02dc \u03c3/ \u221a d}), k = \u2126 \u0010 log2 \u0010 Dmd \u03b4\u03b3pmin \u0011\u0011 , m \u2265max{\u2126(k4), D, d}, then we have for any D \u2208F\u039e, with probability at least 1 \u2212\u03b4, there exists t \u2208[T] such that Pr[sign(g(t)(x)) \u0338= y] \u2264LD(g(t)) = O \u0012 k8 m2/3 + k3T m2 + k2m2/3 T \u0013 . (299) Consequently, for any \u03f5 \u2208(0, 1), if T = m4/3, and m \u2265max{\u2126(k12/\u03f53/2), D}, then Pr[sign(g(t)(x)) \u0338= y] \u2264LD(g(t)) \u2264\u03f5. (300) The rest of the section is devoted to the proof of this theorem. F.3 NOTATIONS Recall some notations that we will use throughout the analysis. For a vector v and an index set I, let vI denote the vector containing the entries of v indexed by I, and v\u2212I denote the vector containing the entries of v with indices outside I. Let \u03c1 := M \u22a4M. Then we have \u03c1jj = 1 for any j, and |\u03c1j\u2113| \u2264\u00b5/ \u221a d for any j \u0338= \u2113. By initialization, w(0) i for i \u2208[m] are i.i.d. copies of the same random variable w(0) \u223cN(0, \u03c32 wId\u00d7d); similar for a(0) and b(0). Let \u03c32 \u03c6j := poj(1 \u2212poj)/\u02dc \u03c32 denote the variance of \u03c6\u2113for \u2113\u0338\u2208A, where poj = Pr[\u02dc \u03c6j = 1]. Let po be the value such that with probability 1 \u2212exp(\u2212\u2126(k)), P j\u0338\u2208A \u02dc \u03c6j \u2264 po(D \u2212k) for some po \u2208[0, 1]. That is, po is an upper bound on the density of \u02dc \u03c6j with high probability. Let q\u2113:= \u27e8w(0), M\u2113\u27e9. Similarly, de\ufb01ne q(t) i,\u2113:= \u27e8w(t) i , M\u2113\u27e9. We also de\ufb01ne the following sets to denote typical initialization. For a \ufb01xed \u03b4 \u2208(0, 1), de\ufb01ne Gw(\u03b4) := ( w \u2208Rd : q\u2113= \u27e8w, M\u2113\u27e9,\u03c32 wd 2 \u2264\u2225w(0)\u22252 2 \u22643\u03c32 wd 2 , (301) \u03c32 w(D \u2212k) 2 \u2264 X \u2113\u0338\u2208A q2 \u2113\u22643\u03c32 w(D \u2212k) 2 , max \u2113 |q\u2113| \u2264\u03c3w p 2 log(Dm/\u03b4) ) , (302) Ga(\u03b4) := {a \u2208R : |a| \u2264\u03c3a p 2 log(m/\u03b4)}. (303) Gb(\u03b4) := {b \u2208R : |b| \u2264\u03c3b p 2 log(m/\u03b4)}. (304) 58 \fPublished as a conference paper at ICLR 2022 F.4 EXISTENCE OF A GOOD NETWORK We \ufb01rst show that there exists a network that can \ufb01t the data distribution. Lemma 35. Suppose k\u00b5 \u221a d poD \u02dc \u03c3 \u2264 1 16. For any D \u2208F\u039e, there exists a network g\u2217(x) = Pn i=1 a\u2217 i \u03c3(\u27e8w\u2217 i , x\u27e9+ b\u2217 i ) which satis\ufb01es Pr (x,y)\u223cD[yg\u2217(x) \u22641] \u2264exp(\u2212\u2126(k)) + exp \u2212\u2126 1 \u03c32 \u03b6(k + k2\u00b5/ \u221a d) !! . Furthermore, the number of neurons n = 3(k + 1), |a\u2217 i | \u226464k, 1/(64k) \u2264|b\u2217 i | \u22641/4, w\u2217 i = \u02dc \u03c3 P j\u2208A Mj/(8k), and |\u27e8w\u2217 i , x\u27e9+ b\u2217 i | \u22641 for any i \u2208[n] and (x, y) \u223cD. Consequently, if furthermore we have k\u00b5/ \u221a d < 1 and \u03c3\u03b6 < 1/k, then Pr (x,y)\u223cD[yg\u2217(x) \u22641] \u2264exp(\u2212\u2126(k)). 
Proof of Lemma 35. Let w = \u02dc \u03c3 P j\u2208A Mj and let u = P j\u2208A E[\u02dc \u03c6j]. We have \u27e8w, x\u27e9= \u02dc \u03c3 X j\u2208A \u27e8Mj, M\u03c6\u27e9+ \u27e8w, \u03b6/\u02dc \u03c3\u27e9 (305) = X j\u2208A \u03c6j + X j\u2208A,\u2113\u0338=j \u03c1j\u2113\u03c6\u2113+ \u27e8w, \u03b6/\u02dc \u03c3\u27e9 (306) = X j\u2208A \u02dc \u03c6j \u2212u + X j\u2208A,\u2113\u0338=j \u03c1j\u2113\u03c6\u2113+ \u27e8w, \u03b6/\u02dc \u03c3\u27e9 | {z } :=\u03f5x . (307) With probability \u22651 \u2212exp(\u2212\u2126(k)), among all j \u0338\u2208A, we have that at most po(D \u2212k) of \u03c6j are (1 \u2212po)/\u02dc \u03c3, while the others are \u2212po/\u02dc \u03c3, and thus \f \f \f \f \f \f X j\u2208A,\u2113\u0338=j \u03c1j\u2113\u03c6\u2113 \f \f \f \f \f \f \u2264k\u00b5 \u221a d poD \u02dc \u03c3 \u22641 16. (308) Furthermore, \u27e8w, \u03b6\u27e9\u223cN(0, \u03c32 \u03b6\u2225w\u22252 2) and \u2225w\u22252 2 \u2264\u02dc \u03c32(k + k2\u00b5/ \u221a d), we have Pr[|\u27e8w, \u03b6/\u02dc \u03c3\u27e9| \u22641/16] \u22651 \u2212exp \u2212\u0398 1 \u03c32 \u03b6\u2225w\u22252 2/\u02dc \u03c32 !! (309) \u22651 \u2212exp \u2212\u0398 1 \u03c32 \u03b6(k + k2\u00b5/ \u221a d) !! . (310) For good data points with \u03c6 and \u03b6 satisfying the above, we have |\u03f5x| \u22641/8. By Lemma 7, g\u2217 1(x) := X p\u2208P \u03b4p\u2212\u00b5,4,1/2(\u27e8w, x\u27e9) \u2212 X p\u0338\u2208P,0\u2264p\u2264k \u03b4p\u2212\u00b5,4,1/2(\u27e8w, x\u27e9) (311) = X p\u2208P \u03b4p,4,1/2(\u27e8w, x\u27e9+ u) \u2212 X p\u0338\u2208P,0\u2264p\u2264k \u03b4p,4,1/2(\u27e8w, x\u27e9+ u) (312) = X p\u2208P \u03b4p,4,1/2 \uf8eb \uf8edX j\u2208A \u02dc \u03c6j + \u03f5x \uf8f6 \uf8f8\u2212 X p\u0338\u2208P,0\u2264p\u2264k \u03b4p,4,1/2 \uf8eb \uf8edX j\u2208A \u02dc \u03c6j + \u03f5x \uf8f6 \uf8f8. (313) 59 \fPublished as a conference paper at ICLR 2022 Then for good data points, we have yg\u2217 1(x) \u22651. Similarly, g\u2217 2(x) := X p\u2208P \u03b4p\u2212\u00b5+1/4,8,1/2(\u27e8w, x\u27e9) \u2212 X p\u0338\u2208P,0\u2264p\u2264k \u03b4p\u2212\u00b5+1/4,8,1/2(\u27e8w, x\u27e9) (314) = X p\u2208P \u03b4p+1/4,8,1/2(\u27e8w, x\u27e9+ u) \u2212 X p\u0338\u2208P,0\u2264p\u2264k \u03b4p+1/4,8,1/2(\u27e8w, x\u27e9+ u) (315) = X p\u2208P \u03b4p+1/4,8,1/2 \uf8eb \uf8edX j\u2208A \u02dc \u03c6j + \u03f5x \uf8f6 \uf8f8\u2212 X p\u0338\u2208P,0\u2264p\u2264k \u03b4p+1/4,8,1/2 \uf8eb \uf8edX j\u2208A \u02dc \u03c6j + \u03f5x \uf8f6 \uf8f8. (316) Then for good data points, we have yg\u2217 2(x) \u22651. Note that the bias terms in g\u2217 1 and g\u2217 2 have distance at least 1/4, then at least one of them satis\ufb01es that all its bias terms have absolute value \u22651/8. Pick that one and denote it as g(x) = Pn i=1 ai\u03c3r(\u27e8wi, x\u27e9+ bi). By the positive homogeneity of \u03c3r, we have g(x) = n X i=1 8kai\u03c3r(\u27e8wi, x\u27e9/(8k) + bi/(8k)). (317) Since for any good data points, |\u27e8wi, x\u27e9/(8k) + bi/(8k)| \u22641, then g(x) = n X i=1 8kai\u03c3(\u27e8wi, x\u27e9/(8k) + bi/(8k)) (318) where \u03c3 is the truncated ReLU. Now we can set a\u2217 i = 8kai, w\u2217 i = wi/(8k), b\u2217 i = bi/(8k), to get our \ufb01nal g\u2217. F.5 INITIALIZATION We \ufb01rst show that with high probability, the initial weights are in typical positions. Lemma 36. Suppose D\u00b5/ \u221a d \u22641/16. 
For any \u03b4 \u2208(0, 1), with probability at least 1 \u2212\u03b4 \u2212 2 exp (\u2212\u0398(D \u2212k)) over w(0), \u03c32 wd/2 \u2264\u2225w(0)\u22252 2 \u22643\u03c32 wd/2, \u03c32 w(D \u2212k)/2 \u2264 X \u2113\u0338\u2208A q2 \u2113\u22643\u03c32 w(D \u2212k)/2, max \u2113 |q\u2113| \u2264\u03c3w p 2 log(D/\u03b4). With probability at least 1 \u2212\u03b4 over b(0), |b(0)| \u2264\u03c3b p 2 log(1/\u03b4). With probability at least 1 \u2212\u03b4 over a(0), |a(0)| \u2264\u03c3a p 2 log(1/\u03b4). Proof of Lemma 36. The bound on \u2225w(0)\u22252 2 follows from the property of Gaussians. Note that q = M \u22a4w(0) \u223cN(0, \u03c32 w\u03c1) for the matrix \u03c1 = M \u22a4M. We have with probability \u22651\u2212\u03b4/2, max\u2113|q\u2113| \u2264 q 2\u03c32 w log D \u03b4 . For any subset S \u2286[D], let \u03c1S denote the submatrix of \u03c1 containing the rows and columns indexed by S. Then qS = M \u22a4w(0) \u223cN(0, \u03c32 w\u03c1S). By diagonalizing \u03c1S and then applying Bernstein\u2019s inequality, we have with probability \u22651 \u22122 exp (\u2212\u0398(|S|/\u2225\u03c1\u22252), \u2225qS\u22252 2 \u2208 \u0010 (\u2225\u03c1S\u22252 F \u2212|S| 4 )\u03c32 w, (\u2225\u03c1S\u22252 F + |S| 4 )\u03c32 w \u0011 . By Gershgorin circle theorem, we have \u2225\u03c1\u22252 \u22641 + (|S| \u22121)\u00b5/ \u221a d \u226417/16. 60 \fPublished as a conference paper at ICLR 2022 Similarly, we have 3 4|S| \u2264 \u001215 16 \u00132 |S| \u2264\u2225\u03c1S\u22252 F \u2264 \u001217 16 \u00132 |S| \u22645 4|S|. The bounds on q then follow. The bounds on b(0) and a(0) follow from the property of Gaussians. Lemma 37. Suppose D\u00b5/ \u221a d \u22641/16. We have: \u2022 With probability \u22651\u2212\u03b4\u22122m exp(\u2212\u0398(D\u2212k)) over w(0) i \u2019s, for all i \u2208[2m], w(0) i \u2208Gw(\u03b4). \u2022 With probability \u22651 \u2212\u03b4 over b(0) i \u2019s, for all i \u2208[2m], b(0) i \u2208Gb(\u03b4). \u2022 With probability \u22651 \u2212\u03b4 over a(0) i \u2019s, for all i \u2208[2m], a(0) i \u2208Ga(\u03b4). Proof of Lemma 37. This follows from Lemma 36 by union bound. F.6 SOME AUXILIARY LEMMAS The expression of the gradients will be used frequently. Lemma 38. \u2202 \u2202wi L\u03b1 D(g; \u03c3\u03be) = \u2212aiE(x,y)\u223cD {\u03b1yyI[yg(x; \u03be) \u22641]E\u03beiI[\u27e8wi, x\u27e9+ bi + \u03bei \u2208(0, 1)]x} , (319) \u2202 \u2202bi L\u03b1 D(g; \u03c3\u03be) = \u2212aiE(x,y)\u223cD {\u03b1yyI[yg(x; \u03be) \u22641]E\u03beiI[\u27e8wi, x\u27e9+ bi \u2208(0, 1)]} , (320) \u2202 \u2202ai L\u03b1 D(g; \u03c3\u03be) = \u2212E(x,y)\u223cD {\u03b1yyI[yg(x; \u03be) \u22641]E\u03bei\u03c3(\u27e8wi, x\u27e9+ bi + \u03bei)} . (321) Proof of Lemma 38. It follows from straightforward calculation. We also have the following auxiliary lemma for later calculations. Lemma 39. E\u03c6A {\u03b1yy} = 0, (322) E\u03c6A {|\u03b1yy|} = 1, (323) E\u03c6j {|\u03c6j|} = 2\u03c32 \u03c6j\u02dc \u03c3, for j \u0338\u2208A, (324) E\u03c6A {\u03b1yy\u03c6j} = \u03b3 \u02dc \u03c3 , for j \u2208A, (325) E\u03c6A {|\u03b1yy\u03c6j|} \u22641 \u02dc \u03c3 , for all j \u2208[D]. (326) 61 \fPublished as a conference paper at ICLR 2022 Proof of Lemma 39. E\u03c6A {\u03b1yy} = X v\u2208{\u00b11} E\u03c6A {\u03b1yy|y = v} Pr[y = v] (327) = 1 2 X v\u2208{\u00b11} E\u03c6A {y|y = v} (328) = 0. (329) E\u03c6A {|\u03b1yy|} = X v\u2208{\u00b11} E\u03c6A {|\u03b1yy| |y = v} Pr[y = v] (330) = 1 2 X v\u2208{\u00b11} E\u03c6A {|y| |y = v} (331) = 1. 
(332) E\u03c6j {|\u03c6j|} = | \u2212poj|(1 \u2212poj) + |1 \u2212poj|poj \u02dc \u03c3 = 2\u03c32 \u03c6j\u02dc \u03c3. (333) E\u03c6A {\u03b1yy\u03c6j} = X v\u2208{\u00b11} E\u03c6A {\u03b1yy\u03c6j| y = v} Pr[y = v] (334) = 1 2 X v\u2208{\u00b11} E\u03c6A {y\u03c6j| y = v} (335) = 1 2 X v\u2208{\u00b11} E\u03c6A ( y \u02dc \u03c6j \u2212E[\u02dc \u03c6j] \u02dc \u03c3 \f \f \f \f \fy = v ) (336) = 1 2\u02dc \u03c3 (\u03b3+1 + \u03b3\u22121) = \u03b3 \u02dc \u03c3 . (337) E\u03c6A {|\u03b1yy\u03c6j|} = X v\u2208{\u00b11} E\u03c6A {|\u03b1vy\u03c6j| |y = v} Pr[y = v] (338) \u22641 2 X v\u2208{\u00b11} E\u03c6A {|y\u03c6j| |y = v} (339) \u22641 2 X v\u2208{\u00b11} E\u03c6A {|y\u03c6j| |y = v} (340) \u22641 \u02dc \u03c3 . (341) F.7 FEATURE EMERGENCE: FIRST GRADIENT STEP We will show that w.h.p. over the initialization, after the \ufb01rst gradient step, there are neurons that represent good features. We begin with analyzing the gradients. Lemma 40. Fix \u03b4 \u2208(0, 1) and suppose w(0) i \u2208Gw(\u03b4), b(0) i \u2208Gb(\u03b4) for all i \u2208[2m]. Let \u03f5e := D\u03c3w p 2 log(D/\u03b4) \u02dc \u03c32\u03c3(1) \u03be + \u221a d\u03c3\u03be\u03c3w p 2 log(D/\u03b4) \u02dc \u03c3\u03c3(1) \u03be , \u03f5\u03bd := \u03f5e. If \u03c32 \u03b6\u03c32 wd/\u02dc \u03c32 = O(1/k), po = \u2126(k2/D), k = \u2126(log2(Dmd/\u03b4)), and \u03c3(1) \u03be = O(1/k), then \u2202 \u2202wi L\u03b1 D(g(0); \u03c3(1) \u03be ) = \u2212a(0) i \uf8eb \uf8ed D X j=1 MjTj + \u03bd \uf8f6 \uf8f8 (342) where Tj satis\ufb01es: 62 \fPublished as a conference paper at ICLR 2022 \u2022 if j \u2208A, then |Tj \u2212\u03b2\u03b3/\u02dc \u03c3| \u2264O(\u03f5e/\u02dc \u03c3), where \u03b2 \u2208[\u2126(1), 1] and depends only on w(0) i , b(0) i ; \u2022 if j \u0338\u2208A, then |Tj| \u2264O(\u03c32 \u03c6j\u03f5e\u02dc \u03c3); \u2022 |\u03bdj| \u2264O \u0012 \u03c3\u03b6\u221a log(k) \u02dc \u03c3 \u03f5\u03bd \u0013 + \u03c3\u03b6d \u02dc \u03c3 e\u2212\u0398(k). Proof of Lemma 40. Consider one neuron index i and omit the subscript i in the parameters. Since the unbiased initialization leads to g(0)(x; \u03be(1)) = 0, we have \u2202 \u2202wL\u03b1 D(g(0); \u03c3(1) \u03be ) (343) = \u2212a(0)E(x,y)\u223cD n \u03b1yyI[yg(0)(x; \u03be(1)) \u22641]E\u03be(1)I[\u27e8w(0), x\u27e9+ b(0) + \u03be(1) \u2208(0, 1)]x o (344) = \u2212a(0)E(x,y)\u223cD,\u03be(1) n \u03b1yyI[\u27e8w(0), x\u27e9+ b(0) + \u03be(1) \u2208(0, 1)]x o (345) = \u2212a(0) D X j=1 Mj E(x,y)\u223cD,\u03be(1) n \u03b1yy\u03c6jI[\u27e8w(0), x\u27e9+ b(0) + \u03be(1) \u2208(0, 1)] o | {z } :=Tj (346) \u2212a(0) E(x,y)\u223cD,\u03be(1) \u001a\u03b1yy\u03b6 \u02dc \u03c3 I[\u27e8w(0), x\u27e9+ b(0) + \u03be(1) \u2208(0, 1)] \u001b | {z } :=\u03bd (347) First, consider j \u2208A. Tj = E(x,y)\u223cD,\u03be(1) n \u03b1yy\u03c6jI[\u27e8w(0), x\u27e9+ b(0) + \u03be(1) \u2208(0, 1)] o (348) = E\u03c6A,\u03b6 \u001a \u03b1yy\u03c6j Pr \u03c6\u2212A,\u03be(1) h \u27e8\u03c6, q\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1) i\u001b . (349) where \u03b9 := \u27e8w(0), \u03b6/\u02dc \u03c3\u27e9. Let Ia := Pr \u03be(1) h \u27e8\u03c6, q\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1) i , (350) I\u2032 a := Pr \u03be(1) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1) i . 
(351) Note that |\u27e8\u03c6A, qA\u27e9| = O( k\u03c3w\u221a 2 log(D/\u03b4) \u02dc \u03c32 ), and that |\u03b9| = |\u27e8w(0) i , \u03b6/\u02dc \u03c3\u27e9| = O( \u221a d\u03c3\u03be\u03c3w\u221a 2 log(D/\u03b4) \u02dc \u03c3 ), and that |\u27e8\u03c6, q\u27e9|, |\u27e8\u03c6\u2212A, q\u2212A\u27e9| are O( D\u03c3w\u221a 2 log(D/\u03b4) \u02dc \u03c32 ). When \u03c3w is suf\ufb01ciently small, by the property of the Gaussian \u03be(1), we have |Ia \u2212I\u2032 a| (352) \u2264 \f \f \f \fPr \u03be(1) h \u27e8\u03c6, q\u27e9+ \u03b9 + b(0) + \u03be(1) \u22650 i \u2212Pr \u03be(1) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9 + b(0) + \u03be(1) \u22650 i\f \f \f \f (353) + Pr \u03be(1) h \u27e8\u03c6, q\u27e9+ \u03b9 + b(0) + \u03be(1) \u22651 i + Pr \u03be(1) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9 + b(0) + \u03be(1) \u22651 i (354) = O(\u03f5e). (355) In summary, |E\u03b6,\u03c6\u2212A(Ia \u2212I\u2032 a)| = O(\u03f5e). (356) Then we have \f \fTj \u2212E\u03c6A,\u03b6,\u03c6\u2212A {\u03b1yy\u03c6jI\u2032 a} \f \f (357) \u2264E\u03c6A \b |\u03b1yy\u03c6j| \f \fE\u03b6,\u03c6\u2212A(Ia \u2212I\u2032 a) \f \f\t (358) \u2264O(\u03f5e)E\u03c6A {|\u03b1yy\u03c6j|} (359) \u2264O(\u03f5e/\u02dc \u03c3) (360) 63 \fPublished as a conference paper at ICLR 2022 where the last step is from Lemma 39. Furthermore, E\u03c6A,\u03b6,\u03c6\u2212A {\u03b1yy\u03c6jI\u2032 a} (361) = E\u03c6A {\u03b1yy\u03c6j} E\u03b6,\u03c6\u2212A[I\u2032 a] (362) = E\u03c6A {\u03b1yy\u03c6j} Pr \u03c6\u2212A,\u03b6,\u03c6\u2212A h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1) i (363) When \u03c3w is suf\ufb01ciently small, we have Pr \u03c6\u2212A h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ b(0) \u2208(0, 1/2) i \u2265\u2126(1), (364) Pr \u03b6,\u03be(1) h \u03b9 + \u03be(1) \u2208(0, 1/2) i = 1/2 \u2212exp(\u2212\u2126(k)), (365) This leads to \u03b2 := E\u03b6,\u03c6\u2212A[I\u2032 a] = Pr \u03c6\u2212A,\u03b6,\u03be(1) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1) i \u2265\u2126(1). (366) By Lemma 39, E\u03c6A {\u03b1yy\u03c6j} = \u03b3/\u02dc \u03c3. Therefore, |Tj \u2212\u03b2\u03b3/\u02dc \u03c3| \u2264O(\u03f5e/\u02dc \u03c3). (367) Now, consider j \u0338\u2208A. Let B denote A \u222a{j}. Tj = E(x,y)\u223cD,\u03b6,\u03be(1) n \u03b1yy\u03c6jI h \u27e8\u03c6, q\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1) io (368) = E\u03c6BE\u03c6\u2212B,\u03b6,\u03be(1) n \u03b1yy\u03c6jI h \u27e8\u03c6, q\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1) io (369) = E\u03c6B,\u03b6 \u001a \u03b1yy\u03c6j Pr \u03c6\u2212B,\u03be(1) h \u27e8\u03c6, q\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1) i\u001b . (370) Let Ib := Pr \u03be(1) h \u27e8\u03c6, q\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1) i , (371) I\u2032 b := Pr \u03be(1) h \u27e8\u03c6\u2212B, q\u2212B\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1) i . (372) Similar as above, we have |E\u03b6,\u03be(1)(Ib \u2212I\u2032 b)| \u2264O(\u03f5e). Then by Lemma 39, \f \fTj \u2212E\u03c6B,\u03b6,\u03c6\u2212B {\u03b1yy\u03c6jI\u2032 b} \f \f (373) \u2264E\u03c6B \b |\u03b1yy\u03c6j||E\u03b6,\u03c6\u2212B(Ib \u2212I\u2032 b)| \t (374) \u2264O(\u03f5e)E\u03c6A {|\u03b1yy|} E\u03c6j {|\u03c6j|} (375) \u2264O(\u03f5e) \u00d7 1 \u00d7 O(\u03c32 \u03c6j\u02dc \u03c3) (376) = O(\u03c32 \u03c6j\u03f5e\u02dc \u03c3). (377) Furthermore, E\u03c6B,\u03b6,\u03c6\u2212B {\u03b1yy\u03c6jI\u2032 b} = E\u03c6A {\u03b1yy} E\u03c6j {\u03c6j} E\u03b6,\u03c6\u2212B[I\u2032 b] = 0. 
(378) Therefore, |Tj| \u2264O(\u03c32 \u03c6\u03f5e\u02dc \u03c3). (379) Finally, consider \u03bdj. \u03bdj = E(x,y)\u223cD,\u03be(1) \u001a\u03b1yy\u03b6j \u02dc \u03c3 I[\u27e8w(0), x\u27e9+ b(0) + \u03be(1) \u2208(0, 1)] \u001b (380) = E\u03c6A,\u03c6\u2212A,\u03b6,\u03be(1) \u001a\u03b1yy\u03b6j \u02dc \u03c3 I[\u27e8\u03c6, q\u27e9+ \u03b9j + \u03b9\u2212j + b(0) + \u03be(1) \u2208(0, 1)] \u001b (381) = E\u03c6A,\u03b6 \u001a\u03b1yy\u03b6j \u02dc \u03c3 Pr \u03c6\u2212A,\u03be(1)[\u27e8\u03c6, q\u27e9+ \u03b9j + \u03b9\u2212j + b(0) + \u03be(1) \u2208(0, 1)] \u001b (382) 64 \fPublished as a conference paper at ICLR 2022 where \u03b9j := w(0) j \u03b6j/\u02dc \u03c3 and \u03b9\u2212j := \u27e8w(0), \u03b6/\u02dc \u03c3\u27e9\u2212\u03b9j. With probability \u22651 \u2212d exp(\u2212\u0398(k)) over \u03b6, for any j, |\u03b6j| \u2264O(\u03c3\u03b6 p log(k)). Let G\u03b6 denote this event. Let Ij := Pr \u03be(1) h \u27e8\u03c6, q\u27e9+ \u03b9j + \u03b9\u2212j + b(0) + \u03be(1) \u2208(0, 1) i , (383) I\u2032 j := Pr \u03be(1) h \u27e8\u03c6, q\u27e9+ \u03b9\u2212j + b(0) + \u03be(1) \u2208(0, 1) i . (384) Similar as above, we have |E\u03b6[Ij \u2212I\u2032 j|G\u03b6]| \u2264O(\u03f5\u03bd). Then |E\u03b6,\u03c6\u2212A(Ij \u2212I\u2032 j)| \u2264|E\u03b6,\u03c6\u2212A[(Ij \u2212I\u2032 j)|G\u03b6]| + Pr[\u2212G\u03b6] (385) \u2264O(\u03f5\u03bd + d exp(\u2212\u0398(k))). (386) \f \f \f \f\u03bdj \u2212E\u03c6A,\u03b6,\u03c6\u2212A \u001a\u03b1yy\u03b6j \u02dc \u03c3 I\u2032 j \u001b\f \f \f \f (387) = \f \f \f \fE\u03c6A,\u03b6,\u03c6\u2212A \u001a\u03b1yy\u03b6j \u02dc \u03c3 (Ij \u2212I\u2032 j) \u001b\f \f \f \f (388) \u2264 \f \f \f \fE\u03c6A,\u03b6,\u03c6\u2212A \u001a\u03b1yy\u03b6j \u02dc \u03c3 (Ij \u2212I\u2032 j)|G\u03b6 \u001b\f \f \f \f + \f \f \f \fE\u03c6A,\u03b6,\u03c6\u2212A \u001a\u03b1yy\u03b6j \u02dc \u03c3 (Ij \u2212I\u2032 j)| \u2212G\u03b6 \u001b\f \f \f \f Pr[\u2212G\u03b6]. (389) The \ufb01rst term is bounded by\f \f \f \fE\u03c6A,\u03b6,\u03c6\u2212A \u001a\u03b1yy\u03b6j \u02dc \u03c3 (Ij \u2212I\u2032 j)|G\u03b6 \u001b\f \f \f \f (390) \u2264E\u03c6A ( \u03b1yy\u03c3\u03b6 p log(k) \u02dc \u03c3 |E\u03b6,\u03c6\u2212A[Ib \u2212I\u2032 b|G\u03b6]| ) (391) \u2264O(\u03f5\u03bd)E\u03c6A {|\u03b1yy|} \u03c3\u03b6 p log(k) \u02dc \u03c3 (392) \u2264O(\u03f5\u03bd) \u00d7 1 \u00d7 \u03c3\u03b6 p log(k) \u02dc \u03c3 (393) = O \u03c3\u03b6 p log(k) \u02dc \u03c3 \u03f5\u03bd ! . (394) The second term is bounded by \f \f \f \fE\u03c6A,\u03b6,\u03c6\u2212A \u001a\u03b1yy\u03b6j \u02dc \u03c3 (Ij \u2212I\u2032 j)| \u2212G\u03b6 \u001b\f \f \f \f Pr[\u2212G\u03b6] (395) \u2264 \f \f \f \fE\u03c6A,\u03b6,\u03c6\u2212A \u001a\u03b1yy\u03b6j \u02dc \u03c3 (Ij \u2212I\u2032 j)| \u2212G\u03b6 \u001b\f \f \f \f \u00d7 de\u2212\u0398(k) (396) \u2264E\u03c6A \f \f \f\u03b1yy \u02dc \u03c3 \f \f \f \u00d7 E\u03b6 {|\u03b6j|| \u2212G\u03b6} \u00d7 de\u2212\u0398(k) (397) \u2264\u03c3\u03b6 \u02dc \u03c3 \u00d7 de\u2212\u0398(k) (398) \u2264\u03c3\u03b6d \u02dc \u03c3 e\u2212\u0398(k). (399) Furthermore, E\u03c6A,\u03b6,\u03c6\u2212A \u001a\u03b1yy\u03b6j \u02dc \u03c3 I\u2032 j \u001b = E\u03c6A {\u03b1yy} E\u03b6j \u001a\u03b6j \u02dc \u03c3 \u001b E\u03b6\u2212j[I\u2032 j] = 0. (400) Therefore, |\u03bdj| \u2264O \u03c3\u03b6 p log(k) \u02dc \u03c3 \u03f5\u03bd ! + \u03c3\u03b6d \u02dc \u03c3 e\u2212\u0398(k). (401) 65 \fPublished as a conference paper at ICLR 2022 Lemma 41. 
Under the same assumptions as in Lemma 40, \u2202 \u2202bi L\u03b1 D(g(0); \u03c3(1) \u03be ) = \u2212a(0) i Tb (402) where |Tb| \u2264O(\u03f5e). Proof of Lemma 41. Consider one neuron index i and omit the subscript i in the parameters. Since the unbiased initialization leads to g(0)(x; \u03be(1)) = 0, we have \u2202 \u2202bL\u03b1 D(g(0); \u03c3(1) \u03be ) (403) = \u2212a(0)E(x,y)\u223cD n \u03b1yyI[yg(0)(x; \u03be(1)) \u22641]E\u03be(1)I[\u27e8w(0), x\u27e9+ b(0) + \u03be(1) \u2208(0, 1)] o (404) = \u2212a(0)E(x,y)\u223cD,\u03be(1) n \u03b1yyI[\u27e8w(0), x\u27e9+ b(0) + \u03be(1) \u2208(0, 1)] o (405) = \u2212a(0) E\u03c6A,\u03b6,\u03be(1) \u001a \u03b1yy Pr \u03c6\u2212A h \u27e8\u03c6, q\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1) i\u001b | {z } :=Tb . (406) where \u03b9 := \u27e8w(0), \u03b6/\u02dc \u03c3\u27e9. Similar to the proof in Lemma 40, \f \f \f \f \fE\u03b6 Pr \u03c6\u2212A,\u03be(1)[\u27e8\u03c6, q\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1)] (407) \u2212 Pr \u03c6\u2212A,\u03be(1)[\u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1)] !\f \f \f \f \f = O(\u03f5e). (408) Then \f \f \f \fTb \u2212E\u03c6A,\u03b6 \u001a \u03b1yy Pr \u03c6\u2212A,\u03be(1)[\u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1)] \u001b\f \f \f \f (409) = E\u03c6A,\u03b6 ( |\u03b1yy| \f \f \f \f \f Pr \u03c6\u2212A,\u03be(1)[\u27e8\u03c6, q\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1)] (410) \u2212 Pr \u03c6\u2212A,\u03be(1)[\u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1)] \f \f \f \f \f ) (411) \u2264O(\u03f5e)E\u03c6A {|\u03b1yy|} (412) \u2264O(\u03f5e). (413) Also, E\u03c6A,\u03b6 \u001a \u03b1yy Pr \u03c6\u2212A,\u03be(1)[\u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1)] \u001b (414) = E\u03c6A {\u03b1yy} Pr \u03c6\u2212A,\u03b6,\u03be(1)[\u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1)] (415) = 0. (416) Therefore, |Tb| \u2264O(\u03f5e). Lemma 42. We have \u2202 \u2202ai L\u03b1 D(g(0); \u03c3(1) \u03be ) = \u2212Ta (417) where |Ta| \u2264O(max\u2113q(0) i,\u2113). So if w(0) i \u2208G(\u03b4), |Ta| \u2264O(\u03c3w p log(Dm/\u03b4)). 66 \fPublished as a conference paper at ICLR 2022 Proof of Lemma 42. Consider one neuron index i and omit the subscript i in the parameters. Since the unbiased initialization leads to g(0)(x; \u03be(1)) = 0, we have \u2202 \u2202aL\u03b1 D(g(0); \u03c3(1) \u03be ) (418) = \u2212E(x,y)\u223cD n \u03b1yyI[yg(0)(x; \u03be(1)) \u22641]E\u03be(1)\u03c3(\u27e8w(0), x\u27e9+ b(0) + \u03be(1)) o (419) = \u2212E(x,y)\u223cD,\u03be(1) n \u03b1yy\u03c3(\u27e8w(0), x\u27e9+ b(0) + \u03be(1)) o | {z } :=Ta . (420) Let \u03c6\u2032 A be an independent copy of \u03c6A, \u03c6\u2032 be the vector obtained by replacing in \u03c6 the entries \u03c6A with \u03c6\u2032 A, and let x\u2032 = M\u03c6\u2032 + \u03b6/\u02dc \u03c3 and its label is y\u2032. 
Then |Ta| = \f \f \fE\u03c6A n \u03b1yyE\u03c6\u2212A,\u03b6,\u03be(1)\u03c3(\u27e8w(0), x\u27e9+ b(0) + \u03be(1)) o\f \f \f (421) \u22641 2 \f \f \f \f \fE\u03c6A n E\u03c6\u2212A,\u03b6,\u03be(1)\u03c3(\u27e8w(0), x\u27e9+ b(0) + \u03be(1))|y = 1 o (422) \u2212E\u03c6A n E\u03c6\u2212A,\u03b6,\u03be(1)\u03c3(\u27e8w(0), x\u27e9+ b(0) + \u03be(1))|y = \u22121 o \f \f \f \f \f (423) \u22641 2 \f \f \f \f \fE\u03c6A n E\u03c6\u2212A,\u03b6,\u03be(1)\u03c3(\u27e8w(0), x\u27e9+ b(0) + \u03be(1))|y = 1 o (424) \u2212E\u03c6\u2032 A n E\u03c6\u2212A,\u03b6,\u03be(1)\u03c3(\u27e8w(0), x\u2032\u27e9+ b(0) + \u03be(1))|y\u2032 = \u22121 o \f \f \f \f \f. (425) Since \u03c3 is 1-Lipschitz, |Ta| \u22641 2E\u03c6A,\u03c6\u2032 A n E\u03c6\u2212A \f \f \f\u27e8w(0), M\u03c6\u27e9\u2212\u27e8w(0), M\u03c6\u2032\u27e9 \f \f \f |y = 1, y\u2032 = \u22121 o (426) \u22641 2E\u03c6\u2212A \u0010 E\u03c6A n\f \f \f\u27e8w(0), M\u03c6\u27e9 \f \f \f |y = 1 o + E\u03c6\u2032 A n\f \f \f\u27e8w(0), M\u03c6\u2032\u27e9 \f \f \f |y\u2032 = \u22121 o\u0011 (427) \u2264max \u2113 q(0) i,\u2113 v u u u tE\u03c6 \uf8eb \uf8edX \u2113\u2208[D] \u03c62 \u2113+ X j\u0338=\u2113:j,\u2113\u2208A |\u03c6j\u03c6\u2113| \uf8f6 \uf8f8 (428) \u2264max \u2113 q(0) i,\u2113 q E\u03c6 (1 + O(1)) (429) = \u0398(max \u2113 q(0) i,\u2113). (430) With the bounds on the gradient, we now summarize the results for the weights after the \ufb01rst gradient step. Lemma 43. Set \u03bb(1) w = 1/(2\u03b7(1)), \u03bb(1) a = \u03bb(1) b = 0, \u03c3(1) \u03be = 1/k3/2. Fix \u03b4 \u2208(0, 1) and suppose w(0) i \u2208Gw(\u03b4), b(0) i \u2208Gb(\u03b4) for all i \u2208[2m]. If k = \u2126(log2(Dm/\u03b4)), then for all i \u2208[m], w(1) i = PD \u2113=1 q(1) i,\u2113M\u2113+ \u03c5 satisfying \u2022 if \u2113\u2208A, then |q(1) i,\u2113\u2212\u03b7(1)a(0) i \u03b2\u03b3/\u02dc \u03c3| \u2264O \u0012 |\u03b7(1)a(0) i |\u03f5e \u02dc \u03c3 \u0013 , where \u03b2 \u2208[\u2126(1), 1] and depends only on w(0) i , b(0) i ; \u2022 if \u2113\u0338\u2208A, then |q(1) i,\u2113| \u2264O \u0010 |\u03b7(1)a(0) i |\u03c32 \u03c6\u2113\u03f5e\u02dc \u03c3 \u0011 ; 67 \fPublished as a conference paper at ICLR 2022 \u2022 |\u03c5j| \u2264O \u0012 |\u03b7(1)a(0) i | \u0012 \u03c3\u03b6\u221a log(k) \u02dc \u03c3 \u03f5\u03bd + \u03c3\u03b6d \u02dc \u03c3 e\u2212\u0398(k) \u0013\u0013 . and \u2022 b(1) i = b(0) i + \u03b7(1)a(0) i Tb where |Tb| = O (\u03f5e); \u2022 a(1) i = a(0) i + \u03b7(1)Ta where |Ta| = O(\u03c3w p log(Dm/\u03b4)). Proof of Lemma 43. This follows from Lemma 37 and Lemma 40-42. F.8 FEATURE IMPROVEMENT: SECOND GRADIENT STEP We \ufb01rst show that with properly set \u03b7(1), for most x, |g(1)(x; \u03c3(2) \u03be )| < 1 and thus yg(1)(x; \u03c3(2) \u03be ) < 1. Lemma 44. Fix \u03b4 \u2208(0, 1) and suppose w(0) i \u2208Gw(\u03b4), b(0) i \u2208Gb(\u03b4), a(0) i \u2208Ga(\u03b4) for all i \u2208[2m]. If D\u00b5/ \u221a d \u22641/16, \u03c3\u03b6\u02dc \u03c3 = O(1), \u03c32 \u03b6d/\u02dc \u03c32 = O(1), k = \u2126(log2(Dm/\u03b4)), \u03c3a \u2264\u02dc \u03c32/(\u03b3k2), \u03b7(1) = O \u0010 \u03b3 km\u03c3a\u02dc \u03c3 \u0011 , and \u03c3(2) \u03be \u22641/k, then with probability \u22651 \u2212(d + D) exp(\u2212\u2126(k)) over (x, y), we have yg(1)(x; \u03c3(2) \u03be ) < 1. 
Furthermore, for any i \u2208[2m], \f \f \f\u27e8w(1) i , \u03b6/\u02dc \u03c3\u27e9 \f \f \f = O(\u03b7(1)\u02dc \u03c3/\u03b3), \f \f \f\u27e8q(1) i , \u03c6\u27e9 \f \f \f = O(\u03b7(1)\u02dc \u03c3/\u03b3), and \f \f \f\u27e8(q(1) i )\u2212A, \u03c6\u2212A\u27e9 \f \f \f = O(\u03b7(1)\u02dc \u03c3/\u03b3), and for any j \u2208[d], \u2113\u2208[D], |\u03b6j| \u2264O(\u03c3\u03b6 p log(k)) and |\u27e8\u03b6, D\u2113\u27e9| \u2264O(\u03c3\u03b6 p log(k)). Proof of Lemma 44. Note that w(0) i = w(0) m+i, b(0) i = b(0) m+i, and a(0) i = a(0) m+i. Then the gradient for wm+i is the negation of that for wm+i, the gradient for bm+i is the negation of that for bm+i, and the gradient for am+i is the same as that for am+i. \f \f \fg(1)(x; \u03c3(2) \u03be ) \f \f \f (431) = \f \f \f \f \f 2m X i=1 a(1) i E\u03be(2)\u03c3(\u27e8w(1) i , x\u27e9+ b(1) i + \u03be(2) i ) \f \f \f \f \f (432) = \f \f \f \f \f m X i=1 \u0010 a(1) i E\u03be(2)\u03c3(\u27e8w(1) i , x\u27e9+ b(1) i + \u03be(2) i ) + a(1) m+iE\u03be(2)\u03c3(\u27e8w(1) m+i, x\u27e9+ b(1) m+i + \u03be(2) m+i) \u0011\f \f \f \f \f (433) \u2264 \f \f \f \f \f m X i=1 \u0010 a(1) i E\u03be(2)\u03c3(\u27e8w(1) i , x\u27e9+ b(1) i + \u03be(2) i ) + a(1) m+iE\u03be(2)\u03c3(\u27e8w(1) i , x\u27e9+ b(1) i + \u03be(2) i ) \u0011\f \f \f \f \f (434) + \f \f \f \f \f m X i=1 \u0010 \u2212a(1) m+iE\u03be(2)\u03c3(\u27e8w(1) i , x\u27e9+ b(1) i + \u03be(2) i ) + a(1) m+iE\u03be(2)\u03c3(\u27e8w(1) m+i, x\u27e9+ b(1) m+i + \u03be(2) i ) \u0011\f \f \f \f \f . (435) Then we have \f \f \fg(1)(x; \u03c3(2) \u03be ) \f \f \f \u2264 m X i=1 \f \f \f2\u03b7(1)TaE\u03be(2)\u03c3(\u27e8w(1) i , x\u27e9+ b(1) i + \u03be(2) i ) \f \f \f (436) + m X i=1 \f \f \fa(1) m+i \f \f \f \u0010\f \f \f\u27e8w(1) i \u2212w(1) m+i, x\u27e9 \f \f \f + \f \f \fb(1) i \u2212b(1) m+i \f \f \f \u0011 (437) \u2264 m X i=1 \f \f \f2\u03b7(1)Ta \f \f \f \u0010\f \f \f\u27e8w(1) i , x\u27e9+ b(1) i \f \f \f + E\u03be(2) \f \f \f\u03be(2) i \f \f \f \u0011 (438) + m X i=1 \f \f \fa(1) m+i \f \f \f \u0010\f \f \f\u27e8w(1) i \u2212w(1) m+i, x\u27e9 \f \f \f + \f \f \fb(1) i \u2212b(1) m+i \f \f \f \u0011 . (439) 68 \fPublished as a conference paper at ICLR 2022 With probability \u22651 \u2212exp(\u2212\u2126(k)), among all j \u0338\u2208A, we have that at most po(D \u2212k) of \u03c6j are (1 \u2212poj)/\u02dc \u03c3, while the others are \u2212poj/\u02dc \u03c3. With probability \u22651 \u2212(d + D) exp(\u2212\u2126(k)) over \u03b6, for any j, |\u03b6j| \u2264O(\u03c3\u03b6 p log(k)) and |\u27e8\u03b6, D\u2113\u27e9| \u2264O(\u03c3\u03b6 p log(k)). For data points with \u03c6 and \u03b6 satisfying these, we have: Claim 1. \f \f \f\u27e8w(1) i , x\u27e9 \f \f \f \u2264O(\u03b7(1)/\u03b3)(1 + \u02dc \u03c3 + \u02dc \u03c3/ \u221a k). Proof of Claim 1. \f \f \f\u27e8w(1) i , x\u27e9 \f \f \f = \f \f \f \f \f \f \u27e8 D X \u2113=1 q(1) i,\u2113M\u2113+ \u03c5, D X j=1 \u03c6jMj + \u03b6/\u02dc \u03c3\u27e9 \f \f \f \f \f \f (440) \u2264 \f \f \f \f \f \f \u27e8 D X \u2113=1 q(1) i,\u2113M\u2113, D X j=1 \u03c6jMj\u27e9 \f \f \f \f \f \f + \f \f \f \f \f\u27e8 D X \u2113=1 q(1) i,\u2113M\u2113, \u03b6/\u02dc \u03c3\u27e9 \f \f \f \f \f + \f \f \f \f \f \f \u27e8\u03c5, D X j=1 \u03c6jMj\u27e9 \f \f \f \f \f \f + |\u27e8\u03c5, \u03b6/\u02dc \u03c3\u27e9| . (441) For each term above we bound as follows. Note that when \u03c3w is suf\ufb01ciently small, \u03f5e = O(k log1/2(Dm/\u03b4)/ \u221a D). 
Let B1 := \u03b2\u03b3/\u02dc \u03c3 + \u03f5e/\u02dc \u03c3, (442) B2 := \u03c32 \u03c6\u03f5e\u02dc \u03c3 = O(\u03f5e/ \u221a D), (443) C1 = k \u02dc \u03c3 , (444) C2 := poD/\u02dc \u03c3 = O(D/\u02dc \u03c3). (445) Then |a(0) i |B1C1 = O(log(Dm/\u03b4)/k + log(Dm/\u03b4)\u03f5e/(\u03b3k)) = O(1/\u03b3), (446) |a(0) i |B2C2 = O(\u02dc \u03c3/\u03b3), (447) |a(0) i |B1C2 = O(D/k + \u221a D/\u03b3), (448) |a(0) i |B2C1 = O(\u03f5e/(\u03b3 \u221a k)) = O(1/\u03b3), (449) Then by the assumption on \u00b5, \f \f \f \f \f \f \u27e8 D X \u2113=1 q(1) i,\u2113M\u2113, D X j=1 \u03c6jMj\u27e9 \f \f \f \f \f \f (450) = \f \f \f \f \f X \u2113\u2208A \u27e8q(1) i,\u2113M\u2113, M\u2113\u03c6\u2113\u27e9 \f \f \f \f \f + \f \f \f \f \f \f X \u2113\u0338\u2208A \u27e8q(1) i,\u2113M\u2113, M\u2113\u03c6\u2113\u27e9 \f \f \f \f \f \f + \f \f \f \f \f \f X \u2113\u0338=j \u27e8q(1) i,\u2113M\u2113, Mj\u03c6j\u27e9 \f \f \f \f \f \f (451) \u2264O(|\u03b7(1)a(0) i |) \u0012 B1C1 + B2C2 + \u00b5 \u221a d (kB1(C1 + C2) + DB2(C1 + C2)) \u0013 (452) \u2264O(|\u03b7(1)a(0) i |) \u0012 B1C1 + B2C2 + k\u00b5 \u221a d B1C2 + B2C1 \u0013 (453) \u2264O(\u03b7(1))(1/\u03b3 + \u02dc \u03c3/\u03b3 + 1/\u03b3 + 1/\u03b3) (454) \u2264O(\u03b7(1)/\u03b3)(1 + \u02dc \u03c3). (455) 69 \fPublished as a conference paper at ICLR 2022 By the assumption on \u03c3\u03b6, \f \f \f \f \f\u27e8 D X \u2113=1 q(1) i,\u2113M\u2113, \u03b6/\u02dc \u03c3\u27e9 \f \f \f \f \f (456) \u2264O(|\u03b7(1)a(0) i |)(kB1 + DB2)\u03c3\u03b6 p log(k) \u02dc \u03c3 (457) \u2264O(\u03b7(1))(kB1 + DB2)\u03c3\u03b6 p log(k) \u02dc \u03c3 \u02dc \u03c32p log(Dm/\u03b4) \u03b3k2 (458) \u2264O(\u03b7(1)) \u0012 \u03c3\u03b6 log(Dm/\u03b4) \u00121 k + \u03f5e \u03b3k \u0013 + \u03c3\u03b6\u02dc \u03c3 log(k) log(Dm/\u03b4) \u03b3k \u0013 (459) \u2264O(\u03b7(1)/\u03b3). (460) Also note that |\u03bdj| \u2264O \u0010 \u03c3\u03b6 log2(Dm/\u03b4) \u02dc \u03c3 \u221a D \u0011 . Then by the assumption on \u03c3\u03b6, \f \f \f \f \f \f \u27e8\u03c5, D X j=1 \u03c6jMj\u27e9 \f \f \f \f \f \f (461) \u2264O(|\u03b7(1)a(0) i |) \u00d7 \u221a d \u00d7 O \u0012\u03c3\u03b6 log2(Dm/\u03b4) \u02dc \u03c3 \u221a D \u0013 \u00d7 (C1 + C2) (462) \u2264O(|\u03b7(1)/\u03b3). (463) Finally, we have |\u27e8\u03c5, \u03b6/\u02dc \u03c3\u27e9| \u2264 d X j=1 |\u03c5j||\u03b6j/\u02dc \u03c3| (464) \u2264O(|\u03b7(1)a(0) i |) \u00d7 d \u00d7 \u03c3\u03b6 log2(Dm/\u03b4) \u02dc \u03c3 \u221a D \u02dc \u03c3 p log(k) \u02dc \u03c3 (465) \u2264O(\u03b7(1)/\u03b3) \u02dc \u03c3 \u221a k . (466) We also have: |Ta| = O(\u03c3w p log(Dm/\u03b4)) (467) \f \f \fb(1) i \f \f \f \u2264 \f \f \fb(0) i \f \f \f + \f \f \f\u03b7(1)a(0) i Tb \f \f \f (468) \u2264 p log(m/\u03b4) k2 + \f \f \f\u03b7(1)a(0) i \u03f5e \f \f \f . (469) E\u03be(2) \f \f \f\u03be(2) i \f \f \f \u2264O(\u03c3(2) \u03be ). (470) |a(1) m+i| \u2264|a(0) i | + |\u03b7(1)Ta| \u2264|a(0) i | + O(\u03b7(1)\u03c3w p log(Dm/\u03b4)). (471) \f \f \f\u27e8w(1) i \u2212w(1) m+i, x\u27e9 \f \f \f = 2 \f \f \f\u27e8w(1) i , x\u27e9 \f \f \f = O(\u03b7(1)\u02dc \u03c3/\u03b3). (472) \f \f \fb(1) i \u2212b(1) m+i \f \f \f = 2|\u03b7(1)a(0) i Tb| = O(|\u03b7(1)a(0) i |\u03f5e). (473) 70 \fPublished as a conference paper at ICLR 2022 Then we have \f \f \fg(1)(x; \u03c3(2) \u03be ) \f \f \f \u2264O \u0010 m\u03b7(1)\u03c3w p log(Dm/\u03b4) \u0011 \u03b7(1)\u02dc \u03c3 \u03b3 + p log(m/\u03b4) k2 + \f \f \f\u03b7(1)a(0) i \u03f5e \f \f \f + \u03c3(2) \u03be ! 
(474) + O \u0010 m(|a(0) i | + \u03b7(1)\u03c3w p log(Dm/\u03b4)) \u0011 \u0012\u03b7(1)\u02dc \u03c3 \u03b3 + \f \f \f\u03b7(1)a(0) i \u03f5e \f \f \f \u0013 (475) = O \u0012 m\u03b7(1)\u03c3w log(Dm/\u03b4) k + m|a(0) i | \u0012\u03b7(1)\u02dc \u03c3 \u03b3 + \f \f \f\u03b7(1)a(0) i \u03f5e \f \f \f \u0013\u0013 (476) = O \u0012 m\u03b7(1)\u03c3w log(Dm/\u03b4) k + m|a(0) i |\u03b7(1)\u02dc \u03c3 \u03b3 + m|a(0) i |\u03b7(1)\u02dc \u03c3 \u03b3 \u221a k \u0013 (477) < 1. (478) Then \f \f \fyg(1)(x; \u03c3(2) \u03be ) \f \f \f < 1. Finally, the statement on \f \f \f\u27e8(q(1) i )\u2212A, \u03c6\u2212A\u27e9 \f \f \f follows from a similar calculation on \f \f \f\u27e8w(1) i , x\u27e9 \f \f \f = \f \f \f\u27e8q(1) i , \u03c6\u27e9 \f \f \f. We are now ready to analyze the gradients in the second gradient step. Lemma 45. Fix \u03b4 \u2208(0, 1) and suppose w(0) i \u2208Gw(\u03b4), b(0) i \u2208Gb(\u03b4), a(0) i \u2208Ga(\u03b4) for all i \u2208[2m]. Let \u03f5e2 := O \u0012 \u03b7(1)|a(0) i |k(\u03b3+\u03f5e) \u02dc \u03c32\u03c3(2) \u03be \u0013 + exp(\u2212\u0398(k)). If D\u00b5/ \u221a d \u22641/16, \u03c3\u03b6\u02dc \u03c3 = O(1), \u03c32 \u03b6d/\u02dc \u03c32 = O(1), k = \u2126(log2(Dm/\u03b4)), \u03c3a \u2264\u02dc \u03c32/(\u03b3k2), \u03b7(1) = O \u0010 \u03b3 km\u03c3a\u02dc \u03c3 \u0011 , and \u03c3(2) \u03be = 1/k3/2, then \u2202 \u2202wi LD(g(1); \u03c3(2) \u03be ) = \u2212a(1) i \uf8eb \uf8ed D X j=1 MjTj + \u03bd \uf8f6 \uf8f8 (479) where Tj satis\ufb01es: \u2022 if j \u2208A, then |Tj \u2212\u03b2\u03b3/\u02dc \u03c3| \u2264O(\u03f5e2/\u02dc \u03c3 + \u03b7(1)/\u03c3(2) \u03be + \u03b7(1)|a(0) i |\u03f5e/(\u02dc \u03c3\u03c3(2) \u03be )), where \u03b2 \u2208 [\u2126(1), 1] and depends only on w(0) i , b(0) i ; \u2022 if j \u0338\u2208A, then |Tj| \u22641 \u02dc \u03c3 exp(\u2212\u2126(k)) + O(\u03c32 \u03c6\u02dc \u03c3\u03f5e2); \u2022 |\u03bdj| \u2264O \u0012 \u03b7(1)\u03c3\u03b6 \u03b3\u03c3(2) \u03be \u0013 + exp(\u2212\u2126(k)). Proof of Lemma 45. Consider one neuron index i and omit the subscript i in the parameters. By Lemma 44, with probability at least 1 \u2212(d + D) exp(\u2212\u2126(k)) = 1 \u2212exp(\u2212\u2126(k)) over (x, y), yg(1)(x; \u03be(2)) > 1 and furthermore, for any i \u2208[2m], \f \f \f\u27e8w(1) i , \u03b6/\u02dc \u03c3\u27e9 \f \f \f = O(\u03b7(1)\u02dc \u03c3/\u03b3), \f \f \f\u27e8q(1) i , \u03c6\u27e9 \f \f \f = O(\u03b7(1)\u02dc \u03c3/\u03b3), and \f \f \f\u27e8(q(1) i )\u2212A, \u03c6\u2212A\u27e9 \f \f \f = O(\u03b7(1)\u02dc \u03c3/\u03b3), and for any j \u2208[d], \u2113\u2208[D], |\u03b6j| \u2264O(\u03c3\u03b6 p log(k)) and |\u27e8\u03b6, D\u2113\u27e9| \u2264O(\u03c3\u03b6 p log(k)). Let Ix be the indicator of this event. \u2202 \u2202wL\u03b1 D(g(1); \u03c3(2) \u03be ) (480) = \u2212a(1)E(x,y)\u223cD n \u03b1yyIxE\u03be(2)I[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)]x o (481) = \u2212a(1) D X j=1 Mj E(x,y)\u223cD,\u03be(2) n \u03b1yyIxI[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)]\u03c6j o | {z } :=Tj (482) \u2212a(1) E(x,y)\u223cD,\u03be(2) \u001a\u03b1yy\u03b6 \u02dc \u03c3 IxI[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)] \u001b | {z } :=\u03bd . (483) 71 \fPublished as a conference paper at ICLR 2022 Let Tj1 := E(x,y)\u223cD,\u03be(2) \b \u03b1yyI[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)]\u03c6j \t . 
We have |Tj \u2212Tj1| (484) = \f \f \fE(x,y)\u223cD,\u03be(2) n \u03b1yy(1 \u2212Ix)I[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)]\u03c6j o\f \f \f (485) \u22641 \u02dc \u03c3 exp(\u2212\u2126(k)). (486) Similarly, let \u03bd\u2032 := E(x,y)\u223cD,\u03be(2) n \u03b1yy\u03b6 \u02dc \u03c3 I[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)] o . We have |\u03bd \u2212\u03bd\u2032| (487) = \f \f \f \fE(x,y)\u223cD,\u03be(2) \u001a\u03b1yy\u03b6 \u02dc \u03c3 (1 \u2212Ix)I[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)] \u001b\f \f \f \f (488) \u2264\u03c3\u03b6 \u02dc \u03c3 exp(\u2212\u2126(k)). (489) So it is suf\ufb01cient to bound Tj1 and \u03bd\u2032. For simplicity, we use q as a shorthand for q(1) i . First, consider j \u2208A. Tj1 = E(x,y)\u223cD,\u03be(2) n \u03b1yyI[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)]\u03c6j o (490) = E\u03c6A \u001a \u03b1yy\u03c6j Pr \u03c6\u2212A,\u03be(2) h \u27e8\u03c6, q\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1) i\u001b (491) where \u03b9 := \u27e8w(1), \u03b6/\u02dc \u03c3\u27e9. Let Ia := Pr \u03be(2) h \u27e8\u03c6, q\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1) i , (492) I\u2032 a := Pr \u03be(2) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1) i . (493) By the property of the Gaussian \u03be(2), that |\u27e8\u03c6A, qA\u27e9| = O( \u03b7(1)|a(0) i |k(\u03b3+\u03f5e) \u02dc \u03c32 ), and that |\u03b9| = |\u27e8w(1) i , \u03b6/\u02dc \u03c3\u27e9|, |\u27e8\u03c6, q\u27e9|, |\u27e8\u03c6\u2212A, q\u2212A\u27e9| are all O(\u03b7(1)\u02dc \u03c3/\u03b3) < O(1/k), we have |Ia \u2212I\u2032 a| (494) \u2264 \f \f \f \fPr \u03be(2) h \u27e8\u03c6, q\u27e9+ \u03b9 + b(1) + \u03be(2) \u22650 i \u2212Pr \u03be(2) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9 + b(1) + \u03be(2) \u22650 i\f \f \f \f (495) + Pr \u03be(2) h \u27e8\u03c6, q\u27e9+ \u03b9 + b(1) + \u03be(2) \u22651 i + Pr \u03be(2) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9 + b(1) + \u03be(2) \u22651 i (496) = O \u03b7(1)|a(0) i |k(\u03b3 + \u03f5e) \u02dc \u03c32\u03c3(2) \u03be ! + exp(\u2212\u0398(k)) = O(\u03f5e2). (497) This leads to \f \fTj1 \u2212E\u03c6A,\u03c6\u2212A {\u03b1yy\u03c6jI\u2032 a} \f \f (498) \u2264E\u03c6A \b |\u03b1yy\u03c6j| \f \fE\u03c6\u2212A(Ia \u2212I\u2032 a) \f \f\t (499) \u2264O(\u03f5e2)E\u03c6A {|\u03b1yy\u03c6j|} (500) \u2264O(\u03f5e2/\u02dc \u03c3) (501) where the last step is from Lemma 39. Furthermore, E\u03c6A,\u03c6\u2212A {\u03b1yy\u03c6jI\u2032 a} (502) = E\u03c6A {\u03b1yy\u03c6j} E\u03c6\u2212A[I\u2032 a] (503) = E\u03c6A {\u03b1yy\u03c6j} Pr \u03c6\u2212A,\u03be(2) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1) i . (504) 72 \fPublished as a conference paper at ICLR 2022 By Lemma 19, we have |\u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9| \u2264O(\u03b7(1)\u02dc \u03c3/\u03b3). Also, |b(1) \u2212b(0)| \u2264O(\u03b7(1)|a(0) i |\u03f5e). By the property of \u03be(2), \f \f \f \fPr \u03be(2) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1) i \u2212Pr \u03be(2) h b(0) + \u03be(2) \u2208(0, 1) i\f \f \f \f (505) \u2264O(\u03b7(1)\u02dc \u03c3/(\u03b3\u03c3(2) \u03be )) + O(\u03b7(1)|a(0) i |\u03f5e/\u03c3(2) \u03be ). (506) On the other hand, \u03b2 := Pr \u03c6\u2212A,\u03be(2) h b(0) + \u03be(2) \u2208(0, 1) i = Pr \u03be(2) h \u03be(2) \u2208(\u2212b(0), 1 \u2212b(0)) i (507) = \u2126(1) (508) and \u03b2 only depends on b(0). By Lemma 39, E\u03c6A {\u03b1yy\u03c6j} = \u03b3/\u02dc \u03c3. 
Therefore, |Tj1 \u2212\u03b2\u03b3/\u02dc \u03c3| \u2264O(\u03f5e2/\u02dc \u03c3) + O(\u03b7(1)/\u03c3(2) \u03be ) + O(\u03b7(1)|a(0) i |\u03f5e/(\u02dc \u03c3\u03c3(2) \u03be )). (509) Now, consider j \u0338\u2208A. Let B denote A \u222a{j}. Tj1 = E(x,y)\u223cD,\u03be(2) n \u03b1yy\u03c6jI h \u27e8\u03c6, q\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1) io (510) = E\u03c6BE\u03c6\u2212B,\u03b6,\u03be(2) n \u03b1yy\u03c6jI h \u27e8\u03c6, q\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1) io (511) = E\u03c6B \u001a \u03b1yy\u03c6j Pr \u03c6\u2212B,\u03b6,\u03be(2) h \u27e8\u03c6, q\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1) i\u001b . (512) Let Ib := Pr \u03be(2) h \u27e8\u03c6, q\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1) i , (513) I\u2032 b := Pr \u03be(2) h \u27e8\u03c6\u2212B, q\u2212B\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1) i . (514) Similar as above, we have |Ib \u2212I\u2032 b| \u2264\u03f5e2. Then by Lemma 39, \f \fTj1 \u2212E\u03c6B,\u03c6\u2212B,\u03b6 {\u03b1yy\u03c6jI\u2032 b} \f \f (515) \u2264E\u03c6B \b |\u03b1yy\u03c6j||E\u03c6\u2212B,\u03b6(Ib \u2212I\u2032 b)| \t (516) \u2264O(\u03f5e2)E\u03c6A {|\u03b1yy|} E\u03c6j {|\u03c6j|} (517) \u2264O(\u03f5e2) \u00d7 O(\u03c32 \u03c6\u02dc \u03c3) (518) = O(\u03c32 \u03c6\u02dc \u03c3\u03f5e2). (519) Furthermore, E\u03c6B,\u03c6\u2212B,\u03b6 {\u03b1yy\u03c6jI\u2032 b} = E\u03c6A {\u03b1yy} E\u03c6j {\u03c6j} E\u03c6\u2212B[I\u2032 b] = 0. (520) Therefore, |Tj1| \u2264O(\u03c32 \u03c6\u02dc \u03c3\u03f5e2). (521) Finally, consider \u03bd\u2032 j. \u03bd\u2032 j = E(x,y)\u223cD,\u03be(2) \u001a\u03b1yy\u03b6j \u02dc \u03c3 I[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)] \u001b (522) = E\u03c6A,\u03c6\u2212A,\u03b6,\u03be(2) \u001a\u03b1yy\u03b6j \u02dc \u03c3 I[\u27e8\u03c6, q\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1)] \u001b (523) = E\u03c6A,\u03c6\u2212A,\u03b6 \u001a\u03b1yy\u03b6j \u02dc \u03c3 Pr \u03be(2)[\u27e8\u03c6, q\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1)] \u001b (524) 73 \fPublished as a conference paper at ICLR 2022 Let Ij := Pr \u03be(2) h \u27e8\u03c6, q\u27e9+ \u03b9 + b(0) + \u03be(1) \u2208(0, 1) i , (525) I\u2032 j := Pr \u03be(2) h \u27e8\u03c6, q\u27e9+ b(0) + \u03be(1) \u2208(0, 1) i . (526) Since |\u03b9| \u2264O(\u03b7(1)\u02dc \u03c3/\u03b3), we have |Ij \u2212I\u2032 j| \u2264O(\u03b7(1)\u02dc \u03c3/(\u03b3\u03c3(2) \u03be )). Then \f \f \f \f\u03bd\u2032 j \u2212E\u03c6A,\u03c6\u2212A,\u03b6 \u001a\u03b1yy\u03b6j \u02dc \u03c3 I\u2032 j \u001b\f \f \f \f (527) = \f \f \f \fE\u03c6A,\u03c6\u2212A,\u03b6 \u001a\u03b1yy\u03b6j \u02dc \u03c3 (Ij \u2212I\u2032 j) \u001b\f \f \f \f (528) \u2264O(\u03b7(1)\u02dc \u03c3/(\u03b3\u03c3(2) \u03be ))E\u03c6A,\u03c6\u2212A,\u03b6 \f \f \f \f \u03b1yy\u03b6j \u02dc \u03c3 \f \f \f \f (529) \u2264O(\u03b7(1)\u02dc \u03c3/(\u03b3\u03c3(2) \u03be ))E\u03c6A |\u03b1yy| E\u03b6 \f \f \f \f \u03b6j \u02dc \u03c3 \f \f \f \f (530) \u2264O(\u03b7(1)\u02dc \u03c3/(\u03b3\u03c3(2) \u03be )) \u00d7 1 \u00d7 \u03c3\u03b6 \u02dc \u03c3 (531) \u2264O(\u03b7(1)\u03c3\u03b6/(\u03b3\u03c3(2) \u03be )). (532) Furthermore, E\u03c6A,\u03c6\u2212A,\u03b6 \u001a\u03b1yy\u03b6j \u02dc \u03c3 I\u2032 j \u001b = E\u03c6A,\u03c6\u2212A n\u03b1yy \u02dc \u03c3 I\u2032 j o E\u03b6 {\u03b6j} = 0. (533) Therefore, |\u03bdj| \u2264O \u03b7(1)\u03c3\u03b6 \u03b3\u03c3(2) \u03be ! + exp(\u2212\u2126(k)). (534) Lemma 46. 
Under the same assumptions as in Lemma 45, \u2202 \u2202bLD(g(1); \u03c3(2) \u03be ) = \u2212a(1) i Tb (535) where |Tb| \u2264exp(\u2212\u2126(k)) + O(\u03f5e2). Proof of Lemma 46. Consider one neuron index i and omit the subscript i in the parameters. By Lemma 44, Pr[yg(1)(x; \u03be(2)) > 1] \u2264exp(\u2212\u2126(k)). Let Ix = I[yg(1)(x; \u03be(2)) \u22641]. \u2202 \u2202bL\u03b1 D(g(1); \u03c3(2) \u03be ) (536) = \u2212a(1) E(x,y)\u223cD n \u03b1yyIxE\u03be(2)I[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)] o | {z } :=Tb . (537) Let Tb1 := E(x,y)\u223cD,\u03be(2) \b \u03b1yyI[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)] \t . We have |Tb \u2212Tb1| (538) = \f \f \fE(x,y)\u223cD,\u03be(2) n \u03b1yy(1 \u2212Ix)I[\u27e8w(1), x\u27e9+ b(1) + \u03be(2) \u2208(0, 1)] o\f \f \f (539) \u2264exp(\u2212\u2126(k)). (540) So it is suf\ufb01cient to bound Tb1. For simplicity, we use q as a shorthand for q(1) i . Tb1 = E(x,y)\u223cD,\u03be(2) n \u03b1yyI h \u27e8\u03c6, q\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1) io (541) = E\u03c6AE\u03c6\u2212A,\u03be(2) n \u03b1yyI h \u27e8\u03c6, q\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1) io (542) = E\u03c6A \u001a \u03b1yy Pr \u03c6\u2212A,\u03be(2) h \u27e8\u03c6, q\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1) i\u001b . (543) 74 \fPublished as a conference paper at ICLR 2022 Let Ib := Pr \u03be(2) h \u27e8\u03c6, q\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1) i , (544) I\u2032 b := Pr \u03be(2) h \u27e8\u03c6\u2212A, q\u2212A\u27e9+ \u03b9 + b(1) + \u03be(2) \u2208(0, 1) i . (545) Similar as in Lemma 20, we have |Ib \u2212I\u2032 b| \u2264\u03f5e2. Then by Lemma 39, \f \fTb1 \u2212E\u03c6A,\u03c6\u2212A {\u03b1yyI\u2032 b} \f \f (546) = \f \fE\u03c6A,\u03c6\u2212A {\u03b1yy(Ib \u2212I\u2032 b)} \f \f (547) = O(\u03f5e2)E\u03c6A,\u03c6\u2212A |\u03b1yy| (548) \u2264O(\u03f5e2). (549) Furthermore, E\u03c6A,\u03c6\u2212A {\u03b1yyI\u2032 b} = E\u03c6A {\u03b1yy} E\u03c6\u2212A[I\u2032 b] = 0. (550) Therefore, |Tb1| \u2264O(\u03f5e2) and the statement follows. Lemma 47. Under the same assumptions as in Lemma 45, \u2202 \u2202ai LD(g(1); \u03c3(2) \u03be ) = \u2212Ta (551) where |Ta| = O \u0010 \u03b7(1)\u02dc \u03c3 \u03b3pmin \u0011 + exp(\u2212\u2126(k))poly \u0010 dD pmin \u0011 . Proof of Lemma 47. Consider one neuron index i and omit the subscript i in the parameters. By Lemma 44, Pr[yg(1)(x; \u03be(2)) > 1] \u2264exp(\u2212\u0398(k)). Let Ix = I[yg(1)(x; \u03be(2)) \u22641]. \u2202 \u2202aL\u03b1 D(g(1); \u03c3(2) \u03be ) (552) = \u2212E(x,y)\u223cD n \u03b1yyIxE\u03be(2)\u03c3(\u27e8w(1), x\u27e9+ b(1) + \u03be(2)) o | {z } :=Ta . (553) Let Ta1 := E(x,y)\u223cD \b \u03b1yyE\u03be(2)\u03c3(\u27e8w(1), x\u27e9+ b(1) + \u03be(2)) \t . We have |Ta \u2212Ta1| (554) = \f \f \fE(x,y)\u223cD n \u03b1yy(1 \u2212Ix)E\u03be(2)\u03c3(\u27e8w(1), x\u27e9+ b(1) + \u03be(2)) o\f \f \f (555) \u2264exp(\u2212\u2126(k)). (556) So it is suf\ufb01cient to bound Ta1. For simplicity, we use q as a shorthand for q(1) i . 75 \fPublished as a conference paper at ICLR 2022 Let \u03c6\u2032 A be an independent copy of \u03c6A, \u03c6\u2032 be the vector obtained by replacing in \u03c6 the entries \u03c6A with \u03c6\u2032 A, and let x\u2032 = M\u03c6\u2032 + \u03b6 and its label is y\u2032. 
Then |Ta1| := \f \f \fE\u03c6A n \u03b1yyE\u03c6\u2212A,\u03b6,\u03be(2)\u03c3(\u27e8w(1), x\u27e9+ b(1) + \u03be(2)) o\f \f \f (557) \u22641 2 \f \f \f \f \fE\u03c6A n E\u03c6\u2212A,\u03b6,\u03be(2)\u03c3(\u27e8w(1), x\u27e9+ b(1) + \u03be(1))|y = 1 o (558) \u2212E\u03c6A n E\u03c6\u2212A,\u03b6,\u03be(2)\u03c3(\u27e8w(1), x\u27e9+ b(1) + \u03be(2))|y = \u22121 o \f \f \f \f \f (559) \u22641 2 \f \f \f \f \fE\u03c6A n E\u03c6\u2212A,\u03b6,\u03be(2)\u03c3(\u27e8w(1), x\u27e9+ b(1) + \u03be(2))|y = 1 o (560) \u2212E\u03c6\u2032 A n E\u03c6\u2212A,\u03b6,\u03be(2)\u03c3(\u27e8w(1), x\u2032\u27e9+ b(1) + \u03be(2))|y\u2032 = \u22121 o \f \f \f \f \f (561) \u22641 2E\u03c6A,\u03c6\u2032 A n E\u03c6\u2212A \f \f \f\u27e8w(1), x\u27e9\u2212\u27e8w(1), x\u2032\u27e9 \f \f \f |y = 1, y\u2032 = \u22121 o (562) \u22641 2E\u03c6\u2212A \u0010 E\u03c6A n\f \f \f\u27e8w(1), M\u03c6\u27e9 \f \f \f |y = 1 o + E\u03c6\u2032 A n\f \f \f\u27e8w(1), M\u03c6\u2032\u27e9 \f \f \f |y\u2032 = \u22121 o\u0011 (563) \u2264E\u03c6\u2212A,\u03c6A \f \f \f\u03b1y\u27e8w(1), M\u03c6\u27e9 \f \f \f (564) = E\u03c6 \f \f \f\u03b1y\u27e8w(1), M\u03c6\u27e9 \f \f \f (565) = O(\u03b7(1)\u02dc \u03c3/\u03b3) + exp(\u2212\u2126(k)) \u00d7 \u221a dD \u02dc \u03c3 \u00d7 \u2225w(1)\u2225\u221e (566) = O \u0012 \u03b7(1)\u02dc \u03c3 \u03b3pmin \u0013 + exp(\u2212\u2126(k))poly(dD/pmin) (567) where the fourth step follows from that \u03c3 is 1-Lipschitz, and the second to the last line from Lemma 44 and that \f \f\u27e8w(1), M\u03c6\u27e9 \f \f \u2264\u2225w(1)\u2225\u221e p d\u2225M\u03c6\u22252 2. With the above lemmas about the gradients, we are now ready to show that at the end of the second step, we get a good set of features for accurate prediction. Lemma 48. Set \u03b7(1) = \u03b32pmin\u02dc \u03c3 km3 , \u03bb(1) a = 0, \u03bb(1) w = 1/(2\u03b7(1)), \u03c3(1) \u03be = 1/k3/2, (568) \u03b7(2) = 1, \u03bb(2) a = \u03bb(2) w = 1/(2\u03b7(2)), \u03c3(2) \u03be = 1/k3/2. (569) Fix \u03b4 \u2208(0, O(1/k3)). If D\u00b5/ \u221a d \u22641/16, \u03c3\u03b6\u02dc \u03c3 = O(1), \u03c32 \u03b6d/\u02dc \u03c32 = O(1), k = \u2126 \u0010 log2 \u0010 Dmd \u03b4\u03b3pmin \u0011\u0011 , and m \u2265max{\u2126(k4), D, d}, then with probability at least 1 \u2212\u03b4 over the initialization, there exist \u02dc ai\u2019s such that \u02dc g(x) := P2m i=1 \u02dc ai\u03c3(\u27e8w(2) i , x\u27e9+ b(2) i ) satis\ufb01es LD(\u02dc g) \u2264exp(\u2212\u2126(k)). Furthermore, \u2225\u02dc a\u22250 = O(m/k), \u2225\u02dc a\u2225\u221e= O(k5/m), and \u2225\u02dc a\u22252 2 = O(k9/m). Finally, \u2225a(2)\u2225\u221e= O \u00001 km2 \u0001 , \u2225w(2) i \u22252 = O(\u02dc \u03c3/k), and |b(2) i | = O(1/k2) for all i \u2208[2m]. Proof of Lemma 48. By Lemma 35, there exists a network g\u2217(x) = P3(k+1) \u2113=1 a\u2217 \u2113\u03c3(\u27e8w\u2217 \u2113, x\u27e9+ b\u2217 \u2113) satisfying Pr (x,y)\u223cD[yg\u2217(x) \u22641] \u2264exp(\u2212\u2126(k)). Furthermore, the number of neurons n = 3(k + 1), |a\u2217 i | \u226464k, 1/(64k) \u2264|b\u2217 i | \u22641/4, w\u2217 i = \u02dc \u03c3 P j\u2208A Mj/(8k), and |\u27e8w\u2217 i , x\u27e9+ b\u2217 i | \u22641 for any i \u2208[n] and (x, y) \u223cD. Now we \ufb01x an \u2113, and show that with high probability there is a neuron in g(2) that can approximate the \u2113-th neuron in g\u2217. 
With probability \u22651 \u2212exp(\u2212\u2126(max{2po(D \u2212k), k})), among all j \u0338\u2208A, we have that at most 2po(D \u2212k) + k of \u03c6j are (1 \u2212po)/\u02dc \u03c3, while the others are \u2212po/\u02dc \u03c3. With probability \u22651 \u2212(d + 76 \fPublished as a conference paper at ICLR 2022 D) exp(\u2212\u2126(k)) over \u03b6, for any j, |\u03b6j| \u2264O(\u03c3\u03b6 p log(k)) and |\u27e8\u03b6, D\u2113\u27e9| \u2264O(\u03c3\u03b6 p log(k)). Below we consider data points with \u03c6 and \u03b6 satisfying these. By Lemma 37, with probability 1 \u22122\u03b4 over w(0) i \u2019s, they are all in Gw(\u03b4); with probability 1 \u2212\u03b4 over a(0) i \u2019s, they are all in Ga(\u03b4); with probability 1 \u2212\u03b4 over b(0) i \u2019s, they are all in Gb(\u03b4). Under these events, by Lemma 43, Lemma 45 and 46, for any neuron i \u2208[2m], we have w(2) i = a(1) i \uf8eb \uf8ed D X j=1 MjTj + \u03bd \uf8f6 \uf8f8, (570) b(2) i = b(1) i + a(1) i Tb. (571) where \u2022 if j \u2208A, then |Tj \u2212\u03b2\u03b3/\u02dc \u03c3| \u2264\u03f5w1 := O(\u03f5e2/\u02dc \u03c3 + \u03b7(1)/\u03c3(2) \u03be + \u03b7(1)|a(0) i |\u03f5e/(\u02dc \u03c3\u03c3(2) \u03be )), where \u03b2 \u2208[\u2126(1), 1] and depends only on w(0) i , b(0) i ; \u2022 if j \u0338\u2208A, then |Tj| \u2264\u03f5w2 := 1 \u02dc \u03c3 exp(\u2212\u2126(k)) + O(\u03c32 \u03c6\u02dc \u03c3\u03f5e2); \u2022 |\u03bdj| \u2264\u03f5\u03bd := O \u0012 \u03b7(1)\u03c3\u03b6 \u03b3\u03c3(2) \u03be \u0013 + exp(\u2212\u2126(k)). \u2022 |Tb| \u2264\u03f5b := exp(\u2212\u2126(k)) + O(\u03f5e2). Given the initialization, with probability \u2126(1) over b(0) i , we have |b(0) i | \u2208 \u0014 1 2k2 , 2 k2 \u0015 , sign(b(0) i ) = sign(b\u2217 \u2113). (572) Finally, since 8k|b\u2217 \u2113|\u03b2\u03b3 |b(0) i |\u02dc \u03c32 \u2208[\u2126(k2\u03b3/\u02dc \u03c32), O(k3\u03b3/\u02dc \u03c32)] and depends only on w(0) i , b(0) i , we have that for \u03f5a = \u0398(1/k2), with probability \u2126(\u03f5a) > \u03b4 over a(0) i , \f \f \f \f \f 8k|b\u2217 \u2113|\u03b2\u03b3 |b(0) i |\u02dc \u03c32 a(0) i \u22121 \f \f \f \f \f \u2264\u03f5a, |a(0) i | = O \u0012 \u02dc \u03c32 k2\u03b3 \u0013 . (573) Let na = \u03f5am/4. For the given value of m, by (570)-(573) we have with probability \u22651 \u22125\u03b4 over the initialization, for each \u2113there is a different set of neurons I\u2113\u2286[m] with |I\u2113| = na and such that for each i\u2113\u2208I\u2113, |b(0) i\u2113| \u2208 \u0014 1 2k2 , 2 k2 \u0015 , sign(b(0) i\u2113) = sign(b\u2217 \u2113), (574) \f \f \f \f \f 8k|b\u2217 \u2113|\u03b2\u03b3 |b(0) i\u2113|\u02dc \u03c32 a(0) i\u2113\u22121 \f \f \f \f \f \u2264\u03f5a, |a(0) i\u2113| = O \u0012 \u02dc \u03c32 k2\u03b3 \u0013 . (575) Now, construct \u02dc a such that \u02dc ai\u2113= 2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113|na for each \u2113and each i\u2113\u2208I\u2113, and \u02dc ai = 0 elsewhere. To show that it gives accurate predictions, we \ufb01rst consider bounding some error terms. For the given values of parameters, we have \u03f5e2 = O \u0010 \u03b3 m2 \u0011 , (576) \u03f5w1 = O \u0012 k\u03b3 m2\u02dc \u03c3 + \u03b3\u03f5e km2 \u0013 , (577) \u03f5w2 = O \u0010 \u03b3 m2\u02dc \u03c3 \u0011 , (578) \u03f5\u03bd = O \u0012 \u03b3k m3 \u0013 , (579) \u03f5b = O \u0010 \u03b3 m2 \u0011 . (580) 77 \fPublished as a conference paper at ICLR 2022 We also have the following useful claims. Claim 2. P \u2113\u2208A |\u27e8M\u2113, x\u27e9| \u2264O \u0000 k \u02dc \u03c3 \u0001 . Proof of Claim 2. 
X \u2113\u2208A |\u27e8M\u2113, x\u27e9| (581) \u2264 X \u2113\u2208A \uf8eb \uf8ed|\u03c6j| + \f \f \f \f \f \f X j\u0338=\u2113 M \u22a4 \u2113Mj\u03c6j \f \f \f \f \f \f + \f \fM \u22a4 \u2113\u03b6/\u02dc \u03c3 \f \f \uf8f6 \uf8f8 (582) \u2264O \u0012k \u02dc \u03c3 \u0013 + O \u0012 kD \u00b5 \u221a d\u02dc \u03c3 \u0013 + O k \u03c3\u03b6 p log(k) \u02dc \u03c3 ! (583) \u2264O \u0012k \u02dc \u03c3 \u0013 . (584) Claim 3. \f \f \f \f \f \f \u27e8w(2) i\u2113, x\u27e9\u2212a(0) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9 \f \f \f \f \f \f \u2264O \u0012 1 m \u0013 . (585) Proof of Claim 3. \f \f \f\u27e8w(2) i\u2113, x\u27e9 \f \f \f = \f \f \f \f \f\u27e8 D X \u2113=1 a(1) i\u2113T\u2113M\u2113+ \u03c5, x\u27e9 \f \f \f \f \f (586) \u2264 \f \f \f \f \f\u27e8 X \u2113\u2208A a(1) i\u2113T\u2113M\u2113, x\u27e9 \f \f \f \f \f + \f \f \f \f \f \f \u27e8 X \u2113\u0338\u2208A a(1) i\u2113T\u2113M\u2113, x\u27e9 \f \f \f \f \f \f + |\u27e8\u03c5, x\u27e9| . (587) Then \f \f \f \f \f \f \u27e8w(2) i\u2113, x\u27e9\u2212a(0) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9 \f \f \f \f \f \f (588) \u2264 \f \f \f \f \f \f \u27e8w(2) i\u2113, x\u27e9\u2212a(1) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9 \f \f \f \f \f \f + \f \f \f \f \f \f a(1) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9\u2212a(0) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9 \f \f \f \f \f \f (589) \u2264 \f \f \f \f \f \f \u27e8w(2) i\u2113, x\u27e9\u2212a(1) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9 \f \f \f \f \f \f + \f \f \fa(1) i\u2113\u2212a(0) i\u2113 \f \f \f \f \f \f \f \f \f \u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9 \f \f \f \f \f \f . (590) The \ufb01rst term is \f \f \f \f \f \f \u27e8w(2) i\u2113, x\u27e9\u2212a(1) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9 \f \f \f \f \f \f (591) \u2264 \f \f \fa(1) i\u2113 \f \f \f \uf8eb \uf8ed \f \f \f \f \f\u27e8 X \u2113\u2208A \u0012 T\u2113\u2212\u03b2\u03b3 \u02dc \u03c3 \u0013 M\u2113, x\u27e9 \f \f \f \f \f + \f \f \f \f \f \f \u27e8 X \u2113\u0338\u2208A T\u2113M\u2113, x\u27e9 \f \f \f \f \f \f + |\u27e8\u03bd, x\u27e9| \uf8f6 \uf8f8. (592) 78 \fPublished as a conference paper at ICLR 2022 By Claim 2, \f \f \f \f \f\u27e8 X \u2113\u2208A \u0012 T\u2113\u2212\u03b2\u03b3 \u02dc \u03c3 \u0013 M\u2113, x\u27e9 \f \f \f \f \f \u2264 X \u2113\u2208A \f \f \f \fT\u2113\u2212\u03b2\u03b3 \u02dc \u03c3 \f \f \f \f |\u27e8M\u2113, x\u27e9| (593) \u2264\u03f5w1 X \u2113\u2208A |\u27e8M\u2113, x\u27e9| (594) \u2264O \u0012k\u03f5w1 \u02dc \u03c3 \u0013 . (595) \f \f \f \f \f \f \u27e8 X \u2113\u0338\u2208A T\u2113M\u2113, x\u27e9 \f \f \f \f \f \f \u2264 \f \f \f \f \f \f X \u2113\u0338\u2208A T\u2113\u03c6j \f \f \f \f \f \f + \f \f \f \f \f \f X \u2113\u0338\u2208A,j\u0338=\u2113 T\u2113M \u22a4 \u2113Mj\u03c6j \f \f \f \f \f \f + \f \f \f \f \f \f X \u2113\u0338\u2208A T\u2113M \u22a4 \u2113\u03b6/\u02dc \u03c3 \f \f \f \f \f \f (596) \u2264O \u0012D\u03f5w2 \u02dc \u03c3 \u0013 + O \u0012 D2\u03f5w2 \u00b5 \u221a d\u02dc \u03c3 \u0013 + O D\u03f5w2 \u03c3\u03b6 p log(k) \u02dc \u03c3 ! (597) \u2264O \u0012D\u03f5w2 \u02dc \u03c3 \u0013 . 
(598) |\u27e8\u03bd, x\u27e9| \u2264 \f \f \f \f \f \f \u27e8\u03bd, X \u2113\u2208[D] M\u2113\u03c6\u2113\u27e9 \f \f \f \f \f \f + |\u27e8\u03bd, \u03b6/\u02dc \u03c3\u27e9| (599) \u2264 X \u2113\u2208[D] |\u03c6\u2113\u27e8\u03bd, M\u2113\u27e9| + |\u27e8\u03bd, \u03b6/\u02dc \u03c3\u27e9| (600) \u2264O \u0012poD + k \u02dc \u03c3 \u0013 \u03f5\u03bd \u221a d + d\u03f5\u03bd O(\u03c3\u03b6 p log(k)) \u02dc \u03c3 (601) \u2264O \u03f5\u03bd \u221a d \u02dc \u03c3 ! \u0010 poD + \u221a d\u03f5\u03bd p log(k) \u0011 (602) \u2264O \u03f5\u03bd \u221a d \u02dc \u03c3 ! \u0010 poD + \u02dc \u03c3 p log(k) \u0011 (603) \u2264O poD\u03f5\u03bd \u221a d \u02dc \u03c3 ! . (604) Then by (576)-(580), \u03f5\u03c6 := k\u03f5w1 \u02dc \u03c3 + D\u03f5w2 \u02dc \u03c3 + poD\u03f5\u03bd \u221a d \u02dc \u03c3 ! = O \u0012 k2\u03b3 m2\u02dc \u03c32 + \u03b3\u03f5e m2\u02dc \u03c3 + \u03b3 m\u02dc \u03c32 + \u03b3k m3/2\u02dc \u03c3 \u0013 . (605) We have \f \f \fa(1) i\u2113\u2212a(0) i\u2113 \f \f \f = O(\u03b7(1)\u03c3w p log(Dm/\u03b4)). So the \ufb01rst term is bounded by \f \f \f \f \f \f \u27e8w(2) i\u2113, x\u27e9\u2212a(0) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9 \f \f \f \f \f \f \u2264 \f \f \fa(1) i\u2113 \f \f \f \u03f5\u03c6 (606) \u2264O \u0012 \u02dc \u03c32 k2\u03b3 + \u03b7(1)\u03c3w p log(Dm/\u03b4) \u0013 \u0012 k2\u03b3 m2\u02dc \u03c32 + \u03b3\u03f5e m2\u02dc \u03c3 + \u03b3 m\u02dc \u03c32 + \u03b3k m3/2\u02dc \u03c3 \u0013 \u2264O \u0012 1 m \u0013 . (607) By Claim 2, the second term is bounded by \f \f \fa(1) i\u2113\u2212a(0) i\u2113 \f \f \f \f \f \f \f \f \f \u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9 \f \f \f \f \f \f \u2264O k\u03b7(1)\u03c3w p log(Dm/\u03b4)\u03b3 \u02dc \u03c32 ! \u2264O \u0012 \u03b33 m3 \u0013 . (608) Combining the bounds on the two terms leads to the claim. 79 \fPublished as a conference paper at ICLR 2022 Claim 4. |b(2) i\u2113\u2212b(0) i\u2113| \u2264O \u0012 1 k2m \u0013 . (609) Proof of Claim 4. By Lemma 43 and 46: |b(2) i\u2113\u2212b(0) i\u2113| \u2264|b(2) i\u2113\u2212b(1) i\u2113| + |b(1) i\u2113\u2212b(0) i\u2113| (610) \u2264O \u0010 \u03b7(1)|a(0) i\u2113|\u03f5e + |a(1) i\u2113| (exp(\u2212\u2126(k)) + \u03f5e2) \u0011 (611) \u2264O \u0012 \u03b3 km2 + 1 k2m \u0013 \u2264O \u0012 1 k2m \u0013 . (612) We are now ready to show \u02dc g is close to 2g\u2217. 
|\u02dc g(x) \u22122g\u2217(x)| (613) = \f \f \f \f \f \f 3(k+1) X \u2113=1 X i\u2113\u2208I\u2113 \u02dc ai\u2113\u03c3 \u0010 \u27e8w(2) i\u2113, x\u27e9+ b(2) i\u2113 \u0011 \u2212 3(k+1) X \u2113=1 2a\u2217 \u2113\u03c3 (\u27e8w\u2217 \u2113, x\u27e9+ b\u2217 \u2113) \f \f \f \f \f \f (614) = \f \f \f \f \f \f 3(k+1) X \u2113=1 X i\u2113\u2208I\u2113 2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113|na \u03c3 \u0010 \u27e8w(2) i\u2113, x\u27e9+ b(2) i\u2113 \u0011 \u2212 3(k+1) X \u2113=1 X i\u2113\u2208I\u2113 2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113|na \u03c3 |b(0) i\u2113| |b\u2217 \u2113| \u27e8w\u2217 \u2113, x\u27e9+ b(0) i\u2113 !\f \f \f \f \f \f (615) \u2264 \f \f \f \f \f \f 3(k+1) X \u2113=1 X i\u2113\u2208I\u2113 1 na \uf8eb \uf8ed2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113| \u03c3 \u0010 \u27e8w(2) i\u2113, x\u27e9+ b(2) i\u2113 \u0011 \u22122a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113| \u03c3 \uf8eb \uf8eda(0) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9+ b(0) i\u2113 \uf8f6 \uf8f8 \uf8f6 \uf8f8 \f \f \f \f \f \f (616) + \f \f \f \f \f \f 3(k+1) X \u2113=1 X i\u2113\u2208I\u2113 1 na \uf8eb \uf8ed2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113| \u03c3 \uf8eb \uf8eda(0) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9+ b(0) i\u2113 \uf8f6 \uf8f8\u22122a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113| \u03c3 |b(0) i\u2113| |b\u2217 \u2113| \u27e8w\u2217 \u2113, x\u27e9+ b(0) i\u2113 !\uf8f6 \uf8f8 \f \f \f \f \f \f . (617) Here the second equation follows from that \u03c3 is positive-homogeneous in [0, 1], |\u27e8w\u2217 \u2113, x\u27e9+ b\u2217 \u2113| \u22641, |b(0) i\u2113|/|b\u2217 \u2113| \u22641. By Claim 3 and 4, the \ufb01rst term is bounded by: 3(k + 1) max \u2113 2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113| \uf8eb \uf8ed \f \f \f \f \f \f \u27e8w(2) i\u2113, x\u27e9\u2212a(0) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9 \f \f \f \f \f \f + |b(2) i\u2113\u2212b(0) i\u2113| \uf8f6 \uf8f8 (618) \u22643(k + 1) max \u2113 2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113| O \u0012 1 m \u0013 (619) \u2264O \u0012k4 m \u0013 . (620) 80 \fPublished as a conference paper at ICLR 2022 By Claim 2, the second term is bounded by: 3(k + 1) max \u2113 2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113| \f \f \f \f \f \f a(0) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9\u2212|b(0) i\u2113| |b\u2217 \u2113| \u27e8w\u2217 \u2113, x\u27e9 \f \f \f \f \f \f (621) \u22643(k + 1) max \u2113 2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113| \f \f \f \f \f \f a(0) i\u2113\u03b2\u03b3 \u02dc \u03c3 X j\u2208A \u27e8Mj, x\u27e9\u2212|b(0) i\u2113|\u02dc \u03c3 8k|b\u2217 \u2113| X j\u2208A \u27e8Mj, x\u27e9 \f \f \f \f \f \f (622) \u22643(k + 1) max \u2113 2a\u2217 \u2113|b\u2217 \u2113| |b(0) i\u2113| \u02dc \u03c3|b(0) i\u2113| 8k|b\u2217 \u2113| \f \f \f \f \f 8kai\u2113\u03b2\u03b3|b\u2217 \u2113| \u02dc \u03c32|b(0) i\u2113| \u22121 \f \f \f \f \f \f \f \f \f \f \f X j\u2208A \u27e8Mj, x\u27e9 \f \f \f \f \f \f (623) \u22643(k + 1) max \u2113 O(a\u2217 \u2113\u03f5a) (624) \u2264O \u0000k2\u03f5a \u0001 . (625) Then |\u02dc g(x) \u22122g\u2217(x)| = O \u0012k4 m + k2\u03f5a \u0013 \u22641. (626) This guarantees y\u02dc g(x) \u22651. Changing the scaling of \u03b4 leads to the statement. Finally, the bounds on \u02dc a follow from the above calculation. 
The bound on \u2225a(2)\u22252 follows from Lemma 47, and those on \u2225w(2) i \u22252 and \u2225b(2) i \u22252 follow from (570)(571) and the bounds on a(1) i and b(1) i in Lemma 43. F.9 CLASSIFIER LEARNING STAGE AND MAIN THEOREM Once we have a good set of features in Lemma 48, we can follow exactly the same argument as in Section B.6 and B.7 for the simpli\ufb01ed setting, and arrive at the main theorem for the general setting: Theorem 49 (Restatement of Theorem 34). Set \u03b7(1) = \u03b32pmin\u02dc \u03c3 km3 , \u03bb(1) a = 0, \u03bb(1) w = 1/(2\u03b7(1)), \u03c3(1) \u03be = 1/k3/2, (627) \u03b7(2) = 1, \u03bb(2) a = \u03bb(2) w = 1/(2\u03b7(2)), \u03c3(2) \u03be = 1/k3/2, (628) \u03b7(t) = \u03b7 = k2 Tm1/3 , \u03bb(t) a = \u03bb(t) w = \u03bb \u2264 k3 \u02dc \u03c3m1/3 , \u03c3(t) \u03be = 0, for 2 < t \u2264T. (629) For any \u03b4 \u2208(0, O(1/k3)), if \u00b5 \u2264O( \u221a d/D), \u03c3\u03b6 \u2264O(min{1/\u02dc \u03c3, \u02dc \u03c3/ \u221a d}), k = \u2126 \u0010 log2 \u0010 Dmd \u03b4\u03b3pmin \u0011\u0011 , m \u2265max{\u2126(k4), D, d}, then we have for any D \u2208F\u039e, with probability at least 1 \u2212\u03b4, there exists t \u2208[T] such that Pr[sign(g(t)(x)) \u0338= y] \u2264LD(g(t)) = O \u0012 k8 m2/3 + k3T m2 + k2m2/3 T \u0013 . (630) Consequently, for any \u03f5 \u2208(0, 1), if T = m4/3, and m \u2265max{\u2126(k12/\u03f53/2), D}, then Pr[sign(g(t)(x)) \u0338= y] \u2264LD(g(t)) \u2264\u03f5. (631) 81" + }, + { + "url": "http://arxiv.org/abs/2102.01279v2", + "title": "Deep Online Fused Video Stabilization", + "abstract": "We present a deep neural network (DNN) that uses both sensor data (gyroscope)\nand image content (optical flow) to stabilize videos through unsupervised\nlearning. The network fuses optical flow with real/virtual camera pose\nhistories into a joint motion representation. Next, the LSTM block infers the\nnew virtual camera pose, and this virtual pose is used to generate a warping\ngrid that stabilizes the frame. Novel relative motion representation as well as\na multi-stage training process are presented to optimize our model without any\nsupervision. To the best of our knowledge, this is the first DNN solution that\nadopts both sensor data and image for stabilization. We validate the proposed\nframework through ablation studies and demonstrated the proposed method\noutperforms the state-of-art alternative solutions via quantitative evaluations\nand a user study.", + "authors": "Zhenmei Shi, Fuhao Shi, Wei-Sheng Lai, Chia-Kai Liang, Yingyu Liang", + "published": "2021-02-02", + "updated": "2021-04-04", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Videos captured with a hand-held device are often shaky. With the growing popularity of casual video recording, live streaming, and movie-making on hand-held smartphones, effective and ef\ufb01cient video stabilization is crucial for improving overall video quality. However, high-quality stabilization remains challenging due to complex camera motions and scene variations. The existing video stabilization systems can be generally categorized into image-based and sensor-based methods. Image-based methods output a smooth camera path by extracting camera motions from sparse image features [7, 14, 18] or dense optical \ufb02ow [3, 29, 31, 32, 34]. These methods offer nonlinear \ufb02exibility on motion compensations. 
However, they often fail when there are complex motions such as parallax, or no reliable features in the scene, and produce visible non-rigid distortions and artifacts due to lack of reliable rigidity constraints. The sensor-based methods use motion sensor data, e.g., gyroscope and accelerometer, to obtain accurate motion. They are free from scene contents and can achieve impressive stabilization results with effective distortion corrections [13, 28]. However, these methods deliver homographies to stabilize the plane at infinity and do not adapt to the scene depth, leading to more residual parallax motions for close scenes. In this work, we present an efficient deep Fused Video Stabilization (deep-FVS) framework to fuse the two motion sources (image content and motion sensor) and draw benefits from both ends. On one hand, the network outputs a single virtual camera pose instead of dense warping flow, and the videos are then stabilized by warping the sensor-based real camera motions towards this virtual pose. In this way, the motion rigidity is naturally preserved and rolling shutter distortion is corrected without artifacts (e.g., wobbling). On the other hand, the network learns to jointly minimize both camera pose smoothness and optical flow losses. Thus, it automatically adjusts to different scenes (e.g., depth variations) to minimize the residual motions. Our network is trained with unsupervised learning with carefully designed loss functions and a multi-stage training procedure. Fig. 1 shows an overview of conventional methods [7, 18], recent learning-based approaches [3, 29, 30, 32, 34], and the proposed deep-FVS. As the existing datasets [18, 29] do not record the sensor data, we collect a new video dataset that contains videos with both gyroscope and OIS data for training and evaluation. Our dataset covers diverse scenarios with different illumination conditions and camera/subject motions. The sensor data and video frames are accurately aligned through calibration. We evaluate the proposed solution objectively and subjectively and show that it outperforms state-of-the-art methods by generating more stable and distortion-free results. This paper makes the following contributions: \u2022 The first DNN-based framework that fuses motion sensor data and optical flow for online video stabilization. \u2022 An unsupervised learning process with multi-stage training and relative motion representation. \u2022 A benchmark dataset that contains videos with gyroscope and OIS sensor data and covers various scenarios. Both the dataset and code will be publicly released. Figure 1: Comparisons of existing video stabilization methods and the proposed method. (a) Conventional video stabilization methods [7, 18] estimate camera motions based on image feature trajectories and find a smooth camera path to render a stabilized video. (b) Learning-based approaches [3, 29, 30, 32, 34] learn deep networks to predict warp fields for warping the input video. (c) The proposed Deep-FVS learns to stabilize a video by fusing the optical flow and gyroscope data. 2. Related Work Conventional methods.
Classical video stabilization algorithms typically involve motion estimation, camera path smoothing, and video frame warping/rendering steps [23]. Some solutions also correct the rolling shutter distortions [6, 10, 12]. Those methods can be categorized into 3D, 2D, and 2.5D approaches based on motion estimation. The 3D approaches model the camera poses and estimate a smooth virtual camera trajectory in the 3D space. To find 6DoF camera poses, several techniques have been adopted, including projective 3D reconstruction [1], depth camera [17], structure from motion [14], and light-field [27]. While 3D approaches can handle parallax and produce high-quality results, they often entail expensive computational costs or require specific hardware devices. The 2D approaches represent and estimate camera motions as a series of 2D affine or perspective transformations [7, 18, 22]. Robust feature tracking and outlier rejection are applied to obtain reliable estimation [33]. Liu et al. [19] replace feature trajectories with optical flows to handle spatially-variant motion. Early approaches apply low-pass filters to smooth individual motion parameters [2, 22], while recent ones adopt L1 optimization [7] and joint optimization with bundled local camera paths [18]. Some hybrid 2D-3D approaches exploit the subspace constraints [15] and epipolar geometry [4]. Zhuang et al. [35] smooth 3D rotation from the gyroscope and stabilize the residual 2D motion based on feature matching. The above methods often process a video offline, which are not suitable for live-streaming and mobile use cases. Liu et al. [16] propose a MeshFlow motion model with only one frame latency for online video stabilization. A mobile online solution using both the OIS and EIS is developed in [13]. In this work, we utilize the OIS, gyroscope, and optical flow to learn a deep network for stabilization. Our online method has only 10 frames latency and does not require per-video optimization. Learning-based methods. With the success of deep learning on image recognition [8, 20, 24], DNNs have been adopted to several computer vision tasks and achieved state-of-the-art performance. However, DNN based video stabilization still does not attract much attention, mainly due to the lack of proper training data. Wang et al. [29] collect the DeepStab dataset with 60 pairs of stable/unstable videos, and train a deep CNN to predict mesh-grids for warping the video. Instead of predicting low-resolution mesh-grids, the PWStableNet [34] learns dense 2D warping fields to stabilize the video. Xu et al. [30] train a generative adversarial network to generate a steady frame as guidance and use the spatial transformer network to extract the affine transform for warping the video frames. Yu and Ramamoorthi [31] take optical flows as input and optimize the weights of a deep network to generate a warp field for each specific video. They further train a stabilization network that can be generalized to test videos without optimization [32]. Figure 2: Overview of deep-FVS. Given an input video, we first remove the OIS translation to extract the raw optical flow. We also obtain the real camera poses from the gyroscope and convert it to a relative quaternion.
An encoder with 2D convolutions embeds optical \ufb02ows to a latent representation, which is then concatenated with the real and virtual camera poses. This joint motion representation is fed to a LSTM cell and FC layers to predict the new virtual camera pose as a quaternion. Finally, we warp the input frame based on the OIS and virtual camera pose to generate the stabilized frame. generalized test videos without optimization [32]. Choi et al. [3] learn a frame interpolation model to iteratively interpolate the input video into a stable one without cropping. These learning-based methods learn to stabilize videos from the video content and optical \ufb02ow. Their performance heavily depends on the training data and can suffer from visible distortion for large motions (e.g., running). In contrast, we use the gyroscope to compensate camera motions and utilize optical \ufb02ow to correct the residual motions from scene geometry. 3. Deep Fused Video Stabilization The overview of our method is shown in Fig. 2. We \ufb01rst process the gyroscope and OIS reading so that we can query the real camera extrinsic (i.e., rotation) and intrinsic (i.e., principal point offsets) at arbitrary timestamps (Sec. 3.1). We then remove the OIS translations on the input video and extract optical \ufb02ows from the raw video frames (Sec. 4.1). The optical \ufb02ows are encoded to a latent space via 2D convolutional layers and concatenated with the real camera poses within a temporal window and the previous virtual camera poses as a joint motion representation (Sec. 3.2 and 4.2). Next, we feed this joint motion representation to an LSTM cell and a few fully-connected layers to predict a virtual camera pose at the current timestamp. Finally, we use a grid-based warping method similar to Karpenko et al. [10] to warp the input frame to the stabilized rolling shutter corrected domain using the input camera rotations, OIS movement, and the predicted virtual camera poses (supplementary material Section 3). Our solution stabilizes a video frame-by-frame and is suitable for online processing. During the training stage, we randomly select long subsequences from the training videos, and optimize our DNN with a set of loss functions without any ground-truth videos or camera poses for supervision (Sec. 4.3). To stabilize the training, we adopt a multi-stage training strategy to constrain the solution space (Sec. 4.4). 3.1. Gyroscope and OIS Pre-processing In our dataset, the gyroscope (\u03c9x, \u03c9y, \u03c9z, t) and OIS (ox, oy, t) are sampled at 200 Hz, where \u03c9 is the angular velocity, and ox, oy are the OIS movements. The camera rotation is integrated by R(t) = S\u03c9(t) \u2217R(t \u2212S), where S is the sampling interval (5ms). We represent the rotation as a 4D quaternion and save it in a queue. To obtain the camera rotation at an arbitrary timestamp tf, we \ufb01rst locate the two consecutive gyro samples a, b in the queue such that ta \u2264tf \u2264tb, and obtain R(tf) by applying a spherical linear interpolation (SLERP): R(tf) = SLERP(R(ta), R(tb), (tb \u2212tf)/(tb \u2212ta)). (1) Similarly, O(t) is calculated from a linear interpolation between O(ta) and O(tb). 3.2. Camera Pose Representation We represent a camera pose as P = (R, O), where R is the camera rotation and O = (ox, oy) is a 2D offset to the camera principal point (u, v). 
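For concreteness, the following is a minimal sketch (ours, not the authors' released code) of the rotation-queue handling described in Sec. 3.1: gyroscope samples are integrated into quaternions and the rotation at an arbitrary timestamp tf is obtained by SLERP between the two bracketing samples, as in Eq. (1). All function names and the (w, x, y, z) quaternion convention are assumptions made for illustration; the OIS offsets O(t) would be linearly interpolated in the same way.

```python
# Illustrative sketch of the Sec. 3.1 gyroscope pre-processing (not the paper's code).
import numpy as np

def omega_to_quat(omega, dt):
    """Incremental rotation quaternion for angular velocity `omega` over `dt` seconds."""
    angle = np.linalg.norm(omega) * dt
    axis = omega / (np.linalg.norm(omega) + 1e-12)
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

def quat_mul(q, p):
    """Hamilton product of two (w, x, y, z) quaternions."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def integrate_gyro(samples, dt=0.005):
    """samples: list of (omega_xyz, t) at 200 Hz. Returns a queue of (quaternion, t)."""
    queue = [(np.array([1.0, 0.0, 0.0, 0.0]), samples[0][1])]
    for omega, t in samples[1:]:
        q_prev, _ = queue[-1]
        queue.append((quat_mul(omega_to_quat(np.asarray(omega), dt), q_prev), t))
    return queue

def slerp(q0, q1, u):
    """Spherical linear interpolation; u = 0 returns q0, u = 1 returns q1."""
    d = np.clip(np.dot(q0, q1), -1.0, 1.0)
    if d < 0.0:                       # take the shorter arc
        q1, d = -q1, -d
    theta = np.arccos(d)
    if theta < 1e-6:
        return q0
    return (np.sin((1 - u) * theta) * q0 + np.sin(u * theta) * q1) / np.sin(theta)

def query_rotation(queue, t_f):
    """Locate the two gyro samples bracketing t_f and interpolate, as in Eq. (1)."""
    for (q_a, t_a), (q_b, t_b) in zip(queue, queue[1:]):
        if t_a <= t_f <= t_b:
            # Eq. (1) states the fraction as (t_b - t_f)/(t_b - t_a); under the
            # slerp convention above this corresponds to u = (t_f - t_a)/(t_b - t_a).
            return slerp(q_a, q_b, (t_f - t_a) / (t_b - t_a))
    raise ValueError("t_f outside the rotation queue")
```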
Given a 3D world coordinate X, the projected point on the 2D image at timestamp t is x = K(t)R(t)X, (2) where K(t) = [f, 0, u + ox(t); 0, f, v + oy(t); 0, 0, 1] is the intrinsic matrix with focal length f. Given a real camera pose Pr = (Rr, Or) and a virtual one Pv = (Rv, Ov), the transformation of a point from the real camera space to the virtual (stabilized) one is xv = Kv(t)Rv(t)Rr^{-1}(t)Kr^{-1}(t)xr, (3) where xr, xv are the 2D image coordinates at real and virtual camera spaces, respectively. In all the experiments, we normalize f = 1511.8 for both the real and virtual cameras. 4. Unsupervised Learning for Sensor Fusion This section introduces the core of our deep fused video stabilization network. As shown in Fig. 2, our network consists of a sequence of 2D convolutional layers to encode the optical flow, an LSTM cell to fuse the latent motion representation and maintain temporal information, and fully-connected layers to decode the latent representation to virtual camera poses. The detailed network configuration is provided in the supplementary material. We first extract the OIS-free optical flow from the input frames and OIS data (Sec. 4.1) and map it to a low-dimensional representation z. Meanwhile, we extract the past and future real camera rotation history Hr and the past virtual rotation history Hv from the queues (Sec. 4.2). We define the joint motion representation as [z, Hr, Hv] and feed it into the LSTM to predict an incremental rotation ∆Rv(t) to the previous virtual pose Rv(t − ∆t), where ∆t is fixed to 40ms in our experiments and is invariant to the video frame rate. Note we set the virtual offset Ov to 0. The final virtual pose is then calculated as Pv = (∆Rv(t)Rv(t − ∆t), Ov) and used to generate the warping grid. It is also pushed into the virtual pose queue as the input for later frames. We can interpret the LSTM, virtual pose prediction, and frame warping steps as a decoder that maps the current motion state [z, Hr, Hv] to a stabilized frame. 4.1. OIS-free Optical Flow Camera motions in the input videos are compensated by the OIS to reduce the motion blur. Although the OIS movement depends on the hand motion, the offset Or is different at each scanline due to the rolling shutter and more like a random noise (see the supplementary materials for more discussions). It is non-trivial to let the network learn to associate the local offset with the principal point changes. To address this issue, we remove OIS motions when estimating the optical flow such that the input to our model contains only the camera and object motions. Specifically, we denote the position of a pixel in frame n as xr,n and its corresponding pixel in frame n + 1 as yr,n+1. The raw forward optical flow can be represented as F̃n^{n+1} = yr,n+1 − xr,n. (4) By reverting the OIS movement at the pixel's timestamp (which depends on the y-coordinate due to the rolling shutter readout), xr,n and yr,n+1 are mapped to xr,n − O(txr,n) and yr,n+1 − O(tyr,n+1), respectively. Figure 3: Optical flow loss. The optical flow loss aims to minimize the distance between xv,n and yv,n+1 in the virtual camera space. By incorporating the forward and backward flows, we define our optical flow loss as in (14).
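The re-projection in Eqs. (2)-(3) can be sketched in a few lines. This is only an illustration under our own naming (the `intrinsics` helper and the identity-rotation example are not from the paper), and it ignores the per-scanline rolling-shutter timestamps for brevity.

```python
# Minimal sketch of the real-to-virtual re-projection of Eqs. (2)-(3).
import numpy as np

def intrinsics(f, u, v, ois_offset):
    """Intrinsic matrix K(t) with the OIS offset added to the principal point."""
    ox, oy = ois_offset
    return np.array([[f, 0.0, u + ox],
                     [0.0, f, v + oy],
                     [0.0, 0.0, 1.0]])

def real_to_virtual(x_r, K_r, R_r, K_v, R_v):
    """Map a homogeneous pixel x_r = (x, y, 1) from the real to the virtual camera,
    following x_v = K_v R_v R_r^{-1} K_r^{-1} x_r (Eq. (3))."""
    x_v = K_v @ R_v @ R_r.T @ np.linalg.inv(K_r) @ np.asarray(x_r, dtype=float)
    return x_v / x_v[2]   # normalize the homogeneous coordinate

# Example: identical rotations and a zero virtual OIS offset leave the pixel unchanged.
K = intrinsics(1511.8, 960.0, 540.0, (0.0, 0.0))
R = np.eye(3)
print(real_to_virtual([960.0, 540.0, 1.0], K, R, K, R))
```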
The forward optical flow is then adjusted to Fn^{n+1} = (yr,n+1 − O(tyr,n+1)) − (xr,n − O(txr,n)) = F̃n^{n+1} − (O(tyr,n+1) − O(txr,n)). (5) The backward flow is adjusted similarly. We use the FlowNet2 [9] to extract optical flows in our experiments. 4.2. Relative Rotation based Motion History To obtain the real and virtual pose histories [Hr, Hv] at a timestamp t, we first sample N past and future timestamps from the gyro queue (Sec. 3.1) and obtain the real absolute camera rotations Hr,absolute = (Rr(t − N∆t), ..., Rr(t), ..., Rr(t + N∆t)). Meanwhile, we sample the virtual pose queue to obtain the virtual camera pose history as Hv,absolute = (Rv(t − N∆t), ..., Rv(t − ∆t)). One key novelty here is to convert the absolute poses, which are integrated from the very first frame, into a relative rotation w.r.t. the current real camera pose: Hr = Hr,absolute ∗ Rr^{-1}(t), (6) Hv = Hv,absolute ∗ Rr^{-1}(t). (7) The network output is also a relative rotation to the previous virtual camera pose. Therefore, our model only needs to learn the first order pose changes and is invariant to the absolute poses. Our experiments show that this relative rotation representation leads to more stable predictions and provides a much better generalization (Sec. 5.2). 4.3. Loss Functions We define the following loss functions to train our network. These loss functions can be evaluated without any ground-truth. Note that we omit the timestamp or frame index in some terms (e.g., L instead of L(t)) for simplicity. Smoothness losses. We measure the C0 and C1 smoothness of the virtual camera poses by LC0 = ||Rv(t) − Rv(t − ∆t)||2, (8) LC1 = ||Rv(t)Rv^{-1}(t − ∆t) − Rv(t − ∆t)Rv^{-1}(t − 2∆t)||2. (9) These two losses encourage the virtual camera to be stable and vary smoothly. Protrusion loss. To avoid undefined regions and excessive cropping on the stabilized video, we measure how the warped frame protrudes the real frame boundary [26]: Lp = Σ_{i=0}^{N} wp,i ||Min(protrude(Pv(t), Pr(t + i∆t)) / α, 1)||2, (10) where N is the number of look-ahead frames, wp,i is the normalized Gaussian weights (with a standard deviation σ) centered at the current frame, and α is a reference protrusion value that we can tolerate. To evaluate protrude(), we project the virtual frame corners cropped with a ratio β on each side to the real camera space using (3) and measure the max normalized signed distance between the four warped corners to the frame boundary cropped with a ratio γ. We use β and γ to control the Lp's sensitivity to the camera motion. The protrusion value is clamped to 1 to disregard very large motions and make training stable. We set σ = 2.5, N = 10, α = 0.2, β = 0.08 and γ = 0.04 in our experiments. Distortion loss. We measure the warping distortion by: Ld = Ω(Rv, Rr) / (1 + e^{−β1(Ω(Rv, Rr) − β0)}), (11) where Ω(Rv, Rr) is the spherical angle between the current virtual and real camera poses, β0 is a threshold and β1 is a parameter to control the slope of the logistic function. This loss is only effective when the angle deviation is larger than a threshold. We empirically set β0 = 6° and β1 = 100 in our experiments. Optical flow loss. We adopt an optical flow loss similar to [31] to minimize the pixel motion between adjacent frames. As shown in Fig.
3, let xr,n and yr,n+1 be the correspondences between frame n and n + 1 in the real camera space. We de\ufb01ne the transform from the real camera space to the virtual camera space as T, and obtain xv,n = Tn(xr,n) and yv,n+1 = Tn+1(yr,n+1) in the virtual camera space. By incorporating the forward \ufb02ow F n+1 n and backward \ufb02ow F n n+1, the warped pixels can be represented as: xv,n = Tn(xr,n) = Tn(yr,n+1 + F n n+1), (12) yv,n+1 = Tn+1(yr,n+1) = Tn+1(xr,n + F n+1 n ). (13) Our goal is to minimize ||xv,n\u2212yv,n+1||2 so they stay close in the stabilized video. This can be measured by: Lf = |Xn|\u22121 X Xn ||xv,n \u2212Tn+1(xr,n + F n+1 n )||2+ |Xn+1|\u22121 X Xn+1 ||yv,n+1 \u2212Tn(yr,n+1 + F n n+1)||2, (14) where Xn is the set of all pixel positions in frame n except those fall into unde\ufb01ned regions after warping. Overall loss. Our \ufb01nal loss at a timestamp t is the weighted summation of the above loss terms: L = wC0LC0 + wC1LC1 + wpLp + wdLd + wfLf, (15) where wC0, wC1, wp, wd and wf are set to 2, 40, 2, 1 and 1 respectively in our experiments. At each training iteration, we forward a sub-sequence with 100 frames to evaluate the losses and accumulate gradients before updating the model parameters. 4.4. Multi-Stage Training For the virtual camera poses, there is a trade-off between following the real camera motion and staying stable. Although we have de\ufb01ned loss terms in (15) to constrain the solution space, it is dif\ufb01cult for the network to learn this non-linearity the training cannot converge when we optimize all the loss terms simultaneously. We adopt a multi-stage training to address this issue. In the \ufb01rst stage, we only minimize LC0, LC1, and Ld to ensure that our model can generate a meaningful camera pose. In the second stage, Lp is added to reduce the unde\ufb01ned regions in the output. In the last stage, Lf is included to enhance the overall quality. We train each stage for 200, 100, and 500 iterations. To improve the model generalization, we adopt a data augmentation by randomly changing the virtual camera poses (within \u00b16 degrees) to model possible real-virtual pose deviations in the test sequences. 5. Experimental Results In this section, we show that our deep-FVS achieves state-of-the-art results in quantitative analysis (Sec. 5.1) and a user study (Sec. 5.1). We then validate the effectiveness of the key components in the proposed framework by ablation study (Sec. 5.2). We strongly encourage readers to watch the source and stabilized videos (by our and existing methods) in the supplemental materials. \fMethod Stability Distortion Correlation FOV YouTube stabilizer 0.834 0.978 0.969 0.977 Grundmann et al. [7] 0.818 0.896 0.948 0.635 Wang et al. [29] 0.836 0.850 0.877 0.753 PWStableNet [34] 0.830 0.965 0.973 0.934 Yu et al. [32] 0.842 0.854 0.941 0.793 Choi et al. [3] 0.781 0.875 0.916 1.0\u2217 Ours (sensor only) 0.846 0.888 0.976 0.827 Ours (sensor + \ufb02ow) 0.853 0.937 0.976 0.906 Table 1: Quantitative results. The best one is marked in red and the second best one is marked in blue. \u2217The FOV ratio of Choi et al. [3] is always 1 as they generate fullframe results. For other approaches, the FOV ratio is computed from the scale components of the \ufb01tted homography between the input and stabilized frames. GENERAL ROTATION PARALLAX DRIVING PEOPLE RUNNING Stability Distortion Correlation FOV ratio Figure 4: Per-category quantitative evaluation. 
We compare the stability, FOV ratio, distortion, and correlation with state-of-the-art methods [3, 7, 29, 32, 34] on each category. 5.1. Comparisons with State-of-the-Arts Experimental settings. We compare our deep-FVS with conventional methods [7]1 and YouTube stabilizer (based on [6]), and 4 recent learning-based methods [3, 29, 32, 34]2. We collect 50 videos with sensor logs using Google Pixel 4 (1920 \u00d7 1080 resolution with variable FPS). The video dataset covers a wide range of variations, such as scenes, illuminations, and motions. The sensor data are accurately calibrated and timestamp-aligned with frames (Google Pixel 4 is ARCore certi\ufb01ed [5]). We split our dataset into 16 videos for training and 34 videos for testing, 1We use a third-party implementation from https://github. com/ishit/L1Stabilizer. 2The source code of [3, 29, 34] are publicly available. We obtain the source code of [32] from the authors. where the test set classi\ufb01ed into 6 categories: GENERAL, ROTATION, PARALLAX, DRIVING, PEOPLE, and RUNNING. Fig. 4 shows a few sample frames from each category. Quantitative comparisons. We use three commonly used metrics [3, 18, 29, 32, 34]: Stability, Distortion, and FOV ratio, and de\ufb01ne a Correlation score, to evaluate the tested methods (please refer to the supplemental materials on their de\ufb01nitions). Note that the distortion measures the global geometry deviation from the input frames, while the correlation evaluates the local deformation. The results of all test videos are summarized in Table 1, and Fig. 4 plots the average scores for the 6 categories. Overall, our method achieves the best stability and correlation scores. The YouTube stabilizer and Choi et al. [3] generate nearly uncropped or full-frame results, while our FOV ratio is comparable to PWStableNet [34]. We found the quantitative gaps in existing metrics do not fully re\ufb02ect large visual differences. For example, YouTube stabilizer keeps frames nearly uncropped but outputs relatively unstable videos. PWStableNet [34] produces lots of residual global motions and temporal wobbling, which are not captured by the distortion score. Our method generally obtains better stability and correlation scores on challenging ROTATION, RUNNING, and PEOPLE categories. Qualitative comparisons. We provide visual comparisons of stabilized frames in Fig. 5. Both Yu et al. [32] and Choi et al. [3] use optical \ufb02ows to warp the frames and often generate local distortion (03:20 in demo video). Choi et al. [3] produce severe artifacts when the motion is large (04:55 in demo video). Grundmann et al. [7] estimate a global transformation, and Wang et al. [29] predict low-resolution warping grids. The results of both methods have less local distortion but are not temporally stable as the motion is purely estimated from the video content (03:56, 06:02 in demo video). In contrast, we fuse both the gyroscope data and optical \ufb02ow for more accurate motion inference and obtain stable results without distortion or wobbling. Fig. 6 shows the averaged frame of 11 adjacent frames from a short clip, where the input video contains only handshake motion. Ideally, the stabilized video should look static as it was captured on a tripod. Our result is the sharpest one, while the averaged frames from other approaches [3, 7, 29, 32, 34] are blurry, demonstrating that our result is more stable than others (03:35 in demo video). 
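The stability visualization behind Fig. 6 amounts to a simple temporal average; the sketch below is our own illustration of that check (frame format and function name are assumed), not the authors' evaluation script.

```python
# Average 11 adjacent frames: a tripod-like, well-stabilized clip yields a sharp
# average, while residual shake blurs it. `frames` is a list of HxWx3 uint8 images.
import numpy as np

def average_frames(frames, center, radius=5):
    window = frames[center - radius:center + radius + 1]   # 11 frames
    return np.mean(np.stack(window).astype(np.float64), axis=0).astype(np.uint8)
```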
We highly encourage readers to watch the full video comparisons in the demo to better evaluate the stabilization quality. User Study. We conduct a user study to evaluate human preferences on the stabilized videos. As it is easier for a user to make a judgment between two results instead of ranking multiple videos, we adopt the paired comparison [11, 25] to measure the subject preference. In each test, we show two stabilized videos side-by-side and the input video as a \fInput frame Yu et al. [32] Choi et al. [3] Ours Input frame Grundmann et al. [7] Wang et al. [29] Ours Figure 5: Visual comparisons. Non-rigid distortion, local artifacts and temporal wobbling are observed in Yu et al. [32] and Choi et al. [3], and large rotation deviation observed in Grundmann et al. [7] and Wang et al. [29] (which also exhibits local distortions). Our method is free of such issues. Please refer to our supplemental videos on the full results of all the methods. Input frame Grundmann et al. [7] Wang et al. [29] PWStableNet [34] YouTube stabilizer Yu et al. [32] Choi et al. [3] Ours Figure 6: Stability comparisons. We take a video with almost no camera motion except handshakes and average 11 adjacent frames. Our average frame is sharper than other methods, indicating that our result is more stable. Please refer to our supplemental videos on the full results of all the methods. reference. We ask the participant the following questions: 1. Which video is more stable? 2. Which video has less distortion? 3. Which video has a larger FOV? In total, we recruit 50 participants online, where each participant evaluates 18 pairs of videos. The videos are shuf\ufb02ed randomly when presenting to each user. All the methods are compared the same number of times. Table 2 shows that our method is preferred in more than 95% of comparisons regarding the stability and 93% regarding the visual quality (less distortion). Our results have larger FOVs than Grundmann et al. [7] and Wang et al. [29] as these two approaches apply excessive cropping to avoid the irregular boundaries. Our FOV is comparable to PWStableNet [34] as users cannot tell the difference (\u224350%). Yu et al. [32] generates results with a large FOV in most cases, but applies excessive cropping when the video motion is large (e.g., running). Choi et al. [3] generates fullMore stable Less distortion Larger FOV vs. Grundmann et al. [7] 98.4\u00b12.3% 95.9\u00b13.5% 82.1\u00b16.9% vs. Wang et al. [29] 98.4\u00b12.3% 95.1\u00b13.9% 77.2\u00b17.5% vs. PWStableNet [34] 91.1\u00b15.1% 89.4\u00b15.5% 48.0\u00b19.0% vs. Yu et al. [32] 92.7\u00b14.7% 93.5\u00b14.4% 38.2\u00b18.7% vs. Choi et al. [3] 96.7\u00b13.2% 97.6\u00b12.8% 27.6\u00b18.0% vs. YouTube stabilizer 96.7\u00b13.2% 88.6\u00b15.7% 23.6\u00b17.6% Average 95.7\u00b11.5% 93.4\u00b11.8% 49.5\u00b13.6% Table 2: Results of user study. Our results are more stable with less distortion and overall a comparable \ufb01eld-of-view. frame results but at the cost of visible distortion. YouTube stabilizer applies lightweight changes to preserve most of the input content, but the results are less stable. The user study demonstrates that our method generates more stable results with fewer visual artifacts and distortions, while the amount of cropping is similar to other approaches. \f(a) Analysis on relative pose (b) Analysis on LSTM (c) Analysis on loss functions Figure 7: Ablation studies on relative poses, LSTM and losses. 
(a) The model using relative poses can output more stable poses and follow the real camera motion well. (b) Without LSTM, our model cannot learn motion patterns well and often generate unstable prediction. (c) The protrusion loss Lp reduces the unde\ufb01ned region, and the optical \ufb02ow loss Lf further improves the smoothness. 5.2. Ablation Study Importance of using relative poses. As the same motion patterns can be converted to similar relative poses, it is easier for the model to infer the motion pattern from rotation deviations instead of the absolute poses. Using the relative poses also makes the model training more numerically stable. Fig. 7(a) shows that our method with relative poses can follow the real camera poses well for a PANNING case. In contrast, the model using absolute poses deviates away from the real motion. Importance of LSTM. The LSTM unit carries the temporal information (e.g., motion state) and enables the model to output state-speci\ufb01c results. With the temporal information, the LSTM can also reduce high-frequency noise and generate more stable poses. As shown in Fig. 7(b), when replacing the LSTM with an FC layer, the output poses contain more jitter, resulting in less stable videos. 0 20 40 60 80 100 120 140 600 800 1000 1200 1400 1600 1800 2000 Horizontal position Figure 8: Feature trajectories before and after stabilization. Grey: input video. Blue: our method with sensor only (stage 2). Red: our method with sensor and optical \ufb02ow fusion (stage 3). Our \ufb01nal fused model produces the most stable trajectories which also better follow the real motion. Importance of optical \ufb02ow loss. We compare the sensoronly stabilization results in stage 2 and the full fused results with the optical \ufb02ow term Lf in stage 3 (Sec. 4.4). The optical \ufb02ow loss enables the model to adapt to scene content (e.g. parallax), which can improve stability and increase the FOV as shown in the last two rows of Table 1. We also visualize the feature trajectories detected from the KLT tracker [21] in Fig. 8. Our method with sensor and optical \ufb02ow fusion (red curves) can follow the original camera motion well and maintain the smoothness. Importance of other losses. Fig. 7(c) shows the x-axis virtual rotation for a RUNNING case. We see a step-by-step stability improvement (smoother motion curves) after introducing each loss. We also provide the comparisons of execution time in the supplementary material. 6. Limitations and Conclusion In this work, we present deep Fused Video Stabilization, the \ufb01rst DNN-based unsupervised framework that utilizes both sensor data and images to generate high-quality distortion-free results. The proposed network achieves high-quality performance using joint motion representation, relative motion history, unsupervised loss functions, and multi-stage training. We have demonstrated that our method outperforms state-of-the-art alternatives in both quantitative comparisons and user study. We will release the source code and the video dataset to facilitate future research. Unlike image-based methods, our framework requires additional sensor data. Fortunately, most modern smartphones have a well-synchronized sensor and camera system [5] which makes the requirement minimal. Our model does not support hard FOV preservation. To address this, one can tune the protrusion loss and apply post-processing to pull the virtual camera back toward the real motion when \fthe virtual camera pose deviates from the real pose too much. 
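The post-processing suggested above, pulling the virtual camera back toward the real pose once it deviates too much, could for instance be realized by blending the two rotations. The sketch below is purely illustrative; `max_deg` and `strength` are hypothetical parameters, not values from the paper.

```python
# Possible pull-back post-processing for the hard-FOV limitation (our sketch).
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def pull_back(r_virtual: Rotation, r_real: Rotation, max_deg=6.0, strength=0.5):
    deviation = (r_virtual * r_real.inv()).magnitude()      # spherical angle in radians
    if np.degrees(deviation) <= max_deg:
        return r_virtual
    keys = Rotation.from_quat(np.stack([r_virtual.as_quat(), r_real.as_quat()]))
    return Slerp([0.0, 1.0], keys)([strength])[0]            # blend toward the real pose
```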
Our experiments also show a discrepancy between the existing metrics and user preference. Closing this gap with more human perception studies and de\ufb01ning representative metrics will enable more effective learning-based solutions." + }, + { + "url": "http://arxiv.org/abs/1908.00777v2", + "title": "DAWN: Dual Augmented Memory Network for Unsupervised Video Object Tracking", + "abstract": "Psychological studies have found that human visual tracking system involves\nlearning, memory, and planning. Despite recent successes, not many works have\nfocused on memory and planning in deep learning based tracking. We are thus\ninterested in memory augmented network, where an external memory remembers the\nevolving appearance of the target (foreground) object without backpropagation\nfor updating weights. Our Dual Augmented Memory Network (DAWN) is unique in\nremembering both target and background, and using an improved attention LSTM\nmemory to guide the focus on memorized features. DAWN is effective in\nunsupervised tracking in handling total occlusion, severe motion blur, abrupt\nchanges in target appearance, multiple object instances, and similar foreground\nand background features. We present extensive quantitative and qualitative\nexperimental comparison with state-of-the-art methods including top contenders\nin recent VOT challenges. Notably, despite the straightforward implementation,\nDAWN is ranked third in both VOT2016 and VOT2017 challenges with excellent\nsuccess rate among all VOT fast trackers running at fps > 10 in unsupervised\ntracking in both challenges. We propose DAWN-RPN, where we simply augment our\nmemory and attention LSTM modules to the state-of-the-art SiamRPN, and report\nimmediate performance gain, thus demonstrating DAWN can work well with and\ndirectly benefit other models to handle difficult cases as well.", + "authors": "Zhenmei Shi, Haoyang Fang, Yu-Wing Tai, Chi-Keung Tang", + "published": "2019-08-02", + "updated": "2019-08-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Despite recent progress in deep learning and pertinent research on video object tracking, as exempli\ufb01ed in VOT challenges [17, 18] and OTB benchmarks [35, 36], top contenders in these challenges and benchmarks still have problem in unsupervised object tracking where for a given video only the bounding box in the \ufb01rst frame is given and no reset is allowed, in the presence of total occlusion, abrupt changes in target appearance, multiple object instances, and similar foreground and background features, see Figure 1. \u2217Equal contribution ECO ADNet DAWN DaSiamRPN (a) girl: total occlusion (b) helicopter: abrupt changes in target appearance (c) basketball: multiple object instances (d) \ufb01sh: similar foreground and background features Figure 1. DAWN can handle various dif\ufb01cult situations in unsupervised video object tracking whereas existing methods (ECO [7], ADNet [40], DaSiamRPN [42]) fail in at least one of them. In this paper, we introduce a deep neural network augmented with both foreground and background memory blocks which are updated by an attention LSTM as a memory controller for unsupervised video object tracking. 
Central to our network architecture is an external memory module to remember (and forget) evolving target object and scene structures which are inspired by [23, 39], thus setting such memory-augmented tracking methods apart from top tracking methods such as [25, 22] (online learning), [40, 41, 4] (deep reinforcement learning), and [7, 9] (coupling deep features with traditional features). The central issue in unsupervised video object tracking is that target appearance given in the first frame is likely to dramatically change during the course of tracking, which can be complicated by occlusion and image degradation such as motion blurs. Online tracking is a typical method to adjust a given model to adapt to changes during tracking. However, updating network parameters through online backpropagation can be quite computationally expensive, as opposed to updating an external memory block where a lightweight read and write strategy can be adopted. Such memory updating strategy for unsupervised object tracking, on the other hand, is susceptible to problems such as total occlusion and severe motion blurs which can contaminate memory blocks. To protect the memory from erroneous update due to the above problems, we utilize an attention LSTM as a memory controller. The attentional module attenuates the effect of noise by focusing on the target despite partial occlusion or similar background features, thus making the memory I/O less vulnerable in our method. By adding an identical memory module to remember (and forget) the evolving background as well, our dual memory augmented network (DAWN) can suppress confusing background that may look similar to the tracking target, and thus can seamlessly deal with the above difficult tracking situations which are problematic to conventional trackers. Such confusing background can distract these previous models and make them start tracking the occluder and thus the wrong object. To focus on dual memory and attention LSTM, we fix the aspect ratio of our bounding boxes on DAWN to make us comparable to previous memory-based methods [39, 23]. To eliminate contributing factors other than the two technical contributions, we do not have sophisticated engineering other than restarting DAWN after total occlusion, so we evaluate our model in VOT2016 [17] and VOT2017 [18] competitions including comparison with the VOT2018 [19] real-time champion, showing DAWN performs significantly better than state-of-the-art trackers while running fps > 10. 2. Related work 2.1. Deep Feature in Tracking Over the past decade, we have witnessed the significant development of deep learning on many important computer vision tasks, particularly object classification and detection [29, 12, 26]. Recently deep learning has been extensively employed in tracking thanks to its outstanding feature representations. A number of trackers, coupling deep features with traditional features as correlation filters such as C-COT [9] and ECO [7] have shown their effectiveness on various tracking benchmarks [35, 36] and challenges [17, 18]. Tracking methods based on Tracking-by-Detection [30, 38, 22] build a classifier that separates the target from the enclosing background. However, training data of these online-only approaches is the input video itself which fundamentally limits their learned models.
The MDNet [25] solves this problem by end-to-end learning of an of\ufb02ine deep feature extractor, where the classi\ufb01er will be re\ufb01ned online. Although it has achieved competitive performance, its speed is restricted by online training and updating. Deep feature matching based methods are gaining much attention because of its excellent speed and tracking performance. The GOTURN [13] learns target tracking states by comparing and matching feature pairs in consecutive frames, so it cannot handle target occlusion in principle. The SiamFC [2] uses fully convolutional Siamese networks to match features between template frame and target frame. However, this method only utilizes the \ufb01rst frame as the template which will not work well when the tracking object undergoes notable deformation in the long term. 2.2. Memory Structure in Tracking To solve the static template frame problem in GOTURN and SiamFC mentioned above, memory blocks are adopted in a number of contemporary trackers. The RDT [4] uses a template pool to store the target\u2019s different appearances. Even though its template pool updating is trained by deep reinforcement learning [41], the target appearances written and read are discrete. MemTrack [39] uses NTM (Neural Turing Machine) [10] as its memory block to store continuous feature of the target, and uses LSTM [15] to dynamically control memory reading and writing. They used gated residual template learning to manage the quantity of retrieved memory which is used to mix with the starting template. This memory architecture is commonly used in many \ufb01elds of deep learning, especially in one-shot learning, such as [3, 28]. On the other hand, MAVOT [23] retrieves and memorizes the information of both foreground and background to support background suppression. A similar idea was adopted in DSiam [11] where a dynamic Siamese network was used for target image variation and background suppression. Our improved attention LSTM memory controller is closely related to [39] and background memory to [23], which will be explained in the next section. 2.3. Attention Mechanism in Tracking Attention models, initially applied in image recognition tasks, was then combined with a recurrent model to form Recurrent Attention Model (RAM) [24]. The RAM soon became popular in multiple areas such as image captioning [37], image classi\ufb01cation [33], pose estimation [6], etc. In multiple objects tracking, STAM [5] uses the attention model to deal with occlusions and interactions among targets. For single object tracking, RASNet [34] introduces general attention, residual attention and channel attention. MemTrack [39] uses a common RAM structure similar to those in NLP tasks to roughly locate the object, and provide the memory block a correct retrieval key. However, we found that this structure can give bad location estimation and consequently, it tracks the background and memorizes the relative location of the object context. In this paper, we modify the RAM structure to produce more accurate estimations of the target\u2019s location. 
While a single object is 2 \fROI Attention LSTM Foreground Memory LSTM Image Feature Extractor Read Read Bounding Box Feature Extractor Feature Extractor Write Write Image B Receive foreground memory read in last frame Feature Extractor Feature Extractor Feature Extractor Feature Extractor Feature Extractor Feature Extractor Feature Extractor Feature Extractor Feature Extractor Feature Extractor Feature Extractor Feature Extractor Feature Extractor Background Memory Frame Image ROI Foreground Image Background Image Foreground Background Heatmap Average Pooling Send foreground memory read to next frame Figure 2. General structure of our network, where attention LSTM (Figure 3) and dual memory (Figure 4) will be detailed. Blue rectangle in the frame image is the Region Of Interest (ROI) where ROI features are extracted. ROI features are then fed into attention LSTM together with foreground memory read in the last frame, which are also fed into a conventional LSTM. The outputs from two LSTMs are used to read memories from foreground and background memory blocks respectively. These memories are then used to generate a heat map to predict the bounding box which divides ROI into two parts, a foreground image, and a background image. Features extracted from the two images are then used to write into foreground and background memory blocks respectively. tracked in unsupervised, it may be partially or totally occluded during tracking, which will cause the model especially one with memory blocks to track another object instead of the original target. Our attention mechanism is designed to solve the occlusion problem, which demonstrates more robust behavior than memory structures without attention mechanism such as [23]. 3. Dual Augmented Memory Network We present our DAWN model in detail. Figure 2 shows the overall framework: our model consists of a dual memory structure for remembering the evolving foreground and background features, an improved attention LSTM and a conventional LSTM for controlling the read/write operations of the foreground and background memory respectively. Note that no backpropagation is run, online or of\ufb02ine during the tracking process. All feature extractors in DAWN share the same weights. In this section, \u2217, \u00b7 and \u2299 denote element-wise multiplication, matrix multiplication and convolution operation respectively. 3.1. Feature Extraction We follow the Fully Convolutional Neural Networks structure from SiamFC [2] to extract deep features within the ROI which include the deep features for both target and background. The output will then be fed into three branches, namely, ROI branch, foreground branch, and background branch. The ROI branch receives an ROI patch from the newest frame and bounding box predicted in the last frame. The output of the ROI branch is an n \u00d7 n \u00d7 c matrix F, which consists of vectors fi,j of size c where 0 \u2264i, j < n. Working with the memory I/O, F will be used to predict the new bounding box. The foreground branch receives the newest target patch from the bounding box predicted in ROI branch. The output of the foreground branch is an m \u00d7 m \u00d7 c matrix Ffore, which will be written to foreground memory. The background branch receives the background information in ROI and outputs the feature of the background branch, an n \u00d7 n \u00d7 c matrix, which will be written in background memory after an additional average pooling layer. 3.2. 
Attention Based on Foreground Memory In video object tracking, a bounding box is an axisaligned rectangle, so it includes a lot of background regions as well. Thus features extracted from a given bounding box contain noises which adversely affect both the performance and robustness of the model. To extract object features with less noise, Yang and Chan (MemTrack) [39] introduced an attention LSTM as a memory controller, but their attention module acts more like a simple protocol for memory reading and writing. Often times, their attention seems to be random and consequently the model tracks some part of the background and remembers the target\u2019s relative position with respect to the tracked background regions. Thus, once the target\u2019s relative position starts to change, MemTrack will lose track, see Figures 9 and 10. Since attention can be roughly interpreted as an estimation of the target object\u2019s position, the concept of memory augmentation can also be applied here. Speci\ufb01cally, to achieve better estimation, we introduce a memory augmented attention block whose difference from the attention block in MemTrack [39] is shown in Figure 3. In DAWN, we utilize the memory read from the last frame, i.e., a m \u00d7 m \u00d7 c matrix M consists of vectors mi,j of size c, where 0 \u2264i, j < m. First, to generate attention score, M is convolved with F which outputs an a \u00d7 a matrix A, as weights of attention scalars ai,j, where 0 \u2264i, j < a and a = n \u2212m + 1, whose 3 \fROI Feature Attention Average Pooling LSTM F: n n c A: a a Favg: a a c Fatt: a a c FC + Tanh Hidden State (a) MemTrack ROI Feature Memory Extracted (Last Frame) Attention Average Pooling LSTM F: n n c M: m m c A: a a Favg: a a c Fatt: a a c (b) DAWN Figure 3. Attention modules of MemTrack [39] and DAWN. value is given by ai,j = exp(ri,j) a\u22121 X s=0 a\u22121 X t=0 exp(rs,t) , (1) where ri,j = m\u22121 X s=0 m\u22121 X t=0 f \u22a4 i+s,j+t \u00b7 ms,t. (2) Then, we use average pooling on F with \ufb01lter size m\u00d7m and stride size 1, and obtain a \u00d7 a \u00d7 c matrix Favg: Favg = AveragePoolingm\u00d7m(F). (3) Finally, we use the attention score A and averaged feature Favg to produce attention feature Fatt: Fatt = A \u2217Favg. (4) Comparing with MemTrack [39], our attention mechanism has the following advantages: 1. We use the last frame to extract the memory for generating attention, more inter-frame information being incorporated in DAWN during tracking. This is a more direct approach than MemTrack [39], which uses the LSTM hidden layer and current ROI feature to generate attention. 2. Our mechanism allows us to use the \ufb01rst frame target feature to initialize the memory, which is more stable and robust than LSTM hidden layer initialization. See the \ufb01rst frame in Figure 10. 3. The uncompressed memory extracted from the last frame has less information loss than the compressed LSTM hidden layer. 3.3. Memory Controller DAWN uses standard LSTM with layer normalization [1] and dropout regularization [31] in both foreground and background memory controllers, see Figure 4. The input to foreground and background memory controller are Fatt and Favg respectively, computed in section 3.2. With the input feature and the previous hidden state ht\u22121, the hidden state updates to ht in both controllers. Our memory reading and writing process are inspired by MemN2N [32] and MemTrack [39]. Here, we describe foreground memory reading and writing process. 
The background memory read/write is similar. During a reading process, a template is retrieved from all the memory slots as a weighted sum, where the read weights are the softmax result of the cosine similarity between the given read key rt and all the memory keys. The read key rt is calculated by hidden states ht of LSTM: rt = Wr \u00b7 ht + br, (5) where Wr and br are weight matrix and bias. The memory key is the output of m \u00d7 m average pooling on the corresponding memory slot. During a writing process, the write weight wt controls whether to update the memory and which slots to update. Using hidden states ht, we apply the same method as MemTrack [39] to generate wt. The writing process is de\ufb01ned by Mt = (1 \u2212wt) \u2217Mt\u22121 + wt \u2217Ffore, (6) where Ffore is the output of the foreground branch. For LSTM initialization, we use the initial target\u2019s feature with average pooling and tanh activation to generate hidden state h0 and cell state c0. We apply the Residual Template Learning adopted in MemTrack [39] in foreground memory but not in background memory. 3.4. Dual Memory Structures ROI Feature Attention LSTM Foreground Memory Read LSTM Background Memory Read Average Pooling Bounding Box Heatmap Figure 4. Dual Memory Structure of DAWN. 4 \fSince memory structure with foreground-only features shows limitations in tackling problems such as occlusion, multiple objects, and similar background features, etc., DAWN uses dual memory blocks, see Figure 4. Foreground memory can remember the appearance of the target, while background memory can record surrounding variation. Reading from foreground and background memory has been discussed in Section 3.2 and 3.3. After reading foreground memory M and background memory Mback, both with dimension m \u00d7 m \u00d7 c, we obtain M by non-maximal suppression using subtraction: M = M \u2212Mback (7) M is convolved with the F to obtain the heat map H with dimension a \u00d7 a, which is calculated by: H = F \u2299M. (8) We up-sample the heat map H as in SiamFC [2] to predict the bounding box. Then we extract features from the image cropped with the bounding box and write them to the corresponding memory blocks. An additional average pooling layer is applied before writing background features into background memory. Comparing with memory blocks only involving foreground features [39]: 1. Occlusion. DAWN can suppress the adverse in\ufb02uence of possible memory contamination caused by occlusion. Previously, when the target is being occluded, features of the occluding object will be written into and thus contaminate foreground memory (see \u2018girl\u2019 and \u2018frisbee\u2019 in Figure 11), consequently causing the model to track a false background object. But with DAWN\u2019s background memory block, since the memory of a background object has been recorded in previous frames, subtracting its memory can mitigate the above contamination while keeping track the target after occlusion. When detecting total occlusion, DAWN will not update bounding box until the target re-appears in a subsequent ROI. 2. Similar Background. DAWN can distinguish other similar background objects or features, which usually confuse trackers with only foreground memory. Since background features have been memorized, although they look similar to the target in foreground memory, DAWN can still differentiate them after subtracting the relevant background memory. Using cosine window to penalize large displacements can suppress similar objects in the background. 
See \u2018basketball\u2019, \u2018\ufb01sh\u2019 in Figure 1, and \u2018godfather\u2019, \u2018gymnastics\u2019 in Figure 11. 3.5. DAWN-PRN: DAWN with Region Proposal Network We change our tracking backbone from SiamFC [2] to SiamRPN [21], and call it DAWN-RPN, which is shown Feature ROI M Regression BBox Classifcation Branch Box Bounding Memory Write Foreground Memory Write Background 1 Figure 5. DAWN-PRN: M is same and de\ufb01ned by Eq. (7). in Figure 5. In DAWN-RPN, we adopt the techniques in SiamRPN [21] to generate bounding boxes. The reading and writing operations of memory is unchanged. 4. Experiments We will present the comparative results with other trackers using VOT toolkits on two challenging tracking competitions: VOT2016 and VOT2017, and then show the respective improvement due to the attention and memory module. Please see our videos at https://zhmeishi.github.io/DAWN/ . 4.1. Implementation Details We apply the same Alex-like [20] CNN feature representation as SiamFC [2]. The input image sizes of the ROI, target and background branch are respectively 255\u00d7255\u00d73, 127 \u00d7 127 \u00d7 3 and 255 \u00d7 255 \u00d7 3. The output matrix size of the foreground branch is 6 \u00d7 6 \u00d7 256, and, for the ROI branch and background branch, the size is 22 \u00d7 22 \u00d7 256. Our DAWN is pre-trained of\ufb02ine on the video object detection dataset of ImageNet Large Scale Visual Recognition Challenge (ILSVRC15) [27]. The of\ufb02ine training strategy is the same as MemTrack [39], where the training optimizer is Adam [16] with initial learning rate 1e\u22124 and the weight decay set to be 0.02 every 10, 000 iterations. The number of memory slots is 8. We suppress the heat map with a cosine window by an exponential factor of 0.27. The target image size is 1.32 times bounding box size. We update the target size with exponential smoothing st = (1\u2212\u03b1)st\u22121 +\u03b1snew to deal with scale changes, where s is the target size and the exponential factor \u03b1 is 0.6. Our tracker is implemented in Python using the TensorFlow framework. All of the experiments were conducted on the following hardware speci\ufb01cations: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz, 16 GB RAM, and NVIDIA GPU GTX1070. 4.2. VOT2016 and VOT2017 Results Using the VOT toolkit, we compare DAWN with all fast trackers (fps > 10) in VOT2016 and VOT2017 competitions which respectively contains 60 video sequences. Figure 6 shows the comparison results. Only \ufb01ne-tuned on the 5 \fFigure 6. DAWN is ranked 3rd in VOT2016 (left) and VOT2017 (right) among all the VOT fast trackers running at fps > 10. training dataset, DAWN has consistently good performance in VOT2016 and VOT2017 with same hyper-parameters. Though DAWN ranks the third in VOT2016 and VOT2017, where the rank is measured by the expected overlap ratio (EAO), but our bounding box aspect is \ufb01xed, DAWN still has an excellent success rate in both VOT2016 (Table 1) and VOT2017 (Table 2) with the least number of failures. Measures DAWN STAPLEp Staple EAO (\u2191) 0.28 0.29 0.30 Failures (\u2193) 18.58 24.32 23.89 Table 1. Model Performance in VOT2016 Measures DAWN ECOhc SiamDCF EAO (\u2191) 0.24 0.24 0.25 Failures (\u2193) 26.58 28.77 29.41 Table 2. Model Performance in VOT2017 4.3. VOT unsupervised Tracking Here we choose videos that are very dif\ufb01cult to track in the unsupervised mode and provide brief information as well as major tracking dif\ufb01culties. 
Then we will tabulate the tracking results of DAWN and current state-of-the-art trackers. The following summarizes the major problems in unsupervised tracking with their abbreviations used in Table 3: OC: Occlusion (Total or Partial) MB: Motion Blur (Severe) AC: Abrupt Changes in Target Appearance SB: Similar Background Objects or Features Apart from a good success rate, DAWN also has excellent performance in unsupervised tracking regarding accuracy per frame as the standard. As shown in the Figure 7, DAWN enjoys an overwhelming success compared with fast trackers running at fps > 10 in VOT2016, and a performance better than most fast trackers running at fps > 10 in VOT2017. DAWN can focus on not losing target and the result shows that it can keep track of many hard sequences that current SOTA cannot. Further, we used DAWN as the pretrained model and trained the RPN branch, which was similarly done in Figure 7. Comparison results on unsupervised tracking on VOT2016 (left) and VOT2017 (right). Fast trackers with fps > 10 in VOT toolkit are tested. In the above plots, x-axis indicates overlap threshold and y-axis accuracy. Figure 8. Unsupervised result of VOT2016 (left) and VOT2017 (right), where x-axis is overlap threshold and y-axis is accuracy. training SiamRPN [21]. Then we tested DAWN-RPN on VOT2016 and VOT2017 using the VOT toolkit, and compare DAWN-RPN with DAWN and SiamRPN\u2013 (which is our own implementation since the training code is not released). Figure 8 shows the consistent improvement of DAWN-RPN over SiamRPN\u2013 (and the original DAWN as well) in the unsupervised competitions in VOT2016 and VOT2017. 4.4. Attention This section reports the comparison results between the attention module of DAWN and MemTrack evaluated under the VOT2016 dataset. Both models are trained on the VID dataset of ILSVRC [27]. 6 \fVideo DAWN MemTrack MAVOT SiamFC KCF DSST DaSiam ADNet C-COT ECO wiper (OC, MB) \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 bolt1 (SB) \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 butter\ufb02y (AC, SB) \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 helicopter (AC) \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 godfather (OC, SB) \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 soccer2 (OC, MB) \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 ants1 (SB) \u2713 \u2713 \u2713 \u2713 \u2713 basketball (OC, SB) \u2713 \u2713 \u2713 \u2713 \u2713 motocross2 ( MB,AC) \u2713 \u2713 \u2713 \u2713 \u2713 ball2 (MB,SB) \u2713 \u2713 \u2713 \u2713 gymnastics1 (MB, SB) \u2713 \u2713 \u2713 \u2713 girl (OC) \u2713 \u2713 \u2713 zebra\ufb01sh1 (AC, SB) \u2713 \u2713 \ufb01sh2 (AC, SB) \u2713 frisbee (OC, AC) \u2713 fps 15 27 7 36 64 41 60 4.5 0.27 4.6 Table 3. Results on state-of-the-art trackers, especially C-COT [9] the most precise tracker of VOT2016 and VOT2017, and DaSiam [42] the champion of VOT2018 real-time competition, in tracking unsupervised sequences with different technical dif\ufb01culties. KCF [14], DSST [8], and DaSiam [42] are real-time trackers. \u2713means the tracker can track the whole sequence from start to end successfully. Running speed in VOT toolkit are measured under GTX 1070. (a) DAWN (b) Memtrack (c) MAVOT Figure 9. Results of two sample frames selected in hand and book respectively. Time axis is running downward. Figure 9 shows the respective attention of DAWN and MemTrack: the blue box is the ROI where the attention is computed, and the inner box (red) is the bounding box produced by the respective tracker. 
Within a given ROI, brightness indicates the scale of the attention: the brighter the pixel, the higher the attention at that pixel. Observing the results on the two videos \u2018hand\u2019 and \u2018book\u2019, when tracking starts, MemTrack\u2019s attention is quite random and mostly focused on the background, causing it to lose track of the object in subsequent frames due to a background object with similar features (in video \u2018hand\u2019 under severe motion blur both the face and hand are dominated by \ufb02esh color), or changes in the object\u2019s relative position on the background (in video \u2018book\u2019). On the other hand, DAWN produces a more precise attention, thus giving a stronger focus on the object in the early stage of tracking and making it less vulnerable to losing track in later frames. A stronger focus helps the model to be more robust against similar objects in video \u2018hand\u2019, and a precise attention prevents the model from tracking background features. (a) DAWN (b) MemTrack Figure 10. Attention in butter\ufb02y. In \u2018butter\ufb02y\u2019 (Figure 10) a false initial attention causes MemTrack not moving at all, whereas DAWN with the proper initialization can readily track the object all along. 4.5. Memory We present a number of examples to show how background memory can effectively address occlusion, confusing background features, multiple object instances and abrupt changes of appearance (Figure 11). Both partial and total occlusion occur in \u2018frisbee\u2019 and \u2018girl\u2019. In both videos, DAWN\u2019s background memory module can suppress the occluder (i.e., the guy), allowing the original target to be correctly tracked even after total occlusion. In \u2018godfather\u2019, the white veil and white hat distract the trackers without background memory. But DAWN still works well thanks to its effective background memory. No7 \fDAWN-RPN MAVOT MemTrack DAWN SiamFC 1 Figure 11. From top to bottom: frisbee, girl, gymnastics1, godfather, zebra\ufb01sh1. tice that due to abrupt changes of target appearance and similar background features, tracking in \u2018zebra\ufb01sh1\u2019 becomes an almost impossible task for most of the tested trackers, especially those with a \ufb01xed aspect ratio of the bounding box. However, with attention and dual memory, DAWN can robustly track the zebra\ufb01sh, despite the low overlap ratio due to its \ufb01xed aspect ratio. Figure 12. Comparison result of average overlap with ground-truth in VOT2016 (left) and VOT2017 (right). Tracker Foreground Memory Background Memory Attention DAWN \u2713 \u2713 \u2713 MemTrack \u2713 \u2713 MAVOT \u2713 \u2713 SiamFC Table 4. Model architecture comparison. Table 4 compares the attention and memory modules among memory-based trackers. Figures 8 and 12 show our clear improvement over others while still running very fast. Tested in the VOT toolkit, DAWN runs at 15 fps, SiamFC [2] at 36 fps, MemTrack [39] at 27 fps, and MAVOT [23] at 7 fps. Table 5 shows that after adding background memory block and using our new attention scheme Measures SiamFC MemTrack DAWNDAWN EAO (\u2191) 0.19 0.23 0.24 0.24 Failures (\u2193) 34.03 30.53 27.93 26.58 Table 5. Performance in VOT 2017, SiamFC: no memory, MemTrack: target memory, DAWN-: dual memory, DAWN: dual memory with improved attention. the success rates are improved. 5. Conclusion We have presented a simple and yet effective memorybased approach for unsupervised video object tracking. 
While strikingly simple in implementation, attention LSTM and dual memory structures enable DAWN to track video objects in unsupervised manner with excellent performance and robustness under challenging scenarios: partial and total occlusion, severe motion blur, abrupt changes in target appearance, multiple object instances, and similar foreground and background features. We have presented extensive quantitative and qualitative experimental comparison: DAWN is ranked third in both VOT2016 and VOT2017 challenges with an excellent success rate among all fast trackers running at fps > 10 and has great performance in unsupervised tracking in both challenges. We further showed using DAWN-RPN that state-of-the-art models can immediately bene\ufb01t by simply augmenting them with the proposed dual memory and attention LSTM. Acknowledgement This research is supported by Tencent and the Research Grant Council of Hong Kong SAR under grant no. 16201818. 8" + } + ], + "Mengdi Wang": [ + { + "url": "http://arxiv.org/abs/2401.12012v4", + "title": "TurboSVM-FL: Boosting Federated Learning through SVM Aggregation for Lazy Clients", + "abstract": "Federated learning is a distributed collaborative machine learning paradigm\nthat has gained strong momentum in recent years. In federated learning, a\ncentral server periodically coordinates models with clients and aggregates the\nmodels trained locally by clients without necessitating access to local data.\nDespite its potential, the implementation of federated learning continues to\nencounter several challenges, predominantly the slow convergence that is\nlargely due to data heterogeneity. The slow convergence becomes particularly\nproblematic in cross-device federated learning scenarios where clients may be\nstrongly limited by computing power and storage space, and hence counteracting\nmethods that induce additional computation or memory cost on the client side\nsuch as auxiliary objective terms and larger training iterations can be\nimpractical. In this paper, we propose a novel federated aggregation strategy,\nTurboSVM-FL, that poses no additional computation burden on the client side and\ncan significantly accelerate convergence for federated classification task,\nespecially when clients are \"lazy\" and train their models solely for few epochs\nfor next global aggregation. TurboSVM-FL extensively utilizes support vector\nmachine to conduct selective aggregation and max-margin spread-out\nregularization on class embeddings. We evaluate TurboSVM-FL on multiple\ndatasets including FEMNIST, CelebA, and Shakespeare using user-independent\nvalidation with non-iid data distribution. Our results show that TurboSVM-FL\ncan significantly outperform existing popular algorithms on convergence rate\nand reduce communication rounds while delivering better test metrics including\naccuracy, F1 score, and MCC.", + "authors": "Mengdi Wang, Anna Bodonhelyi, Efe Bozkir, Enkelejda Kasneci", + "published": "2024-01-22", + "updated": "2024-02-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DC" + ], + "main_content": "Introduction With the increasing importance of data privacy, a giant stride in distributed machine learning has been observed in recent years. As one promising distributed machine learning paradigm, federated learning (FL) has been growing at an astounding rate after its introduction (McMahan et al. 2017). In the common FL settings, data is distributed over numerous end clients, while the central server possesses no data by itself. 
After the server initiates a model and sends the model to clients, each client trains the model locally using its own data. The server periodically aggregates the locally trained models and synchronizes local models of clients with the latest aggregated one. With such a process, FL provides a primary privacy guarantee to a large extent since the server does not require data sharing and is hence preferred in many privacy-preserving scenarios where sensitive data is utilized. Based on the characteristics of participating entities, FL can be further categorized into cross-silo FL and cross-device FL (Kairouz et al. 2021). In cross-silo FL, the target clients are often large-scale institutions such as hospitals, data centers, educational organizations, and high-tech companies. Such stakeholders commonly possess decent resources for computing, storage, and internet connection, while the number of attending institutions is relatively low. Therefore, the probability that each client takes part in all aggregation rounds is high. In contrast to cross-silo FL, cross-device FL focuses more on training on end-user devices like smartphones and personal computers using user data. The scale of participating clients in cross-device FL can be fairly large, while each client may be strongly limited by its computing power and connectivity. As a consequence, only a (small) portion of clients could share their models during a global aggregation. An additional critical aspect of cross-device FL is that each client might only contain data collected from a single user, which exacerbates data heterogeneity across clients. Despite recent advancements in FL, its implementation in practice is still facing some challenges. Among these, slow convergence is a primary concern: a significantly greater number of aggregation rounds are often needed to reach convergence compared to non-FL setups. Several factors contribute to this inefficiency according to (Wu et al. 2023), such as client drift caused by data heterogeneity (Karimireddy et al. 2020), lack of adaptive optimization (Reddi et al. 2020), and the increase in model complexity and data size. One trivial solution proposed in (McMahan et al. 2017) is to increase the client computation load via a larger number of local training iterations. Although this solution vastly speeds up convergence, it multiplies the computation load on the client side, which can be problematic when clients are constrained by computing power, as in the cross-device FL case. Figure 1: Left: pipeline of TurboSVM-FL. Right: test performance of TurboSVM-FL against FedAvg. E indicates the number of client local training epochs. The results were obtained on the FEMNIST dataset using a suboptimal client learning rate. Other existing solutions mainly target either client drift or adaptive optimization. The former often requires optimizing additional objective functions on the client side, which in turn also increases the client com
putation load and even the need for storage, while the latter can be particularly hard to tune because it is often necessary to decide the choice of optimizers and learning rates jointly between server and clients. There also exist solutions that require additional data on the server side, which may increase the risk of privacy leaks if not handled properly. In this paper, we focus on embedding-based neural network classifiers, which means the input samples are encoded into the same space as class representations and their similarity determines the probability of the sample belonging to the class (Yu et al. 2020). We propose a novel federated aggregation strategy for classification tasks called TurboSVM-FL. TurboSVM-FL induces no additional computation cost or storage consumption on the client side compared to the vanilla FL algorithm, and shows great potential in reducing communication rounds, especially when clients are \u201clazy\u201d and train their local models for very few iterations. TurboSVM-FL extensively exploits the property of support vector machine (SVM) and consists of three steps. Firstly, TurboSVM-FL reformalizes classification while treating client models in a model-as-sample way, and fits SVMs using latent class embeddings. Then, it conducts selective aggregation on latent class features that form support vectors. Lastly, TurboSVM-FL applies max-margin spread-out regularization on aggregated class representations upon SVM hyperplanes. By adopting adaptive methods in max-margin spread-out regularization, TurboSVMFL can also benefit from adaptive optimization. The main contributions of this work can be summarized as follows: \u2022 We introduce a novel perspective to interpret classification in FL setting and a model-as-sample strategy, which lay the foundation for further FL improvements such as selective aggregation and outlier detection. \u2022 We propose a novel federated aggregation algorithm named TurboSVM-FL that vastly boosts convergence for federated classification tasks using deep neural networks. The proposed method extensively exploits support vector machine (SVM) to conduct selective model aggregations and max-margin spread-out regularization. \u2022 We conduct experiments on various benchmarks in user-independent validation and show the potential of TurboSVM-FL in reducing communication cost. The benchmarks contain three different tasks covering image classification and natural language processing with noniid data distribution over clients. Related Work Federated Learning The concept of FL was originally introduced in (McMahan et al. 2017). Unlike centralized learning where the goal is to fit an optimal model on a collection of data, FL aims to train a model that delivers superior performance across data subsets. In the remaining of this work, we narrow down our focus to federated classification tasks. In a federated classification task with K classes and N clients, denote the local dataset of each client as D1, ..., DN with Dn = {(x, y)}, n \u2208[N], where (x, y) \u2208RP \u00d7 [K] is \fa sample point of class y \u2208[K]. Let DG = UN n=1 Dn with |DG| = PN n=1 |Dn| describe the collection of local datasets and \u2113(x, y, \u03b8) be the objective function measured on the sample (x, y) with model \u03b8. 
Then, the goal of centralized learning is to find an optimal model \u03b8\u2217that satisfies: \u03b8\u2217= argmin \u03b8 E(x,y)\u223cDG[\u2113(x, y, \u03b8)] (1) In contrast, FL aims to fit a model that performs optimally across clients: \u03b8\u2217= argmin \u03b8 N X n=1 |Dn| |DG|E(x,y)\u223cDn[\u2113(x, y, \u03b8)] (2) The typical workflow of FL can be broken down into three steps. First, a server initializes a model and broadcasts this model to all clients. Then, each client trains the received model on its own dataset for E epochs and sends the trained model back to the server. In the next step, the server aggregates locally trained models into a new global model and synchronizes clients with the latest global model. The last two steps are repeated for multiple rounds until convergence. The first federated aggregation algorithm, FedAvg, was introduced in (McMahan et al. 2017) and applies weighted average over client models: \u03b8G = N X n=1 |Dn| |DG|\u03b8n, \u03b8n = argmin \u03b8 E(x,y)\u223cDn[\u2113(x, y, \u03b8)] (3) where \u03b8G and \u03b8n denote the aggregated global model and the local model of n-th client, respectively. One of those challenges that FL is facing is the large amount of aggregation rounds needed to approach convergence. While increasing local training iterations E can significantly advance convergence, it also vastly increases the computation load on the client side, which can be extremely problematic in cross-device FL. Many follow-up works aim to speed up FL convergence, and they can be mainly categorized into two groups. The first group endeavor to address client drift (Karimireddy et al. 2020) caused by data heterogeneity, while the other group attempt to benefit FL with adaptive learning methods, which we describe as follows. Client Drift in Federated Learning Client drift (Karimireddy et al. 2020) describes the phenomenon that client local models approach individual local optima rather than global optima and their average is drifted away from global optima as well, which is caused by data heterogeneity across client local datasets and can dramatically impact convergence behavior. Various recent works attempt to solve client drift on the client side. SCAFFOLD (Karimireddy et al. 2020) introduces a control variate term to stabilize gradient update. FedProx (Li et al. 2020) proposes an additional loss term based on L2 distance between global model \u03b8G and client model \u03b8n during local training. MOON (Li, He, and Song 2021) and FedProc (Mu et al. 2023) suggest the use of contrastive learning to combat data heterogeneity. The former introduces an objective based on latent features extracted respectively by the global model, current client model, and previous client model, while the latter penalizes the dissimilarity between latent features and class representations. A common drawback of the aforementioned methods lies in that they increase either computation burden or memory consumption or even both on the client side, which can be quite a challenge for end-user devices like smartphones and tablets. There also exists works that aim to solve client drift on the server side. For instance, FedAwS (Yu et al. 2020) addresses an extreme data distribution case where clients may have only data from a single class. FedAwS utilizes spreadout regularizer (Zhang et al. 2017) and raises a penalty term based on cosine similarities among class embeddings on the server side. 
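Before turning to adaptive methods, the FedAvg aggregation step of Equation 3 can be made concrete with a minimal sketch (ours, not the authors' code); the use of PyTorch state dicts and the helper name are illustrative assumptions.

import torch

def fedavg_aggregate(client_states, client_sizes):
    """Minimal sketch of the FedAvg weighted average in Equation 3.
    client_states: list of PyTorch state_dicts, one per participating client
                   (assumption: all clients share the same keys and shapes).
    client_sizes:  list of local dataset sizes |D_n|."""
    total = float(sum(client_sizes))
    global_state = {}
    for key in client_states[0]:
        # theta_G[key] = sum_n (|D_n| / |D_G|) * theta_n[key]
        global_state[key] = sum(
            (size / total) * state[key].float()
            for state, size in zip(client_states, client_sizes)
        )
    return global_state

Weighted averaging keeps the server essentially stateless; the adaptive variants discussed next replace this plain average with a pseudo-gradient step taken by a server-side optimizer.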
Adaptive Federated Learning In centralized learning, advanced adaptive and momentumbased optimization techniques such as AdaGrad (Duchi, Hazan, and Singer 2011), Adam (Kingma and Ba 2014), and Yogi (Zaheer et al. 2018) have shown great success in convergence acceleration. In contrast, in vanilla FedAvg, client models are trained with stochastic gradient descent (Robbins and Monro 1951; Kiefer and Wolfowitz 1952) and server aggregation is (weighted) averaging. Numerous works have been devoted to benefiting FL with advanced server-side optimizers. As a forerunner in this field, (Reddi et al. 2020) proposed a family of adaptive aggregation methods called FedOpt. Different from weighted average in Equation 3, FedOpt computes pseudo-gradient (Chen and Huo 2016; Nichol, Achiam, and Schulman 2018) from client models and updates the global model with a chosen optimizer: \u2206\u2190 N X n=1 |Dn| |DG|\u03b8n \u2212\u03b8G (4) \u03b8G \u2190server optimizer(\u03b8G, \u2212\u2206, \u03b7) (5) where \u03b7 indicates the learning rate. Depending on the choice of optimizer, FedOpt can be derivated into multiple variants. For instance, in (Reddi et al. 2020), the researchers introduced FedAdaGrad, FedAdam, and FedYogi with their names indicating the choice of optimization technique. FedAMS (Wang, Lin, and Chen 2022) suggests the use of AMSGrad (Reddi, Kale, and Kumar 2019) optimizer on the server side, which is an improved version of Adam. According to (Wang et al. 2021a), it can be hard to tune FedOpt-family methods due to the additional implementation of optimizer on the server side, and it is often necessary to search for optimal learning rates jointly for client optimizer and server optimizer. Compared to server-level adaptive learning, adaptive optimization on the client side is less studied. Client adaptivity poses its own challenges, particularly due to the potential for the states of client optimizers to significantly diverge from each other as a result of data heterogeneity. To address this challenge, (Wang et al. 2021b) proposes to reset client optimizer status in each global round, while LocalAMSGrad (Chen, Li, and Li 2020) and FAFED (Wu et al. 2023) suggest the sharing and aggregation of client optimizer states similarly to client models. These methods \fcan be meaningless if clients are limited by computation resource and can only train their local models for few epochs. Support Vector Machine Support vector machine (SVM) (Cortes and Vapnik 1995) is a widely-used and robust supervised learning model that can be used for both regression and classification tasks. Unlike traditional linear classifiers, where the decision boundary is a linear combination of all data points, the separating hyperplane of SVM is a combination of selected samples, which are also called support vectors and lie the closest to the decision boundary. While common classifiers minimize solely the classification objective, SVM struggles to control the trade-off between discriminative error minimization and margin maximization while allowing some misclassifications. The margin refers to the distance between the support vectors of different classes and the decision boundary. The primal problem of SVM can be formalized as: argmin w,\u03b61,...,\u03b6m 1 2||w||2 + \u03bb m X i=1 \u03b6i, s.t. yiw\u03c4xi \u22651 \u2212\u03b6i, \u03b6i \u22650 (6) where \u03b6i defines the distance of a misclassified sample to its correct margin plane. The coefficient \u03bb controls the magnitude of regularization. 
A smaller \u03bb prioritizes larger margins and may result in a greater number of support vectors. It is important to note that there are several prior works that integrate FL and SVM, such as (Bakopoulou, Tillman, and Markopoulou 2021; Navia-V\u00b4 azquez, D\u00b4 \u0131az-Morales, and Fern\u00b4 andez-D\u00b4 \u0131az 2022; Bemani and Bj\u00a8 orsell 2022). Our approach is distinctly different from them in the sense that our algorithm leverages SVM to improve global aggregation and offers a solution to the problem \u201chow to FL\u201d. In contrast, in previous works, SVM serves as the core model to be trained in FL and thus addresses the question of \u201cwhat to FL\u201d. Methodology In this paper, we propose a novel federated aggregation algorithm for classification task called TurboSVM-FL that is able to boost convergence significantly. TurboSVM-FL sophisticatedly leverages SVM to conduct selective aggregation and max-margin spread-out regularization. By adopting an adaptive optimizer, TurboSVM-FL can also benefit from adaptivity. Compared to vanilla FedAvg (McMahan et al. 2017), TurboSVM-FL requires no additional computation, storage, or communication cost on the client side. A pseudocode for TurboSVM-FL is given in Algorithm 3, and a graphical illustration is depicted Figure 1. In the following, we present our algorithm in detail, starting by reformalizing the classification task as \u201cfinding nearest neighbor\u201d. We reduce our discussion to embeddingbased deep learning networks and ignore the logit activation, which means the model \u03b8 can be divided into two parts: g and W, with f\u03b8(x) = Wg(x), where g : RP \u2192Rd is the feature encoder that maps input x \u2208RP to a latent representation in Rd and W is the last projection layer containing class embeddings. In a classification task with K classes, W will be of shape RK\u00d7d and is also called logit layer. Then, the class inference \u02c6 y of a sample (x, y) is indifferent from finding the nearest neighbor to g(x) in w1, ..., wK with wk \u2208Rd being the k-th row in W indicating the class embedding of class k. Implicitly, the metric used to measure distance is vector inner product, and the choice of nearest neighbor is regardless of activation function and loss function. For simplicity, we ignore bias terms, and the class inference can then be represented as: \u02c6 y = argmax k\u2208[K] sim(wk, g(x)), sim(wk, g(x)) = w\u03c4 k \u00b7 g(x) (7) Hence, training the last layer in classification can be regarded as encouraging the correct class embedding to approach instance embedding while discouraging all other class embeddings to be close. Next, TurboSVM-FL treats client models as sample points for a secondary classification task at higher level, and fits SVM using these samples. The SVM is constructed as a multi-class classification among K classes, and the SVM training samples are exactly the collected class embeddings, i.e., {(wn k, k)|k \u2208[K], n \u2208[N]}. In other words, for each class k, there are N sample points {(w1 k, k), ..., (wN k , k)}, and each sample point is the k-th row of the weight matrix of the logit layer from a client model. In vanilla FL, the class embeddings in the logit layer of the global model are obtained by averaging client models, in other words, wG k = PN n=1 |Dn| |DG|wn k. Due to data heterogeneity among clients, the class embeddings of some clients can be drifted away from global optima and hence seriously disturb the aggregation. 
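To make the model-as-sample construction concrete, the following sketch (our illustration, not the reference implementation) collects the rows of each client's logit layer as labeled samples and fits a one-vs-one SVM over them; the layer key, the linear kernel, and the regularization constant are assumptions.

import numpy as np
from sklearn.svm import SVC

def fit_class_embedding_svm(client_states, logit_key="logit.weight", C_reg=1.0):
    """Treat each row w_k^n of every client's logit layer as one sample of class k
    and fit a multi-class SVM on these class embeddings.
    client_states: list of client state_dicts; client_states[n][logit_key] is the
                   (K x d) class-embedding matrix of client n (assumed CPU array/tensor)."""
    embeddings, labels, client_ids = [], [], []
    for n, state in enumerate(client_states):
        W = np.asarray(state[logit_key])          # shape (K, d)
        for k, w_k in enumerate(W):
            embeddings.append(w_k)                # sample (w_k^n, k)
            labels.append(k)
            client_ids.append(n)
    X, y = np.stack(embeddings), np.asarray(labels)
    svm = SVC(kernel="linear", C=C_reg, decision_function_shape="ovo").fit(X, y)
    return svm, X, y, np.asarray(client_ids)

The indices in svm.support_ then identify which client class embeddings act as support vectors and enter the selective aggregation described next.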
TurboSVM-FL addresses this problem with the help of support vectors during global update. SVM aims at a margin-maximization decision boundary that is a linear combination of selected samples, which are also called support vectors. The support vectors can be regarded as the most informative samples of each class and function similarly to contrastive anchors. In other words, fitting SVM is to some extent equivalent to selective aggregation over samples. TurboSVM-FL brings this property to federated aggregation by averaging only class embeddings that form support vectors, as depicted in Algorithm 1. Moreover, TurboSVM-FL employs spread-out regularization across projected global class embeddings to maintain the margin-maximization property. This is crucial for two reasons: first, although support vectors are the most informative data points, they are close to decision boundary and can be misclassified; second, the weights used during FL agAlgorithm 1: TurboSVM-FL part 1: selective aggregation Input: fitted SVM, sizes of local datasets |D1|, ..., |DN|. for k \u2208[K] do retrieve support vectors {vm k } for class k from SVM wG k \u2190 P m |Dm|vm k P m |Dm| \u25b7m: index to client model end for return global class embeddings wG 1 , ..., wG K \fAlgorithm 2: TurboSVM-FL part 2: max-margin spread-out regularization Input: fitted SVM, global class embeddings wG k , k \u2208 [K], server learning rate \u03b7G. \u2113sp \u21900 for k \u2208[K] do for k\u2032 \u2208[K] with k\u2032 > k do retrieve hyperplane hk,k\u2032 from SVM \u2113sp \u2190\u2113sp + exp(\u2212 (wG k \u03c4 \u00b7hk,k\u2032\u2212wG k\u2032 \u03c4 \u00b7hk,k\u2032)2 2||hk,k\u2032||2 ) end for end for for k \u2208[K] do wG k \u2190server optimizer(wG k , \u2212\u2207wG k \u2113sp, \u03b7G) end for return global class embeddings wG 1 , ..., wG K gregation may differ from the coefficients assigned to support vectors during SVM fitting, which could undermine the SVM property. Spread-out regularization like in (Yu et al. 2020) offers the potential to distinguish class embeddings. Nevertheless, we propose that omnidirectional regularization is not the most efficient method. Instead, we leverage once again the SVM property, namely we project the aggregated embeddings back onto the SVM decision boundaries, and penalize the similarities among projected embeddings: \u2113sp(w1, ..., wK) = X k\u2208[K] X k\u2032\u0338=k sim(w\u03c4 k \u00b7 hk,k\u2032 ||hk,k\u2032|| , w\u03c4 k\u2032 \u00b7 hk,k\u2032 ||hk,k\u2032|| ) (8) where hk,k\u2032 is the normal of the separating hyperplane for classes k and k\u2032 retrieved from fitted SVM. In (Yu et al. 2020), the authors proved that the classification error can be upper-bounded by the separation of class embeddings. We extend their analysis for TurboSVM-FL by showing that by applying selective aggregation and maxmargin spread-out regularization, TurboSVM-FL effectively enlarges the difference between projected logits. For simplicity, we narrow down to binary classification and denote the distance relaxation terms for embeddings of each class as \u03b6+ n and \u03b6\u2212 n , n \u2208[N], respectively. Let h be the decision boundary of the fitted SVM. 
Then, under further simplification that all class embeddings serve as support vectors and all clients have same amount of samples, given a new positive sample x\u2217, the difference between the positive and negative logits when projected on h can be bounded as follows: proj(logit+(x\u2217), h) \u2212proj(logit\u2212(x\u2217), h) \u2265[2N \u2212PN n=1(\u03b6+ n + \u03b6\u2212 n )](1 \u2212\u03b6\u2217) N||h||2 (9) where \u03b6\u2217is the SVM relaxation term for x\u2217. By averaging support vectors and applying max-margin spreadout regularization,TurboSVM-FL reduces ||h|| and \u03b6\u00b1 n in essence according to SVM theory, and the term above that bounds logit distance from below is hence increased. A more detailed analysis of this is given in the Appendix. Algorithm 3: The TurboSVM-FL Framework Input: clients n \u2208[N], client local datasets D1, ..., DN, |DG| = |D1| + ... + |DN|, number of global epochs T, number of client epochs E, number of classes K, server learning rate \u03b7G, client learning rate \u03b7, mini-batch size B ServerUpdate: initialize global model \u03b8G 0 = (gG 0 , W G 0 ) for t = 0, 1, ..., T \u22121 do for client n \u2208[N] do \u03b8n t+1 \u2190ClientUpdate(n, \u03b8G t ) end for gG t+1 \u2190PN n=1 |Dn| |DG|gn t+1 fit SVM using samples {(wn k, k)|k \u2208[K], n \u2208[N]} W G t+1 \u2190Algorithm 1 TurboSVM-FL part 1 W G t+1 \u2190Algorithm 2 TurboSVM-FL part 2 end for return \u03b8G T = (gG T , W G T ) ClientUpdate(n, \u03b8G t ): \u25b7Run on client n with model \u03b8G t \u03b8n t+1 \u2190\u03b8G t for e = 0, 1, ..., E \u22121 do for mini-batch B of size B in Dn do \u03b8n t+1 \u2190client optimizer(\u03b8n t+1, \u2212\u2207\u03b8n t+1\u2113(B), \u03b7) end for end for return \u03b8n t+1 Since projections onto the same axis are always co-linear, the common cosine similarity between them is always either 1 or -1 and thus not meaningful. We hence use Gaussian function as a similarity measurement because of its outstanding capability (Yang et al. 2021) as similarity kernel: sim(w\u03c4 k \u00b7 hk,k\u2032 ||hk,k\u2032|| , w\u03c4 k\u2032 \u00b7 hk,k\u2032 ||hk,k\u2032|| ) = exp(\u2212(w\u03c4 k \u00b7 hk,k\u2032 \u2212w\u03c4 k\u2032 \u00b7 hk,k\u2032)2 2||hk,k\u2032||2 ) (10) TurboSVM-FL then optimizes class embeddings regarding the objective \u2113sp and can benefit from adaptivity and momentum with a proper choice of optimizer. The max-margin spread-out regularization part of TurboSVM-FL is illustrated in Algorithm 2. The pseudocode for the whole algorithm is given in Algorithm 3. Experiments and Results We benchmarked TurboSVM-FL on three datasets against six FL algorithms, including FedAvg (McMahan et al. 2017), FedAdam (Reddi et al. 2020), FedAMS (Wang, Lin, and Chen 2022), FedProx (Li et al. 2020), MOON (Li, He, and Song 2021), and FedAwS (Yu et al. 2020). In this section, we provide task descriptions and results. For more details such as reproducibility and model structures, we redirect readers to the Appendix and our GitHub repository1. 1 https://github.com/Kasneci-Lab/TurboSVM-FL. \fDataset Source Task # Classes # Users Type & Dim Mean/Std per User FEMNIST (LeCun 1998) image 62 3550 BW image 226.8 / 88.9 (Cohen et al. 2017) classification 28 \u00d7 28 CelebA (Liu et al. 2015) smile 2 9343 RGB image 21.4 / 7.6 detection 84 \u00d7 84 Shakespeare (Shakespeare 2014) next char 80 1129 string 3743.2 / 6212.3 (McMahan et al. 2017) prediction 80 Table 1: Overview of used datasets. (a) (b) (c) Figure 2: Test metrics on FEMNIST dataset. 
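Before moving to the experimental setup, the server-side steps of Algorithms 1 and 2 above can be condensed into the following sketch, building on the SVM fitted from client class embeddings as in the earlier snippet. This is our illustration under simplifying assumptions: the hyperplane normals are taken from the coef_ rows of a linear one-vs-one SVM (assumed to follow the usual 0-vs-1, 0-vs-2, ... pair ordering), Adam stands in for the server optimizer, and the learning rate and number of server steps are placeholders.

import itertools
import numpy as np
import torch

def selective_aggregate(svm, X, y, client_ids, client_sizes, num_classes):
    """Algorithm 1 (sketch): average only class embeddings that are support vectors,
    weighted by the corresponding client's local dataset size."""
    support = set(svm.support_.tolist())               # indices of support vectors in X
    prototypes = []
    for k in range(num_classes):
        idx = [i for i in range(len(y)) if y[i] == k and i in support]
        if not idx:                                    # edge case (our choice): fall back to plain averaging
            idx = [i for i in range(len(y)) if y[i] == k]
        weights = np.array([client_sizes[client_ids[i]] for i in idx], dtype=float)
        weights /= weights.sum()
        prototypes.append((weights[:, None] * X[idx]).sum(axis=0))
    return torch.tensor(np.stack(prototypes), dtype=torch.float32)

def spread_out_regularize(prototypes, hyperplanes, pairs, lr=1e-2, steps=10):
    """Algorithm 2 (sketch): minimize the Gaussian similarity of Eq. 10 between class
    embeddings projected onto each pairwise SVM hyperplane normal."""
    W = prototypes.clone().requires_grad_(True)
    opt = torch.optim.Adam([W], lr=lr)                 # illustrative choice of server optimizer
    for _ in range(steps):
        loss = W.new_zeros(())
        for (k1, k2), h in zip(pairs, hyperplanes):
            diff = (W[k1] - W[k2]) @ h                 # difference of the two projections onto h
            loss = loss + torch.exp(-diff ** 2 / (2 * h.pow(2).sum()))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return W.detach()

# Example wiring (assumptions as stated above):
# pairs = list(itertools.combinations(range(K), 2))
# hyperplanes = torch.tensor(svm.coef_, dtype=torch.float32)
# W_global = spread_out_regularize(selective_aggregate(svm, X, y, client_ids, sizes, K), hyperplanes, pairs)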
Tasks We benchmarked TurboSVM-FL on three different datasets covering data types of both image and nature language, namely FEMNIST (LeCun 1998; Cohen et al. 2017), CelebA (Liu et al. 2015), and Shakespeare (Shakespeare 2014; McMahan et al. 2017) (Table 1). The task in FEMNIST dataset is handwritten digit and letter classification using grayscale images. The number of classes in FEMNIST is 62 (10 digits, 26 lowercase letters, and 26 uppercase letters) and the resolution of images is 28 \u00d7 28. The CelebA dataset contains 84 \u00d7 84 RGB images of faces of celebrities, and the task is binary classification between smiling and non-smiling faces. The Shakespeare dataset consists of scripts of different roles from Shakespeare\u2019s works, and the task is next-character prediction given an input string of length 80. The number of unique characters and symbols in this dataset is 80. All three datasets can be acquired on LEAF (Caldas et al. 2018). For the two image classification tasks, CNN models were implemented, while for the language task LSTM model was utilized. We adopted the model structures as given in LEAF. Details about data distributions and models can be found in the Appendix. We also adopted the data split given in LEAF. More specifically, we conducted 90% \u221210% train-test-split in a user-independent way, which means we had a held-out set of clients for validation rather than a fraction of validation data on each client (Wang et al. 2021a). The main reason for conducting user-independent validation is that such a test is a more valid approach for unseen data and, thus, more representative for real-world applications. Moreover, it is more challenging to fit a model in a user-independent setting compared to a user-dependent data split. Algorithm FEMNIST CelebA Shakespr. FedAvg 144.4\u00b14.6 91.6\u00b118.2 51.0\u00b15.0 FedAdam 110.8\u00b116.2 >200 54.8\u00b13.2 FedAwS 81.4\u00b12.2 84.2\u00b124.4 45.0\u00b12.3 FedProx 145.4\u00b13.4 94.4\u00b118.4 157.2\u00b15.3 FedAMS 116.4\u00b119.6 >200 51.6\u00b12.1 MOON 145.6\u00b13.7 94.2\u00b118.8 52.4\u00b13.1 TurboSVM-FL 54.6\u00b11.6 46.4\u00b19.4 43.4\u00b12.9 Table 2: Number of communication rounds needed to reach certain test accuracy on FEMNIST (70%), CelebA (70%), and Shakespeare (50%) datasets. Smaller is better. Results To compare the convergence rate of different FL algorithms, we reported two groups of metrics: number of global aggregation rounds needed to reach certain validation accuracy (70% for FEMNIST and CelebA, 50% for Shakespeare), and the achieved F1 score, accuracy, and MCC (Matthews Correlation Coefficient) after 100 aggregation rounds. The results are given in Tables 2 and 3, and also visualized in Figures 2, 4 and 5 (Appendix) for each task respectively. Our results clearly indicate that on FEMNNIST and CelebA datasets TurboSVM-FL yields a significantly faster convergence in contrast to other FL methods, while on the Shakespeare dataset TurboSVM-FL slightly outperforms others. Compared to the baseline FedAvg, TurboSVM-FL successfully reduces the number of global rounds by 62.2%, 49.3%, and 14.9% to reach the same test accuracy as given in Table 2. When all FL methods are run for the same rounds, TurboSVM-FL yields in much better test metrics on both image classification tasks in comparison to other meth\fAlgorithm [%] FEMN. CelebA Shakespr. 
FedAvg F1 33.1\u00b11.3 70.5\u00b12.2 17.4\u00b10.5 A 59.3\u00b11.8 70.6\u00b12.1 52.9\u00b10.8 M 57.9\u00b11.9 41.6\u00b14.2 48.9\u00b10.8 FedAdam F1 48.1\u00b14.9 34.2\u00b10.3 18.1\u00b10.2 A 66.6\u00b13.9 51.6\u00b10.1 53.8\u00b10.9 M 65.4\u00b14.1 1.0\u00b12.3 49.8\u00b10.9 FedAwS F1 54.8\u00b10.8 72.2\u00b13.0 18.3\u00b10.6 A 73.1\u00b10.6 72.3\u00b12.8 53.3\u00b10.7 M 72.2\u00b10.6 44.9\u00b15.6 49.3\u00b10.7 FedProx F1 32.8\u00b11.3 70.4\u00b12.1 12.5\u00b10.6 A 59.2\u00b11.8 70.6\u00b12.1 46.0\u00b11.2 M 57.7\u00b11.9 41.4\u00b14.1 41.2\u00b11.2 FedAMS F1 46.8\u00b16.5 36.3\u00b15.0 18.2\u00b10.4 A 65.8\u00b15.0 52.6\u00b12.3 54.0\u00b10.5 M 64.6\u00b15.2 4.7\u00b110.5 50.0\u00b10.6 MOON F1 33.4\u00b13.7 70.7\u00b12.3 17.6\u00b10.6 A 58.2\u00b13.0 70.9\u00b12.2 52.8\u00b11.0 M 56.8\u00b13.1 41.8\u00b14.4 48.8\u00b11.0 TurboSVMFL F1 61.9\u00b10.7 77.2\u00b11.3 19.2\u00b10.3 A 76.6\u00b11.0 77.3\u00b11.2 53.7\u00b10.2 M 75.8\u00b11.0 55.0\u00b12.4 49.7\u00b10.2 Table 3: Achieved test F1 score, accuracy (A), and MCC score (M) after 100 aggregation rounds. Greater is better. ods, while its performance is a fair match to the adaptive algorithms on the next-character prediction task. Moreover, we show in Figure 4 that while adaptive FL methods like FedAdam and FedAMS are not stable, TurboSVM-FL can still robustly benefit from adaptivity on the server side. Impact of Embedding Size on Selectivity We further explored the impact of embedding size on the number of client models that form support vectors. To approach this, we ran experiments on the FEMNIST dataset with varying embedding size and number of participating clients C. We recorded the number of support vectors for class 1 at 200th round in Table 4. It is clear that a higher embedding size is associated with better performance and fewer support vectors when there exist enough participating clients, while a scarcity of clients can lead to full use of class embeddings as support vectors. Furthermore, larger embedding size also leads to higher complexity and burden, which needs to be balanced off depending on the specific tasks. Embedding Dimension C 4 16 64 256 1024 8 #SV 8 8 8 8 8 F1[%] 6.1 14.2 58.9 63.9 67.4 64 #SV 62 63 58 52 37 F1[%] 16.3 43.2 60.5 65.9 68.3 512 #SV 197 150 147 100 77 F1[%] 16.7 43.2 59.7 64.5 67.0 Table 4: Influence of embedding size on support vectors. Discussion As TurboSVM-FL focuses on class representations on the server side while other parts of the global model are still aggregated with average and client training is done with vanilla SGD, the use of TurboSVM-FL in combination with other FL algorithms is promising. For instance, on the client side FedProx can be applied to counteract data heterogeneity, while on the server side, class embeddings are aggregated with TurboSVM-FL. Another example is the use of adaptive FL methods like FedAdam for encoder aggregation while logit layers are aggregated with TurboSVM-FL. Moreover the idea of model-as-sample can be further explored, for example, for anomaly client detection and client clustering. TurboSVM-FL is particularly suitable for cross-device FL where edge devices are often constrained by computation and storage resources, but its improvement is also not excluded from cross-silo case. A typical application scenario of TurboSVM-FL is federated transfer learning, where a pre-trained model like VGG16 and Resnet50 is adopted, and all of the layers except the last few ones are frozen. 
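As a rough illustration of that transfer setting (our sketch, not the paper's exact configuration; the torchvision ResNet-50 and the single trainable head are assumptions):

import torch
from torchvision import models

def build_transfer_model(num_classes):
    """Freeze a pretrained backbone so that only the last layer(s) are trained
    and exchanged in federated rounds (illustrative setup)."""
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False                      # frozen backbone
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)  # trainable head
    return model

def trainable_state(model):
    # only these parameters would be uploaded for server aggregation
    return {k: v for k, v in model.state_dict().items() if k.startswith("fc.")}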
In this case, each client only needs to train and share the last few layers, which makes TurboSVM-FL extremely efficient. The capability of TurboSVM-FL is also not constrained to single-output tasks. For multi-output tasks such as multilabel classification and multi-task learning, TurboSVM-FL can also be applied. To approach this, separate classification heads for different tasks should be implemented where the backbone encoder shares its weights among tasks, and then TurboSVM-FL should be applied to each head in parallel. One improvement direction of TurboSVM-FL is to relax the implicit assumption about linear separability of class embeddings with kernelization during SVM fitting and class inference. A piloting ablation study is included in the Appendix in this regard. Furthermore, while posing no additional computation cost on the client side, TurboSVM-FL requires the server to be powerful such that it can fit SVMs efficiently, especially when the SVMs are in one-vs-one (OVO) pattern. We chose OVO instead of OVR (one-vs-rest) mainly for two reasons: 1. in general, OVO performs better than OVR; 2. for TurboSVM-FL, OVO never suffers from class imbalance while OVR always does, since the numbers of samples for each class are always the same. Although OVO imposes more computation on the server side, we think that to approach FL, a powerful server is a must-have, and OVO is no burden for such a server. In case the number of classes is large, the computation burden can be further resolved by sampling a proportion of classes on which SVMs are fitted. Conclusion In this work, we proposed a novel federated aggregation strategy called TurboSVM-FL, which extensively exploits SVM to conduct selective aggregation and max-margin spread-out regularization for class embeddings and can vastly reduce communication rounds. We tested our approach on three publicly available datasets, and our results show that TurboSVM-FL outperforms existing FL methods largely on convergence rate regarding various metrics. \fAcknowledgements We acknowledge the funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) \u2013 Project number KA 4539/5-1." + } + ], + "Yifei Ming": [ + { + "url": "http://arxiv.org/abs/2402.07785v2", + "title": "HYPO: Hyperspherical Out-of-Distribution Generalization", + "abstract": "Out-of-distribution (OOD) generalization is critical for machine learning\nmodels deployed in the real world. However, achieving this can be fundamentally\nchallenging, as it requires the ability to learn invariant features across\ndifferent domains or environments. In this paper, we propose a novel framework\nHYPO (HYPerspherical OOD generalization) that provably learns domain-invariant\nrepresentations in a hyperspherical space. In particular, our hyperspherical\nlearning algorithm is guided by intra-class variation and inter-class\nseparation principles -- ensuring that features from the same class (across\ndifferent training domains) are closely aligned with their class prototypes,\nwhile different class prototypes are maximally separated. We further provide\ntheoretical justifications on how our prototypical learning objective improves\nthe OOD generalization bound. Through extensive experiments on challenging OOD\nbenchmarks, we demonstrate that our approach outperforms competitive baselines\nand achieves superior performance. 
Code is available at\nhttps://github.com/deeplearning-wisc/hypo.", + "authors": "Yifei Ming, Haoyue Bai, Julian Katz-Samuels, Yixuan Li", + "published": "2024-02-12", + "updated": "2024-03-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Deploying machine learning models in real-world settings presents a critical challenge of generalizing under distributional shifts. These shifts are common due to mismatches between the training and test data distributions. For instance, in autonomous driving, a model trained on in-distribution (ID) data collected under sunny weather conditions is expected to perform well in out-of-distribution (OOD) scenarios, such as rain or snow. This underscores the importance of the OOD generalization problem, which involves learning a predictor that can generalize across all possible environments, despite being trained on a finite subset of training environments. A plethora of OOD generalization algorithms has been developed in recent years [77], where a central theme is to learn domain-invariant representations\u2014features that are consistent and meaningful across different environments (domains) and can generalize to the unseen test environment. Recently, Ye et al. [70] theoretically showed that the OOD generalization error can be bounded in terms of intra-class variation and inter-class separation. Intra-class variation measures the stability of representations across different environments, while inter-class separation assesses the dispersion of features among different classes. Ideally, features should display low variation and high separation, in order to generalize well to OOD data (formally described in Section 3). Despite the theoretical analysis, a research question remains open in the field: RQ: How to design a practical learning algorithm that directly achieves these two properties, and what theoretical guarantees can the algorithm offer? To address the question, this paper presents a learning framework HYPO (HYPerspherical OOD generalization), which provably learns domain-invariant representations in the hyperspherical space with unit norm (Section 4). Our key idea is to promote low variation (aligning representation across domains for every class) and high separation (separating prototypes across different classes). In particular, the learning objective shapes the embeddings such that samples from the same class (across all training environments) gravitate towards their corresponding class prototype, while different class prototypes are maximally separated. The two losses in our objective function \u2217Equal contribution. \u2020This work is not related to the author\u2019s position at Amazon. arXiv:2402.07785v2 [cs.LG] 19 Mar 2024 \fcan be viewed as optimizing the key properties of intra-class variation and inter-class separation, respectively. Since samples are encouraged to have a small distance with respect to their class prototypes, the resulting embedding geometry can have a small distribution discrepancy across domains and benefits OOD generalization. Geometrically, we show that our loss function can be understood through the lens of maximum likelihood estimation under the classic von Mises-Fisher distribution. Empirical contribution. Empirically, we demonstrate strong OOD generalization performance by extensively evaluating HYPO on common benchmarks (Section 5). On the CIFAR-10 (ID) vs. 
CIFAR-10-Corruption (OOD) task, HYPO substantially improves the OOD generalization accuracy on challenging cases such as Gaussian noise, from 78.09% to 85.21%. Furthermore, we establish superior performance on popular domain generalization benchmarks, including PACS, Office-Home, VLCS, etc. For example, we achieve 88.0% accuracy on PACS which outperforms the best loss-based method by 1.1%. This improvement is non-trivial using standard stochastic gradient descent optimization. When coupling our loss with specialized optimization SWAD [10], the accuracy is further increased to 89%. We provide visualization and quantitative analysis to verify that features learned by HYPO indeed achieve low intra-class variation and high inter-class separation. Theoretical insight. We provide theoretical justification for how HYPO can guarantee improved OOD generalization, supporting our empirical findings. Our theory complements [70], which does not provide a loss for optimizing the intra-class variation or inter-class separation. Thus, a key contribution of this paper is to provide a crucial link between provable understanding and a practical algorithm for OOD generalization in the hypersphere. In particular, our Theorem 6.1 shows that when the model is trained with our loss function, we can upper bound intra-class variation, a key quantity to bound OOD generalization error. For a learnable OOD generalization task, the upper bound on generalization error is determined by the variation estimate on the training environments, which is effectively reduced by our loss function under sufficient sample size and expressiveness of the neural network. 2 Problem Setup We consider a multi-class classification task that involves a pair of random variables (X, Y ) over instances x \u2208X \u2282Rd and corresponding labels y \u2208Y := {1, 2, \u00b7 \u00b7 \u00b7 , C}. The joint distribution of X and Y is unknown and represented by PXY . The goal is to learn a predictor function, f : X \u2192RC, that can accurately predict the label y for an input x, where (x, y) \u223cPXY . Unlike in standard supervised learning tasks, the out-of-distribution (OOD) generalization problem is challenged by the fact that one cannot sample directly from PXY . Instead, we can only sample (X, Y ) under limited environmental conditions, each of which corrupts or varies the data differently. For example, in autonomous driving, these environmental conditions may represent different weathering conditions such as snow, rain, etc. We formalize this notion of environmental variations with a set of environments or domains Eall. Sample pairs (Xe, Y e) are randomly drawn from environment e. In practice, we may only have samples from a finite subset of available environments Eavail \u2282Eall. Given Eavail, the goal is to learn a predictor f that can generalize across all possible environments. The problem is stated formally below. Definition 2.1 (OOD Generalization). Let Eavail \u2282Eall be a set of training environments, and assume that for each environment e \u2208Eavail, we have a dataset De = {(xe j, ye j)}ne j=1, sampled i.i.d. from an unknown distribution Pe XY . The goal of OOD generalization is to find a classifier f \u2217, using the data from the datasets De, that minimizes the worst-case risk over the entire family of environments Eall: min f\u2208F max e\u2208Eall EPe XY \u2113(f(Xe), Y e), (1) where F is hypothesis space and l(\u00b7, \u00b7) is the loss function. 
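As a small illustration of the objective in Equation 1 (ours; the loader dictionary and the mean-reduced loss handle are hypothetical), the risk can be estimated per environment and then maximized; in practice only the training environments in Eavail can be evaluated this way.

import torch

@torch.no_grad()
def worst_case_risk(model, env_loaders, loss_fn):
    """Estimate max_e E[l(f(X^e), Y^e)] from a dict mapping environment id to a data
    loader; loss_fn is assumed to return the mean loss over a batch."""
    risks = {}
    for env, loader in env_loaders.items():
        total, count = 0.0, 0
        for x, y in loader:
            total += loss_fn(model(x), y).item() * len(y)
            count += len(y)
        risks[env] = total / max(count, 1)
    return max(risks.values()), risks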
The problem is challenging since we do not have access to data from domains outside Eavail. In particular, the task is commonly referred to as multi-source domain generalization when |Eavail| > 1. 3 Motivation of Algorithm Design Our work is motivated by the theoretical findings in [70], which shows that the OOD generalization performance can be bounded in terms of intra-class variation and inter-class separation with respect to various environments. The formal definitions are given as follows. \fDefinition 3.1 (Intra-class variation). The variation of feature \u03d5 across a domain set E is V(\u03d5, E) = max y\u2208Y sup e,e\u2032\u2208E \u03c1 \u0000P(\u03d5e|y), P(\u03d5e\u2032|y) \u0001 , (2) where \u03c1(P, Q) is a symmetric distance (e.g., Wasserstein distance, total variation, Hellinger distance) between two distributions, and P(\u03d5e|y) denotes the class-conditional distribution for features of samples in environment e. Definition 3.2 (Inter-class separation3). The separation of feature \u03d5 across domain set E is I\u03c1(\u03d5, E) = 1 C(C \u22121) X y\u0338=y\u2032 y,y\u2032\u2208Y min e\u2208E \u03c1 \u0000P(\u03d5e|y), P(\u03d5e|y\u2032) \u0001 . (3) The intra-class variation V(\u03d5, E) measures the stability of feature \u03d5 over the domains in E and the inter-class separation I(\u03d5, E) captures the ability of \u03d5 to distinguish different labels. Ideally, features should display high separation and low variation. Definition 3.3. The OOD generalization error of classifier f is defined as follows: err(f) = max e\u2208Eall EPe XY \u2113(f(Xe), Y e) \u2212max e\u2208Eavail EPe XY \u2113(f(Xe), Y e) which is bounded by the variation estimate on Eavail with the following theorem. Theorem 3.1 (OOD error upper bound, informal [70]). Suppose the loss function \u2113(\u00b7, \u00b7) is bounded by [0, B]. For a learnable OOD generalization problem with sufficient inter-class separation, the OOD generalization error err(f) can be upper bounded by err(f) \u2264O \u0010\u0000Vsup(h, Eavail) \u0001 \u03b12 (\u03b1+d)2 \u0011 , (4) for some \u03b1 > 0, and Vsup (h, Eavail ) \u225csup\u03b2\u2208Sd\u22121 V \u0000\u03b2\u22a4h, Eavail \u0001 is the inter-class variation, h(\u00b7) \u2208Rd is the feature vector, and \u03b2 is a vector in unit hypersphere Sd\u22121 = \b \u03b2 \u2208Rd : \u2225\u03b2\u22252 = 1 \t , and f is a classifier based on normalized feature h. Remarks. The Theorem above suggests that both low intra-class variation and high inter-class separation are desirable properties for theoretically grounded OOD generalization. Note that in the full formal Theorem (see Appendix C), maintaining the inter-class separation is necessary for the learnability of the OOD generalization problem (Def. C.2). In other words, when the learned embeddings exhibit high inter-class separation, the problem becomes learnable. In this context, bounding intra-class variation becomes crucial for reducing the OOD generalization error. Despite the theoretical underpinnings, it remains unknown to the field how to design a practical learning algorithm that directly achieves these two properties, and what theoretical guarantees can the algorithm offer. This motivates our work. To reduce the OOD generalization error, our key motivation is to design a hyperspherical learning algorithm that directly promotes low variation (aligning representation across domains for every class) and high separation (separating prototypes across different classes). 
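To make Definitions 3.1 and 3.2 concrete, the following sketch (our illustration) estimates both quantities from extracted features; the energy distance is an assumed stand-in for the generic symmetric distance rho, and every class is assumed to appear in every training domain.

import numpy as np

def energy_distance(A, B):
    """Simple symmetric distance between two empirical feature distributions
    (a stand-in for rho in Definitions 3.1 and 3.2)."""
    d_ab = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1).mean()
    d_aa = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1).mean()
    d_bb = np.linalg.norm(B[:, None, :] - B[None, :, :], axis=-1).mean()
    return 2 * d_ab - d_aa - d_bb

def variation_and_separation(feats, labels, envs):
    """Empirical intra-class variation (Eq. 2) and inter-class separation (Eq. 3)
    from per-sample features, class labels, and domain (environment) ids."""
    classes, domains = np.unique(labels), np.unique(envs)
    subset = lambda c, e: feats[(labels == c) & (envs == e)]
    variation = max(
        energy_distance(subset(c, e1), subset(c, e2))
        for c in classes for e1 in domains for e2 in domains if e1 != e2
    )
    separation = float(np.mean([
        min(energy_distance(subset(c1, e), subset(c2, e)) for e in domains)
        for c1 in classes for c2 in classes if c1 != c2
    ]))
    return variation, separation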
4 Method In this section, we introduce the details of the learning algorithm HYPO (HYPerspherical OOD generalization), which is designed to promote domain invariant representations in the hyperspherical space. The key idea is to shape the hyperspherical embedding space so that samples from the same class (across all training environments Eavail) are closely aligned with the corresponding class prototype. Since all points are encouraged to have a small distance with respect to the class prototypes, the resulting embedding geometry can have a small distribution discrepancy across domains and hence benefits OOD generalization. In what follows, we first introduce the learning objective (Section 4.1), and then we discuss the geometrical interpretation of the loss and embedding (Section 4.2). We will provide theoretical justifications for HYPO in Section 6, which leads to a provably smaller intra-class variation, a key quantity to bound the OOD generalization error. 3Referred to as \u201cInformativeness\u201d in Ye et al. [70]. \f4.1 Hyperspherical Learning for OOD Generalization Loss function. The learning algorithm is motivated to directly optimize the two criteria: intra-class variation and inter-class separation. At a high level, HYPO aims to learn embeddings for each sample in the training environments by maintaining a class prototype vector \u00b5c \u2208Rd for each class c \u2208{1, 2, ..., C}. To optimize for low variation, the loss encourages the feature embedding of a sample to be close to its class prototype. To optimize for high separation, the loss encourages different class prototypes to be far apart from each other. Specifically, we consider a deep neural network h : X 7\u2192Rd that maps an input \u02dc x \u2208X to a feature embedding \u02dc z := h(\u02dc x). The loss operates on the normalized feature embedding z := \u02dc z/\u2225\u02dc z\u22252. The normalized embeddings are also referred to as hyperspherical embeddings, since they are on a unit hypersphere, denoted as Sd\u22121 := {z \u2208 Rd | \u2225z\u22252 = 1}. The loss is formalized as follows: L = \u22121 N X e\u2208Eavail |De| X i=1 log exp \u0000ze i \u22a4\u00b5c(i)/\u03c4 \u0001 PC j=1 exp \u0000ze i \u22a4\u00b5j/\u03c4 \u0001 | {z } Lvar: \u2193variation + 1 C C X i=1 log 1 C \u22121 X j\u0338=i,j\u2208Y exp \u0000\u00b5\u22a4 i \u00b5j/\u03c4 \u0001 | {z } \u2191separation , where N is the number of samples, \u03c4 is the temperature, z is the normalized feature embedding, and \u00b5c is the prototype embedding for class c. While hyperspherical learning algorithms have been studied in other context [44, 31, 46], none of the prior works explored its provable connection to domain generalization, which is our distinct contribution. We will theoretically show in Section 6 that minimizing our loss function effectively reduces intra-class variation, a key quantity to bound OOD generalization error. The training objective in Equation 5 can be efficiently optimized end-to-end. During training, an important step is to estimate the class prototype \u00b5c for each class c \u2208{1, 2, ..., C}. The class-conditional prototypes can be updated in an exponential-moving-average manner (EMA) [40]: \u00b5c := Normalize(\u03b1\u00b5c + (1 \u2212\u03b1)z), \u2200c \u2208{1, 2, . . . , C} (5) where the prototype \u00b5c for class c is updated during training as the moving average of all embeddings with label c, and z denotes the normalized embedding of samples of class c. An end-to-end pseudo algorithm is summarized in Appendix A. 
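For concreteness, a minimal PyTorch-style rendering of the two loss terms and the EMA prototype update is given below (our sketch, not the released implementation; keeping the prototypes as non-learnable EMA buffers, and updating them before computing the loss, are assumptions).

import torch
import torch.nn.functional as F

class HypoLoss(torch.nn.Module):
    """Sketch of the HYPO objective: a variation term pulling each normalized feature
    toward its class prototype, plus a separation term over the prototypes."""

    def __init__(self, num_classes, feat_dim, temperature=0.1, alpha=0.95):
        super().__init__()
        self.tau, self.alpha = temperature, alpha
        self.register_buffer("prototypes", F.normalize(torch.randn(num_classes, feat_dim), dim=1))

    @torch.no_grad()
    def update_prototypes(self, z, labels):
        # mu_c <- Normalize(alpha * mu_c + (1 - alpha) * z); per-sample loop kept for clarity
        for zi, yi in zip(z, labels):
            self.prototypes[yi] = F.normalize(self.alpha * self.prototypes[yi] + (1 - self.alpha) * zi, dim=0)

    def forward(self, features, labels):
        z = F.normalize(features, dim=1)                 # embeddings on the unit hypersphere
        self.update_prototypes(z, labels)
        logits = z @ self.prototypes.t() / self.tau      # (N, C) prototype similarities
        variation = F.cross_entropy(logits, labels)      # = -mean log softmax at the true class
        C = self.prototypes.shape[0]
        sim = self.prototypes @ self.prototypes.t() / self.tau
        off_diag = sim[~torch.eye(C, dtype=torch.bool, device=sim.device)].view(C, C - 1)
        separation = torch.log(torch.exp(off_diag).mean(dim=1)).mean()
        # with EMA buffers the separation term carries no gradient to the encoder;
        # making the prototypes learnable parameters instead is an alternative design choice
        return variation + separation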
Class prediction. In testing, classification is conducted by identifying the closest class prototype: \u02c6 y = argmaxc\u2208[C] fc(x), where fc(x) = z\u22a4\u00b5c and z = h(x) \u2225h(x)\u22252 is the normalized feature embedding. 4.2 Geometrical Interpretation of Loss and Embedding Figure 1: Illustration of hyperspherical embeddings. Images are from PACS [37]. Geometrically, the loss function above can be interpreted as learning embeddings located on the surface of a unit hypersphere. The hyperspherical embeddings can be modeled by the von Mises-Fisher (vMF) distribution, a well-known distribution in directional statistics [29]. For a unit vector z \u2208Rd in class c, the probability density function is defined as p(z | y = c) = Zd(\u03ba) exp(\u03ba\u00b5\u22a4 c z), (6) where \u00b5c \u2208Rd denotes the mean direction of the class c, \u03ba \u22650 denotes the concentration of the distribution around \u00b5c, and Zd(\u03ba) denotes the normalization factor. A larger \u03ba indicates a higher concentration around the class center. In the extreme case of \u03ba = 0, the samples are distributed uniformly on the hypersphere. Under this probabilistic model, an embedding z is assigned to the class c with the following probability p(y = c | z; {\u03ba, \u00b5j}C j=1) = Zd(\u03ba) exp(\u03ba\u00b5\u22a4 c z) PC j=1 Zd(\u03ba) exp(\u03ba\u00b5\u22a4 j z) = exp(\u00b5\u22a4 c z/\u03c4) PC j=1 exp(\u00b5\u22a4 j z/\u03c4) , (7) \fwhere \u03c4 = 1/\u03ba denotes a temperature parameter. Maximum likelihood view. Notably, minimizing the first term in our loss (cf. Eq. 5) is equivalent to performing maximum likelihood estimation under the vMF distribution: argmax\u03b8 N Y i=1 p(yi | xi; {\u03ba, \u00b5j}C j=1), where (xi, yi) \u2208 [ e\u2208Etrain De where i is the index of sample, j is the index of the class, and N is the size of the training set. In effect, this loss encourages each ID sample to have a high probability assigned to the correct class in the mixtures of the vMF distributions. 5 Experiments In this section, we show that HYPO achieves strong OOD generalization performance in practice, establishing competitive performance on several benchmarks. In what follows, we describe the experimental setup in Section 5.1, followed by main results and analysis in Section 5.2. 5.1 Experimental Setup Datasets. Following the common benchmarks in literature, we use CIFAR-10 [35] as the in-distribution data. We use CIFAR-10-C [24] as OOD data, with 19 different common corruption applied to CIFAR-10. In addition to CIFAR-10, we conduct experiments on popular benchmarks including PACS [37], OfficeHome [22], and VLCS [22] to validate the generalization performance. PACS contains 4 domains/environments (photo, art painting, cartoon, sketch) with 7 classes (dog, elephant, giraffe, guitar, horse, house, person). Office-Home comprises four different domains: art, clipart, product, and real. Results on additional OOD datasets Terra Incognita [22], and ImageNet can be found in Appendix F and Appendix G. Evaluation metrics. We report the following two metrics: (1) ID classification accuracy (ID Acc.) for ID generalization, and (2) OOD classification accuracy (OOD Acc.) for OOD generalization. Experimental details. In our main experiments, we use ResNet-18 for CIFAR-10 and ResNet-50 for PACS, Office-Home, and VLCS. For these datasets, we use stochastic gradient descent with momentum 0.9, and weight decay 10\u22124. 
For CIFAR-10, we train the model from scratch for 500 epochs using an initial learning rate of 0.5 and cosine scheduling, with a batch size of 512. Following common practice for contrastive losses [14, 31, 69], we use an MLP projection head with one hidden layer to obtain features. The embedding (output) dimension is 128 for the projection head. We set the default temperature \u03c4 as 0.1 and the prototype update factor \u03b1 as 0.95. For PACS, Office-Home, and VLCS, we follow the common practice and initialize the network using ImageNet pre-trained weights. We fine-tune the network for 50 epochs. The embedding dimension is 512 for the projection head. We adopt the leave-one-domain-out evaluation protocol and use the training domain validation set for model selection [22], where the validation set is pooled from all training domains. Details on other hyperparameters are in Appendix D. 5.2 Main Results and Analysis HYPO excels on common corruption benchmarks. As shown in Figure 2, HYPO achieves consistent improvement over the ERM baseline (trained with cross-entropy loss), on a variety of common corruptions. Our evaluation includes different corruptions including Gaussian noise, Snow, JPEG compression, Shot noise, Zoom blur, etc. The model is trained on CIFAR-10, without seeing any type of corruption data. In particular, our method brings significant improvement for challenging cases such as Gaussian noise, enhancing OOD accuracy from 78.09% to 85.21% (+7.12%). Complete results on all 19 different corruption types are in Appendix E. HYPO establishes competitive performance on popular benchmarks. Our method delivers superior results in the popular domain generalization tasks, as shown in Table 1. HYPO outperforms an extensive collection of common OOD generalization baselines on popular domain generalization datasets, including PACS, Office-Home, VLCS. For instance, on PACS, HYPO improves the best loss-based method by 1.1%. Notably, this enhancement is non-trivial since we are not relying on specialized optimization algorithms such as SWAD [10]. Later in our ablation, we show that coupling HYPO with SWAD can further boost the OOD generalization performance, establishing superior performance on this challenging task. \fGaussian Noise Snow JPEG Comp. Shot Noise Zoom Blur Frost Motion Blur Impulse Noise Pixelate Speckle Noise Elastic Transform Saturate Avg 75 80 85 90 95 Acc ERM Ours Figure 2: Our method HYPO significantly improves the OOD generalization performance compared to ERM on various OOD datasets w.r.t. CIFAR-10 (ID). Full results can be seen in Appendix E. Algorithm PACS Office-Home VLCS Average Acc. (%) ERM [60] 85.5 67.6 77.5 76.7 CORAL [55] 86.2 68.7 78.8 77.9 DANN [20] 83.7 65.9 78.6 76.1 MLDG [38] 84.9 66.8 77.2 76.3 CDANN [41] 82.6 65.7 77.5 75.3 MMD [39] 84.7 66.4 77.5 76.2 IRM [3] 83.5 64.3 78.6 75.5 GroupDRO [54] 84.4 66.0 76.7 75.7 I-Mixup [66, 67, 68] 84.6 68.1 77.4 76.7 RSC [25] 85.2 65.5 77.1 75.9 ARM [73] 85.1 64.8 77.6 75.8 MTL [9] 84.6 66.4 77.2 76.1 VREx [36] 84.9 66.4 78.3 76.5 Mixstyle [76] 85.2 60.4 77.9 74.5 SelfReg [32] 85.6 67.9 77.8 77.1 SagNet [48] 86.3 68.1 77.8 77.4 GVRT [45] 85.1 70.1 79.0 78.1 VNE [33] 86.9 65.9 78.1 77.0 HYPO (Ours) 88.0\u00b10.4 71.7\u00b10.3 78.2\u00b10.4 79.3 Table 1: Comparison with domain generalization methods on the PACS, Office-Home, and VLCS. All methods are trained on ResNet-50. The model selection is based on a training domain validation set. 
To isolate the effect of loss functions, all methods are optimized using standard SGD. We report the average and std of our method. \u00b1x denotes the rounded standard error. With multiple training domains, we observe that it is desirable to emphasize hard negative pairs when optimizing the inter-class separation. As depicted in Figure 3, the embeddings of negative pairs from the same domain but different classes (such as dog and elephant in art painting) can be quite close on the hypersphere. Therefore, it is beneficial to separate such hard negative pairs. This can be enforced by a simple modification to the denominator of our variation loss (Eq. 11 in Appendix D), which we adopt for multi-source domain generalization tasks. Figure 3: Illustration of hard negative pairs which share the same domain (art painting) but have different class labels. Relations to PCL. PCL [69] adapts a proxy-based contrastive learning framework for domain generalization. We highlight several notable distinctions from ours: (1) While PCL offers no theoretical insights, HYPO is guided by theory. We provide a formal theoretical justification that our method reduces intra-class variation which is essential to bounding OOD generalization error (see Section 6); (2) Our loss function formulation is different and can be rigorously interpreted as shaping vMF distributions of hyperspherical embeddings (see Section 4.2), whereas PCL can not; (3) Unlike PCL (86.3% w/o SWAD), HYPO is able to achieve competitive performance (88.0%) without heavy reliance on special optimization SWAD [10], a dense and overfit-aware stochastic weight sampling [27] strategy for OOD generalization. As shown in Table 2, we also conduct experiments in conjunction with SWAD. \fAlgorithm Art painting Cartoon Photo Sketch Average Acc. (%) PCL w/ SGD [69] 88.0 78.8 98.1 80.3 86.3 HYPO w/ SGD (Ours) 87.2 82.3 98.0 84.5 88.0 PCL w/ SWAD [69] 90.2 83.9 98.1 82.6 88.7 HYPO w/ SWAD (Ours) 90.5 84.6 97.7 83.2 89.0 Table 2: Results for comparing PCL and HYPO with SGD-based and SWAD-based optimizations on the PACS benchmark. (*The performance reported in the original PCL paper Table 3 is implicitly based on SWAD). Compared to PCL, HYPO achieves superior performance with 89% accuracy, which further demonstrates its advantage. Visualization of embedding. Figure 4 shows the UMAP [43] visualization of feature embeddings for ERM (left) vs. HYPO (right). The embeddings are extracted from models trained on PACS. The red, orange, and green points are from the in-distribution, corresponding to art painting (A), photo (P), and sketch (S) domains. The violet points are from the unseen OOD domain cartoon (C). There are two salient observations: (1) for any given class, the embeddings across domains Eall become significantly more aligned (and invariant) using our method compared to the ERM baseline. This directly verifies the low variation (cf. Equation 2) of our learned embedding. (2) The embeddings are well separated across different classes, and distributed more uniformly in the space than ERM, which verifies the high inter-class separation (cf. Equation 3) of our method. Overall, our observations well support the efficacy of HYPO. Dog Elephant Giraffe Guitar Horse House Person A P S C A (train) P (train) S (train) C (test) (a) ERM (high variation) Dog Elephant Giraffe Guitar Horse House Person A (train) P (train) S (train) C (test) (b) HYPO (low variation) Figure 4: UMAP [43] visualization of the features when the model is trained with CE vs. HYPO for PACS. 
The red, orange, and green points are from the in-distribution, which denote art painting (A), photo (P), and sketch (S). The violet points are from the unseen OOD domain cartoon (C). Quantitative verification of intra-class variation. We provide empirical verification on intra-class variation in Figure 5, where the model is trained on PACS. We measure the intra-class variation with the Sinkhorn divergence (entropy-regularized Wasserstein distance). The horizontal axis (0)-(6) denotes different classes, and the vertical axis denotes different pairs of training domains ('P', 'A', 'S'). Darker color indicates lower Sinkhorn divergence. We can see that our method results in significantly lower intra-class variation compared to ERM, which aligns with our theoretical insights in Section 6.
Figure 5: Intra-class variation for ERM (left) vs. HYPO (right) on PACS. For each class y, we measure the Sinkhorn Divergence between the embeddings of each pair of domains. Our method results in significantly lower intra-class variation across different pairs of training domains compared to ERM.
Additional ablation studies. Due to space constraints, we defer additional experiments and ablations to the Appendix, including (1) results on other tasks from DomainBed (Appendix F); (2) results on large-scale benchmarks such as ImageNet-100 (Appendix G); (3) ablation of different loss terms (Appendix H); (4) an analysis on the effect of τ and α (Appendix I).
6 Why HYPO Improves Out-of-Distribution Generalization? In this section, we provide a formal justification of the loss function. Our main Theorem 6.1 gives a provable understanding of how the learning objective effectively reduces the variation estimate Vsup(h, Eavail), thus directly reducing the OOD generalization error according to Theorem 3.1. For simplicity, we assume τ = 1 and denote the prototype vectors $\mu_1, \ldots, \mu_C \in \mathbb{S}^{d-1}$. Let $\mathcal{H} \subset \{h : \mathcal{X} \to \mathbb{S}^{d-1}\}$ denote the function class induced by the neural network.
Theorem 6.1 (Variation upper bound using HYPO). When samples are aligned with class prototypes such that $\frac{1}{N}\sum_{j=1}^{N} \mu_{c(j)}^{\top} z_j \geq 1 - \epsilon$ for some $\epsilon \in (0, 1)$, then $\exists \delta \in (0, 1)$ such that, with probability at least $1 - \delta$, $$\mathcal{V}^{\sup}(h, \mathcal{E}_{\mathrm{avail}}) \leq O\Big(\epsilon^{1/3} + \Big(\tfrac{\ln(2/\delta)}{N}\Big)^{1/6} + \Big(\mathbb{E}_{\mathcal{D}}\Big[\tfrac{1}{N}\, \mathbb{E}_{\sigma_1, \ldots, \sigma_N} \sup_{h \in \mathcal{H}} \sum_{i=1}^{N} \sigma_i z_i^{\top} \mu_{c(i)}\Big]\Big)^{1/3}\Big),$$ where $z_j = h(x_j)/\|h(x_j)\|_2$, $\sigma_1, \ldots, \sigma_N$ are Rademacher random variables, and $O(\cdot)$ suppresses dependence on constants and $|\mathcal{E}_{\mathrm{avail}}|$.
Implications. In Theorem 6.1, we can see that the upper bound consists of three factors: the optimization error, the Rademacher complexity of the given neural network, and the estimation error, which becomes close to 0 as the number of samples N increases. Importantly, the term $\epsilon$ reflects how sample embeddings are aligned with their class prototypes on the hyperspherical space (as we have $\frac{1}{N}\sum_{j=1}^{N} \mu_{c(j)}^{\top} z_j \geq 1 - \epsilon$), which is directly minimized by our proposed loss in Equation 5. The above theorem implies that when we train the model with the HYPO loss, we can effectively upper bound the intra-class variation, a key term for bounding OOD generalization performance by Theorem 3.1. In Section H, we provide empirical verification of our bound by estimating $\hat{\epsilon}$, which is indeed close to 0 for models trained with HYPO loss. We defer proof details to Appendix C.
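To make the alignment term concrete, the following is a minimal PyTorch sketch of how the empirical quantity $\frac{1}{N}\sum_{j} \mu_{c(j)}^{\top} z_j$, and hence an estimate $\hat{\epsilon}$, could be computed from hyperspherical embeddings. The tensor names, shapes, and toy usage are illustrative assumptions and do not reproduce the authors' evaluation code.
```python
import torch

def estimate_alignment_eps(z: torch.Tensor, labels: torch.Tensor, mu: torch.Tensor) -> float:
    """Estimate eps_hat = 1 - (1/N) * sum_j mu_{c(j)}^T z_j.

    z      : (N, d) feature embeddings (L2-normalized onto the hypersphere below)
    labels : (N,)   class index c(j) for each sample
    mu     : (C, d) class prototypes (assumed unit-norm, as in the paper)
    """
    z = torch.nn.functional.normalize(z, dim=-1)          # z_j = h(x_j) / ||h(x_j)||_2
    mu = torch.nn.functional.normalize(mu, dim=-1)
    cos_to_own_prototype = (z * mu[labels]).sum(dim=-1)   # mu_{c(j)}^T z_j for each j
    return float(1.0 - cos_to_own_prototype.mean())

# Toy usage with random tensors (purely illustrative):
if __name__ == "__main__":
    N, d, C = 512, 128, 7
    feats = torch.randn(N, d)
    labels = torch.randint(0, C, (N,))
    prototypes = torch.randn(C, d)
    print(estimate_alignment_eps(feats, labels, prototypes))
```
A value of this estimate close to 0 corresponds to the well-aligned regime in which the first term of the bound in Theorem 6.1 is small.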
Necessity of inter-class separation loss. We further present a theoretical analysis in Appendix J explaining how our loss promotes inter-class separation, which is necessary to ensure the learnability of the OOD generalization problem. We provide a brief summary in Appendix C and discuss the notion of OOD learnability, and would like to refer readers to [70] for an in-depth and formal treatment. Empirically, to verify the impact of inter-class separation, we conducted an ablation study in Appendix H, where we compare the OOD performance of our method (with separation loss) vs. our method (without separation loss). We observe that incorporating separation loss indeed achieves stronger OOD generalization performance, echoing the theory. 7 Related Works Out-of-distribution generalization. OOD generalization is an important problem when the training and test data are sampled from different distributions. Compared to domain adaptation [18, 7, 59, 30, 64], OOD generalization is more challenging [8, 47, 22, 5, 76, 34, 4, 63, 71, 11, 6, 33, 23, 17, 58], which aims to generalize to unseen distributions without any sample from the target domain. In particular, A popular direction is to extract domaininvariant feature representation. Prior works show that the invariant features from training domains can help discover invariance on target domains for linear models [50, 52]. IRM [3] and its variants [1, 36] aim to find invariant representation from different training domains via an invariant risk regularizer. [42] propose a causal matching-based algorithm for domain generalization. Other lines of works have explored the problem from various perspectives such as causal discovery [12], distributional robustness [54, 75], model ensembles [15, 51], and test-time adaptation [49, 13]. In this paper, we focus on improving OOD generalization via hyperspherical learning, and provide a new theoretical analysis of the generalization error. \fTheory for OOD generalization. Although the problem has attracted great interest, theoretical understanding of desirable conditions for OOD generalization is under-explored. Generalization to arbitrary OOD is impossible since the test distribution is unknown [8, 47]. Numerous general distance measures exist for defining a set of test domains around the training domain, such as KL divergence [28], MMD [21], and EMD [53]. Based on these measures, some prior works focus on analyzing the OOD generalization error bound. For instance, Albuquerque et al. [2] obtain a risk bound for linear combinations of training domains. Ye et al. [70] provide OOD generalization error bounds based on the notation of variation. In this work, we provide a hyperspherical learning algorithm that provably reduces the variation, thereby improving OOD generalization both theoretically and empirically. Contrastive learning for domain generalization Contrastive learning methods have been widely explored in different learning tasks. For example, Wang & Isola [65] analyze the relation between the alignment and uniformity properties on the hypersphere for unsupervised learning, while we focus on supervised learning with domain shift. Tapaswi et al. [57] investigates a contrastive metric learning approach for hyperspherical embeddings in video face clustering, which differs from our objective of OOD generalization. Von K\u00fcgelgen et al. [61] provide theoretical justification for self-supervised learning with data augmentations. Recently, contrastive losses have been adopted for OOD generalization. 
For example, CIGA [16] captures the invariance of graphs to enable OOD generalization for graph data. CNC [74] is specifically designed for learning representations robust to spurious correlation by inferring pseudo-group labels and performing supervised contrastive learning. SelfReg [32] proposes a selfsupervised contrastive regularization for domain generalization with non-hyperspherical embeddings, while we focus on hyperspherical features with theoretically grounded loss formulations. 8 Conclusion In this paper, we present a theoretically justified algorithm for OOD generalization via hyperspherical learning. HYPO facilitates learning domain-invariant representations in the hyperspherical space. Specifically, we encourage low variation via aligning features across domains for each class and promote high separation by separating prototypes across different classes. Theoretically, we provide a provable understanding of how our loss function reduces the OOD generalization error. Minimizing our learning objective can reduce the variation estimates, which determine the general upper bound on the generalization error of a learnable OOD generalization task. Empirically, HYPO achieves superior performance compared to competitive OOD generalization baselines. We hope our work can inspire future research on OOD generalization and provable understanding. Acknowledgement The work is supported by the AFOSR Young Investigator Program under award number FA9550-23-1-0184, National Science Foundation (NSF) Award No. IIS-2237037 & IIS-2331669, and Office of Naval Research under grant number N00014-23-1-2643." + }, + { + "url": "http://arxiv.org/abs/2306.06048v2", + "title": "How Does Fine-Tuning Impact Out-of-Distribution Detection for Vision-Language Models?", + "abstract": "Recent large vision-language models such as CLIP have shown remarkable\nout-of-distribution (OOD) detection and generalization performance. However,\ntheir zero-shot in-distribution (ID) accuracy is often limited for downstream\ndatasets. Recent CLIP-based fine-tuning methods such as prompt learning have\ndemonstrated significant improvements in ID classification and OOD\ngeneralization where OOD labels are available. Nonetheless, it remains unclear\nwhether the model is reliable to semantic shifts without OOD labels. In this\npaper, we aim to bridge the gap and present a comprehensive study to understand\nhow fine-tuning impact OOD detection for few-shot downstream tasks. By framing\nOOD detection as multi-modal concept matching, we establish a connection\nbetween fine-tuning methods and various OOD scores. Our results suggest that a\nproper choice of OOD scores is essential for CLIP-based fine-tuning. In\nparticular, the maximum concept matching (MCM) score provides a promising\nsolution consistently. We also show that prompt learning demonstrates the\nstate-of-the-art OOD detection performance over the zero-shot counterpart.", + "authors": "Yifei Ming, Yixuan Li", + "published": "2023-06-09", + "updated": "2023-11-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CY", + "cs.LG" + ], + "main_content": "Introduction Machine learning (ML) is undergoing a paradigm shift with the rise of models that are trained on massive data and are adaptable to a wide range of downstream tasks. 
Popular pre-trained large vision-language models (Radford et al., 2021; Jia et al., 2021; Yao et al., 2021; Li et al., 2022) demonstrate remarkable performance, and allow researchers without extensive computation power to benefit from these models. It is now the common practice of the ML community to adopt pre-trained models for transfer learning on downstream tasks rather than learning from scratch. Despite the promise, the safety risks of these large pre-trained models can be potentially inherited by all the fine-tuned models. Without appropriately understanding the safety risks, development on top of pre-trained models can exacerbate and propagate safety concerns writ large, causing profound impacts on society. In response to these urgent challenges, the overall objective of this paper is to systematically understand the out-of-distribution risks of learning with pre-trained vision-language models. This paper seeks to address the research question that arises in building responsible and ethical AI models: How does fine-tuning influence out-of-distribution (OOD) detection for large vision-language models? Detecting OOD samples is crucial for machine learning models deployed in the open world, where samples from unseen classes 1 arXiv:2306.06048v2 [cs.CV] 17 Nov 2023 \fnaturally emerge, and failure to detect them can have severe consequences. Despite increasing attention (Yang et al., 2021), OOD detection research for large vision-language models has been scant. Among the most recent works, Ming et al. (2022) investigated training-free OOD detection based on the pre-trained CLIP model. However, the impact of fine-tuning on OOD detection has been unexplored in the vision-language literature. In this paper, we bridge the gap by investigating how fine-tuning large vision-language models affects OOD detection. Parameter-efficient finetuning methods have been popularized in recent years. In particular, prompt learning (Zhou et al., 2022a,b) optimizes learnable word embeddings of the prompts, while adaptors directly optimize the internal feature representations (Gao et al., 2021; Zhang et al., 2022). Both methods are parameter-efficient as image and text encoders are frozen during fine-tuning, and have shown significant improvement for few-shot indistribution (ID) classification. Complementary to existing research, we focus on OOD detection for fine-tuned models using multi-modal concept matching. At the core of the concept matching framework, we use the few-shot ID training set and textual descriptions of the labels to derive a set of visual and textual features that represent the typical features for each ID class. We can measure OOD uncertainty based on the distance between the input feature and the nearest ID prototype. Based on the concept matching framework, we then present a comprehensive and systematic study to explore how different parameterefficient fine-tuning methods impact OOD detection performance, and contribute unexplored findings to the community. We disentangle various aspects such as adaptation methods and OOD scoring functions. Interestingly, we observe that parameter-efficient fine-tuning can significantly improve OOD reliability compared to zero-shot CLIP models. In particular, prompt learning methods exhibit very competitive performance when coupled with the maximum concept matching (MCM) score (Ming et al., 2022). 
Furthermore, we delve deeper into prompt learning and analyze how the pre-trained features are modified during fine-tuning, and how it impacts OOD detection as a consequence. We study the impact of shots, architectures, and explore the effects of prompt learning on various downstream tasks, including the challenging ImageNet-1k (ID) benchmark. Our results demonstrate that prompt learning perturbs the pretrained feature space that benefits both ID and OOD performance. More encouragingly, the trend holds consistently across different settings, highlighting its potential for reliable fine-tuning in vision-language modeling. We summarize the contributions of this work as follows: \u2022 We provide a timely and systematic study on how CLIP-based fine-tuning influences OOD detection in the few-shot setting. Our study disentangles various factors, including adaptation methods and OOD scoring functions. \u2022 We present novel evidence that parameterefficient fine-tuning does not deteriorate pretrained features. Instead, they can improve both ID and OOD performance with a proper OOD scoring function, especially the MCM score. We show that prompt learning consistently demonstrates the state-of-the-art OOD detection performance over the zero-shot counterpart. \u2022 We provide an in-depth analysis of prompt learning\u2019s impact on the feature space for OOD detection and conduct comprehensive ablations across datasets, architectures, and the number of shots with various OOD detection scores. 2 Preliminaries Contrastive vision-language models. Recent large vision-language models have shown great potential for various computer vision tasks. In this paper, we focus on CLIP-like models (Radford et al., 2021; Yao et al., 2021), which adopt a dual-stream architecture with one text encoder f : t \u2192Rd and one image encoder g : x \u2192Rd. CLIP is pre-trained on a massive web-scale imagecaption dataset with a multi-modal contrastive loss that promotes the alignment of features from different modalities. CLIP learns transferable feature representations and demonstrates promising zero-shot generalization performance (Fort et al., 2021). Despite the promise, existing visionlanguage models perform zero-shot classification in a closed-world setting. That is, it will match an input into a fixed set of categories, even if it is irrelevant. For example, a bird in Figure 1 can \f3 Text Encoder image Encoder image Encoder OOD score distribution similarity vector cosine similarities Butterfly Headphone ... ... ... ... ... Flamingo Cat S(x) K Textual Prototypes OOD scoring function KxD Visual Prototypes K way D shot training set ID class labels test input (OOD) text prompt word embeddings label scale and aggregate ID OOD Fig. 1 A unified pipeline for OOD detection with parameter-efficient fine-tuning of CLIP models on few-shot datasets. Given ID text labels Yin and a few-shot training set, we view the textual and visual embeddings of ID classes as concept prototypes in the feature space. The OOD uncertainty of an input image can be characterized by the distance from its visual feature to the closest ID prototype from both modalities. See Section 3 for details. be blindly predicted as one of the in-distribution classes Yin ={headphone, cat, flamingo, butterfly}. This motivates the importance of OOD detection for vision-language models. OOD detection for vision-language models. In the open-world setting, the goal of OOD detection is to detect samples that do not belong to ID classes Yin. 
Here ID classes are defined w.r.t. the classification task of interest, instead of the classes used in pre-training. Accordingly, OOD is defined w.r.t. the ID classes, not the data distribution during pre-training. Ming et al. (2022) explore the zero-shot OOD detection for the pre-trained CLIP model, without adapting to the ID dataset. Instead, we focus on the setting where CLIP models are fine-tuned on a few-shot dataset Din, and hence are better adapted to the downstream ID task. We evaluate the fine-tuned CLIP model on a combination of ID and OOD datasets Din \u222aDout, where Dout = {xi, yout i }m i=1 contains inputs with semantically different categories yout / \u2208Yin. Formally, given an input x, OOD detection can be formulated as: G(x; f, g) = ( 1 S(x; f, g) \u2265\u03bb \u22121 S(x; f, g) < \u03bb , where S(\u00b7) is a scoring function that measures OOD uncertainty. In practice, \u03bb is chosen so that a high fraction of ID data (e.g., 95%) is above the threshold. Parameter-efficient fine-tuning. To improve the performance on downstream tasks, parameterefficient approaches are proposed to fine-tune CLIP on datasets of interest. Prompt learning and adaptor tuning have recently gained popularity and demonstrated improved results over zero-shot settings. In particular, prompt learning optimizes the word embeddings of the prompts, while adaptors directly optimize the internal feature representations. Both methods are parameter-efficient as image and text encoders are frozen during fine-tuning. In what follows, we introduce promptbased and adaptor-based methods respectively. For a downstream dataset with K indistribution classes Yin = {y1, y2, ..., yK}, prompt learning method such as CoOp (Zhou et al., 2022b) introduces M learnable context vectors vi \u2208Re to replace hand-engineered text prompts such as \u201cthis is a photo of\u201d, where e is the dimension of word embeddings. For each class yk, we obtain its contextualized representation tk = [v1, v2, \u00b7 \u00b7 \u00b7 , vM, wk] by concatenating the context vectors and the word embedding wk \u2208Re of \fthe label (upper left, Figure 1). To avoid overfitting and improve generalization performance, CoCoOp (Zhou et al., 2022a) further introduces instance-conditional prompts via a meta-network which produces a meta token m(x) given the visual feature of the input x. The meta token is added to each context token vi(x) = vi + m(x) for i \u2208{1, 2, \u00b7 \u00b7 \u00b7 , M}. Therefore, the prompt for class k is conditioned on each input: tk(x) = [v1(x), v2(x), \u00b7 \u00b7 \u00b7 , vM(x), wk]. To learn the context vectors, the cross-entropy loss is used in fine-tuning: p(yk | x) = exp (sk(x)/\u03c4) PK i=1 exp (si(x)/\u03c4) , (1) where sk(x) = g(x)\u00b7f(tk) \u2225g(x)\u2225\u00b7\u2225f(tk)\u2225is the cosine similarity of input x with the k-th label, and \u03c4 is the temperature. Alternatively, adaptor-based methods directly optimize the feature representations g(x) instead of learning context vectors. Specifically, given a K-way-D-shot ID training set (consisting of K classes with D examples per class), Zhang et al. (2022) propose a training-free adaptation method TipAdaptor which extracts all the visual features Wg = [g(x1,1), g(x1,2), \u00b7 \u00b7 \u00b7 , g(xK,D)] \u2208 RKD\u00d7d from the few-shot training dataset. For each input x, we can obtain K \u00d7 D cosine similarities sk,d(x) = g(x)\u00b7g(xk,d) \u2225g(x)\u2225\u00b7\u2225g(xk,d)\u2225. 
The cosine similarities are scaled by an exponential function \u02dc s : s 7\u2192exp(\u2212\u03b2 + \u03b2s) with a hyperparameter \u03b2 that modulates the sharpness. Therefore, we can obtain an average similarity vector for each class based on visual features, \u02dc sk(x) = 1 D PD d=1 \u02dc sk,d(x). The final similarity for class k is a weighted sum of similarities from the two modalities \u03b1\u02dc sk(x) + sk(x). To achieve better few-shot ID performance, Zhang et al. (2022) set visual features Wg as learnable parameters and denote the method as TipAdaptorF, where F stands for fine-tuning. Despite the stronger downstream classification performance, it remains unknown if fine-tuning leads to more reliable OOD detection at test time. We aim to provide a comprehensive understanding in this paper. 3 Method 3.1 OOD detection with fine-tuning We investigate OOD detection with parameterefficient fine-tuning on downstream tasks. We present a unified framework in Figure 1, where the learnable part of the CLIP model is marked with an \u201cunlock\u201d icon while the frozen part is marked with a \u201clock\u201d icon. For prompt learning methods such as CoOp and CoCoOp, the cosine similarity of the input feature with the k-th class sk(x) = g(x)\u00b7f(tk) \u2225g(x)\u2225\u00b7\u2225f(tk)\u2225is derived based on the adapted textual feature vector tk. Alternatively, adaptor-based methods such as TipAdaptor and TipAdaptorF first scale the cosine similarities of visual prototypes and perform a weighted sum with the similarities of textual prototypes. Therefore, we can view TipAdaptor as an ensemble method that utilizes multi-modal prototypes. To summarize, for each adaptation algorithm A, OOD detection can be performed by: GA(x; f, g) = ( ID S(x; f, g) \u2265\u03bb OOD S(x; f, g) < \u03bb , where A can be instantiated by an adaptation method such as CoOp, CoCoOp, TipAdaptor, or TipAdaptorF. Therefore, the OOD detector GA(\u00b7) can be viewed as a \u201csafeguard\u201d for the classification model. Next, we introduce various OOD score functions S(x; f, g) assuming GA(x; f, g) is defined implicitly as each score function corresponds to an OOD detector G. 3.2 OOD score for vision-language models Recently, Ming et al. (2022) propose a conceptual framework of CLIP-based OOD detection via concept matching, where the textual feature f(tk) is viewed as the concept prototype for ID class k \u2208{1, 2, ..., K}. OOD uncertainty is then characterized by the distance from the visual feature of the input to the closest ID textual prototype. That is, images closer to one of the ID prototypes are more likely to be ID and vice versa. Ming et al. (2022) suggest that softmax scaling with a proper temperature \u03c4 provably leads to state-of-the-art performance under the zero-shot (training-free) setting. Specifically, the maximum \f5 concept matching (MCM) score is defined as: SMCM(x) = max k\u2208[K] esk(x)/\u03c4 PK j=1 esj(x)/\u03c4 , (2) where the temperature \u03c4 needs to be tuned on the downstream dataset. As a special case of MCM, we use MSP to denote the MCM score when the temperature \u03c4d is set as default for CLIP models at inference time (e.g., 100 for CLIP-B/16). Additionally, we consider a simpler scoring function based on the maximum similarity (MS) among ID prototypes before applying softmax scaling: SMS(x) = max k\u2208[K] sk(x), (3) which does not require any hyperparameter tuning. 
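To make Eq. (2) and Eq. (3) concrete, below is a minimal PyTorch sketch of the two scoring functions, assuming a precomputed matrix of cosine similarities between test images and the K ID prototypes; the tensor contents, temperature, and threshold are illustrative placeholders rather than values from the paper.
```python
import torch

def ms_score(cos_sim: torch.Tensor) -> torch.Tensor:
    """S_MS(x) = max_k s_k(x); cos_sim has shape (num_images, K)."""
    return cos_sim.max(dim=-1).values

def mcm_score(cos_sim: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """S_MCM(x) = max_k softmax(s(x) / tau)_k, as in Eq. (2)."""
    return torch.softmax(cos_sim / tau, dim=-1).max(dim=-1).values

def ood_decision(scores: torch.Tensor, lam: float) -> torch.Tensor:
    """Flag inputs as ID (True) when the score is at or above the threshold lambda."""
    return scores >= lam

# cos_sim[i, k] = <g(x_i), f(t_k)> / (||g(x_i)|| * ||f(t_k)||); placeholder values here.
cos_sim = torch.rand(8, 100) * 0.4
print(ms_score(cos_sim))
print(mcm_score(cos_sim, tau=1.0))
```
In practice, the similarities come from the adapted text features f(t_k) and image features g(x) described above, and λ is chosen on ID data so that a high fraction (e.g., 95%) of ID inputs is retained.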
We show in Section 4 that the MS score demonstrates strong OOD detection performance with fine-tuning, especially for fine-grained ID datasets. We now proceed to experiments where we investigate the impact of fine-tuning on realworld tasks. 4 Experiments 4.1 Setup Datasets. Following Ming et al. (2022), we consider a wide range of real-world ID datasets with various semantics and number of classes: Caltech101 (Bossard et al., 2014), Stanford-Cars (Krause et al., 2013), Food-101 (Bossard et al., 2014), Oxford-Pets (Parkhi et al., 2012) and ImageNet1k (Deng et al., 2009). For each ID dataset, we follow Zhou et al. (2022a) and construct the training set with D random samples per class, while the original test set is used for testing. We use D = 16 by default and study the impact of shots as ablations in Section 4.3. For OOD test datasets, we use the same ones in Huang and Li (2021), including subsets of iNaturalist (Van Horn et al., 2018), Sun (Xiao et al., 2010), Places (Zhou et al., 2017), and Texture (Cimpoi et al., 2014). For each OOD dataset, the categories do not overlap with the ID dataset. For ImageNet-1k as ID, we also consider two additional OOD datasets ImageNet-O (Hendrycks et al., 2021) and OpenImage-O (Wang et al., 2022). Models and training details. For pre-trained models, we use CLIP-B/16 as the default backbone for main experiments, which uses ViTB/16 (Dosovitskiy et al., 2021) as the image encoder. The impact of backbones is included in the ablation studies. We use ZOCLIP to denote pre-trained CLIP without fine-tuning. For each method, we closely follow the original implementations. Specifically, for CoOp and CoCoOp, the context length is set to 4, and the context vectors are initialized using the pre-trained word embeddings of \u201ca photo of a\u201d. CoCoOp is trained with a batch size of 1 for 10 epochs using SGD, while CoOp is trained for 100 epochs with a batch size of 32. TipAdapterF is trained with a batch size 256 using AdamW (Loshchilov and Hutter, 2019) for 20 epochs. Cosine scheduling is used for all methods and the data preprocessing protocol consists of random re-sizing, cropping, and random horizontal flip. Evaluation metrics. We consider the following evaluation metrics: (1) the false positive rate (FPR95) of OOD samples when the true positive rate of in-distribution samples is at 95%, (2) the area under the receiver operating characteristic curve (AUROC), and (3) ID classification accuracy (ID ACC). 4.2 Main results and discussions In this section, we first present novel evidence that parameter-efficient fine-tuning generally improves OOD performance over the zero-shot counterpart with a simple OOD scoring function. Next, we investigate the effects of various OOD scoring functions in the parameter-efficient fine-tuning setting. In particular, we will show that the MCM score consistently demonstrates the most promising performance compared to alternative OOD scores when coupled with prompt learning. How does parameter-efficient fine-tuning impact OOD detection? We evaluate the OOD detection performance on various ID datasets. The results are summarized in Table 1. We show that adapted CLIP models demonstrate nearly perfect OOD detection performance for ID datasets with fine-grained categories such as Stanford-Cars and Oxford-Pets. Moreover, when the ID dataset contains a diverse collection of categories such as \fTable 1 OOD detection performance based on SMS score (w.o. softmax scaling). 
When ID datasets contain finer-grained categories semantically different from OOD categories, the pre-trained CLIP model demonstrates nearly perfect OOD detection performance. More encouragingly, after adapting the model to downstream datasets, OOD detection performance remains competitive. ID Dataset Method SUN Places Textures iNaturalist Average FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 Training not required Food-101 ZOCLIP 0.04 99.92 0.12 99.93 4.63 98.29 0.15 99.87 1.24 99.50 TipAdaptor 0.00 99.94 0.04 99.95 2.87 98.85 0.06 99.90 0.74 99.66 Requires training TipAdaptorF 0.00 99.94 0.03 99.95 3.16 98.77 0.05 99.91 0.81 99.64 CoOp 0.01 99.97 0.00 99.98 1.45 99.68 0.00 99.97 0.36 99.90 CoCoOp 0.00 99.98 0.00 99.98 1.97 99.51 0.01 99.97 0.49 99.86 Training not required Oxford-Pets ZOCLIP 0.03 99.99 0.14 99.96 0.12 99.95 0.00 100.00 0.07 99.97 TipAdaptor 0.01 100.00 0.07 99.98 0.07 99.99 0.00 100.00 0.04 99.99 Requires training TipAdaptorF 0.02 100.00 0.07 99.98 0.09 99.98 0.00 100.00 0.04 99.99 CoOp 0.02 100.00 0.18 99.97 0.25 99.92 0.00 100.00 0.11 99.97 CoCoOp 0.03 99.99 0.19 99.96 0.11 99.96 0.00 100.00 0.08 99.98 Training not required Stanford-Cars ZOCLIP 0.02 99.99 0.24 99.94 0.00 100.00 0.00 100.00 0.07 99.98 TipAdaptor 0.01 100.00 0.08 99.98 0.00 100.00 0.00 100.00 0.02 100.00 Requires training TipAdaptorF 0.01 100.00 0.06 99.98 0.00 100.00 0.00 100.00 0.02 100.00 CoOp 0.01 100.00 0.07 99.97 0.00 100.00 0.00 100.00 0.02 99.99 CoCoOp 0.01 100.00 0.07 99.97 0.00 100.00 0.00 100.00 0.02 99.99 Training not required Caltech-101 ZOCLIP 32.03 94.06 33.01 93.39 54.66 89.29 32.14 94.30 37.96 92.76 TipAdaptor 9.69 98.07 11.25 97.84 20.90 96.68 13.62 97.72 13.86 97.58 Requires training TipAdaptorF 10.20 97.76 11.60 97.42 23.32 95.54 14.01 97.36 14.78 97.02 CoOp 5.53 98.56 9.88 97.50 13.10 97.10 4.89 98.76 8.35 97.98 CoCoOp 2.86 99.19 6.42 98.37 8.81 98.09 5.68 98.68 5.94 98.58 Caltech-1011, parameter-efficient fine-tuning still significantly improves the OOD detection performance on average compared to ZOCLIP. In particular, CoCoOp yields the best performance among other adaptation methods on Caltech-101 (ID). It achieves an average FPR95 of 5.94% using SMS, improving by 32.02% over ZOCLIP. While prior works suggest that parameter-efficient finetuning methods improve ID accuracy on few-shot datasets, our results complement their findings and show that fine-tuning also improves the OOD detection performance with proper OOD scoring functions. Effects of OOD scoring functions. We investigate the effect of OOD scoring functions under fine-tuned vision-language models. In Table 2, we contrast the OOD detection performance using MCM (Ming et al., 2022) vs. MS on Caltech101 (ID). Our findings suggest that: (1) SMCM performs on par with SMS for fine-grained ID tasks across a wide range of adaptation methods (Table 3). (2) However, when ID contains diverse 1Similar trends also hold for ImageNet-1k as ID. categories, utilizing SMCM generally leads to better performance compared to using SMS for most adaptation methods (Table 2). (3) In particular, prompt learning methods such as CoCoOp demonstrate very competitive results with both OOD scores (an average FPR95 of 5.02% with SMCM and 5.94% with SMS in Table 2). Effects of softmax scaling. Previously, Ming et al. (2022) observed that the commonly used maximum softmax score (SMSP) is suboptimal for zero-shot OOD detection with vision-language models. 
We investigate whether MSP is competitive for OOD detection with fine-tuned models. To better illustrate the effects, we plot the score distributions for Stanford-Cars (ID) vs. SUN (OOD) in Figure 2 when the model is fine-tuned with CoOp, CoCoOp, and TipAdaptorF respectively. For each fine-tuning method, we can clearly see that the SMS leads to superior ID-OOD separability, while SMSP displays significant overlapping. Quantitatively, compared to SMSP, the average FPR95 is significantly decreased with SMS (Table B4). Our findings highlight that directly \f7 Table 2 OOD detection performance with SMS and SMCM score when the ID dataset contains diverse categories. Prompt learning methods display clear advantages over zero-shot models. The results are based on Caltech-101 (ID). OOD Score Method SUN Places Textures iNaturalist Average FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 SMS ZOCLIP 32.03 94.06 33.01 93.39 54.66 89.29 32.14 94.30 37.96 92.76 TipAdaptor 9.69 98.07 11.25 97.84 20.90 96.68 13.62 97.72 13.86 97.58 TipAdaptorF 10.20 97.76 11.60 97.42 23.32 95.54 14.01 97.36 14.78 97.02 CoOp 5.53 98.56 9.88 97.50 13.10 97.10 4.89 98.76 8.35 97.98 CoCoOp 2.86 99.19 6.42 98.37 8.81 98.09 5.68 98.68 5.94 98.58 SMCM ZOCLIP 14.83 97.20 20.45 96.00 14.98 97.35 10.84 97.76 15.28 97.08 TipAdaptor 5.12 98.83 8.05 98.34 4.65 99.05 6.94 98.77 6.19 98.75 TipAdaptorF 4.83 98.79 8.09 98.07 6.41 98.11 4.94 98.98 6.07 98.49 CoOp 3.62 99.01 8.15 97.89 6.29 98.62 7.57 98.35 6.41 98.47 CoCoOp 4.26 98.94 6.76 98.00 4.33 98.88 4.71 98.68 5.02 98.62 Table 3 OOD detection performance based on SMCM score. ID Dataset Method SUN Places Textures iNaturalist Average FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 Food-101 ZOCLIP 1.75 99.46 2.04 99.35 5.54 98.05 2.80 99.17 3.03 99.01 TipAdaptor 0.63 99.75 0.64 99.71 3.76 98.59 1.32 99.55 1.59 99.40 TipAdaptorF 1.77 99.57 1.57 99.53 4.43 98.34 1.85 99.40 2.40 99.21 CoOp 2.00 99.46 1.60 99.47 5.85 98.39 1.37 99.54 2.71 99.22 CoCoOp 1.06 99.69 1.01 99.63 4.17 98.42 1.40 99.53 1.91 99.32 Oxford-Pets ZOCLIP 1.18 99.73 3.37 99.28 1.37 99.73 6.17 98.84 3.02 99.40 TipAdaptor 0.05 99.97 0.62 99.87 0.17 99.96 0.11 99.87 0.24 99.92 TipAdaptorF 0.48 99.89 1.74 99.66 0.43 99.88 0.93 99.53 0.90 99.74 CoOp 0.06 99.96 0.55 99.85 0.39 99.90 2.07 99.37 0.77 99.77 CoCoOp 0.08 99.95 0.53 99.85 0.25 99.91 1.12 99.55 0.49 99.82 Stanford-Cars ZOCLIP 0.02 99.96 0.31 99.89 0.02 99.96 0.10 99.74 0.11 99.89 TipAdaptor 0.01 99.98 0.11 99.94 0.00 99.97 0.00 99.84 0.03 99.93 TipAdaptorF 0.03 99.98 0.19 99.94 0.00 99.99 0.00 99.93 0.06 99.96 CoOp 0.01 99.98 0.17 99.93 0.00 99.98 0.02 99.84 0.05 99.93 CoCoOp 0.02 99.98 0.15 99.93 0.00 99.97 0.00 99.87 0.04 99.94 Caltech-101 ZOCLIP 14.83 97.20 20.45 96.00 14.98 97.35 10.84 97.76 15.28 97.08 TipAdaptor 5.12 98.83 8.05 98.34 4.65 99.05 6.94 98.77 6.19 98.75 TipAdaptorF 4.83 98.79 8.09 98.07 6.41 98.11 4.94 98.98 6.07 98.49 CoOp 3.62 99.01 8.15 97.89 6.29 98.62 7.57 98.35 6.41 98.47 CoCoOp 4.26 98.94 6.76 98.00 4.33 98.88 4.71 98.68 5.02 98.62 applying MSP is not competitive for fine-tuned vision-language models. 4.3 Delving into parameter-efficient fine-tuning for OOD detection The impact of fine-tuning on feature geometry. To better understand how fine-tuning leads to improved OOD detection performance, we examine the geometry of the feature representations. 
For illustration, we use the simple SMS score as it provides an intuitive geometric interpretation. For each test input, SMS captures the angular distance between its visual features and the closest ID prototype. Figure 3 shows SMS for ID and each OOD test dataset, where radians are converted to degrees for better readability. Intuitively, one desires to learn compact ID clusters such that ID inputs are closer to the nearest ID prototypes than OOD inputs. We illustrate the effects of prompt learning in Figure 4. Compared to zero-shot CLIP, CoOp and CoCoOp decrease the angular distance for ID inputs to the nearest concept prototype while simultaneously increasing the angular distance for OOD inputs. In particular, CoCoOp decreases the angular distance for ID inputs more significantly, resulting in better ID-OOD separability. Although prompt learning methods introduce perturbations to the feature space, the overall effect is modest, with only a slight deviation of a few degrees from the pre-trained model (similar observations can also be verified for adaptor-based methods). Nonetheless, these perturbations play a crucial role in enhancing both ID classification and OOD detection performance.
Fig. 2 The impact of softmax scaling. We use Stanford-Cars (ID) vs. SUN (OOD) for illustration. Applying softmax scaling significantly decreases ID-OOD separability for CoOp (top row), CoCoOp (second row), and TipAdaptorF (last row), resulting in worse OOD detection performance.
Fig. 3 Average SMS for ID (Caltech-101) and OOD test sets. Prompt learning methods decrease the angular distance for ID inputs while increasing the angular distance for OOD inputs to the nearest concept prototype, leading to better ID-OOD separability (Figure 4).
Fig. 4 Illustration of how prompt learning methods impact the hyperspherical features. Left: feature of an ID sample and its nearest ID prototype; Right: feature of an OOD sample and its nearest ID prototype.
Exploring prompt learning for OOD detection on challenging large-scale benchmarks. In previous sections, we show that prompt learning with both SMS and SMCM scores displays competitive performance. Next, we consider a more challenging large-scale benchmark, ImageNet-1k (ID). The results in FPR95 and AUROC are shown in Figure 5 and Figure 6. While SMS outperforms the SMSP score, we can clearly see that SMCM is particularly advantageous compared to the simpler SMS baseline. In particular, SMCM outperforms SMS by 7.44% in FPR95 averaged across the four OOD test sets. Moreover, CoOp with SMCM achieves an average FPR95 of 37.74% on the benchmark, surpassing the zero-shot performance of the large backbone CLIP-L/14 model which has an FPR95 of 38.17% (Ming et al., 2022). These results further demonstrate the effectiveness of SMCM in CLIP-based prompt learning for challenging scenarios.
Fig. 5 OOD detection performance (FPR95) on ImageNet-1k (ID). Using the SMCM score leads to significant improvement over SMSP.
Fig. 6 OOD detection performance (AUROC) on ImageNet-1k (ID). The trend is consistent with Fig. 5.
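The angular-distance view behind Figure 3 can be reproduced from cosine similarities alone, since SMS is the cosine of the angle to the nearest prototype. Below is a minimal PyTorch sketch of that conversion; the placeholder similarity tensors are hypothetical and only illustrate the computation, not the reported numbers.
```python
import torch

def mean_angular_distance_deg(cos_sim: torch.Tensor) -> float:
    """Average angle (in degrees) between each image feature and its closest
    ID concept prototype; cos_sim has shape (num_images, K)."""
    nearest_cos = cos_sim.max(dim=-1).values.clamp(-1.0, 1.0)  # S_MS per image
    angles_rad = torch.acos(nearest_cos)                       # angle in radians
    return float(torch.rad2deg(angles_rad).mean())             # convert to degrees

# Compare an ID split against an OOD split (placeholder tensors):
id_cos = torch.rand(1000, 101) * 0.35 + 0.05
ood_cos = torch.rand(1000, 101) * 0.30
print(mean_angular_distance_deg(id_cos), mean_angular_distance_deg(ood_cos))
```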
The impact of shots. We investigate the impact of shots for CoOp and CoCoOp with various OOD detection scores. The results are shown in Figure 7 and Figure 8, where each point represents the average FPR95 over the four OOD test sets. We highlight two key findings. First, the OOD detection performance with both SMS and SMCM score improves as the number of shots increases. This trend is consistent with the ID classification accuracy reported in Zhou et al. (2022b), suggesting that using a suitable OOD uncertainty score can enhance the representation quality as more data is incorporated during prompt learning. Second, the performance of SMCM is promising even with a low number of shots, demonstrating its effectiveness in resource-constrained settings. 2 4 8 12 16 Shots 10 15 20 25 30 FPR95 MCM MSP MS Fig. 7 The effects of shots for CoOp with various OOD detection scores on Caltech-101 (ID). The performance is averaged over the four OOD test sets. The impact of backbone architecture. We conduct another ablation study on the impact of model architectures. We consider CLIP with ResNet backbones (N50, RN101) and ViT backbones (CLIP-B/32, CLIP-L/14), where the vision encoder is based on ViT-B/32 and ViT-L/14, respectively. We train with CoOp with hyperparameters following the original implementation for each architecture (Zhou et al., 2022b). We evaluate the models using SMSP, SMS, and SMCM score and summarize the results in Table 4 and 2 4 8 12 16 Shots 5 10 15 20 25 30 35 FPR95 MCM MSP MS Fig. 8 The effects of shots for CoCoOp with various OOD detection scores on Caltech-101 (ID). The performance is averaged over the four OOD test sets. Table 4 The impact of model architecture on ResNet backbones with CoOp on Caltech-101 (ID). Arch Score OOD Dataset FPR95\u2193 AUROC\u2191 RN50 SMSP SUN 29.93 93.95 Places 37.64 91.96 Textures 35.69 93.58 iNaturalist 43.42 91.27 AVG 36.67 92.69 SMS SUN 6.02 98.45 Places 9.02 97.79 Textures 23.17 95.25 iNaturalist 12.39 97.37 AVG 12.65 97.22 SMCM SUN 8.56 98.03 Places 17.02 95.88 Textures 12.09 97.56 iNaturalist 21.00 95.93 AVG 14.67 96.85 RN101 SMSP SUN 23.60 95.20 Places 29.37 93.94 Textures 21.29 96.24 iNaturalist 34.18 94.05 AVG 27.11 94.86 SMS SUN 19.08 96.56 Places 20.79 96.25 Textures 36.97 94.39 iNaturalist 30.89 95.41 AVG 26.93 95.65 SMCM SUN 6.19 98.42 Places 11.57 97.16 Textures 5.83 98.49 iNaturalist 10.56 97.69 AVG 8.54 97.94 Table 5. Interestingly, compared to SMSP, SMS brings more significant improvements under ViT backbones than ResNet backbones. In contrast, \fTable 5 The impact of model architecture on ViT backbones with CoOp on Caltech-101 (ID). Arch Score OOD Dataset FPR95\u2193 AUROC\u2191 CLIP-B/32 SMSP SUN 24.20 96.02 Places 27.94 94.99 Textures 24.54 96.09 iNaturalist 28.90 95.37 AVG 26.40 95.62 SMS SUN 13.81 97.41 Places 16.49 96.48 Textures 25.23 95.24 iNaturalist 13.00 97.60 AVG 17.13 96.68 SMCM SUN 4.06 98.92 Places 7.31 98.01 Textures 4.61 98.81 iNaturalist 8.70 98.17 AVG 6.17 98.48 CLIP-L/14 SMSP SUN 7.73 98.36 Places 10.96 97.71 Textures 19.18 96.60 iNaturalist 11.33 97.71 AVG 15.85 97.41 SMS SUN 13.81 97.41 Places 16.49 96.48 Textures 25.23 95.24 iNaturalist 13.00 97.60 AVG 12.30 97.59 SMCM SUN 2.15 99.33 Places 5.60 98.30 Textures 2.32 99.31 iNaturalist 3.94 99.06 AVG 3.50 99.00 SMCM score consistently demonstrates competitive performance for all the architectures considered. For instance, with CLIP-B/32, SMCM achieves an average FPR95 of 6.17%, a 20.23% improvement over the SMSP baseline. 
We observe similar improvements for RN101 (18.57%) and RN50 (22%). Moreover, larger backbones lead to superior performance when fixing the OOD detection score as MCM. For example, with CLIPL/14, the average FPR95 is improved by 11.17% compared to RN50 and 2.67% compared to CLIPB/32. A similar trend has been shown for ID classification (Radford et al., 2021), where larger models yield better feature representation. 5 Related works Parameter-efficient fine-tuning of visionlanguage models. Large-scale vision-language models have shown impressive performance on various downstream tasks (Radford et al., 2021; Jia et al., 2021; Yao et al., 2021; Li et al., 2022). These models learn transferable feature representations via pre-training on web-scale heterogeneous datasets. However, as downstream datasets can have a limited number of samples, adapting these large models in a parameter and dataefficient manner is crucial for effective knowledge transfer. Recent works propose various ways to tackle this challenge. Zhou et al. (2022b) propose to tune a set of soft prompts (Li and Liang, 2021; Lester et al., 2021) while freezing the encoders of CLIP. Zhou et al. (2022a) aims to improve the generalization ability of CoOp by introducing a meta-network that learns inputdependent tokens. Huang et al. (2022) propose to learn prompts in an unsupervised manner while TPT (Manli et al., 2022) uses test-time prompt tuning to learn adaptive prompts on the fly. Beyond textual prompt learning, Bahng et al. (2022) propose to tune visual prompts for CLIPbased fine-tuning. Another line of work focuses on adaptor-style fine-tuning, where instead of tuning prompts, the feature embedding is directly optimized using an adaptor module (Gao et al., 2021; Zhang et al., 2022; Udandarao et al., 2023). Prior works demonstrate significant improvement over zero-shot CLIP for few-shot ID classification and OOD generalization where OOD labels are given. However, it is unclear how reliable these parameter-efficient fine-tuning methods are for OOD detection tasks. Our work bridges this gap and explores how fine-tuning impacts OOD detection for few-shot downstream datasets. OOD detection with vision-language representations. A plethora of OOD detection methods have been proposed on visual inputs (Lee et al., 2018; Liang et al., 2018; Hendrycks et al., 2019; Tack et al., 2020; Sun et al., 2022; Ming et al., 2022; Du et al., 2022; Wang et al., 2022; Ming et al., 2023). With the rise of large-scale pre-trained models on vision language inputs, an increasing number of works utilize textual information for visual OOD detection and demonstrate promising performance. Fort et al. (2021) propose a scheme where pre-trained CLIP models are provided with candidate OOD labels for each target dataset, and show that the output probabilities summed over the OOD labels effectively capture OOD uncertainty. Without the assumption of OOD labels, Esmaeilpour et al. (2022) propose to train a decoder based on the visual encoder of \f11 CLIP to generate candidate labels for OOD detection. However, training a high-quality decoder incurs significant computational costs and requires extra data. While both Esmaeilpour et al. (2022) and Radford et al. (2021) focus on small-scale inputs, Ming et al. (2022) propose an OOD labelfree method MCM which demonstrates promising results on a wide range of large-scale and challenging tasks (Ming et al., 2022). However, Ming et al. (2022) only investigate pre-trained CLIP models. 
For multi-modal OOD detection benchmarks, Bitterwolf et al. (2023) curate a new OOD test set for ImageNet-1k while Gu et al. (2023) provide new OOD datasets for document understanding. In contrast, our work focuses on the impact of parameter-efficient fine-tuning methods for OOD detection in few-shot downstream tasks, which has not been explored. 6 Conclusion In this paper, we provide a timely study on the impact of parameter-efficient fine-tuning methods for OOD detection with large vision-language models. We focus on the few-shot setting without access to OOD labels, which has been largely unexplored in the literature. We show that parameter-efficient fine-tuning methods can improve both ID and OOD performance when coupled with a proper OOD score, with prompt learning-based methods showing the strongest performance under the MCM score. We analyze the feature space and provide insights into the effectiveness of such methods through the lens of multi-modal concept matching. We hope our findings will inspire and motivate future research on designing reliable fine-tuning methods for large vision-language models. Acknowledgement Support for this research was provided by American Family Insurance through a research partnership with the University of Wisconsin\u2013Madison\u2019s Data Science Institute, Office of Naval Research under award number N00014-23-1-2643, AFOSR Young Investigator Program under award number FA9550-23-1-0184, and National Science Foundation (NSF) CAREER Award No. IIS-2237037. Appendix A Dataset Details Details on ID and OOD dataset construction For ID datasets, we follow the same construction as in previous works (Zhang et al., 2022; Zhou et al., 2022a,b). Detailed instructions on dataset installation can be found in https://github.com/KaiyangZhou/CoOp/blob/ main/DATASETS.md. For OOD datasets, Huang and Li (2021) curate a collection of subsets from iNaturalist Van Horn et al. (2018), SUN Xiao et al. (2010), Places Zhou et al. (2017), and Texture Cimpoi et al. (2014) as large-scale OOD datasets for ImageNet-1k, where the classes of the test sets do not overlap with ImageNet-1k. Detailed instructions can be found in https: //github.com/deeplearning-wisc/large scale ood. Appendix B Additional Results B.1 ID accuracy While we primarily focus on the OOD detection performance of CLIP-based fine-tuning methods, we present the results of the ID accuracy for each dataset based on CLIP-B/16 in Table B1 for completeness. Further results on the ID accuracy with various datasets and architectures can be seen in Zhou et al. (2022a), Zhou et al. (2022b), and Zhang et al. (2022). B.2 OOD detection performance based on visual features alone In this section, we explore several commonly used OOD detection scores solely based on the visual branch of CLIP models. Specifically, we consider the Mahalanobis score (Lee et al., 2018) on the penultimate layer of the visual encoder and MSP (Hendrycks and Gimpel, 2017), Energy (Liu et al., 2020), and KL Matching (Hendrycks et al., 2022) scores on the logit layer after linear probing the visual encoder. The results are summarized in Table B2, based on 16-shot Caltech-101 (ID). We can see that the Mahalanobis score does not yield promising performance because 1) the feature embeddings from the visual encoder of CLIP may not follow class-conditional Gaussian distributions, 2) it is challenging to estimate the mean \fTable B1 ID accuracy on the downstream datasets for CLIP-based fine-tuning methods with CLIP-B/16. 
ID Dataset Method ID Acc Caltech-101 ZOCLIP 92.90 TipAdaptor 95.01 TipAdaptorF 95.66 CoOp 95.30 CoCoOp 95.00 Food-101 ZOCLIP 86.10 TipAdaptor 86.49 TipAdaptorF 87.43 CoOp 85.50 CoCoOp 87.30 Stanford-Cars ZOCLIP 65.27 TipAdaptor 75.29 TipAdaptorF 83.40 CoOp 78.50 CoCoOp 72.30 Oxford-Pets ZOCLIP 89.10 TipAdaptor 91.85 TipAdaptorF 92.91 CoOp 93.40 CoCoOp 93.30 ImageNet-1k ZOCLIP 68.77 TipAdaptor 70.26 TipAdaptorF 73.70 CoOp 71.63 CoCoOp 71.20 and especially covariance matrix when the number of samples is much smaller than the feature dimension in the few-shot setting. On the other hand, the OOD scores based on fine-tuned logit layer result in worse performance compared to the MCM score. One major reason is that fine-tuning CLIP in the few-shot setting is prone to overfitting the downstream ID dataset, making the model less reliable. This further highlights the importance of choosing OOD detection scores fitted to parameter-efficient fine-tuning methods. B.3 Additional results on ImageNet-1k In this section, we consider two additional OOD test sets ImageNet-O (Hendrycks et al., 2021) and OpenImage-O (Wang et al., 2022) for ImageNet1k (ID). OpenImage-O is a subset curated from the test set of OpenImage-V3 (Krasin et al., 2017) containing a diverse set of categories. ImageNet-O Table B2 Additional results for OOD scores based on visual encoder only. ID dataset is Caltech-101 (16 shot). OOD Score OOD Dataset FPR95\u2193 AUROC\u2191 Maha SUN 34.15 95.20 Places 20.50 96.21 Textures 64.10 92.43 iNaturalist 66.62 92.97 AVG 46.34 94.20 Energy SUN 15.02 97.05 Places 21.10 95.75 Textures 15.60 97.00 iNaturalist 33.77 95.49 AVG 21.37 96.32 KL Matching SUN 4.56 98.21 Places 8.92 97.52 Textures 42.64 94.47 iNaturalist 9.70 97.35 AVG 16.46 96.89 MSP SUN 16.23 96.59 Places 20.98 95.97 Textures 7.15 98.33 iNaturalist 11.79 97.31 AVG 14.04 97.05 is a challenging OOD dataset that contains naturally adversarial examples for ImageNet-1k. The results are shown in Table B3. The model (CLIPB/16) is trained with CoOp. We can see that: 1) The performance on ImageNet-O is generally worse than the rest of OOD test sets (iNaturalist, Textures, SUN, Places) in Section 4.3, suggesting that this task remains challenging in the context of few-shot prompt learning. 2) MCM score still performs the best compared to MS and MSP on both OOD test sets, consistent with our previous observations, which further highlights the importance of softmax and temperature scaling for OOD detection with fine-tuning. Table B3 OOD detection performance on two OOD additional test sets for ImageNet-1k (ID). We train CLIP-B/16 with CoOp. OOD Dataset OOD Score FPR95\u2193 AUROC\u2191 ImageNet-O SMSP 77.20 74.01 SMS 70.75 82.30 SMCM 61.50 84.13 OpenImage-O SMSP 56.89 83.73 SMS 39.18 91.48 SMCM 36.68 92.76 \f13 B.4 Alternative OOD scores In this section, we investigate the performance with several alternative OOD scoring functions based on the cosine similarities of input x with the k-th label sk(x), k \u2208{1, 2, ..., K} (defined in Section 3.2). Specifically, we consider the energy and the KL matching score for each adaptation method and summarize the results based on Caltech-101 (ID) in Table B5. We observe that 1) using the energy score, all adaptation methods significantly enhance the performance over the zero-shot baseline (ZOCLIP). 2) the general performance vastly improves when utilizing the KL Matching score. 
However, even the highest achieved performance (FPR95 at 7.91 with CoCoOp) falls short when compared to the MCM score (FPR95 at 5.02 with CoCoOp)." + }, + { + "url": "http://arxiv.org/abs/2211.13445v1", + "title": "Delving into Out-of-Distribution Detection with Vision-Language Representations", + "abstract": "Recognizing out-of-distribution (OOD) samples is critical for machine\nlearning systems deployed in the open world. The vast majority of OOD detection\nmethods are driven by a single modality (e.g., either vision or language),\nleaving the rich information in multi-modal representations untapped. Inspired\nby the recent success of vision-language pre-training, this paper enriches the\nlandscape of OOD detection from a single-modal to a multi-modal regime.\nParticularly, we propose Maximum Concept Matching (MCM), a simple yet effective\nzero-shot OOD detection method based on aligning visual features with textual\nconcepts. We contribute in-depth analysis and theoretical insights to\nunderstand the effectiveness of MCM. Extensive experiments demonstrate that MCM\nachieves superior performance on a wide variety of real-world tasks. MCM with\nvision-language features outperforms a common baseline with pure visual\nfeatures on a hard OOD task with semantically similar classes by 13.1% (AUROC).\nCode is available at https://github.com/deeplearning-wisc/MCM.", + "authors": "Yifei Ming, Ziyang Cai, Jiuxiang Gu, Yiyou Sun, Wei Li, Yixuan Li", + "published": "2022-11-24", + "updated": "2022-11-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "main_content": "Introduction Out-of-distribution (OOD) detection is critical for deploying machine learning models in the wild, where samples from novel classes can naturally emerge and should be \ufb02agged for caution. Despite increasing attention, the vast majority of OOD detection methods are driven by single-modal learning [26, 29, 34, 68, 89, 93, 95, 98]. For example, labels are typically encoded as one-hot vectors in image classi\ufb01cation, leaving the semantic information encapsulated in texts largely unexploited. OOD detection relying on pure visual information can inherit the limitations, e.g., when an OOD input may be visually similar to in-distribution (ID) data yet semantically different from any ID class. In this paper, we delve into a new landscape for OOD detection, departing from the classic singlemodal toward a multi-modal regime. While the motivation is appealing, a core challenge remains: how to effectively utilize joint vision-language features for OOD detection? In the visual domain, existing methods typically require good feature representations [66, 72], and a distance metric under which OOD data points are relatively far away from the in-distribution (ID) data [42, 71]. These approaches, however, do not directly translate into the multi-modal regime. On the representation learning side, recent vision-language pre-training schemes such as CLIP [59] and ALIGN [33] have emerged as promising alternatives for visual representation learning. The main idea is to align an image with its corresponding textual description in the feature space. While the resulting representations are powerful, OOD detection based on such aligned multi-modal features is still in its infancy. We bridge the gap by exploring a distance-based OOD detection approach, leveraging the joint vision-language representations. Our method capitalizes on the compatibility between visual features and textual features. 
By defining the textual features as the "concept prototypes" for each ID class, we characterize OOD uncertainty by the distance from the visual feature to the closest ID prototype. That is, images closer to one of the textual embeddings of ID classes are more likely to be ID and vice versa. By a proper scaling of the distance, our proposed Maximum Concept Matching (MCM) score achieves strong ID-OOD separability (see Figure 1).
Figure 1: Overview of the proposed zero-shot OOD detection framework. The ID classification task is defined by a set of class labels Yin. The goal of OOD detection is to detect samples that do not belong to Yin. We view the textual embeddings of ID classes (wrapped by text templates) as concept prototypes. The OOD uncertainty of an input image can be characterized by the distance from visual features to the closest ID prototype. By properly scaling the distance, the MCM score achieves strong ID-OOD separability. See Section 3 for details.
MCM stands in contrast with the previous distance-based approaches, such as Mahalanobis [42], which defines class prototypes based on pure visual embeddings. Indeed, we show later in Section 5 that MCM (with multi-modal vision-language features) is far more competitive than Mahalanobis (with single-modal visual features). Moreover, while prior works of CLIP-based OOD detection [16, 19] rely on a set of candidate OOD labels, MCM is OOD-agnostic and alleviates the need for any prior information about test inputs. Our work also advances the field by showcasing the promise of zero-shot OOD detection, which offers strong performance and generality without training on the ID samples. In particular, classic OOD detection methods often require training from scratch [9, 27] or fine-tuning [19, 32] on a given ID dataset. In this setting, a classifier and its companion OOD detector are good at only one task. Every new task (ID dataset) requires additional training and brings additional computation and storage costs. In contrast, we show for the first time that: (1) MCM achieves superior performance across a wide variety of real-world tasks, with just one single pre-trained model. This is encouraging given that there is no training or any OOD information involved. (2) On the challenging ImageNet-1k benchmark, MCM's zero-shot OOD detection performance favorably matches and even outperforms strong task-specific baselines fine-tuned on BiT [32] and ViT models [19]. (3) MCM remains robust against hard OOD inputs, including both semantically hard OODs [85] and spurious OODs [50]. We summarize our main contributions as follows: 1. We propose MCM, a simple yet effective OOD detection method based on aligned vision-language features. MCM offers several compelling advantages over other OOD detection methods: generalizable (one model supports many tasks), OOD-agnostic (no information required from OOD data), training-free (no downstream fine-tuning required), and scalable to large real-world tasks. 2. We conduct extensive experiments and show that MCM achieves superior performance on a wide range of real-world tasks.
On ImageNet-1k, MCM achieves an average AUROC of 91.49%, outperforming methods that require training. Moreover, we show that MCM remains competitive under challenging hard OOD evaluation tasks. 3. We provide in-depth empirical and theoretical analysis, providing insights to understand the effectiveness of MCM. We hope that this work will serve as a springboard for future works on OOD detection with multi-modal features. 2 \f2 Preliminaries Contrastive vision-language pre-training. Compared to visual representation learning models such as ViT [13], vision-language representation learning demonstrates superior performance on image classi\ufb01cation tasks. For instance, CLIP [59] adopts a self-supervised contrastive objective (i.e., InfoNCE loss [75]) to align an image with its corresponding textual description in the feature space. Speci\ufb01cally, CLIP adopts a simple dual-stream architecture with one text encoder T : t \u2192Rd (e.g., Transformer [77]) and one image encoder I : x \u2192Rd (e.g., ViT [13]). After pre-training on a dataset of 400 million text-image pairs, the joint vision-language embeddings of CLIP well associate objects in different modalities. Despite the promise, existing CLIP-like models perform zero-shot classi\ufb01cation in a closed-world setting. That is, it will match an input into a \ufb01xed set of categories, even if it is irrelevant (e.g., a tree being predicted as a bird in Figure 1). This motivates our work to leverage the multi-modal representation for OOD detection, which is largely unexplored. Zero-shot OOD detection. Given a pre-trained model, a classi\ufb01cation task of interest is de\ufb01ned by a set of class labels/names Yin, which we refer to as the known (ID) classes. Here ID classes are de\ufb01ned w.r.t. the classi\ufb01cation task of interest, instead of the classes used in pre-training. Accordingly, OOD is de\ufb01ned w.r.t. the ID classes, not the data distribution during pre-training. The goal of OOD detection is to (1) detect samples that do not belong to any of the known classes; (2) otherwise, assign test samples to one of the known classes. Therefore, the OOD detector can be viewed as a \u201csafeguard\u201d for the classi\ufb01cation model. Formally, we denote the OOD detector as a binary function: G(x; Yin, T , I) : X \u2192{in, out}, where x \u2208X denotes a test image. Our method is based on only the names of the given classes in Yin, and a pre-trained model. Different from standard supervised learning, there is no training on the ID samples involved, hence zero-shot. 3 OOD Detection via Concept Matching We illustrate our approach in Figure 1, which derives the OOD detector G(\u00b7) based on concept matching. For a given task with label set Yin = {y1, y2, ..., yK}, we can construct a collection of concept vectors T (ti), i \u2208{1, 2, ..., K}, where ti is the text prompt \u201cthis is a photo of a \u27e8yi\u27e9\u201d for a label yi. The concept vectors are represented by the embeddings of the text prompts. For any test input image x\u2032, we can calculate the label-wise matching score based on the cosine similarity between the image feature I(x\u2032) and the concept vector T (ti): si(x\u2032) = I(x\u2032)\u00b7T (ti) \u2225I(x\u2032)\u2225\u00b7\u2225T (ti)\u2225. Formally, we de\ufb01ne the maximum concept matching (MCM) score as: SMCM(x\u2032; Yin, T , I) = max i esi(x\u2032)/\u03c4 PK j=1 esj(x\u2032)/\u03c4 , (1) where \u03c4 is the temperature. 
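For concreteness, the score in Eq. (1) can be computed in a few lines of NumPy once the CLIP embeddings are available. The sketch below is ours, not the authors' released code: it assumes the visual embedding I(x') and the K concept embeddings T(t_i) of the prompts "this is a photo of a <y_i>" have already been extracted with a CLIP-like encoder, and all function and variable names are illustrative.

```python
import numpy as np

def mcm_score(image_feat, concept_feats, tau=1.0):
    """Maximum Concept Matching score of Eq. (1).

    image_feat:    (d,)   visual embedding I(x') of the test image
    concept_feats: (K, d) textual embeddings T(t_i) of the ID class prompts
    tau:           softmax temperature (tau = 1 in most experiments)
    Returns (S_MCM, index of the closest concept, i.e. the predicted ID class).
    """
    # cosine similarities s_i(x') between the image and each concept prototype
    img = image_feat / np.linalg.norm(image_feat)
    txt = concept_feats / np.linalg.norm(concept_feats, axis=1, keepdims=True)
    s = txt @ img                                   # shape (K,)

    # softmax scaling over the K ID concepts (shift by the max for numerical stability)
    e = np.exp((s - s.max()) / tau)
    p = e / e.sum()
    return float(p.max()), int(np.argmax(s))
```

Because softmax is monotone, the arg max of p coincides with the closest concept under cosine similarity, so the same call yields both the OOD score and the closed-world class prediction.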
For ID data, it will be matched to one of the concept vectors (textual prototypes) with a high score; and vice versa. Formally, our OOD detection function can be formulated as: G(x\u2032; Yin, T , I) = \u001a1 SMCM(x\u2032; Yin, T , I) \u2265\u03bb 0 SMCM(x\u2032; Yin, T , I) < \u03bb , where by convention 1 represents the positive class (ID) and 0 indicates OOD. \u03bb is chosen so that a high fraction of ID data (e.g., 95%) is above the threshold. For samples that are classi\ufb01ed as ID, one can obtain the class prediction based on the closest concept: \u02c6 y = arg maxi\u2208[K] si. Remark: (1) Our work differs from (and is complementary to) CLIP by focusing on OOD detection rather than (closed-world) zero-shot classi\ufb01cation. We show new theoretical insights that softmax scaling plays a unique role in zero-shot OOD detection\u2014improving the separability between ID and OOD data. This role has not been studied rigorously for zero-shot OOD detection. Readers familiar with CLIP may notice that MCM can be used for zero-shot classi\ufb01cation in the closed world. This also makes MCM practically convenient for dual goals: detect OOD samples and assign ID data to one of the known classes. (2) Our method in principle is not limited to CLIP; it can be generally applicable for contrastive vision-language pre-training models that promote multi-modal feature alignment. 3 \fNew insights on softmax scaling for zero-shot OOD detection. We provide theoretical justi\ufb01cations that softmax scaling improves the separability between ID and OOD data for CLIP-based OOD detection, which is contrary to models trained with cross-entropy (CE) loss. In particular, CLIP-like models are trained with a multi-modal contrastive loss, which maximizes the cosine similarity between an image and its textual description in the feature space. The resulting cosine similarity scores display strong uniformity1 across labels, as evidenced in Figure 2 (right). Compared to OOD inputs, the gap between the maximum cosine similarity and the average is larger for ID inputs. However, the gap can be small when the number of ID classes increases where ID samples occur with lower highest cosine similarity. As a result, the highest cosine similarity for ID samples and OOD samples can be highly close (c.f. Figure 2 (left)). Maximum Cosine Similarity OOD ID Density OOD Samples Figure 2: Left: Maximum cosine similarity for ID and OOD inputs. There exists overlapping regions (shown in yellow); Right: Cosine similarities between OOD inputs and ID concept vectors. For OOD inputs, the cosine similarities display uniformity. Motivated by these observations, MCM employs softmax as a post hoc mechanism to magnify the difference. This is fundamentally different from the softmax score derived from a model trained with cross-entropy loss, which inherently maximizes the posterior p(y|x) for the ground-truth label, and minimizes the probability for other labels. Unlike CLIPlike models, logit scores displaying uniformity would be heavily penalized by the CE loss. As a result, the logit score corresponding to the ground-truth label can already be signi\ufb01cantly higher than other labels. Applying softmax on the logit scores can exacerbate overcon\ufb01dent predictions, and reduce the separability between ID and OOD data [46]. Indeed, for a model trained with cross-entropy loss, a logit-based score such as Energy [48] is shown to be much more effective than the softmax score. 
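Before examining how this contrast plays out for CLIP-like models, the detection rule G(.) and the threshold convention above can be made concrete with a short sketch. It assumes NumPy arrays of MCM scores for held-out ID images and for OOD test images; the array and function names are illustrative rather than the authors' code. The same threshold also yields the FPR95 metric reported in the experiments.

```python
import numpy as np

def choose_threshold(id_scores, tpr=0.95):
    # lambda such that a fraction `tpr` of ID samples score above it
    # (the 5th percentile of ID MCM scores when tpr = 0.95)
    return np.quantile(id_scores, 1.0 - tpr)

def detect(scores, lam):
    # G(x'): 1 = in-distribution, 0 = out-of-distribution
    return (np.asarray(scores) >= lam).astype(int)

def fpr95(id_scores, ood_scores):
    # fraction of OOD samples mistakenly kept as ID at the 95%-TPR threshold
    lam = choose_threshold(id_scores, tpr=0.95)
    return float(np.mean(np.asarray(ood_scores) >= lam))
```

Samples accepted by `detect` are then assigned to the class of their closest concept, as returned by the scoring sketch above.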
Interestingly, for CLIP-like models, the trend is the opposite\u2014applying softmax helps sharpen the uniform-like inner product scores, and increases the separability between ID and OOD data. To help readers better understand the insights, we \ufb01rst formalize our observations that OOD inputs trigger similar cosine similarities across ID concepts (Figure 2, right) as the following assumption: Assumption 3.1. Let z := 1{y \u2208Yin}. Qx denotes the out-of-distribution Px|z=0 (marginal distribution of x conditioned on z = 0). Assume \u2203\u03b4 > 0 such that Qx \uf8eb \uf8ed 1 K \u22121 X i\u0338=\u02c6 y [s\u02c6 y2(x) \u2212si(x)] < \u03b4 \uf8f6 \uf8f8= 1, where \u02c6 y := argmaxi\u2208[K]si(x) and \u02c6 y2 := argmaxi\u0338=\u02c6 y,i\u2208[K]si(x) denote the indices of the largest and second largest cosine similarities for an OOD input x. Now we provide formal guarantees that using softmax can provably reduce the false positive rate (FPR) compared to that without softmax. Theorem 3.1. Given a task with ID label set Yin = {y1, y2, ..., yK} and a pre-trained CLIP-like model (T , I). If Qx satis\ufb01es Assumption 3.1, then there exists a constant T = \u03bb(K\u22121)(\u03bbwo+\u03b4\u2212s\u02c6 y2) K\u03bb\u22121 such that for any temperature \u03c4 > T, we have FPR(\u03c4, \u03bb) \u2264FPRwo(\u03bbwo), where FPR(\u03c4, \u03bb) is the false positive rate based on softmax scaling with temperature \u03c4 and detection threshold \u03bb; FPRwo(\u03bbwo) is the false positive rate without softmax scaling based on threshold \u03bbwo. 1This can be explained both theoretically [84] and empirically [81]. It has been shown that self-supervised contrastive learning with a smaller temperature (e.g., initialized as 0.07 for CLIP) promotes uniform distribution for L2-normalized features. Moreover, as CLIP features lie on a high-dimensional space (512 for CLIP-B/16 and 768 for CLIP-L/14), uniformly distributed points in a high-dimensional sphere tend to be equidistant to each other [79]. Therefore, for OOD inputs, we observe approximately uniform cosine similarity with concept vectors. 4 \fThis suggests that applying softmax scaling with a moderate temperature results in superior OOD detection performance compared to that without softmax scaling. The proof is in Appendix A. Later in Section 5, we empirically verify on a real-world ImageNet dataset that our bound can indeed be satis\ufb01ed in CLIP where the thresholds are chosen at 95% true positive rate. What MCM offers: Beyond theoretical insights, we would like to highlight several compelling advantages of our zero-shot OOD detection approach, owing to the strong pre-trained CLIP model: \u2022 Generalizable to many tasks: Traditional OOD detection methods are based on a task-speci\ufb01c model. As a result, the OOD detector is not suitable for a realistic online scenario where the task changes from one to another. In contrast, we will show in Section 4 that MCM can perform a wide variety of OOD detection tasks, with just one single model. For a new task, only the names of the task\u2019s visual concepts Yin are required. \u2022 OOD-agnostic: Our method does not rely on any OOD information, and thus suits many realworld scenarios where one cannot anticipate what the unknowns would be ahead of time. This also mitigates the shortcoming of a recent approach [19], which assumes that a set of unseen labels are given as some weak information about OOD data. \u2022 Training-free: MCM enables OOD detection in a zero-shot fashion. 
This stands in contrast to the vast majority of OOD detection literature, which often requires training from scratch or \ufb01ne-tuning to achieve competitive performance. \u2022 Scalable: The contrastive vision-language pre-training paradigm makes MCM scalable to a large number of class labels and realistic high-resolution images. We now proceed to the experimental results, demonstrating these advantages on real-world tasks. 4 A Comprehensive Analysis of MCM 4.1 Datasets and Implementation Details Datasets. Most previous works on OOD detection only focus on small-scale datasets with blurry images such as CIFAR [40] and TinyImageNet [41]. With pre-trained models such as CLIP, OOD detection can be extended to more realistic and complex datasets. In this work, we scale up evaluations in terms of (1) image resolution, (2) dataset variety, and (3) number of classes. We consider the following ID datasets: CUB-200 [80], STANFORD-CARS [39], FOOD-101 [6], OXFORD-PET [57] and variants of IMAGENET [11]. For OOD test datasets, we use the same ones in [32], including subsets of iNaturalist [76], SUN [86], PLACES [96], and TEXTURE [10]. For each OOD dataset, the categories are not overlapping with the ID dataset. We also use subsets of ImageNet-1k for \ufb01ne-grained analysis. For example, we construct ImageNet-10 that mimics the class distribution of CIFAR-10 but with high-resolution images. For hard OOD evaluation, we curate ImageNet-20, which consists of 20 classes semantically similar to ImageNet-10 (e.g., dog (ID) vs. wolf (OOD)). Model. In our experiments, we adopt CLIP [59] as the target pre-trained model, which is one of the most popular and publicly available vision-language models. Note that our method is not limited to CLIP; it can generally be applicable for contrastive vision-language pre-training models that promote multi-modal feature alignment. Speci\ufb01cally, we mainly use CLIP-B/16, which consists of a ViT-B/16 Transformer as the image encoder and a masked self-attention Transformer [77] as the text encoder. To indicate the input patch size in ViT models, we append \u201c/x\u201d to model names. We prepend -B, -L to indicate Base and Large versions of the corresponding architecture. For instance, ViT-B/16 implies the Base variant with an input patch resolution of 16 \u00d7 16. We also use CLIP-L/14 which is based on ViT-L/14 as a representative of large models. Unless speci\ufb01ed otherwise, the temperature \u03c4 is 1 for all experiments. Details of the datasets, experimental setup, and hyperparameters are provided in Appendix B. Metrics. For evaluation, we use the following metrics: (1) the false positive rate (FPR95) of OOD samples when the true positive rate of in-distribution samples is at 95%, (2) the area under the receiver operating characteristic curve (AUROC), and (3) ID classi\ufb01cation accuracy (ID ACC). 5 \fTable 1: Zero-shot OOD detection with MCM score based on CLIP-B/16 with various ID datasets. 
ID Dataset OOD Dataset Average iNaturalist SUN Places Texture FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 CUB-200 [80] 9.83 98.24 4.93 99.10 6.65 98.57 6.97 98.75 7.09 98.66 Stanford-Cars [39] 0.05 99.77 0.02 99.95 0.24 99.89 0.02 99.96 0.08 99.89 Food-101 [6] 0.64 99.78 0.90 99.75 1.86 99.58 4.04 98.62 1.86 99.43 Oxford-Pet [57] 2.85 99.38 1.06 99.73 2.11 99.56 0.80 99.81 1.70 99.62 ImageNet-10 0.12 99.80 0.29 99.79 0.88 99.62 0.04 99.90 0.33 99.78 ImageNet-20 1.02 99.66 2.55 99.50 4.40 99.11 2.43 99.03 2.60 99.32 ImageNet-100 18.13 96.77 36.45 94.54 34.52 94.36 41.22 92.25 32.58 94.48 Table 2: OOD detection performance for ImageNet-1k [11] as ID. Method OOD Dataset Average iNaturalist SUN Places Texture FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 Requires training (or w. \ufb01ne-tuning) MOS [32] (BiT) 9.28 98.15 40.63 92.01 49.54 89.06 60.43 81.23 39.97 90.11 Fort et al. [19] (ViT-B) 15.07 96.64 54.12 86.37 57.99 85.24 53.32 84.77 45.12 88.25 Fort et al. [19] (ViT-L) 15.74 96.51 52.34 87.32 55.14 86.48 51.38 85.54 43.65 88.96 Energy [48] (CLIP-B) 21.59 95.99 34.28 93.15 36.64 91.82 51.18 88.09 35.92 92.26 Energy [48] (CLIP-L) 10.62 97.52 30.46 93.83 32.25 93.01 44.35 89.64 29.42 93.50 MSP [25] (CLIP-B) 40.89 88.63 65.81 81.24 67.90 80.14 64.96 78.16 59.89 82.04 MSP [25] (CLIP-L) 34.54 92.62 61.18 83.68 59.86 84.10 59.27 82.31 53.71 85.68 Zero-shot (no training required) MCM (CLIP-B) 30.91 94.61 37.59 92.57 44.69 89.77 57.77 86.11 42.74 90.77 MCM (CLIP-L) 28.38 94.95 29.00 94.14 35.42 92.00 59.88 84.88 38.17 91.49 4.2 Main Results MCM supports a diverse collection of tasks while being zero-shot. We \ufb01rst show that zero-shot OOD detection with MCM is effective across a wide variety of tasks\u2014with just one single pre-trained model. To showcase the versatility of MCM, we consider the seven ID datasets here. To the best of our knowledge, this is among the \ufb01rst attempts to showcase the ef\ufb01cacy under an expansive and diverse collection of ID datasets. The zero-shot OOD detection performance is summarized in Table 1. A salient observation is that MCM can achieve superior detection performance on many tasks. For example, using STANFORD-CARS as ID, MCM yields an average FPR95 of 0.08%. Considering that there are no training samples or OOD information involved, these results are very encouraging. It can be also seen from Table 1 that MCM is promising, especially when the number of samples per ID class is limited in the training set. For example, there are only around 40 samples per class for Stanford-Cars, 100 for Oxford-Pet, and 30 for CUB-200. The sample scarcity makes OOD detection methods that rely on \ufb01ne-tuning dif\ufb01cult. For example, after \ufb01ne-tuning on Food-101, while the ID accuracy is increased from 86.3% to 92.5% \u2191, OOD detection based on MSP is on par with MCM (99.5% vs. 99.4% in AUROC). MCM scales effectively to large datasets. To examine the scalability of MCM, we compare it with recent competitive OOD detection methods [19, 32] on the ImageNet-1k dataset (ID) in Table 2. We observe the following trends: \u2022 Larger models lead to superior performance. Compared with CLIP-B, MCM based on CLIP-L reduces FPR95 by 4.57%. Zero-shot ID classi\ufb01cation accuracy is also improved by 6.27% with the larger model, reaching 73.28% (see Appendix D). 
This suggests that larger models are endowed with a better representation quality, which bene\ufb01ts both ID classi\ufb01cation and OOD detection with MCM. Our \ufb01nding echos with the recent observations [78] that higher ID classi\ufb01cation accuracy is correlated with stronger OOD detection performance. \u2022 MOS [32] recently demonstrated competitive performance on ImageNet-1k, which requires model \ufb01ne-tuning based on BiT [38]. In contrast, we show that MCM (CLIP-L) outperforms MOS by 1.38% in AUROC while being zero-shot (training-free). \u2022 MCM shares a softmax scaling function with the classic (visual) con\ufb01dence-based score MSP [25]. To implement MSP, we adopt the commonly used linear probe approach by \ufb01ne-tuning a linear layer on frozen visual features of CLIP. After \ufb01ne-tuning, ID accuracy signi\ufb01cantly improves, reaching 84.12% (CLIP-L). Interestingly, the OOD detection performance of MSP is worse than 6 \fTable 3: Performance comparison on hard OOD detection tasks. MCM is competitive on all three hard OOD tasks without training involved. MSP (based on \ufb01ne-tuned CLIP) does not further improve performance. Method ID ImageNet-10 ImageNet-20 Waterbirds OOD ImageNet-20 ImageNet-10 Spurious OOD FPR95 / AUROC FPR95 / AUROC FPR95 / AUROC MSP [25] (\ufb01ne-tuning) 9.38 / 98.31 12.51 / 97.70 39.57 / 90.99 Mahalanobis [42] (visual only) 78.32 / 85.60 43.03 / 89.94 2.21 / 99.55 MCM (zero-shot) 5.00 / 98.71 12.91 / 98.09 5.87 / 98.36 MCM by 15.54% in FPR95. Under the same model \ufb01ne-tuned with linear probing, we observe that the Energy score outperforms MSP, corroborating \ufb01ndings in [48]. We investigate more in Section 5. \u2022 Recently, Fort et al. [19] explore small-scale OOD detection by \ufb01ne-tuning the full ViT model. When extended to large-scale tasks, we \ufb01nd that MCM still yields superior performance under the same image encoder con\ufb01guration (ViT-B or ViT-L). This further highlights the advantage of utilizing vision-language joint embeddings for large-scale visual OOD detection. MCM bene\ufb01ts hard OOD detection. Going beyond, we investigate whether MCM is still effective for hard OOD inputs. We consider the following two categories of hard OOD: \u2022 Semantically hard OOD: OOD samples that are semantically similar to ID samples are particularly challenging for OOD detection algorithms [85]. To evaluate hard OOD detection tasks in realistic settings, here we consider ImageNet-10 (ID) vs. ImageNet-20 (OOD) and vice versa. The pair consists of high-resolution images with semantically similar categories such as dog versus wolf. As shown in Table 3, MCM outperforms Mahalanobis [42] by 73.32% in FPR95 for ImageNet-10 (ID) vs. ImageNet-20 (OOD) and 30.12% vice versa. \u2022 Spurious OOD: Modern neural networks can exploit spurious correlations for predictions [3]. For example, in the Waterbirds dataset [64], there exist spurious correlations between the habitat (e.g., water) and bird types. A recent work [50] proposes a new type of hard OOD named spurious OOD and shows that most OOD detection methods perform much worse for spurious OOD inputs compared to non-spurious inputs. The spurious OOD inputs are created to share the same background (i.e., water) as ID data but have different object labels (e.g., a boat rather than a bird). See Appendix C for illustrations. The results are shown in Table 3. It has been shown that CLIP representations are robust to distributional shifts [59]. 
Therefore, while prior works [50] show that spurious OOD inputs are challenging for methods based on ResNet [23], MCM and Mahalanobis scores based on pre-trained CLIP perform much better. On the other hand, \ufb01ne-tuning exposes the model to the training set containing spurious correlations. As a result, MSP performs much worse than MCM (39.57% vs. 5.87% in FPR95). iNaturalist Places SUN Texture Dataset 85 90 95 100 105 AUROC 92.66 95.14 96.13 96.76 99.66 99.11 99.50 99.03 ZO-CLIP MCM Figure 3: Comparison with a candidate label-based score ZO-CLIP on ImageNet-20, based on our implementation of [16]. Implementation details are deferred to Appendix E.1. MCM outperforms CLIP-based baselines. Two recent works also use CLIP embeddings for OOD detection [16, 19]. However, fundamental limitations exist for both works. Fort et al. [19] assume that a candidate OOD label set YC is known, and used P y\u2208YC \u02c6 p(y|x) for OOD detection. Here the predictive probability \u02c6 p(y|x) is obtained by normalizing the inner products over |Yin| + |YC| classes. While applying softmax converts any vector to probabilities, as we show in Section 3, the converted probabilities do not necessarily correspond to P(OOD|x). Moreover, obtaining such an OOD label set is typically not feasible, which fundamentally limits its applicability. A recent work [16] realizes this idea by training an extra text decoder on top of CLIP\u2019s image encoder to generate candidate labels. However, [16] cannot guarantee the generated labels are non-overlapping with the ID labels. 7 \fID OOD Density Density Density No softmax (FPR: 40.75 %) = 0.01 (FPR: 31.85 %) = 1 (FPR: 18.13 %) = 10 (FPR: 18.43 %) Density Figure 4: The in\ufb02uence of softmax scaling and temperature. We use ImgeNet-100 (ID) vs. iNaturalist (OOD). Softmax scaling with a moderate temperature signi\ufb01cantly improves FPR95. We enhance the baseline with a stronger decoder and a \ufb01lter module (see Appendix E.1). As shown in Figure 3, MCM outperforms the enhanced baseline on all OOD datasets. Moreover, MCM is much simpler to use\u2014alleviating the need for an OOD label set or training an additional caption generator. In contrast, the caption generator\u2019s performance largely affects OOD detection. Poor caption quality degenerates the OOD detection performance of candidate label-based methods. Moreover, obtaining a reliable caption generator for any input image can signi\ufb01cantly increase the computational overhead. 5 Discussion: A Closer Look at MCM Empirical veri\ufb01cation on the role of softmax. In Section 3, we prove that softmax scaling on cosine similarity scores with a moderate \u03c4 improves the ID-OOD separability. Here we empirically verify our theoretical results. As shown in Figure 4, compared to directly using the maximum cosine similarity without softmax (leftmost \ufb01gure), softmax scaling with a temperature \u03c4 = 1 signi\ufb01cantly improves the performance by 22.6% in FPR95, and further increasing \u03c4 (e.g., \u03c4 = 10) leads to similar performance. The results are based on ImageNet-100 (ID) versus iNaturalist (OOD). Now, we verify if our theoretical bound (c.f. Theorem 3.1) is satis\ufb01ed empirically as well in Figure 4. From the leftmost \ufb01gure, we can estimate \u03bbwo \u22480.26, \u03b4 \u22480.03, and s\u02c6 y2 \u22480.23. By checking the third \ufb01gure (\u03c4 = 1 is the temperature value we use for most experiments), we approximate \u03bb \u22480.011. 
As K = 100, we plug in the values and obtain the lower bound T = \u03bb(K\u22121)(\u03bbwo+\u03b4\u2212s\u02c6 y2) K\u03bb\u22121 \u22480.65. Since \u03c4 = 1 > 0.65, by Theorem 3.1, applying softmax scaling with \u03c4 = 1 is provably superior to without softmax scaling for OOD detection. iNaturalist Places SUN Texture Dataset 50 60 70 80 90 100 AUROC 71.46 76.23 70.29 74.56 94.61 89.77 92.57 86.11 Maha MCM Figure 5: Comparison with Mahalanobis (Maha) score on ImageNet-1k. Are vision-language features better than visual feature alone? MCM can be interpreted as a distance-based approach\u2014images that are closer to one of the K class prototypes are more likely to be ID and vice versa. Here the class prototypes are de\ufb01ned based on a textual encoder. Alternatively, one can de\ufb01ne the class prototypes based on visual features. For example, Mahalanobis [42] de\ufb01nes a class prototype as the average of visual embeddings for images belonging to the same class. This raises the question whether MCM (with multi-modal vision-language features) is better than Mahalanobis (with single-modal visual feature). For a fair comparison, we use the same ViT image encoder from CLIP-B. Both MCM and Mahalanobis extract visual features from the penultimate layer. On ImageNet-1k, Mahalanobis displays a limited performance, with 73.14% AUROC averaged across four OOD test datasets (90.77% for MCM), as shown in Figure 5. From a practical perspective, Mahalanobis requires computing the inverse covariance matrix, which can be both computationally expensive and inaccurate when the number of samples is scarce and the number of ID classes grows. In contrast, MCM is easier to use and more robust. MCM without softmax scaling. In Section 3, we provide theoretical justi\ufb01cations for the necessity of softmax scaling for CLIP-like models. To further verify our observations empirically, we show OOD detection performance based on the maximum cosine similarity score Swo MCM(x\u2032; Yin, T , I) = maxi\u2208[K] si(x\u2032). The results are shown in Table 4. For easy tasks such as Food-101 [39], Stanford8 \fTable 4: Zero-shot OOD detection of Swo MCM based on CLIP-B/16. ID Dataset OOD Dataset Average iNaturalist SUN Places Texture FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 Stanford-Cars [39] 0.00 100 0.02 99.99 0.26 99.94 0.00 100 0.07 99.98 Food-101 [6] 0.56 99.86 0.09 99.95 0.49 99.88 8.33 97.44 2.37 99.28 Oxford-Pet [57] 0.02 99.98 0.05 99.97 0.20 99.94 0.27 99.91 0.14 99.95 ImageNet-10 2.40 99.42 1.79 99.55 2.83 99.32 1.86 99.56 2.22 99.46 ImageNet-20 14.96 97.87 13.10 97.97 14.21 97.67 13.46 97.32 13.93 97.71 ImageNet-1k 61.66 89.31 64.39 87.43 63.67 85.95 86.61 71.68 69.08 83.59 Cars [39], and Oxford-Pet [57] as ID, the performance of maximum cosine similarity score is similar to MCM (see Table 1 and Table 2). However, for more challenging tasks such as ImageNet-20 and ImageNet-1k, MCM signi\ufb01cantly outperforms that without softmax scaling. For example, the average FPR95 is improved by 11.33% on ImageNet-20 and 26.34% on ImageNet-1k, which highlights the necessity of a proper scaling function for CLIP-based OOD detection. MCM for ResNet-based CLIP models. Our main results are based on the CLIP model with ViT image encoder. We additionally investigate the effectiveness of MCM on ResNet-based CLIP. Speci\ufb01cally, we use RN50x4 (178.3M), which shares a similar number of parameters as CLIP-B/16 (149.6M). 
The results are shown in Table 5. We can see that MCM still shows promising results with ResNet-based CLIP models, and the performance is comparable between RN50x4 and CLIP-B/16 (89.97 vs. 90.77 in AUROC). Table 5: Comparison with ResNet-based CLIP models on ImageNet-1k (ID). Model OOD Dataset Average iNaturalist SUN Places Texture FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 FPR95\u2193 AUROC\u2191 RN50x4 44.51 91.51 35.11 92.84 43.74 89.60 57.73 85.93 45.27 89.97 CLIP-B/16 30.91 94.61 37.59 92.57 44.69 89.77 57.77 86.11 42.74 90.77 A photo of a
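As a small numerical postscript to the Section 5 discussion, the temperature lower bound of Theorem 3.1 can be recomputed directly from the quantities estimated there (lambda_wo ~ 0.26, delta ~ 0.03, s_y2 ~ 0.23, lambda ~ 0.011, K = 100). The helper below is ours; only the plugged-in numbers come from the paper's estimates read off Figure 4.

```python
def temperature_lower_bound(lam, lam_wo, delta, s_hat_y2, K):
    # T from Theorem 3.1: any tau > T makes softmax scaling provably
    # no worse (in FPR) than using the raw maximum cosine similarity
    return lam * (K - 1) * (lam_wo + delta - s_hat_y2) / (K * lam - 1)

# Estimates for ImageNet-100 (ID) vs. iNaturalist (OOD), Section 5
T = temperature_lower_bound(lam=0.011, lam_wo=0.26, delta=0.03, s_hat_y2=0.23, K=100)
print(round(T, 2))  # ~0.65, so the default temperature tau = 1 satisfies tau > T
```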