4. BEYOND LINEAR SEPARABILITY

We recover the unmodified problem of optimal stability (3.68) with $\beta^\mu = 0$ for all $\mu = 1, 2, \ldots, P$, which also implies that all $E^\mu \ge 1$. On the contrary, non-zero $\beta^\mu > 0$ correspond to violations of the original constraints, i.e. examples with $E^\mu < 1$, which includes potential misclassifications ($E^\mu < 0$). In (4.3), the Lagrange multiplier $\gamma \in \mathbb{R}$ controls the extent to which non-zero $\beta^\mu$ are accepted.

In the by now familiar matrix-vector notation with $\beta = (\beta^1, \beta^2, \ldots, \beta^P)^\top$ we can rewrite the problem in terms of embedding strengths:

Soft margin perceptron (embedding strengths)
$$\min_{x,\,\beta}\ \tfrac{1}{2}\, x^\top C\, x + \gamma\, \beta^\top \mathbf{1} \quad \text{subject to} \quad E \ge \mathbf{1} - \beta \ \text{ and } \ \beta \ge 0. \tag{4.4}$$

Here, the derivation of the Wolfe dual [Fle00] amounts to the elimination of the slack variables $\beta$. Similar to the error-free case, cf. Sec. 3.7, we obtain a modified cost function with simpler constraints:

Soft margin perceptron (dual problem)
$$\max_{x}\ -\tfrac{1}{2}\, x^\top C\, x + x^\top \mathbf{1} \quad \text{subject to} \quad 0 \le x \le \gamma \mathbf{1}. \tag{4.5}$$

In comparison to the dual problem (3.68) for the error-free case, the non-negative embedding strengths $x \ge 0$ are now also bounded from above. The parameter $\gamma$ limits the magnitudes of the $x^\mu$. In analogy to the derivation of the AdaTron algorithm (3.98) we can devise a similar, sequential projected-gradient descent algorithm:

AdaTron with errors, sequential updates (repeated presentation of $D$) (4.6)
– at time step $t$, present the example $\mu = 1, 2, \ldots, P, 1, 2, \ldots$
– perform the update
$$x^\mu(t+1) = x^\mu(t) + \eta\,\bigl(1 - [C\, x(t)]^\mu\bigr) \qquad \text{gradient step}$$
$$x^\mu(t+1) = \max\{0,\ x^\mu(t+1)\} \qquad \text{non-negative embeddings...}$$
$$x^\mu(t+1) = \min\{\gamma,\ x^\mu(t+1)\} \qquad \text{...with limited magnitude.}$$
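The update rule (4.6) is compact enough to sketch directly. The following minimal illustration (our own, not from the book) assumes the convention $C^{\mu\nu} = S^\mu_T S^\nu_T\, \xi^\mu \cdot \xi^\nu / N$ for the correlation matrix, in analogy to Sec. 3.7, and a fixed learning rate $\eta$:

```python
import numpy as np

def adatron_soft_margin(xi, s, gamma, eta=0.05, epochs=200):
    """Sequential soft-margin AdaTron, Eq. (4.6).

    xi : (P, N) array of feature vectors; s : (P,) array of labels +/-1.
    Assumes C[mu, nu] = s_mu s_nu (xi_mu . xi_nu) / N, cf. Sec. 3.7.
    """
    P, N = xi.shape
    C = (s[:, None] * s[None, :]) * (xi @ xi.T) / N
    x = np.zeros(P)
    for _ in range(epochs):                       # repeated presentation of D
        for mu in range(P):
            x[mu] += eta * (1.0 - C[mu] @ x)      # gradient step
            x[mu] = min(gamma, max(0.0, x[mu]))   # clip to 0 <= x_mu <= gamma
    w = (x * s) @ xi / N    # weight vector reconstructed from the embeddings
    return x, w
```

For $\gamma \to \infty$ the upper clipping never triggers and the original AdaTron is recovered.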
4.1. PERCEPTRON WITH ERRORS

Figure 4.1: Support vectors in the soft margin perceptron. The arrow represents the normalized weight vector $w_{\max}/|w_{\max}|$. Filled and open symbols correspond to the classes $S^\mu_T = \pm 1$, respectively. Support vectors, displayed as squares, fall onto the two hyperplanes with $E^\mu = 1$, into the region between the planes, or even deeper into the incorrect half-space. All other examples, marked by circles, display $E^\mu > 1$ without explicit embedding.

Compared to the original AdaTron for lin. sep. data, the only difference is the additional restriction of the search to the region $x \le \gamma \mathbf{1}$. Obviously we recover the original algorithm in the limit $\gamma \to \infty$.

As in the separable case, the algorithm follows the gradient of the cost function along $(1 - E)$, in principle. Hence, an individual embedding strength will increase if the corresponding example has a local potential $E^\mu < 1$, including misclassifications with $E^\mu < 0$. If some of the errors cannot be corrected because $D$ is not separable, the corresponding $x^\mu$ would grow indefinitely. In the soft margin version (4.6), however, updates are clipped at $x^\mu = \gamma$ and the misclassification¹ of the corresponding example is tolerated. For fixed $\gamma$, the problem is well-defined and the AdaTron with errors (4.6) finds a solution efficiently. We refrain from further analysis and a formal proof.

It is important to realize that the precise nature of the solution depends strongly on the setting of the parameter $\gamma$: It controls the compromise between the goals of – on the one hand – minimizing the norm $w^2$ (maximizing the margin) and – on the other hand – correcting misclassifications. More precisely, the emphasis is not explicitly on the number of errors, but on the violations of $E^\mu \ge 1$ and their severity. If, for instance, a mismatched (too small) value of $\gamma$ is chosen, misclassifications will be accepted and favored, even in a linearly separable data set.
In practice, a suitable value can be determined by means of a validation procedure which estimates the performance for different choices of $\gamma$.

In analogy to Sec. 3.7.4, the support vectors are characterized by $x^\mu > 0$, as before. However, only examples with $0 < x^\mu < \gamma$ will lie exactly in one of the planes with $E^\mu = 1$. Clipped embedding strengths $x^\mu = \gamma$ correspond to examples which fall into the region between the planes in Fig. 4.1 or even deeper into the incorrect half-space.

The soft margin concept for the toleration of misclassifications is highly relevant in the context of the Support Vector Machine, see Sec. 4.3.

¹ The violation of $E^\mu \ge 1$, to be more precise.
Figure 4.2: The architecture of a "machine" as introduced in Sec. 4.2: an input layer $\xi \in \mathbb{R}^N$, adaptive weights $w_k \in \mathbb{R}^N$ $(k = 1, 2, \ldots, K)$, hidden units $\sigma_k = \mathrm{sign}(w_k \cdot \xi - \theta_k)$, a fixed hidden-to-output relation $F(\ldots)$ and a binary output $S(\xi) = F\bigl(\{\sigma_k(\xi)\}_{k=1}^K\bigr)$. A number $K$ of hidden units of the perceptron type are connected by adaptive weights with the $N$-dim. input layer, $K = 3$ in the illustration. The binary response $S(\xi)$ is determined by a pre-defined, fixed functional dependence $F(\sigma_1, \sigma_2, \ldots, \sigma_K) = \pm 1$.

4.2 Layered networks of perceptron-like units

Perceptron-like units can be assembled in more powerful architectures so as to overcome the restriction to linearly separable functions. We will highlight this in terms of a family of systems which are occasionally termed machines in the literature, see [MD89a, EB01, WRB93, AMB+18, MZ95, BO91, SH93] and references therein.

In Fig. 4.2 the architecture of a machine is illustrated. It comprises

◦ an input layer representing feature vectors $\xi \in \mathbb{R}^N$
◦ a single layer of $K$ perceptron-like hidden units $\sigma_k(\xi) = \mathrm{sign}(w_k \cdot \xi - \theta_k)$
◦ a set of adaptive input-to-hidden weight vectors $w_k \in \mathbb{R}^N$ and local thresholds² $\theta_k \in \mathbb{R}$
◦ a single, binary output, determined by a fixed function $F(\sigma_1, \sigma_2, \ldots, \sigma_K)$

The function $F$ ultimately determines the network's input/output relation. However, it is assumed to be pre-wired and cannot be adapted in the training process. Learning is restricted to the $w_k$ connecting input and hidden layer.

² Thresholds could be replaced by a formal weight from a clamped input as outlined in (3.4).
4.2.1 Committee and parity machines

Two specific machines have attracted particular interest:

CM: The committee machine combines the hidden unit states $\sigma_k$ in a majority vote [EB01, MD89a, WRB93, AMB+18]. This is realized by setting
$$F^{CM}\bigl(\{\sigma_k\}_{k=1}^K\bigr) = \mathrm{sign}\Bigl(\sum_{k=1}^K \sigma_k\Bigr) \ \Rightarrow\ S^{CM}(\xi) = \mathrm{sign}\Bigl(\sum_{k=1}^K \mathrm{sign}\bigl(w_k \cdot \xi - \theta_k\bigr)\Bigr) \tag{4.7}$$
which is only well-defined for odd values of $K$, which avoids ties $\sum_k \sigma_k = 0$. The majority vote is reminiscent of an ensemble of independently trained perceptrons. The CM, however, is meant to be trained as a whole [SH93].

PM: In the parity machine the output is computed as a product over the hidden unit states $\sigma_k$ [EB01, WRB93, BO91]:
$$F^{PM}\bigl(\{\sigma_k\}_{k=1}^K\bigr) = \prod_{k=1}^K \sigma_k \ \Rightarrow\ S^{PM}(\xi) = \prod_{k=1}^K \mathrm{sign}\bigl(w_k \cdot \xi\bigr) \tag{4.8}$$
which results in a well-defined binary output $S^{PM}(\xi) = \pm 1$ for any $K$. Note that the product depends on whether the number of units with $\sigma_k = -1$ is odd or even. In this sense $F^{PM}(\ldots)$ is analogous to a parity operation. The hidden-to-output relation of the PM, i.e. the parity operation, cannot be represented by a single perceptron unit.³ We could realize the hidden-to-output function by a more complex multi-layered network. But since $F$ is considered to be pre-wired and not adaptive, we do not have to specify a neural realization here.

On the contrary, the committee machine can be interpreted as a conventional two-layered feed-forward neural network with an $N{-}K{-}1$ architecture as discussed in Sec. 1.3.2 and illustrated in Fig. 1.5 (right panel). However, compared to the general form of the output, Eq. (1.16), we have to use the activation function $g(z) = \mathrm{sign}(z)$ throughout the net and fix all hidden-to-output weights to $v_k = 1$ in the CM.

Many theoretical results are available for CM and PM networks and more general machines. Among other things, their storage capacity and generalization ability have been addressed, see [MD89a, SH93, Opp94, PBG+94] for examples and [EB01, WRB93, AMB+18] for further references.
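The output functions (4.7) and (4.8) translate directly into code. In the sketch below (our own illustration) we also keep the thresholds $\theta_k$ in the parity machine, i.e. the general hidden-unit form, which Eq. (4.8) omits:

```python
import numpy as np

def hidden_states(xi, W, theta):
    """sigma_k = sign(w_k . xi - theta_k) for the K hidden units."""
    return np.sign(W @ xi - theta)

def committee_machine(xi, W, theta):
    """Majority vote, Eq. (4.7); K must be odd to exclude ties."""
    return np.sign(np.sum(hidden_states(xi, W, theta)))

def parity_machine(xi, W, theta):
    """Product of hidden states, Eq. (4.8), here with thresholds included:
    the output is +1 iff an even number of units are in state -1."""
    return np.prod(hidden_states(xi, W, theta))
```

With identical weights and inputs, the two machines can respond differently: a $(-1, +1, -1)$ configuration of hidden states yields $-1$ in the CM but $+1$ in the PM.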
³ The gist of Minsky and Papert's book [MP69] is often reduced to this single insight, which is a gross injustice to both the book and the perceptron.
Figure 4.3: Output function of committee machine and parity machine, applied to identical sets of feature vectors $\xi$ (filled and empty circles). The illustration corresponds to the surface of an $N$-dim. hypersphere. In both machines, $K = 3$ oriented hyperplanes tesselate the feature space. Arrows point to the respective half-space of $\sigma_k = +1$. The three hidden unit states are marked by "+" and "-" signs in the corresponding regions. The networks' responses $S = \pm 1$ are marked by empty and filled circles. Left panel: The majority of $\sigma_k$ determines the total response of the CM. Dashed lines mark redundant pieces of the hyperplanes which do not separate outputs $S = +1$ from $S = -1$. Right panel: In the PM, the total output is $S = \prod_k \sigma_k$. Every hyperplane separates total outputs $S = \pm 1$ locally, everywhere in feature space.

4.2.2 The parity machine: a universal classifier

Here, we focus on a particularly interesting result. We show that a PM with sufficiently many hidden units can implement any data set of the form $D = \{\xi^\mu, S^\mu_T\}_{\mu=1}^P$ with binary training labels.

In the following, we outline a constructive proof which is based on the application of a particular training strategy. So-called growth algorithms add units to a neural network with subsequent training until the desired performance is achieved and, for example, a given data set $D$ has been implemented. Mézard and Nadal coined the term tiling algorithm for a particular procedure [MN89], which adds neurons one by one. Several other similar growth schemes have been suggested, see [GM90, Fre90] for examples and references.

A particular tiling-like algorithm for the PM was introduced and analysed in [BO91]. It was not necessarily designed as a practical training prescription for realistic applications. Tiling-like training of the PM proceeds along the following lines:
Tiling-like learning (parity machine) (4.9)

(I) Initialization ($m = 1$): Train the first unit with output $S_1(\xi) = \sigma_1(\xi) = \mathrm{sign}[w_1 \cdot \xi - \theta_1]$ from the data set $D_1 = D$, aiming at a large number of correctly classified examples $Q_1$.⁴

(II) After training of $m$ units: Given the PM with $m$ hidden units $\{\sigma_1, \sigma_2, \ldots, \sigma_m\}$, re-order the indices $\mu$ of the examples such that the PM output is
$$S_m(\xi^\mu) = \prod_{j=1}^m \sigma_j(\xi^\mu) = \begin{cases} +S^\mu_T & \text{for } 1 \le \mu \le Q_m \\ -S^\mu_T & \text{for } Q_m < \mu \le P, \end{cases}$$
where $Q_m$ is the number of correctly classified examples in $D$. Define the new training set $D_{m+1} = \{\xi^\mu, [S_m(\xi^\mu)\, S^\mu_T]\}_{\mu=1}^P$ with labels
$$[S_m(\xi^\mu)\, S^\mu_T] = \begin{cases} +1 & \text{for } 1 \le \mu \le Q_m \ (S_m \text{ was correct}) \\ -1 & \text{for } Q_m < \mu \le P \ (S_m \text{ was wrong}). \end{cases}$$

(III) Training step: add and train the next hidden unit $\sigma_{m+1}(\xi) = \mathrm{sign}\bigl(w_{m+1} \cdot \xi - \theta_{m+1}\bigr)$ so as to achieve a low number of errors $(P - Q_{m+1})$ w.r.t. data set $D_{m+1}$.

Note that if a solution with zero error, i.e. $Q_M = P$, is found in step (III) for the $M$-th hidden unit $\sigma_M$, the total output of the PM is $\prod_{m=1}^M \sigma_m(\xi^\mu) = S^\mu_T$ for all examples in $D$, i.e. the data set is perfectly reproduced by the PM of $M$ units.

It is surprisingly straightforward to show that the number of correctly classified feature vectors can be increased by at least one ($Q_{m+1} > Q_m$) when adding the $(m+1)$-th unit in step (III) of the procedure (4.9).

To this end, we consider a set of normalized input vectors in the procedure (4.10). The normalization (4.11) could always be implemented in a preprocessing step. The second relation (4.12) is trivially satisfied: in every data set, $\delta$ can be determined by computing all pair-wise scalar products.

⁴ Any of the algorithms discussed in Sec. 4.1 could be used in this step. For the inhomogeneity, a clamped input as in Eq. (3.4) can be employed, for simplicity.
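Step (II) of the procedure (4.9) is pure bookkeeping and can be sketched as follows; the explicit re-ordering of indices is not needed in code, since the relabelled set carries the same information (array layout is our own choice):

```python
import numpy as np

def relabel_for_next_unit(sigma, s_T):
    """Bookkeeping of step (II) in the tiling-like procedure (4.9).

    sigma : (m, P) array of hidden-unit outputs sigma_j(xi^mu) = +/-1.
    s_T   : (P,) array of target labels S_T^mu = +/-1.
    Returns the PM outputs S_m, the number Q_m of correctly classified
    examples, and the labels S_m(xi^mu) * S_T^mu of the new set D_{m+1}.
    """
    S_m = np.prod(sigma, axis=0)    # parity output of the first m units
    new_labels = S_m * s_T          # +1 where S_m is correct, -1 where wrong
    Q_m = int(np.sum(new_labels == 1))
    return S_m, Q_m, new_labels
```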
Grandmother neuron (4.10)

Consider a set of feature vectors $\{\xi^\mu\}_{\mu=1}^P$ with
$$|\xi^\mu|^2 = \Gamma \ \text{ for all } \mu = 1, 2, \ldots, P \tag{4.11}$$
$$0 < \delta < \Gamma - \xi^\mu \cdot \xi^\nu \ \text{ for all } \mu, \nu \ (\mu \ne \nu). \tag{4.12}$$
Construct a perceptron weight vector and threshold as
$$w = -\xi^P \quad \text{and} \quad \theta = \delta - \Gamma. \tag{4.13}$$
It results in the inhomogeneously linearly separable classification
$$S_{w,\theta}(\xi^\mu) = \mathrm{sign}\bigl(-\xi^P \cdot \xi^\mu - \delta + \Gamma\bigr) = \begin{cases} \mathrm{sign}\bigl(\underbrace{-\xi^P \cdot \xi^P}_{=-\Gamma} + \Gamma - \delta\bigr) = -1 & \text{for } \mu = P \\[4pt] \mathrm{sign}\bigl(\underbrace{-\xi^P \cdot \xi^\mu + \Gamma}_{>\,\delta} - \delta\bigr) = +1 & \text{for } \mu \ne P. \end{cases} \tag{4.14}$$

The corresponding perceptron separates exactly one feature vector, $\xi^P$, from all others in the set. The term grandmother neuron has been coined for this type of unit. It relates to the debatable concept that a single neuron in our brain is activated specifically whenever we see our grandmother.⁵

For the tiling-like learning (4.9), this implies that, by use of a grandmother unit, we can always separate $\xi^P$ from all other examples in the training step (III). The hidden unit response for this input is $\sigma_{m+1}(\xi^P) = -1$, which corrects the misclassification as the incorrect output $S_m(\xi^P) = -S^P_T$ is multiplied with $-1$, yielding $S_{m+1}(\xi^P) = S^P_T$. All other PM outputs are left unchanged. Hence, at least one error can be corrected by adding a unit to the growing PM. With at most $P$ units in the worst case, the number of errors is zero and all labels in $D$ are reproduced correctly.

A few remarks

◦ The grandmother unit (4.14) serves at best as a minimal solution in the constructive proof – it is not suitable for practical purposes. The use of $O(P)$ perceptron units for the labelling of $P$ examples would be highly inefficient.

◦ Step (III) can be improved significantly as compared to the constructive solution by using efficient training algorithms such as the soft margin AdaTron, see Sec. 4.1.1.

⁵ An idea which is possibly not quite as unrealistic as it may seem, see for instance [QRK+05] for a discussion of "Jennifer Aniston cells".
◦ Tiling-like learning imposes a strong ordering of the hidden units. Neurons added to the system later are supposed to correct only the (hopefully) very few misclassifications made by the first units. To some extent this contradicts the attractive concept of neural networks as fault-tolerant and robust distributed memories.

◦ The strength of the tiling concept is at the same time its major weakness: Unlimited complexity and storage capacity can be achieved by adding more and more units to the system, until error-free classification is achieved. This will lead to inferior generalization behavior as the system adapts to every little detail of the data. This suggests applying a form of early stopping, which limits the maximum number of units in the PM according to validation performance.

We conclude that the parity machine is a universal classifier in the sense that a PM with sufficiently many hidden units can implement any two-class data set $D$:

Universal classifier (parity machine) (4.15)

For a given data set $D = \{\xi^\mu, S^\mu_T\}_{\mu=1}^P$ with binary labels $S^\mu_T \in \{-1, +1\}$ and normalized feature vectors with $|\xi^\mu|^2 = \text{const.}$ for all $\mu = 1, 2, \ldots, P$, weight vectors $w_k \in \mathbb{R}^N$ and thresholds $\theta_k \in \mathbb{R}$ exist (and can be found) with
$$S^{PM}(\xi^\mu) = \prod_{k=1}^K \mathrm{sign}\bigl(w_k \cdot \xi^\mu - \theta_k\bigr) = S^\mu_T \quad \text{for all } \mu = 1, 2, \ldots, P.$$

Similar theorems have been derived for other "shallow" architectures with a single layer of hidden units and a single binary output.⁶ These findings are certainly of fundamental importance. In contrast to the Perceptron Convergence Theorem, however, (4.15) and similar propositions are in general not associated with practical, efficient training algorithms.

The result parallels the findings of Cybenko and others [Cyb89, HTF01] that networks with a single, sufficiently large hidden layer of continuous non-linear units constitute universal function approximators, see Chapter 5 for details.
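The constructive proof can be turned into a (deliberately inefficient) program: one grandmother unit (4.13) is appended for every misclassified example, and since each appended unit flips exactly one PM output, a single pass over the data suffices. The starting unit with $w = 0$, $\theta = -1$, which constantly outputs $+1$, is our own degenerate simplification, not part of the book's procedure:

```python
import numpy as np

def pm_output(units, xi):
    """Parity machine response for a list of (w, theta) hidden units."""
    return np.prod([np.sign(w @ xi - th) for (w, th) in units])

def build_parity_machine(xis, s_T):
    """Constructive realization of (4.15) via grandmother units (4.13).

    Assumes normalized, pairwise distinct feature vectors, so that a
    margin parameter delta in the sense of (4.12) exists.
    """
    P, N = xis.shape
    Gamma = float(xis[0] @ xis[0])
    overlaps = xis @ xis.T
    # any delta with 0 < delta < Gamma - max off-diagonal overlap is valid
    delta = 0.5 * (Gamma - overlaps[~np.eye(P, dtype=bool)].max())
    units = [(np.zeros(N), -1.0)]        # degenerate unit: constant +1
    for p in range(P):
        if pm_output(units, xis[p]) != s_T[p]:
            # grandmother unit: -1 on xi^p, +1 on all others (4.14),
            # hence it flips exactly the p-th PM output
            units.append((-xis[p], delta - Gamma))
    return units
```

At most $P$ grandmother units are added, in agreement with the worst-case count given above.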
⁶ Strictly speaking the PM does not fall into this class, as $F^{PM}$ cannot be realized by a single perceptron-like unit.

It is interesting to note that extremely deep and narrow networks can also be universal classifiers. As an example, stacks of single perceptron units with shortcut connections to the input layer have been investigated in [Roj17, Roj03].

4.2.3 The capacity of machines

Compared to the simple perceptron, the greater complexity of the CM or PM should lead to an increased storage capacity. In fact, following the work of
Mitchison and Durbin [MD89a], we can extend the arguments presented in Sec. 3.4 to machines.

The number $C_K(P, N)$ of different dichotomies that a machine with $K$ hidden units can realize for $P$ feature vectors in $N$ dimensions is obviously bounded from above as
$$C_K(P, N) \le C(P, N)^K \tag{4.16}$$
with $C(P, N)$ from Eq. (3.41) for the perceptron. If we assume that the network can freely combine all lin. sep. functions realized by the perceptron units in the hidden layer, we would expect an equality in (4.16). In general, correlations between the hidden units and other redundancies or restrictions will reduce $C_K(P, N)$ as compared to the upper bound.

With the total number of possible dichotomies $2^P$ we obtain an upper bound for the probability of a random labelling to be separable with $K$ hidden units:
$$P_K(P, N) = \min\left\{1,\ \frac{C(P, N)^K}{2^P}\right\} \tag{4.17}$$
where the minimum is applied to explicitly avoid $P_K(P, N) > 1$ resulting from the upper bound.

Further analytical treatment of Eq. (4.17) is involved, including the limit $N \to \infty$. However, we can obtain numerical estimates (upper bounds) of the storage capacity by identifying, for given $K$ and $N$, the value of $P$ for which⁷
$$\frac{C(P, N)^K}{2^P} = \frac{1}{2}. \tag{4.18}$$
This marks the characteristic point $\alpha_c(K) = P/N$ at which the probability $P_K(P, N)$ drops, which is in line with the observation that $C(2N, N)/2^{2N} = 1/2$ for the perceptron. Example estimates obtained for $N = 1000$ and a few, small values of $K$ are
$$\alpha_c(2) \approx 9.07\ (\approx 2 \times 4.55), \quad \alpha_c(6) \approx 40.53\ (\approx 6 \times 6.76), \quad \alpha_c(10) \approx 76.86\ (\approx 10 \times 7.69)$$
which already shows that $\alpha_c(K)$ displays a superlinear dependence on $K$. In particular, $\alpha_c(K) > K \times 2 = K \times \alpha_c(1)$, where the r.h.s. could be interpreted as a naive lower bound for combining $K$ perceptrons without any synergy effect. Figure 4.4 displays the numerical estimates for $N = 100$ and $K \le 20$.
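Estimates of this kind can be reproduced from Cover's counting function $C(P, N) = 2 \sum_{i=0}^{N-1} \binom{P-1}{i}$, the form behind Eq. (3.41), evaluated in log-space to avoid overflow. A sketch (our own, with the crossing point taken as the smallest $P$ at which the bound drops to $1/2$ or below):

```python
import math

def log_C(P, N):
    """log of Cover's counting function C(P, N) = 2 sum_{i<N} binom(P-1, i)."""
    terms = [math.lgamma(P) - math.lgamma(i + 1) - math.lgamma(P - i)
             for i in range(min(N, P))]
    m = max(terms)      # log-sum-exp for numerical stability
    return math.log(2.0) + m + math.log(sum(math.exp(t - m) for t in terms))

def alpha_c(K, N):
    """Smallest P/N at which the bound C(P,N)^K / 2^P drops to 1/2 or below."""
    P = N
    while K * log_C(P, N) - P * math.log(2.0) > math.log(0.5):
        P += 1
    return P / N
```

For $N = 100$ this reproduces the superlinear growth visible in Fig. 4.4, e.g. $\alpha_c(6) > 3\,\alpha_c(2)$.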
One can show that for large values of $K$ the capacity bound is given by [MD89a, Opp94, BO91]
$$\alpha_c(K) \le K \ln K / \ln 2 \tag{4.19}$$
which is consistent with a faster than linear growth with the number of hidden units. The fact that the capacity grows rapidly with $K$ agrees with the finding

⁷ More precisely: the smallest integer $P$ for which the upper bound drops below $1/2$.
Figure 4.4: The storage capacity $\alpha_c(K)$ according to the estimate (4.18), obtained for $N = 100$. The solid line corresponds to the naive lower bound $2K$ for combining $K$ perceptrons.

that a (parity) machine with a sufficiently large number of units can implement any given dichotomy. In the context of Section 3.5.3, it also indicates that the number of hidden units should be carefully selected in order to avoid the realization of large storage capacity at the expense of poor generalization behavior.

The asymptotic $K \ln K$-dependence of the storage capacity for large $K$ has been confirmed explicitly for the parity machine also by means of statistical physics based considerations [EB01, BSS94, WRB93, Opp94]. Interestingly, for the committee machine, the increase of $\alpha_c$ appears to be slightly weaker, with $\alpha_c \propto K (\ln K)^{1/2}$ for $K \to \infty$ [Urb97]. This difference in the asymptotic growth of $\alpha_c(K)$ is consistent with our intuitive insight that the committee machine is subject to redundancies, while the parity machine makes full use of the separating hyperplanes in its hidden layer.

The storage capacities and/or VC dimensions of a variety of network types and architectures have been of great interest in the machine learning community. This is due to the role that the capacity plays with respect to the ultimate goal of learning, i.e. generalization. Examples from the statistical physics point of view can be found in [EB01, BSS94, WRB93, Opp94], see also references therein.

4.3 Support Vector Machines

The Support Vector Machine (SVM) constitutes one of the most successful frameworks in supervised learning for classification tasks. It combines the conceptual simplicity of the large margin linear classifier, a.k.a. the perceptron of optimal stability, with the power of general non-linear transformations to high-dimensional spaces. In particular, it employs the so-called kernel trick which makes it possible to realize the transformation implicitly.
For a detailed presentation of the SVM framework and further references see, for instance, [SS02, CST00, STC04, Her02, DFO20]. A comprehensive repository of materials, including a short history of the approach, is provided at www.svms.org [svm99].

The concept of applying a kernel function in the context of pattern recognition dates back to at least 1964, see Aizerman et al. [ABR64]. Probably the
first practical version of Support Vector Machines, close to their current form, was introduced by Boser, Guyon and Vapnik in 1992 [BGV92] and relates to early algorithms developed by Vladimir Vapnik in the 1960s [VL63]. According to Isabelle Guyon [Guy16], the MinOver algorithm [KM87] triggered their interest in the concept of large margins, which was then combined with the kernel approach. Eventually, the important and practically relevant extension to soft margin classification was introduced by Cortes and Vapnik in 1995 [CV95].

4.3.1 Non-linear transformation to higher dimension

The first important concept of the SVM framework exploits the fact that non-separable data sets can become linearly separable by means of a non-linear mapping of the form
$$\xi \in \mathbb{R}^N \ \to\ \Psi(\xi) \in \mathbb{R}^M \quad \text{with components } \Psi_j(\xi) \tag{4.20}$$
where $M$ can be different from $N$, in general. A function of the form
$$S(\xi) = \mathrm{sign}\bigl[W \cdot \Psi(\xi)\bigr] \quad \text{with weights } W \in \mathbb{R}^M \tag{4.21}$$
is by definition linearly separable in the space of transformed vectors $\Psi$. So, while formally retaining the basic structure of the perceptron, we will be able to realize functions beyond linear separability by proper choice of the non-linear transformation $\xi \to \Psi$.

In fact, Rosenblatt already included this concept when introducing the Perceptron: the threshold function $\mathrm{sign}(\ldots)$ is applied to the weighted sum of states in an association layer, cf. Fig. 3.1 showing the Mark I Perceptron. Its units are referred to as masks or predicate units in the literature [Ros58, HRM+60], see also [MP69]. In the hardware realization, for instance, 512 association units were connected to subsets of the 400 photosensor units and performed a threshold operation on an effectively randomized weighted sum of the incoming voltages, see [HRM+60] for details.
In the Support Vector Machine, the non-linear mapping is – in general – from an $N$-dimensional space to a higher-dimensional space with $M > N$ in order to achieve linear separability of the classes.

As an illustration of the concept we discuss a simple example which was presented by Rainer Dietrich in [Die00]: Consider a set of two-dimensional feature vectors $\xi = (\xi_1, \xi_2)^\top$, with two classes separated by a non-linear decision boundary as displayed in Fig. 4.5 (left panel). We apply the explicit transformation
$$\Psi(\xi_1, \xi_2) = \bigl(\xi_1^2,\ \sqrt{2}\, \xi_1 \xi_2,\ \xi_2\bigr)^\top \in \mathbb{R}^3$$
which is non-linear as it contains the square $\xi_1^2$ and the product $\xi_1 \xi_2$. In the example, the plane orthogonal to the weight vector $W = (1, 1, -1)^\top$ separates the classes perfectly in $M = 3$ dimensions, see the center and right panels of Fig. 4.5.

This is obviously only a toy example to illustrate the basic idea. In typical applications of the SVM the dimension $N$ of the original feature space is already
Figure 4.5: Illustration courtesy of Rainer Dietrich [Die00]. A two-dimensional data set with two classes that are not linearly separable (left panel) can become linearly separable after applying an appropriate non-linear transformation to a higher-dimensional space (center and right panel, two different viewpoints).

quite large and frequently $M$ has to satisfy $M \gg N$ in order to achieve linear separability. While the concept appears appealing, it is yet unclear how we should identify a suitable transformation $\xi \to \Psi$ for a given problem and data set. Before we return to this problem (and actually circumvent it elegantly), we discuss the actual training, i.e. the choice of a suitable weight vector $W$ in the $M$-dimensional space.

4.3.2 Large margin classifier

Let us assume that for a given, non-separable data set $D_N = \{\xi^\mu \in \mathbb{R}^N, S^\mu_T\}_{\mu=1}^P$ we have found a suitable transformation such that $D_M = \{\Psi^\mu \in \mathbb{R}^M, S^\mu_T\}_{\mu=1}^P$ is indeed linearly separable in $M$ dimensions. Hence, we can apply conventional perceptron training in the $M$-dimensional space and, by means of the Perceptron Convergence Theorem (3.22), we are even guaranteed to find a solution.

However, in general, we do not have explicit control of (or reliable information about) how difficult the task will be. On the one hand, we would wish to use a powerful transformation to very high-dimensional $\Psi$ in order to guarantee separability and make it easy to find a suitable $W$. On the other hand, one could expect inferior generalization behavior in that case. Along the lines of the student-teacher scenarios discussed in Sec. 3.5.1, the corresponding version space of consistent hypotheses $W$ might be unnecessarily large.

The SVM aims at resolving this dilemma by determining the solution of maximum stability $W_{\max}$. Hence, the potentially very large freedom in selecting a weight vector $W$ in the high-dim. version space is efficiently restricted
and – following the arguments provided in Sec. 3.5.2 – we can expect good generalization ability.

The mathematical structure of the corresponding problem is fully analogous to the original (3.58). The $M$-dim. counterpart reads

Perceptron of optimal stability ($M$-dim. feature space) (4.22)

For a given data set $D_M = \{\Psi^\mu, S^\mu_T\}_{\mu=1}^P$, find the vector $W_{\max} \in \mathbb{R}^M$ with
$$W_{\max} = \underset{W}{\mathrm{argmax}}\ \kappa(W) \quad \text{with} \quad \kappa(W) = \min\left\{\kappa^\mu = \frac{W \cdot \Psi^\mu\, S^\mu_T}{|W|}\right\}_{\mu=1}^P.$$

Obviously we can simply translate all results, concepts and algorithms from Sec. 3.6 to the transformed space. So far we have assumed that the transformation $\xi \to \Psi$ exists and is explicitly known. Then we could for instance formulate and apply an $M$-dimensional version of the MinOver algorithm (3.59, 3.60). Moreover, we can apply the optimization theoretical concepts and methods presented in Sec. 3.7 as exploited in the next sections. Among other aspects, this implies that the resulting classifier can be expressed in terms of support vectors, which ultimately motivates the use of the term Support Vector Machine.

4.3.3 The kernel trick

In analogy to the original stability problem, cf. Sec. 3.7, we can introduce the embedding strengths $X = (X^1, X^2, \ldots, X^P)^\top \in \mathbb{R}^P$. With the shorthand $\Psi^\mu = \Psi(\xi^\mu)$ we also define the correlation matrix $\Gamma$ with elements
$$\Gamma^{\mu\nu} = \frac{1}{M}\, S^\mu_T\, \Psi^\mu \cdot \Psi^\nu\, S^\nu_T \tag{4.23}$$
and analogous to Eqs. (3.76) we obtain
$$W = \frac{1}{M} \sum_{\mu=1}^P X^\mu\, \Psi^\mu\, S^\mu_T \quad \text{and} \quad W^2 = \frac{1}{M}\, X^\top \Gamma\, X. \tag{4.24}$$
Eventually, we can re-formulate the problem (4.22) as

Perceptron of optimal stability ($M$-dim. feature space)
$$\min_X\ \tfrac{1}{2}\, X^\top \Gamma\, X \quad \text{subject to inequality constraints} \quad \Gamma X \ge \mathbf{1} \tag{4.25}$$

and proceed along the lines of Sec. 3.7 to derive, for instance, the corresponding AdaTron algorithm, see below.
The output of the $M$-dim. perceptron can be written as
$$S(\xi) = \mathrm{sign}\bigl[W \cdot \Psi(\xi)\bigr] = \mathrm{sign}\left[\frac{1}{M} \sum_{\mu=1}^P X^\mu\, S^\mu_T\, \Psi^\mu \cdot \Psi(\xi)\right]. \tag{4.26}$$
We note that this involves the scalar products of the $M$-dimensional, transformed input vector with the transformed training examples $\Psi^\mu$. We define a so-called kernel function $K : \mathbb{R}^N \times \mathbb{R}^N \to \mathbb{R}$ with
$$K(\xi, \xi') = \frac{1}{M}\, \Psi(\xi) \cdot \Psi(\xi') = \frac{1}{M} \sum_{j=1}^M \Psi_j(\xi)\, \Psi_j(\xi') \tag{4.27}$$
which represents the scalar product in $\mathbb{R}^M$. We observe that
$$S(\xi) = \mathrm{sign}\left[\sum_{\mu=1}^P X^\mu\, S^\mu_T\, K(\xi^\mu, \xi)\right] \tag{4.28}$$
does not involve the transformation $\Psi(\ldots)$ explicitly anymore. The kernel $K$ is defined as a function of pairs of original feature vectors. Similarly, we have
$$E^\mu = \bigl[\Gamma X\bigr]^\mu = S^\mu_T \sum_{\nu=1}^P S^\nu_T\, X^\nu\, K(\xi^\mu, \xi^\nu). \tag{4.29}$$

One can also formulate the AdaTron algorithm for optimal stability in the $M$-dimensional space. The Kernel AdaTron was introduced and discussed in [FCC98] and has been applied in a variety of practical problems. In analogy to (3.98) it is given as

Kernel AdaTron (sequential updates, repeated presentation of $D$) (4.30)
– at time step $t$, present example $\mu = 1, 2, 3, \ldots, P, 1, 2, 3, \ldots$
– perform the update
$$X^\mu(t+1) = \max\left\{0,\ X^\mu(t) + \eta \left(1 - S^\mu_T \sum_{\nu=1}^P S^\nu_T\, X^\nu\, K(\xi^\mu, \xi^\nu)\right)\right\}.$$

The training algorithm is also expressed in terms of the kernel and does not formally require explicit use of the transformation $\Psi$.

So far, the above insights suggest a strategy along the following lines:

a) For a given, non-separable $D_N$, identify a suitable non-linear mapping $\xi \to \Psi$ from $N$ to $M$ dimensions that achieves linear separability of $D_M$.

b) Compute the kernel function for all pairs of example inputs: $K^{\mu\nu} = K(\xi^\mu, \xi^\nu) = \frac{1}{M}\, \Psi^\mu \cdot \Psi^\nu$.
c) Determine the embedding strengths $X_{\max}$ corresponding to optimal stability in the $M$-dim. weight space, for instance by use of the Kernel AdaTron (4.30).

d) Classify an arbitrary $\xi \in \mathbb{R}^N$ according to $S(\xi) = \mathrm{sign}\left[\sum_{\mu=1}^P X^\mu_{\max}\, S^\mu_T\, K(\xi^\mu, \xi)\right]$.

In practice, of course, the problem is to find and implement a suitable transformation that yields separability in a given problem and data set. However, we observe that once step (a) is performed, the transformation $\xi \to \Psi$ is not explicitly used anymore. Even the weight vector $W_{\max}$ is not required explicitly: it is not directly updated in the training (c), nor is it used for the classification in the working phase (d). Instead, the representation (4.24) is used throughout. Ultimately, this suggests to by-pass the explicit transformation in the first place and replace step (a) by

a') For a given, non-separable $D_N$, identify a suitable kernel function $K(\xi, \xi')$

and proceed from there as before.

This can only be mathematically sound if the selected kernel function $K(\xi, \xi')$ represents some meaningful transformation, implicitly. It is obvious that for any transformation a kernel exists and we can work it out via the scalar products $\Psi(\xi) \cdot \Psi(\xi')$. However, the reverse is less clear: given a particular kernel, can we guarantee that there is a valid, i.e. consistent, well-defined transformation? Fortunately, such statements can be made with respect to a large class of functions without having to work out the underlying $\xi \to \Psi$ explicitly. Sufficient conditions for a kernel to be valid can be provided according to Mercer's Theorem [Mer09], see also [SS02, CST00, STC04, Her02].
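The modified strategy a'), b)–d) can be sketched end to end. The example below (our own illustration) picks a Gaussian kernel of the radial-basis-function type discussed further below, precomputes the kernel matrix, runs the Kernel AdaTron (4.30), and classifies via (4.28); XOR-type labels serve as a deliberately non-separable $D_N$:

```python
import numpy as np

def kernel_adatron(K_mat, s, eta=0.05, epochs=500):
    """Sequential Kernel AdaTron updates, Eq. (4.30).

    K_mat[mu, nu] = K(xi^mu, xi^nu) is precomputed; s holds labels +/-1.
    """
    P = len(s)
    X = np.zeros(P)
    for _ in range(epochs):
        for mu in range(P):
            E_mu = s[mu] * np.sum(s * X * K_mat[mu])   # local potential (4.29)
            X[mu] = max(0.0, X[mu] + eta * (1.0 - E_mu))
    return X

def svm_output(k_row, X, s):
    """Classification via Eq. (4.28); k_row[mu] = K(xi^mu, xi)."""
    return np.sign(np.sum(X * s * k_row))

# XOR-type labels: not linearly separable in the original 2-dim. space
pts = np.array([[1., 1.], [-1., -1.], [1., -1.], [-1., 1.]])
lab = np.array([1., 1., -1., -1.])
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
K_mat = np.exp(-0.5 * d2)      # Gaussian kernel, width sigma = 1
X = kernel_adatron(K_mat, lab)
```

The trained embedding strengths reproduce all four XOR labels through (4.28), a function no single perceptron could realize in the original space.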
Without going into the mathematical details and potential additional conditions, it can be summarized for our purposes as:

Mercer's condition (sufficient condition for validity of a kernel) (4.31)

A given kernel function $K$ can be written as
$$K(\xi, \xi') = \frac{1}{M}\, \Psi(\xi) \cdot \Psi(\xi'),$$
with a transformation $\xi \in \mathbb{R}^N \to \Psi \in \mathbb{R}^M$ of the form (4.20), if
$$\int\!\!\int g(\xi)\, K(\xi, \xi')\, g(\xi')\, d^N\xi\, d^N\xi' \ \ge\ 0$$
holds true for all square-integrable functions $g$ with $\int g(\xi)^2\, d^N\xi < \infty$.

Several families of kernel functions have been shown to satisfy Mercer's condition and are frequently referred to as Mercer kernels. A few popular examples are discussed in the following.
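A necessary finite-sample consequence of (4.31) is that the Gram (kernel) matrix on any finite set of points is positive semi-definite. This gives a quick numerical sanity check, not a proof of Mercer's condition; the helper below is our own sketch:

```python
import numpy as np

def is_psd_kernel_matrix(kernel, points, tol=1e-10):
    """Check that the Gram matrix of `kernel` on `points` has no
    eigenvalue below zero (up to numerical tolerance) -- a necessary
    consequence of Mercer's condition on any finite sample."""
    G = np.array([[kernel(a, b) for b in points] for a in points])
    return np.linalg.eigvalsh(G).min() > -tol

rbf = lambda a, b: np.exp(-0.5 * np.sum((a - b) ** 2))   # Gaussian, sigma = 1
poly = lambda a, b: (1.0 + a @ b) ** 3                    # polynomial, q = 3
```

A function that is not a valid kernel, e.g. the negative Euclidean distance, fails this check on generic point sets.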
Polynomial kernels

A polynomial kernel of degree $q$ can be written as
$$K(\xi^\mu, \xi) = (1 + \xi^\mu \cdot \xi)^q \quad \text{yielding} \quad S(\xi) = \mathrm{sign}\left[\sum_{\mu=1}^P X^\mu\, S^\mu_T\, (1 + \xi^\mu \cdot \xi)^q\right] \tag{4.32}$$
as the input-output relation of the classifier. As a special case, let us consider the simplest polynomial kernel:

Linear kernel ($q = 1$)
$$K(\xi^\mu, \xi) = (1 + \xi^\mu \cdot \xi) \quad \text{with} \quad S(\xi) = \mathrm{sign}\left[\sum_{\mu=1}^P X^\mu\, S^\mu_T\, (1 + \xi^\mu \cdot \xi)\right] \tag{4.33}$$
$$= \mathrm{sign}\Bigl[\underbrace{\sum_{\mu=1}^P X^\mu\, S^\mu_T}_{\equiv\, M\Theta} + \underbrace{\Bigl(\sum_{\mu=1}^P X^\mu\, S^\mu_T\, \xi^\mu\Bigr)}_{\equiv\, M W} \cdot\, \xi\Bigr].$$

In this case, we can provide an immediate, almost trivial interpretation of the kernel: it corresponds to the realization of a linearly separable function in the original feature space ($M = N$) with weights
$$W = w = \frac{1}{M} \sum_{\mu=1}^P X^\mu\, S^\mu_T\, \xi^\mu \quad \text{and off-set} \quad \Theta = \frac{1}{M} \sum_{\mu=1}^P X^\mu\, S^\mu_T.$$

The SVM with linear kernel is applied very frequently in practice. There is even an unfortunate trend to refer to it as "the SVM". However – strictly speaking – the SVM is not a classifier but a framework for classification: it has to be specified by defining the kernel in use. In particular, the linear kernel reduces the SVM to the familiar perceptron of optimal stability (with local threshold $\Theta$). In order to take full advantage of the SVM concept, we have to employ more sophisticated kernels. The first non-trivial choice beyond linearity corresponds to $q = 2$:

Quadratic kernel ($q = 2$)
$$K(\xi^\mu, \xi) = (1 + \xi^\mu \cdot \xi)^2 = 1 + 2 \sum_{j=1}^N \xi^\mu_j\, [\xi_j] + \sum_{j,k=1}^N \xi^\mu_j\, \xi^\mu_k\, [\xi_j\, \xi_k]. \tag{4.34}$$
Hence, the output $S(\xi)$ in Eq. (4.32) with $q = 2$ corresponds to an inhomogeneously linearly separable function in terms of the $N$ original features, augmented by $N(N+1)/2$ products of the form $[\xi_j \xi_k]$, which includes the squares of features for $j = k$.
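For $q = 2$ the underlying transformation can be written down explicitly and checked against (4.34) numerically. The $\sqrt{2}$ scaling of the linear and mixed terms is our own normalization, chosen (without the $1/M$ prefactor) so that the plain dot product of two feature vectors reproduces the kernel exactly:

```python
import numpy as np

def phi_quadratic(xi):
    """Explicit feature map behind K(a, b) = (1 + a.b)^2, cf. Eq. (4.34):
    a constant, the N linear terms, the N squares and the N(N-1)/2
    mixed products."""
    N = len(xi)
    feats = [1.0]
    feats += [np.sqrt(2.0) * xi[j] for j in range(N)]          # linear terms
    feats += [xi[j] * xi[j] for j in range(N)]                 # squares
    feats += [np.sqrt(2.0) * xi[j] * xi[k]                     # mixed products
              for j in range(N) for k in range(j + 1, N)]
    return np.array(feats)
```

Apart from the constant component, the map comprises exactly $M = N(N+3)/2$ transformed features, in agreement with the count given in the text.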
As intuitively expected, the use of the quadratic kernel represents the non-linear mapping from the N-dim. feature space to in total M = N(N+3)/2 transformed features (original features, squares and mixed products). An explicit formulation is reminiscent of Quadratic Discriminant Analysis (QDA) [HTF01], albeit aiming at a different objective function in the training.

Similarly, for the general polynomial kernel (4.32) the separating class boundary becomes a general polynomial surface, and the dimension M of the transformed feature space grows rapidly with its degree q.

Next, we consider a somewhat extreme, yet very popular choice:

Radial Basis Function (RBF) kernel
\[
K(\xi^\mu, \xi) = \exp\left[ -\frac{1}{2\sigma^2}\, \left| \xi^\mu - \xi \right|^2 \right] \qquad (4.35)
\]
which involves the squared Euclidean distance and a width parameter σ.

In an attempt to interpret the popular RBF kernel along the lines of the discussion of polynomial kernels, we could consider the Taylor series
\[
\exp[x] = \sum_{k=0}^\infty \frac{x^k}{k!} = 1 + x + \frac{1}{2}x^2 + \frac{1}{6}x^3 + \frac{1}{24}x^4 + \ldots
\]
which shows that the dimension of the corresponding space would be M → ∞, as all powers and products of the original features are involved.

The RBF kernel has become one of the most popular choices in the literature. The fact that an SVM with this extremely powerful kernel with, formally, M → ∞ can generalize at all demonstrates the importance of the restriction to optimal stability (the large margin concept), which constitutes an efficient regularization of the classifier.

4.3.4 A few remarks

Selection of kernels and parameter setting

In practice, the choice of the actual kernel function can influence the performance of the corresponding classifier significantly. In addition, kernels may contain parameters which have to be tuned to suitable values by means of validation techniques. The RBF kernel is just one example of kernels that feature a control parameter: the width σ in Eq. (4.35).
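The infinite-dimensional interpretation can be made concrete in one dimension: writing $K(x, x') = e^{-x^2/2\sigma^2}\, e^{-x'^2/2\sigma^2}\, e^{x x'/\sigma^2}$ and Taylor-expanding the last factor yields one monomial feature per order k. The sketch below (our own decomposition, not from the text) shows that a truncated series quickly converges to the exact kernel value:

```python
import numpy as np
from math import factorial

def rbf(x, y, sigma=1.0):
    # Exact RBF kernel in 1-D
    return np.exp(-(x - y) ** 2 / (2.0 * sigma ** 2))

def rbf_truncated(x, y, sigma=1.0, terms=10):
    # K(x, y) = e^{-x^2/2s^2} e^{-y^2/2s^2} * sum_k (x y / s^2)^k / k!
    envelope = np.exp(-x ** 2 / (2 * sigma ** 2)) * np.exp(-y ** 2 / (2 * sigma ** 2))
    series = sum((x * y / sigma ** 2) ** k / factorial(k) for k in range(terms))
    return envelope * series

x, y = 0.8, -0.3
print(abs(rbf(x, y) - rbf_truncated(x, y, terms=15)) < 1e-12)  # True
```

Each term of the truncated sum corresponds to one implicit feature proportional to $x^k$, so the full kernel formally lives in an infinite-dimensional feature space.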
The data-driven adaptation of kernel parameters as part of the training process has also been discussed in the literature; see [CCST99] for just one example.

Inhomogeneous separation of classes

In analogy to the perceptron of optimal stability, see Sec. 3.8, an offset from the origin of the high-dimensional feature space can be considered in the SVM. For the sake of simplicity, we have restricted the discussion to the homogeneous case and refer the reader to the literature w.r.t. the conceptually straightforward extension to inhomogeneous separation, see e.g. [CST00, SS02].
Soft-margin SVM

In addition to the choice of the kernel and potential parameters thereof, one often resorts to a soft margin version of the SVM [CV95], see also [SS02, CST00, STC04, Her02, DFO20]. The considerations of Sec. 4.1.2 for the simple perceptron immediately carry over to the SVM formalism, once a kernel is defined. The modified optimization problems (4.4) and (4.5) are easily generalized to the case of the Support Vector Machine by replacing the embedding strengths x with X and the correlation matrix C by Γ from Eqs. (4.24) and (4.23), respectively. Consequently, we can immediately derive suitable training algorithms for the soft margin SVM. For instance, the "AdaTron with errors" algorithm (4.6) carries over to the kernel-based formulation in a straightforward fashion.

The soft margin extension introduces an additional parameter into the training process: the parameter γ in (4.4) implicitly controls the tolerance of constraint violations (or even misclassifications). Like potential parameters of the kernel, it should be determined by means of a suitable validation procedure.

Overfitting

In the early days of the SVM, the claim was occasionally made that the strong intrinsic regularization related to the large margin idea would eliminate the risk of overfitting to a large extent, if not completely. However, practice shows that the use of too complex kernels or a low tolerance towards misclassification can result in poor generalization. Interestingly, the SVM offers a signal of overfitting which does not even require the explicit estimation of the generalization error: the number of support vectors n_s with non-zero embedding strengths X^µ > 0. A relatively high fraction n_s/P indicates that the classifier may be overly specific to the given data set. The fact that only very few examples in the data set are stabilized by embedding all the others suggests inferior classification performance with respect to novel data in the working phase.
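As a concrete illustration, the "AdaTron with errors" update of Eq. (4.6) can be written directly in terms of kernel values. The sketch below is our own (not code from the text); in particular, the normalization $\Gamma_{\mu\nu} = \frac{1}{N}\, S_T^\mu S_T^\nu\, K(\xi^\mu, \xi^\nu)$ and all parameter values are illustrative assumptions:

```python
import numpy as np

def train_soft_margin_kernel_adatron(X, S, kernel, gamma=10.0, eta=0.05, epochs=500):
    """Sequential projected-gradient updates of embedding strengths, cf. Eq. (4.6)."""
    P, N = X.shape
    # Kernel analogue of the correlation matrix C
    G = np.array([[S[m] * S[n] * kernel(X[m], X[n]) / N
                   for n in range(P)] for m in range(P)])
    emb = np.zeros(P)
    for _ in range(epochs):                        # repeated presentation of D
        for mu in range(P):
            emb[mu] += eta * (1.0 - G[mu] @ emb)   # gradient step
            emb[mu] = min(gamma, max(0.0, emb[mu]))  # clip to 0 <= X^mu <= gamma
    return emb

def classify(xi, X, S, emb, kernel):
    return np.sign(sum(emb[m] * S[m] * kernel(X[m], xi) for m in range(len(S))))

# Toy data: two well-separated clusters, linear kernel
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(loc=3.0, size=(10, 2)),
               rng.normal(loc=-3.0, size=(10, 2))])
S = np.array([1.0] * 10 + [-1.0] * 10)
lin = lambda a, b: a @ b
emb = train_soft_margin_kernel_adatron(X, S, lin)
acc = np.mean([classify(x, X, S, emb, lin) == s for x, s in zip(X, S)])
print(acc)
```

On this separable toy set the trained machine classifies all training examples correctly; examples with embedding strength clipped at γ would correspond to tolerated constraint violations.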
This indication can be exploited for the choice of a suitable kernel to begin with, for the tuning of its parameters, and for the choice of control parameters in the optimization.

Efficient implementations

The remark at the end of Sec. 3.7.3 concerning computational efficiency and scalability carries over to the kernel-based SVM as well. Efficient implementations, for instance based on the concept of Sequential Minimal Optimization (SMO) [Pla98], are available for a variety of platforms. As just one source of information, the reader is referred to the list of links provided at www.svms.org/software.html [svm99].
Chapter 5

Feed-forward networks for regression and classification

The fishermen in the north of Spain have been using Deep Networks for centuries. Their contribution should be recognized . . .
— Javier Movellan

Layered neural networks have regained significant popularity due to their impressive successes in the context of Deep Learning applications such as image classification. The basic designs and training techniques were established decades ago.

In the previous chapter we presented examples of layered networks constructed from perceptron-like threshold units. The by far most popular type of networks comprises continuous units with differentiable activation functions. In the following, we consider the use of such networks for regression and probabilistic classification and discuss their ability to approximate continuous functions. We present methods for their training by optimization techniques like gradient descent and variants thereof. Furthermore, we inspect specific example architectures and very briefly address the use of deep networks with many hidden layers and strategies for their training.

5.1 Feed-forward networks as non-linear function approximators

We first revisit the basic architecture and definition of strictly feed-forward, layered networks. We show that, in principle, suitable networks can approximate
any reasonable function from $\mathbb{R}^N$ to $\mathbb{R}$. This implies that layered networks can serve as tools for quite general problems of non-linear regression. In Section 5.1.2 we show explicitly that suitable layered networks can approximate any reasonable function to arbitrary precision. Cost function based training for the learning from examples is discussed in Sec. 5.2, with emphasis on gradient based methods.

Figure 5.1: A feed-forward neural network with N input units $\xi_j$, (L-1) hidden layers with activations $S_k^{(M)}$ ($k = 1, \ldots, K^{(M)}$) connected by weights $w_{kj}^{(M)}$, and a single output unit $\sigma(\xi) \equiv S_1^{(L)}$ with hidden-to-output weights $v_k \equiv w_{1k}^{(L)}$.

5.1.1 Architecture and input-output relation

Figure 5.1 displays a multilayer neural network with a single output unit. The generalization to several output units is formally straightforward. The figure suggests a convergent architecture, with the number of hidden units per layer decreasing towards the output. While in practice this is frequently the case, it is by no means required in the following.

The network in Fig. 5.1 is strictly feed-forward, i.e. the state of a particular hidden unit depends directly and only on the nodes in the previous layer. The resulting hidden unit activation is
\[
S_k^{(M)} = g_k^{(M)}\left( \sum_{j=1}^{K^{(M-1)}} w_{kj}^{(M)}\, S_j^{(M-1)} \,-\, \theta_k^{(M)} \right) \qquad (5.1)
\]
where adaptive weights $w_{kj}^{(M)}$ connect the j-th unit in layer (M-1) to the k-th unit of layer M, and $\theta_k^{(M)}$ denotes an adaptive local threshold. Alternatively, we could introduce an additional, clamped unit $S_0^{(M)} \equiv -1$ in each layer and
represent the local threshold by a weight $w_{k0}^{(M)} \equiv \theta_k^{(M)}$. This would parallel our representation of inhomogeneously linearly separable functions in Eq. (3.4). We can include the input layer in the notation of Eq. (5.1) by defining $S_j^{(0)} \equiv \xi_j$. Similarly, we can rename the single output as $S_1^{(L)} \equiv \sigma$ in the L-th layer with $K^{(L)} = 1$:
\[
\sigma(\xi) \equiv S_1^{(L)} = g_{\mathrm{out}}\left( \sum_{k=1}^{K^{(L-1)}} v_k\, S_k^{(L-1)} - \theta_{\mathrm{out}} \right) = g_1^{(L)}\left( \sum_{k=1}^{K^{(L-1)}} w_{1k}^{(L)}\, S_k^{(L-1)} - \theta_1^{(L)} \right) \qquad (5.2)
\]
with the alternative notations $v_k \equiv w_{1k}^{(L)}$ and $\theta_{\mathrm{out}} \equiv \theta_1^{(L)}$.

The output according to Eq. (5.2), together with (5.1), can be interpreted as a function σ : R^N → R. The precise form of the input-output relation is determined by the network architecture, including its connectivity and the activation functions. Frequently we will assume that the same transfer function g(...) defines the activation of all hidden units, while the output might result from a specific activation g_out(...).

5.1.2 Universal approximators

Following the previous section, we can use a feed-forward network with a single output to implement a function from R^N to R. It is a quite common concept to realize or approximate functional dependencies by the superposition of specific basis functions. A most prominent example is the representation in terms of Fourier series, which exploits properties of the trigonometric basis functions. Coefficients are chosen so as to approximate a target function to a required precision. Frequently, they are fitted to a set of discrete points and the resulting series is used to interpolate or even extrapolate. Hence, the situation is reminiscent of more general non-linear regression.

Neural networks of the form discussed in the previous section can be seen as a particular framework for functional approximation: for a given architecture and connectivity, the function σ : R^N → R is parameterized by the choice of all adaptive quantities, i.e. weights and thresholds.
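The layer-wise forward pass of Eqs. (5.1) and (5.2) can be sketched compactly in vectorized form; the concrete architecture and activation choices below (tanh hidden units, linear output) are our own illustrative assumptions:

```python
import numpy as np

def forward(xi, weights, thresholds, g=np.tanh, g_out=lambda z: z):
    """Strictly feed-forward pass: S^(M) = g(W^(M) S^(M-1) - theta^(M)), cf. Eq. (5.1)."""
    S = xi
    for W, theta in zip(weights[:-1], thresholds[:-1]):
        S = g(W @ S - theta)                        # hidden layers
    return g_out(weights[-1] @ S - thresholds[-1])  # output layer, Eq. (5.2)

rng = np.random.default_rng(0)
# N = 4 inputs, two hidden layers with 5 and 3 units, single output
shapes = [(5, 4), (3, 5), (1, 3)]
weights = [rng.normal(size=s) for s in shapes]
thresholds = [rng.normal(size=s[0]) for s in shapes]

sigma = forward(rng.normal(size=4), weights, thresholds)
print(sigma.shape)  # (1,)
```

Note how the same weight matrices that feed the signal forward also appear, transposed, when gradients are propagated backwards in Sec. 5.2.1.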
In the following we will see that the considered type of layered neural networks can approximate virtually any function¹ to arbitrary precision.

A network for piecewise constant approximation

Here we show by construction that any reasonable function f : R^N → R can be approximated by a layered neural network comprising sigmoidal and linear

¹ Mathematical subtleties are ignored here to some extent.
Figure 5.2: Left panel: a sigmoidal activation of the form (5.3) with γ = 2, z₀ = 3 as an example. Right panel: the difference of two steep sigmoidals (here: γ = 5, a = 2, b = 6, respectively) singles out arguments a ≤ z ≤ b. The inset shows a graphical representation of the sigmoidals with equal γ connected to input z with thresholds a, b, respectively, and a linear unit computing the difference of their activations.

units. More precisely, we will consider functions which map inputs from a compact subset of R^N to a real-valued output. We restrict the argument to feature vectors from the hypercube ξ ∈ [0,1]^N, which can always be generalized by transformations like rescaling and translation in input space.

Let us first consider sigmoidal neurons with, for instance, the activation
\[
g_{z_0}^\gamma(z) = \frac{1}{1 + \exp[-\gamma\,(z - z_0)]}, \qquad (5.3)
\]
with the argument z ∈ R, threshold z₀ and steepness parameter γ > 0; see Figure 5.2 (left panel) for an example. Two (steep) sigmoidal units can be combined in order to select a range of z-values, effectively:
\[
G_{[a,b]}^\gamma(z) = g_a^\gamma(z) - g_b^\gamma(z) = \frac{1}{1+e^{-\gamma(z-a)}} - \frac{1}{1+e^{-\gamma(z-b)}} \;\approx\; \begin{cases} 1 & \text{if } a < z < b \\ 0 & \text{else,} \end{cases} \qquad (5.4)
\]
where the approximate identity becomes exact in the limit γ → ∞.² This is illustrated in the right panel of Fig. 5.2.

For N-dim. inputs ξ we can realize the selection of a specific interval [a_i, b_i] for each dimension i = 1, 2, ..., N separately, defining regions of interest (ROI)
\[
R^{(j)} = [a_1^{(j)}, b_1^{(j)}] \times [a_2^{(j)}, b_2^{(j)}] \times \ldots \times [a_N^{(j)}, b_N^{(j)}], \qquad j = 1, 2, \ldots, M. \qquad (5.5)
\]
These can be constructed to cover the volume of all possible inputs [0,1]^N, which implies that M grows exponentially with N. If we add up the corresponding N activations for one region of interest, we have
\[
\sum_{i=1}^N G_{[a_i, b_i]}^\gamma(\xi_i) \;\approx\; \begin{cases} N & \text{if } \xi \in R \\ \leq N-1 & \text{if } \xi \notin R. \end{cases}
\]

² For the argument we can ignore the difference between open and closed intervals.
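The interval selector of Eq. (5.4) is easy to verify numerically; a minimal sketch with our own parameter choices:

```python
import numpy as np

def g(z, gamma, z0):
    # Sigmoidal activation, Eq. (5.3)
    return 1.0 / (1.0 + np.exp(-gamma * (z - z0)))

def G(z, gamma, a, b):
    # Difference of two steep sigmoidals, Eq. (5.4): ~1 inside (a, b), ~0 outside
    return g(z, gamma, a) - g(z, gamma, b)

gamma, a, b = 25.0, 2.0, 6.0
print(round(G(4.0, gamma, a, b), 3))  # ~1.0 (inside the interval)
print(round(G(0.0, gamma, a, b), 3))  # ~0.0 (left of the interval)
print(round(G(8.0, gamma, a, b), 3))  # ~0.0 (right of the interval)
```

Increasing γ sharpens the edges of the bump, approaching the exact indicator function of the interval in the limit γ → ∞.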
Figure 5.3: N threshold nodes (steep sigmoidal units), each selecting a specific interval per input dimension, can be combined by adding up their activations. The left panel shows an illustration for N = 2 and $\xi_j \in [-10, 10]$. Only where all threshold units are activated does the sum reach its maximum N; for $\xi \notin R$, cf. Eq. (5.5), the total activation is $\sum_{i=1}^2 G^\gamma_{[a_i,b_i]}(\xi_i) \leq N - 1$. Consequently, an additional threshold operation $g^\gamma_{N-1/2}(\ldots)$ can be employed to single out a region of interest $R \subset \mathbb{R}^N$, as illustrated in the right panel. Here we consider one particular ROI and have omitted the superscript (j) for simplicity.

Another threshold unit, i.e. a steep sigmoidal with z₀ = N - 1/2, can be applied to the sum to single out inputs ξ ∈ R; see Fig. 5.3 for an illustration with N = 2. Figure 5.4 displays the architecture of a layered net in which M such units correspond to the ROI of Eq. (5.5). Eventually, we select one representative value $v_j = f(\xi^{(j)})$ of the target function in each ROI, for instance with $\xi^{(j)}$ in the center of $R^{(j)}$. The $v_j$ serve as weights for feeding the activations $\vartheta_j$ into a simple, linear output unit. Therefore, the resulting network response
\[
\sum_{j=1}^M v_j\, \vartheta_j(\xi) = v_k \quad\text{for } \xi \in R^{(k)} \qquad (5.6)
\]
amounts to a piecewise constant approximation of the target function in [0,1]^N.

The constructive argument shows that the network in principle constitutes a universal approximator. However, a few remarks are in place:

◦ In order to achieve an accurate approximation of any non-trivial function, we would have to realize a rather fine-grained representation of input space. Assuming that we split [0,1] into, say, k equal-size intervals in each of the N feature dimensions, the network of Fig. 5.4 would comprise O(Nk) units in the second and third layer, which appears reasonable.
However, representing all possible combinations of N intervals in the fourth layer requires on the order of $O(k^N)$ hidden units.

◦ The idea of training the feed-forward network by means of example data appears somewhat obscured. The simple-minded setting of the weights $v_j = f(\xi^{(j)})$ in Eq. (5.6) could be interpreted as learning from one example per ROI. However, in view of the previous remark, the procedure
Figure 5.4: The constructed network for piecewise constant function approximation: each input unit (top layer) is connected to a set of sigmoidal units in the second layer. Pairs of these connect to linear units in the third layer, which select specific intervals [a_i, b_i], [b_i, c_i], ... as illustrated in Fig. 5.2. Each unit marked as $\vartheta_j$ in the fourth layer performs a threshold operation on a particular sum of N activations in the third layer, corresponding to one selected interval in each feature dimension. An activation $\vartheta_j = 1$ indicates that $\xi \in R^{(j)}$, and the resulting state of the linear output unit is given by $v_j = f(\xi^{(j)})$, which corresponds to a representative $\xi^{(j)} \in R^{(j)}$.

would require a number P of examples that grows exponentially with the dimension N.

◦ In this sense, the construction of ROI parallels the use of grandmother neurons when showing that the parity machine is a universal classifier in Section 4.2.

While the argument justifies and supports the use of feed-forward networks in principle, it does not provide insight into how to design and how to train such a system in practice.

Note that the number of layers required for universal approximation is limited. In the above construction scheme, four layers of processing units are sufficient. Obviously, shallow networks are sufficient to achieve this property. However, this does not imply that shallow architectures are necessarily suitable for all practical applications. In fact, the recent success of deep networks appears to suggest the contrary in many cases.
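The whole construction can be carried out explicitly for a small example. The sketch below (our own illustration; all parameter values are ad hoc) approximates a one-dimensional target f on [0,1] by k bump units and a linear output, following Eqs. (5.3)-(5.6):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def piecewise_net(x, edges, v, gamma=200.0, gamma_out=50.0):
    """Layered net of Fig. 5.4 for N = 1: bump units G_j, threshold units, linear output."""
    a, b = edges[:-1], edges[1:]
    # Second/third layer: interval selectors G_[a_j, b_j](x), Eq. (5.4)
    G = sigmoid(gamma * (x - a)) - sigmoid(gamma * (x - b))
    # Fourth layer: threshold units theta_j with z0 = N - 1/2 = 1/2
    theta = sigmoid(gamma_out * (G - 0.5))
    # Linear output, Eq. (5.6)
    return theta @ v

f = np.sin                       # target function on [0, 1]
k = 20                           # number of ROIs (intervals)
edges = np.linspace(0.0, 1.0, k + 1)
centers = 0.5 * (edges[:-1] + edges[1:])
v = f(centers)                   # representative values v_j = f(xi^(j))

# The piecewise constant approximation is accurate at the interval centers
errors = [abs(piecewise_net(x, edges, v) - f(x)) for x in centers]
print(max(errors) < 0.05)  # True
```

Refining the grid (larger k) reduces the approximation error, at the cost of the exponential growth in units discussed above once N > 1.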
Figure 5.5: A so-called Soft Committee Machine realizes a functional approximation as considered in Cybenko's Theorem. A number K of hidden units $S_k = g(w^{(k)} \cdot \xi - \theta^{(k)})$ with sigmoidal activation and adaptive thresholds $\theta^{(k)}$ are connected through adaptive weight vectors $w^{(k)} \in \mathbb{R}^N$ ($k = 1, 2, \ldots, K$) with the N-dim. input layer $\xi \in \mathbb{R}^N$; in the illustration, K = 3. The linear response is determined as $\sigma(\xi) = \sum_{k=1}^K v_k\, g(w^{(k)} \cdot \xi - \theta^{(k)})$ with hidden-to-output weights $v_k \in \mathbb{R}$.

Variants of the Universal Approximation Theorem

In the literature, various incarnations of the Universal Approximation Theorem have been presented, differing in the degree of generality and practical relevance. Some of them address particular classes of target functions, others focus on specific types of activation functions or network architectures; see [Hor89, Fri94, LLPS93, SM15, MM92, CMB00] for examples and reviews. In the following we present an early and important formulation, which is due to G. Cybenko [Cyb89]. The theorem states that a relatively simple network with a single hidden layer of sigmoidal units and a linear output is a universal function approximator:

Cybenko's Theorem (5.7)

Consider inputs ξ ∈ [0,1]^N and a continuous, sigmoidal function g : R → R with
\[
g(z) \to \begin{cases} 1 & \text{for } z \to +\infty \\ 0 & \text{for } z \to -\infty. \end{cases}
\]
Let $w^{(k)} \in \mathbb{R}^N$, $v_k \in \mathbb{R}$, $\theta^{(k)} \in \mathbb{R}$ for k = 1, 2, ..., K. Then, finite sums of the form
\[
\sigma(\xi) = \sum_{k=1}^K v_k\, g\left( w^{(k)} \cdot \xi - \theta^{(k)} \right)
\]
are dense in the space of continuous functions $C([0,1]^N)$.
This implies that for any continuous target function $\tau \in C([0,1]^N)$ and a given real number ε > 0, parameters $\left\{ w^{(k)} \in \mathbb{R}^N,\, v_k \in \mathbb{R},\, \theta^{(k)} \in \mathbb{R} \right\}_{k=1}^K$ exist with
\[
\left| \sigma(\xi) - \tau(\xi) \right| < \varepsilon \quad\text{for all } \xi \in [0,1]^N.
\]
The parameters can be interpreted as the weights and thresholds of a network with a single hidden layer and linear output, which is illustrated in Figure 5.5. The term Soft Committee Machine has been coined for this architecture, see [Saa99] and references therein. The name refers to the network's similarity with the (discrete output) committee machine for classification tasks, which is discussed in Section 4.2.

In a sense, Cybenko's Theorem provides a stronger and more useful statement than the basic insight obtained by the construction in the previous section. However, the problem remains that a very large number K of hidden units might be required to exploit the approximation property in practice. Moreover, the theorem itself states only the existence of suitable parameters; it does not suggest how to find them.

The choice of appropriate network parameters based on example data is addressed in the following sections. We focus on training schemes which are based on the minimization of appropriate cost functions by means of gradient descent techniques.

5.2 Gradient based training of feed-forward nets

As we have seen, feed-forward neural networks can serve as universal function approximators. Hence it appears natural to employ them in the context of non-linear input/output relations which correspond to a real-valued target function τ : R^N → R.

Extensions to multiple continuous outputs are obviously possible. Likewise, classification schemes could be realized by an additional binary threshold operation on the output σ or by appropriate binning in the case of multiple classes. Class memberships could also be represented by coding schemes applied to a number of output units, as discussed in a forthcoming section.
Formally, we concatenate all M adaptive parameters of a feed-forward network in one vector $W \in \mathbb{R}^M$. The convenient flattened notation facilitates a unified discussion of various network architectures in the following. In the Soft Committee Machine displayed in Fig. 5.5, as just one example, we have
\[
W = \left( w_1^{(1)}, \ldots \ldots, w_N^{(K)},\, \theta^{(1)}, \ldots, \theta^{(K)},\, v_1, \ldots, v_K \right)^\top \in \mathbb{R}^M \quad\text{with } M = KN + 2K,
\]
and we can refer to any of the adaptive parameters as a component $W_j$. For simplicity, we focus on networks with a single, continuous output σ(ξ) in the following. The goal of training is to implement or approximate a target function τ : R^N → R by adapting a given network architecture to a given set of examples $D = \{\xi^\mu, \tau(\xi^\mu)\}_{\mu=1}^P$.
To this end, we define an error measure which is suitable for the comparison of the network output σ(ξ) with the target function τ(ξ) for a given input vector. A popular and intuitive choice is the simple quadratic deviation
\[
e(\sigma, \tau) = \frac{1}{2}\, (\sigma - \tau)^2. \qquad (5.8)
\]
Here and in the following, the shorthands σ = σ(ξ), σ^µ = σ(ξ^µ), τ = τ(ξ) and τ^µ = τ(ξ^µ) refer to a generic input ξ or a particular example input ξ^µ ∈ D, respectively. While many alternative measures can be considered, see the discussion in Section 5.3, the quadratic error (5.8) remains very popular and is particularly intuitive. It treats deviations σ > τ and σ < τ in a symmetric way and yields e = 0 only for perfect agreement with the target.

Given a set of examples D, we can define a corresponding training set specific cost function³:
\[
E(W) = \frac{1}{P} \sum_{\mu=1}^P e^\mu \quad\text{with}\quad e^\mu = e(\sigma^\mu, \tau^\mu). \qquad (5.9)
\]
In the context of regression, E(W) plays the role of the training error and quantifies the network performance with respect to D. As in classification, the expectation is that the trained network represents a hypothesis that can be applied successfully to novel data in the working phase.

A plethora of numerical optimization methods could be used to minimize the cost function in practice. Most of these employ local gradient information or higher-order derivatives of E(W) in order to iteratively find a (local) minimum of the cost function. If possible, derivatives are computed analytically or are estimated numerically. A few prominent examples are the so-called Newton and quasi-Newton methods, conjugate gradient descent, the Levenberg-Marquardt algorithm, and line search methods. Here we point the reader to the literature, e.g. [Fle00, PAH19, Str19, DFO20, SNW11, Bis95a, HKP91], where an overview can be obtained and further references are provided.
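The quadratic cost of Eq. (5.9) is straightforward to express in code. The sketch below is our own; the toy model σ and target τ are arbitrary illustrative choices:

```python
import numpy as np

def e(sigma, tau):
    # Quadratic per-example error, Eq. (5.8)
    return 0.5 * (sigma - tau) ** 2

def E(W, data, sigma_fn):
    # Training-set cost, Eq. (5.9): average of e over all P examples in D
    return np.mean([e(sigma_fn(W, xi), tau) for xi, tau in data])

# Toy model: sigma(xi) = W . xi, target tau(xi) = sum of the inputs
sigma_fn = lambda W, xi: W @ xi
rng = np.random.default_rng(0)
data = [(xi, xi.sum()) for xi in rng.normal(size=(50, 3))]

W_perfect = np.ones(3)
print(E(W_perfect, data, sigma_fn) < 1e-12)  # True: near-perfect agreement on every example
```

Any of the alternative error measures mentioned in Sec. 5.3 could be substituted for `e` without changing the structure of the cost.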
Relatively simple gradient descent techniques have been particularly popular in the context of machine learning for decades; as a very early example, we have already discussed the Adaline algorithm [WH60] in Sec. 3.7.2.

From an optimization-theoretical point of view, simple gradient descent is certainly inferior to state-of-the-art methods. However, it remains an important, very popular tool in many machine learning frameworks, including powerful multi-layered architectures in Deep Learning. This is due to the conceptual simplicity and hands-on character of the descent and the fact that the mathematical structure of feed-forward networks appears particularly suitable for the computation of the required gradients.

Moreover, although the training is generically formulated as an optimization, the actual minimization of E serves only as a proxy for the ultimate goal, which

³ Other terms frequently used in this context are: objective function, loss function, or energy.
is the successful application of the trained system to novel data. Hence, the precision to which a minimum is determined can play a minor role, and the potential existence of many (suboptimal) local minima of E is not as problematic as one might expect.

5.2.1 Computing the gradient: Backpropagation of Error

A key property of feed-forward layered neural networks with differentiable activation functions is that the network output itself is a differentiable function of the inputs. Likewise, the output and the error measure e(σ, τ) are differentiable with respect to the adaptive parameters in the network for any given input ξ:
\[
\frac{\partial e(\sigma, \tau)}{\partial W_j} = (\sigma - \tau)\, \frac{\partial \sigma}{\partial W_j}.
\]
Consequently, also the data set specific cost function $E = \frac{1}{P} \sum_{\mu=1}^P e(\sigma^\mu, \tau^\mu)$ is a differentiable function of all components of W.

For strictly feed-forward architectures as shown in Figure 5.1, we can obtain derivatives of σ with respect to any network parameter $W_j$ recursively by applying the chain rule [Bis95a, Bis06, HKP91]. The mathematical structure facilitates a very efficient calculation of the gradient: weights and thresholds serve as coefficients when computing the output for a given input in a feed-forward network. The actual output is compared with the target, and the error is said to propagate back (towards the input) when computing the derivatives in a layer-wise fashion, which involves the very same coefficients again. Gradients for an example of a shallow network architecture are worked out explicitly in Appendix A.6.

The term Backpropagation of Error (Backpropagation or Backprop for short) was originally used for the efficient implementation of the gradient only [RM86]. Later it became synonymous with the entire gradient based training of multi-layered feed-forward networks and is nowadays mostly used in this sense.
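For the Soft Committee Machine of Fig. 5.5 with $\sigma = \sum_k v_k\, g(w^{(k)} \cdot \xi - \theta^{(k)})$ and the quadratic error (5.8), the chain rule yields the gradients explicitly. The sketch below is our own (g = tanh is an illustrative choice); it also verifies one analytic derivative against a finite-difference approximation:

```python
import numpy as np

def scm_output(xi, w, theta, v):
    # Soft Committee Machine: sigma = sum_k v_k tanh(w^(k) . xi - theta^(k))
    return v @ np.tanh(w @ xi - theta)

def scm_gradients(xi, tau, w, theta, v):
    """Backprop gradients of e = (sigma - tau)^2 / 2 w.r.t. all parameters."""
    h = np.tanh(w @ xi - theta)               # hidden activations
    delta = scm_output(xi, w, theta, v) - tau # (sigma - tau)
    gprime = 1.0 - h ** 2                     # tanh'(z) = 1 - tanh(z)^2
    grad_v = delta * h                        # de/dv_k
    grad_theta = -delta * v * gprime          # de/dtheta_k
    grad_w = np.outer(delta * v * gprime, xi) # de/dw_kj
    return grad_w, grad_theta, grad_v

rng = np.random.default_rng(3)
K, N = 3, 4
w, theta, v = rng.normal(size=(K, N)), rng.normal(size=K), rng.normal(size=K)
xi, tau = rng.normal(size=N), 0.7

grad_w, _, _ = scm_gradients(xi, tau, w, theta, v)

# Central finite-difference check on one weight component
eps, (i, j) = 1e-6, (1, 2)
wp, wm = w.copy(), w.copy()
wp[i, j] += eps
wm[i, j] -= eps
num = (0.5 * (scm_output(xi, wp, theta, v) - tau) ** 2
       - 0.5 * (scm_output(xi, wm, theta, v) - tau) ** 2) / (2 * eps)
print(abs(grad_w[i, j] - num) < 1e-6)  # True: analytic and numeric gradients agree
```

Such a gradient check is standard practice when implementing Backpropagation by hand, since sign or indexing errors are otherwise easy to overlook.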
The ambiguity of the term partly complicates the ongoing debates about who invented Backpropagation or coined the term Deep Learning.⁴ Here we refrain from taking part in these discussions of questionable usefulness. For some original articles and reviews of the history of Backpropagation, see e.g. [Wer74, LBH18, CR95, Ama93, WL90, HKP91, GBC16].

5.2.2 Batch gradient descent

In principle, the minimization of E(W) could be done by any suitable method of non-linear optimization. In the context of layered neural networks, relatively simple gradient based techniques continue to play a very important role. Here we focus on the use of standard gradient descent. Gradient based techniques are also discussed in Appendix A.4.

⁴ For an example thread initiated by J. Schmidhuber in the connectionists mailing list see http://mailman.srv.cs.cmu.edu/pipermail/connectionists/2021-December/037086.html
In analogy to Eq. (A.39) in the Appendix, the basic form of the updates is given as

Batch gradient descent (basic form)

At discrete time step t, perform an update step of the form
\[
W(t+1) = W(t) - \eta\, \nabla_W E \big|_{W = W(t)} \qquad (5.10)
\]
with the learning rate η and a cost function E of the form (5.9).

At each time step, the gradient with respect to all adaptive quantities W is computed as a sum over all examples in D:
\[
\nabla_W E = \frac{1}{P} \sum_{\mu=1}^P \nabla_W\, e^\mu = \frac{1}{P} \sum_{\mu=1}^P \left( \sigma(\xi^\mu) - \tau(\xi^\mu) \right) \nabla_W\, \sigma(\xi^\mu), \qquad (5.11)
\]
where the r.h.s. is given for the quadratic error (5.8) but can be worked out for alternative cost functions as well. The terms batch or offline gradient descent refer to the fact that the entire set D of example data is used in every update. Careful changes of W in the direction of $-\nabla_W E$ decrease the value of the cost function in each individual step and, consequently, the descent approaches some local minimum W* of the cost function. If several local minima⁵ exist, the actual stationary W* depends on the initialization W(t=0) of the system.

The specific form of $\nabla_W \sigma$ has to be worked out by means of the chain rule in layered networks, as explained in the previous section. It depends obviously on the network architecture, the activation functions and the set of all adaptive quantities in the system. In Appendix A.6 a specific example is given for a Soft Committee Machine, cf. Section 5.1.2.

The general discussion of gradient descent in the appendix shows that its convergence near a local minimum W* of E is governed by the symmetric, positive definite Hessian of second derivatives $H^* = H(W^*) \in \mathbb{R}^{M \times M}$ with elements
\[
H^*_{ij} = H^*_{ji} = \frac{\partial^2 E(W)}{\partial W_i\, \partial W_j} \bigg|_{W = W^*}. \qquad (5.12)
\]
In a local minimum, all eigenvalues $\{\rho_i\}_{i=1}^M$ of H* are positive and can be sorted by magnitude: $0 < \rho_1 \leq \rho_2 \leq \ldots \leq \rho_{\max}$. We show in App.
A.4.2 that, with a given, constant learning rate η > 0, the following qualitative behavior can be expected near a local optimum W*:

⁵ The term refers to local properties of the function and possibly includes global minima.
Figure 5.6: Illustration of the behavior of gradient descent near a local minimum W* (marked by the red dot in the center) for learning rates (a) $\eta \leq 1/\rho_{\max}$, (b) $1/\rho_{\max} < \eta \leq 2/\rho_{\max}$, and (c) $\eta > 2/\rho_{\max}$. The contour lines represent the quadratic approximation of E(W) in the vicinity of a local minimum; the black symbols correspond to the iterates W(t) in Eq. (5.10). For small step size η (panel a) the iteration converges smoothly into the minimum; intermediate step sizes (panel b) result in convergent yet oscillatory behavior. Failure to converge is observed for too large step sizes (panel c).

(a) $\eta \leq 1/\rho_{\max}$
For small, finite learning rates, the iteration approaches the local minimum smoothly and converges to $\lim_{t\to\infty} W(t) = W^*$, see Fig. 5.6 (a). Obviously, with very small rates η ≈ 0, the approach can become unnecessarily slow.

(b) $1/\rho_{\max} < \eta \leq 2/\rho_{\max}$
In this regime, convergence is still achieved, but in at least one of the eigendirections of H* an oscillatory behavior is observed. As illustrated in panel (b) of Fig. 5.6, the alternating behavior occurs in eigendirections with large curvature, which correspond to narrow troughs in the landscape E(W), while the approach towards W* is smooth along directions of small $\rho_i$, in which E resembles a shallow basin.

(c) $\eta > 2/\rho_{\max}$
Too large learning rates result in divergent behavior of the iterations, as displayed in Fig. 5.6 (c). Depending on η in relation to the individual $\rho_i$, the distance from the minimum can increase in one, several, or all eigendirections of H*.

This insight is valid for any local minimum of E. However, two important points should be noted. Firstly, the analysis presented in App. A.4.2 is only valid close to a local minimum, where the Taylor expansion up to second order, Eq. (A.41), is a good approximation of E. Secondly, different minima can display very different properties in terms of the Hessian and its eigenvalues.
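The three regimes can be reproduced with a few lines on a quadratic cost $E(W) = \frac{1}{2} W^\top H W$; the diagonal Hessian below with $\rho_1 = 1$ and $\rho_{\max} = 10$ is our own toy choice:

```python
import numpy as np

H = np.diag([1.0, 10.0])        # Hessian eigenvalues: rho_1 = 1, rho_max = 10

def descend(eta, steps=50):
    # Gradient descent W <- W - eta * H W on the quadratic E(W) = W^T H W / 2
    W = np.array([1.0, 1.0])
    for _ in range(steps):
        W = W - eta * (H @ W)
    return np.linalg.norm(W)

print(descend(0.05) < 1e-1)   # (a) eta <= 1/rho_max: smooth convergence -> True
print(descend(0.15) < 1e-1)   # (b) 1/rho_max < eta <= 2/rho_max: oscillatory but convergent -> True
print(descend(0.25) > 1e3)    # (c) eta > 2/rho_max: divergence -> True
```

In each eigendirection the iterate is multiplied by the factor $(1 - \eta \rho_i)$ per step, which directly explains the three regimes.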
In any case, the minima are not known in advance; if they were, the training would be unnecessary. Since the cost function E can have many local minima, the outcome
of the training process may depend strongly on the initialization of the system. Therefore, the practical relevance of the mathematical analysis is limited. In practice, a relatively large η could be used in the initial phase of training, assuming that the system is far away from any local minimum. Schemes have been suggested in the literature which monitor the iterations and adjust the learning rate, for instance whenever a zigzagging behavior is observed. An intuitive example of a heuristic step size adaptation in batch gradient descent is presented in [PBB11].

The most important insight of this section is that in batch gradient descent, non-zero finite learning rates can be used to reach a local minimum. This is in contrast to the stochastic gradient descent strategy discussed in the next section. There, the learning rate has to be reduced to zero in the course of training in order to enforce convergence of the network configuration.

5.2.3 Stochastic gradient descent

The cost function E(W), Eq. (5.9), is given as a sum over the examples in D:
\[
E(W) = \frac{1}{P} \sum_{\mu=1}^P e^\mu(W) \quad\text{with}\quad \nabla_W E = \frac{1}{P} \sum_{\mu=1}^P \nabla_W\, e^\mu, \qquad (5.13)
\]
where $e^\mu$ quantifies the contribution of an individual example to the total costs. Virtually all machine learning objectives mentioned and discussed in this text can be written in such a form, with the actual function $e^\mu(W)$ and the precise definition of D depending on the details, of course. This includes the log-likelihood in maximum-likelihood problems, the SSE (2.5) in regression, the quantization error in unsupervised Vector Quantization [HKP91, BHV16] and the objective function of the Generalized LVQ scheme introduced in Chapter 6.

We note that costs of the form (5.13) can be interpreted as an empirical average of $e^\mu$ over the data set, corresponding to randomly drawing examples from D with equal probability 1/P. Accordingly, the gradient of E as in Eq.
(5.13) can also be seen as a data set average of the single example terms ∇_W e^µ. As a consequence, we can approximate the gradient of E by computing a restricted empirical mean over a random subset of D. As an extreme case, we can inspect single, randomly selected examples:

Stochastic gradient descent (SGD)

at discrete time step t
- select a single example {ξ^{µ(t)}, τ^{µ(t)}} randomly with equal probability
- perform an update step

$$W(t+1) = W(t) + \Delta W(t) = W(t) - \eta(t) \left. \nabla_W e^{\mu(t)} \right|_{W=W(t)}, \qquad (5.14)$$

with the learning rate η(t) and error terms e^µ as given in Eq. (5.13).
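The prescription (5.14) can be sketched in a few lines. The linear model, quadratic per-example cost, and all parameter values below are illustrative assumptions, not part of the text:

```python
import numpy as np

def sgd(grad_e, W0, P, eta, T, rng):
    """Stochastic gradient descent, Eq. (5.14): at each step t, draw one
    example index mu uniformly from {0, ..., P-1} and step along
    -eta(t) * grad e^mu, evaluated at the current W."""
    W = W0.copy()
    for t in range(T):
        mu = rng.integers(P)                     # random single example
        W = W - eta(t) * grad_e(W, mu)
    return W

# Toy problem (assumed for illustration): linear output sigma = w . xi,
# per-example cost e^mu = (sigma - tau^mu)^2 / 2, noise-free targets
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                     # P = 50 examples, N = 3
w_true = np.array([1.0, -2.0, 0.5])
tau = X @ w_true                                 # perfectly realizable targets
grad_e = lambda W, mu: (X[mu] @ W - tau[mu]) * X[mu]

W_end = sgd(grad_e, np.zeros(3), P=50, eta=lambda t: 0.1, T=5000, rng=rng)
```

In this perfectly realizable toy case all single-example gradients vanish in the minimum, so even the constant learning rate converges; the general case requires the decaying schedules discussed below.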
120 5. NETWORKS FOR REGRESSION AND CLASSIFICATION

The learning rate is denoted as η(t) in order to indicate a possible explicit time-dependence and to distinguish it from η in batch gradient descent, Eq. (5.10). Clearly, the computational costs per update are lower than in the batch procedure, which involves the sum of P gradient terms in each step. The update of the form (5.14) is referred to as online or stochastic gradient descent, in contrast to offline batch algorithms. It can be seen as a special case of stochastic approximation, see [RM51,Bis95a,HTF01] and Appendix A.5.3.

The stochastic approximation of the gradient introduces noise in the training process. As a consequence, E(W) may increase in individual update steps. The intuitive motivation for stochastic descent is that this noise helps the system to explore the search space more efficiently and to overcome barriers (e.g. saddle points of E) which separate different local minima of the cost function.

The index µ(t) of the presented example at time t in Eq. (5.14) is drawn randomly from the set {1, 2, . . . , P} with equal probability 1/P. In general, the resulting individual ∆W(t) will deviate from the direction of steepest descent −∇_W E. However, on average over the random selection of an example from D, the update (5.14) follows the negative gradient of the total costs E. Therefore, in a local minimum W*

$$\overline{\Delta W(t)}\,\Big|_* = -\frac{1}{P}\sum_{\mu=1}^{P} \eta(t)\, \nabla_W e^{\mu}\Big|_* = -\eta(t)\,\nabla_W E\Big|_* = 0,$$

where $\overline{(\ldots)}$ denotes the average over the selection of µ(t) and the notation $(\ldots)|_*$ indicates that a term is evaluated in W(t) = W*. Hence, the average update becomes zero in the local minimum.

However, individual updates remain non-zero, in general. This can be seen by considering the averaged squared norm of ∆W:

$$\overline{|\Delta W|^2}\,\Big|_* = \frac{1}{P}\sum_{\mu=1}^{P} \eta^2(t)\, \Big|\nabla_W e^{\mu}\big|_*\Big|^2 \;\; \begin{cases} = 0 & \text{only if } \nabla_W e^{\mu}\big|_* = 0 \text{ for all } \mu \\ > 0 & \text{else.} \end{cases}$$

In general, not all of the gradient contributions will vanish in W*. One exception would be a minimum of E in which all individual e^µ are minimized.
For the quadratic costs with e^µ = (σ(ξ^µ) − τ^µ)²/2 this would correspond to a global minimum E(W*) = 0, i.e. a perfectly solvable case where σ(ξ^µ) = τ^µ for all µ.

The generic behavior for constant learning rate η > 0 is illustrated schematically in the left panel of Fig. 5.7. After reaching the vicinity of a local minimum, the iteration follows a seemingly irregular trajectory corresponding to the random sequence of individual gradient terms with $\big|\nabla_W e^{\mu}|_*\big|^2 > 0$. We can enforce convergence near a local minimum in the sense of

$$\lim_{t\to\infty} W(t) = W^* \quad \text{and} \quad \lim_{t\to\infty} \Delta W(t) = 0 \qquad (5.15)$$

by employing an explicitly time-dependent learning rate η(t) which decreases appropriately with the number of descent steps. Conditions for suitable learning
Figure 5.7: Schematic illustration of the behavior of stochastic gradient descent near a local minimum W* as marked by the (red) dot in the center. The contour lines represent the quadratic approximation of E(W) in the vicinity of W*. Left panel: Black symbols correspond to the iterates W(t) in Eq. (5.14). With a constant learning rate η > 0, the descent overshoots the point of stationarity in an oscillatory way. Right panel: The average of the most recent (here: 3) positions W(t) of the stochastic descent (large filled circles) results in a favorable estimate (large empty circle) of the local minimum.

rate schedules η(t) → 0 can be found already in the seminal paper by Robbins and Monro [RM51], which introduced the concept of stochastic approximation in 1951, originally in the context of finding zeros of a function. Further references and more detailed discussions can be found in several textbooks, e.g. in [Bis95a, HTF01]. Robbins and Monro showed that schedules which satisfy

$$\lim_{t\to\infty} \eta(t) = 0 \quad \text{with} \quad \text{(I)} \;\; \lim_{T\to\infty} \sum_{t=0}^{T} \eta(t)^2 < \infty \quad \text{and} \quad \text{(II)} \;\; \lim_{T\to\infty} \sum_{t=0}^{T} \eta(t) = \infty \qquad (5.16)$$

facilitate convergence. Intuitively, the first condition (I) states that η(t) has to decrease fast enough in order to achieve a truly stationary configuration with $\lim_{t\to\infty} \Delta W(t) = 0$. However, enforcing η(t) → 0 too rapidly would result in trivial stationarity at arbitrary positions in W-space. Therefore, condition (II) implies that the decrease is slow enough so that the entire search space can be explored efficiently.

Simple schedules which reduce the learning rate asymptotically like η(t) ∝ 1/t for large t satisfy both conditions in (5.16). This relates to the well-known results that $\sum_{n=1}^{\infty} n^{-2} = \pi^2/6$ while $\sum_{n=1}^{\infty} n^{-1} \to \infty$. Just one possible (popular) realization of such a decrease is of the form

$$\eta(t) = \frac{a}{b+t} \quad \text{with constant parameters } a, b > 0.$$
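Conditions (I) and (II) can be illustrated numerically for the schedule η(t) = a/(b + t); the parameter values a = b = 1 are an arbitrary choice for the demonstration:

```python
import math

def eta(t, a=1.0, b=1.0):
    """Decaying learning-rate schedule eta(t) = a / (b + t)."""
    return a / (b + t)

# Condition (I): the partial sums of eta(t)^2 stay bounded;
# for a = b = 1 they approach pi^2 / 6 from below
sum_sq = sum(eta(t) ** 2 for t in range(100_000))

# Condition (II): the partial sums of eta(t) keep growing like a * ln(T)
sum_small = sum(eta(t) for t in range(1_000))
sum_large = sum(eta(t) for t in range(100_000))
```

The squared sums converge while the plain sums grow without bound, which is exactly the balance between stationarity (I) and sufficient exploration (II).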
Various schedules which satisfy the conditions (I) and (II) of Eq. (5.16) can be considered, including power laws η(t) ∝ t^{−β}, logarithmic schedules like η(t) ∝ 1/(t ln t), or other explicitly time-dependent schemes, see for instance [DM92] and references therein. Stochastic gradient descent is arguably the most popular basic scheme for the training of neural networks, including systems with many layers in the context of Deep Learning.

5.2.4 Practical aspects and modifications

Numerous alternative approaches or modifications of the gradient based schemes have been suggested and are of great practical relevance, in particular in the context of Deep Learning. Here, only a few can be mentioned and explained briefly. Note that some of these concepts can also be useful in batch gradient descent.

SGD-training in epochs: In practice, we do not have to draw a random example from D independently at each time step. Most frequently, updates are organized in epochs, e.g. by generating a random permutation of {1, 2, . . . , P} and presenting the entire D in this order before moving on to the next epoch with a novel randomized order of examples.

Mini-batch training: Instead of performing the stochastic approximation with respect to a single example, a random subset of D can be employed, replacing the full gradient ∇_W E by partial sums in each training step. In a sense, this strategy retains the advantages of SGD, i.e. lower computational costs and the introduction of noise, but yields more reliable estimates of the gradient. The size of the mini-batches constitutes a hyperparameter which can be tuned in practice to achieve good performance and efficiency.

Averaged SGD: While training with constant learning rate η will not result in converging behavior W(t) → W*, one can expect the iterates W(t) to approach the vicinity of a local minimum and to assume more or less random positions centered around W*.
This can be exploited by considering a (potentially moving) average of the form

$$W^{\mathrm{av}}(t) = \frac{1}{k} \sum_{j=0}^{k-1} W(t-j),$$

which takes into account the last k iterations of SGD. In the simplest setting with k = t, the average is performed over all update steps up to t. As illustrated in Fig. 5.7 (right panel), the averaged W^av is expected to be closer to the local minimum than the individual W(t), once the training has reached the vicinity of the optimum. Averaging SGD for faster and smoother convergence was originally suggested and studied in [PJ92,Rup88].
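The three modifications above can be combined in one short sketch: epoch-wise presentation with a fresh random permutation, mini-batch gradient estimates, and a moving average over the last k iterates. The toy regression problem and all parameter values are illustrative assumptions:

```python
import numpy as np
from collections import deque

def minibatch_sgd_averaged(grad_e, W0, P, batch_size, eta, epochs, k, rng):
    """Epoch-wise mini-batch SGD with a moving average over the last k
    iterates, W_av(t) = (1/k) * sum_{j=0}^{k-1} W(t - j)."""
    W = W0.copy()
    recent = deque(maxlen=k)                     # stores the last k iterates
    for _ in range(epochs):
        order = rng.permutation(P)               # new random order per epoch
        for start in range(0, P, batch_size):
            batch = order[start:start + batch_size]
            g = np.mean([grad_e(W, mu) for mu in batch], axis=0)
            W = W - eta * g                      # partial-sum gradient estimate
            recent.append(W.copy())
    return np.mean(recent, axis=0)               # averaged iterate W_av

# Toy problem (assumed for illustration): realizable linear regression
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
w_true = np.array([2.0, -1.0])
tau = X @ w_true
grad_e = lambda W, mu: (X[mu] @ W - tau[mu]) * X[mu]

W_av = minibatch_sgd_averaged(grad_e, np.zeros(2), P=40, batch_size=8,
                              eta=0.2, epochs=300, k=10, rng=rng)
```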
Local learning rates: For both batch and stochastic gradient descent, it has been suggested to use local learning rates for different layers, nodes, or even individual weights in the network. As an early, simple rule of thumb, Plaut et al. [PNH86] suggest using local learning rates inversely proportional to the fan-in of the given neuron, i.e. the number of units it receives input from, see the discussion in [HKP91]. More sophisticated methods make use of the local properties of the cost function in terms of second derivatives, as motivated by Newton's method [Fle00, HKP91]. Frequently, only the diagonal elements of the local Hessian are used to compute an individual learning rate for the update of W_i which is, for instance, inversely proportional to $\partial^2 E / (\partial W_i)^2$, see [BL89] or [HKP91] for further references. Note that gradient based algorithms with local or even individual learning rates do not follow the steepest descent in E anymore. However, they still realize a descent procedure, see the discussion in the Appendix A.4.

Momentum: Already in [RHW86], a modification of simple gradient descent was suggested in which the update contains a memory term representing information about recently performed updates. In its simplest form, the update is a linear combination of the (stochastic) gradient term and the previous update:

$$\Delta W(t) = -\eta\, \nabla_W e^{\mu}(W(t)) + \alpha\, \Delta W(t-1) \quad \text{with } \alpha > 0. \qquad (5.17)$$

The decay factor α controls the influence of previous update steps on the current change of W. Obviously, momentum could also be incorporated in batch gradient descent. The idea is to overcome flat regions where ∇_W E ≈ 0 by keeping the momentum of previous downhill moves. Furthermore, momentum should mitigate oscillatory behavior of the updates when E displays anisotropic curvatures.
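The momentum update (5.17) differs from plain SGD only by the memory term; a minimal sketch, with the toy problem and parameter values as illustrative assumptions:

```python
import numpy as np

def sgd_momentum(grad_e, W0, P, eta, alpha, T, rng):
    """SGD with momentum, Eq. (5.17):
    Delta W(t) = -eta * grad e^mu(W(t)) + alpha * Delta W(t-1)."""
    W = W0.copy()
    dW = np.zeros_like(W)                        # previous update Delta W(t-1)
    for t in range(T):
        mu = rng.integers(P)
        dW = -eta * grad_e(W, mu) + alpha * dW   # memory of recent updates
        W = W + dW
    return W

# Toy problem (assumed for illustration): realizable linear regression
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
w_true = np.array([0.5, 1.5, -1.0])
tau = X @ w_true
grad_e = lambda W, mu: (X[mu] @ W - tau[mu]) * X[mu]

W_end = sgd_momentum(grad_e, np.zeros(3), P=50, eta=0.05, alpha=0.5,
                     T=8000, rng=rng)
```

With decay factor α, the accumulated step behaves like a gradient step with effective rate η/(1 − α) in flat regions, which is the intuition behind overcoming plateaus.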
Adaptive learning rate schedules: The design and optimization of learning rate schedules and schemes for the automated adaptation of η(t) in SGD or η in batch algorithms plays a key role in efficient training prescriptions. Frequently, the learning rate adaptation is combined with the concept of momentum. Prominent examples are algorithms termed AdaGrad, RMSProp, Adam, or variance-based SGD (vSGD). For a first introduction and further references, Section 8.5 in [GBC16] can serve as a starting point. A systematic comparison of several popular schemes can be found in [LBV17], there in the context of gradient-based LVQ training.

5.3 Objective functions

So far we have discussed the training of layered networks based on the quadratic deviation (5.8) which, arguably, constitutes the most prominent cost function in the context of regression. However, insights into the actual target problem,
heuristic assumptions or concrete statistical models of the observations may motivate the use of alternative objective functions in the training process. Moreover, the use of differentiable neural networks for classification tasks motivates the use of cost functions which are designed for this particular purpose. In the following we briefly discuss a few important examples of cost functions for regression and classification based on layered neural networks with differentiable activation functions.

5.3.1 Cost functions for regression

The very intuitive quadratic deviation or SSE cost function appears plausible and suitable for a variety of regression problems. Nevertheless, a variety of alternative objective functions have been suggested in the literature that can be optimized by gradient descent or similar procedures.

Heuristic cost functions: Numerous heuristic schemes have been suggested which adjust an instantaneous objective function while training proceeds. The goal could be to smooth the costs initially by levelling out details, which could help to avoid regions that contain unfavorable local minima. Gradually, more and more details are re-introduced and, eventually, the genuine objective is optimized [HKP91].

As just one example, Makram-Ebeid et al. suggest in [MSV89] the modified quadratic costs

$$E(W) = \sum_{\mu=1}^{P} \begin{cases} \gamma\,(\sigma^{\mu} - \tau^{\mu})^2 & \text{if } \sigma^{\mu}\tau^{\mu} > 0 \\ (\sigma^{\mu} - \tau^{\mu})^2 & \text{if } \sigma^{\mu}\tau^{\mu} \le 0 \end{cases} \qquad (5.18)$$

with the parameter γ ∈ [0, 1] increasing during the training process, e.g. according to an explicit time-dependence γ(t). For γ = 0, deviations (σ^µ − τ^µ)² do not contribute to the costs if the output has the correct sign. Thus, training will initially focus on achieving agreement in terms of sign(σ^µ) = sign(τ^µ). As γ → 1, the system is eventually fine-tuned to achieve σ^µ ≈ τ^µ for all µ.
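The γ-weighted costs (5.18) are a direct case distinction on the sign of σ^µ τ^µ; a minimal sketch (the example values are assumptions for illustration):

```python
import numpy as np

def modified_sse(sigma, tau, gamma):
    """Modified quadratic costs, Eq. (5.18): squared deviations are
    down-weighted by gamma in [0, 1] whenever the output already has the
    correct sign (sigma * tau > 0); gamma = 1 recovers the plain SSE."""
    sq = (sigma - tau) ** 2
    weights = np.where(sigma * tau > 0, gamma, 1.0)
    return float(np.sum(weights * sq))

sigma = np.array([0.5, -0.5])   # one output with correct, one with wrong sign
tau = np.array([1.0, 1.0])
```

For γ = 0 only the sign error (second component, cost (−0.5 − 1)² = 2.25) contributes; for γ = 1 the full SSE 0.25 + 2.25 = 2.5 is obtained.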
Minkowski-r errors: In the following we focus on a class of cost functions that can be derived from a noise model which is assumed to describe the statistical properties of the data at hand. The popular quadratic deviation can be explicitly motivated by assuming Gaussian distributed training labels. These could result from, e.g., additive Gaussian noise corrupting the true target values in the available data, as in Eq. (2.12). In Section 2.2.2 we have seen in the specific example of linear regression that a corresponding Maximum Likelihood approach leads immediately to the SSE-criterion (2.5).

Starting from different assumptions about the statistical properties leads to specific choices of the cost function. Assume, for instance, that the training
labels in a regression problem deviate from the true target function by independent noise terms η^µ with

$$P(\eta^{\mu}) \propto e^{-\beta\,|\eta^{\mu}|^{r}} \quad \text{with } r > 0, \qquad (5.19)$$

which is normalized such that $\int P(\eta^{\mu})\, d\eta^{\mu} = 1$. Obviously, we recover a Gaussian density with a β-dependent variance for r = 2. The optimization of the corresponding Maximum Likelihood criterion introduces a cost function of the form

$$E = \frac{1}{P} \sum_{\mu=1}^{P} \left| \sigma^{\mu}(W) - \tau^{\mu} \right|^{r}, \qquad (5.20)$$

where terms independent of the network parameters W have been omitted. This objective function is referred to as the Minkowski-r error in the literature [HB87], see also [Bis95a]. We note again that for r = 2 the familiar MSE criterion is recovered; the special case of r = 1 corresponds to the so-called Manhattan distance or city block metric⁶ |σ − τ|. Intuitively, costs with r < 2 will be less sensitive to outliers, i.e. to examples with very large |σ − τ|, than the conventional SSE.

5.3.2 Cost functions for classification

Heuristically, we can apply networks with differentiable activation functions and outputs also for classification, retaining regression type cost functions like the simple quadratic deviation or (5.18) in the training. In the simple case of two classes we could perform one additional threshold operation on a single output of the trained network to obtain a crisp binary classifier. Similar ideas can be applied to multi-class problems.

A more systematic approach realizes network responses that can be interpreted as a probabilistic assignment of the input vector to one of the classes. Here we follow to a large extent the presentation in [Bis95a]. First, we restrict ourselves to the case of two classes, here represented by target values τ^µ ∈ {0, 1}, which correspond to crisp training labels in the simplest case. Moreover, we assume that also the output of the network satisfies 0 ≤ σ(ξ^µ) ≤ 1, as for instance realized by a proper sigmoidal output activation.
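Returning briefly to the Minkowski-r error, Eq. (5.20): the reduced sensitivity to outliers for r < 2 is easy to see numerically. The example values are assumptions for illustration:

```python
import numpy as np

def minkowski_r_error(sigma, tau, r):
    """Minkowski-r error, Eq. (5.20): E = (1/P) sum_mu |sigma^mu - tau^mu|^r.
    r = 2 recovers the quadratic (MSE) criterion, r = 1 the city block metric."""
    return float(np.mean(np.abs(sigma - tau) ** r))

# One outlier at |sigma - tau| = 5: it contributes linearly for r = 1,
# but quadratically for r = 2
sigma = np.array([0.0, 1.0, 5.0])
tau = np.zeros(3)
e_r1 = minkowski_r_error(sigma, tau, 1)   # (0 + 1 + 5) / 3
e_r2 = minkowski_r_error(sigma, tau, 2)   # (0 + 1 + 25) / 3
```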
After training, we want to interpret σ(ξ) as the class-membership probability p(τ = 1|ξ) = σ(ξ) and p(τ = 0|ξ) = 1 − σ(ξ). This can be written conveniently in the compact form

$$p(\tau|\xi) = \sigma(\xi)^{\tau} \left[ 1 - \sigma(\xi) \right]^{1-\tau}. \qquad (5.21)$$

If we consider this as our model for the occurrence of a label τ in the data set and we furthermore assume that the examples are generated independently, the

⁶Here, we ignore the subtle difficulty that |x|^r is not differentiable in x = 0 for r ≤ 1.
likelihood of generating a given set of labels {τ^µ}_{µ=1}^P with the network reads

$$\prod_{\mu=1}^{P} (\sigma^{\mu})^{\tau^{\mu}} \left[ 1 - \sigma^{\mu} \right]^{1-\tau^{\mu}} \quad \text{with the shorthand } \sigma^{\mu} = \sigma(\xi^{\mu}).$$

Maximizing this likelihood by choice of the network parameters W is equivalent to minimizing the negative log-likelihood

$$E(W) = -\sum_{\mu=1}^{P} \left[ \tau^{\mu} \ln \sigma^{\mu} + (1 - \tau^{\mu}) \ln(1 - \sigma^{\mu}) \right]. \qquad (5.22)$$

In contrast to the SSE, we omit the irrelevant pre-factor 1/P here for simplicity. The cost function compares the outputs σ^µ, which we interpret as probabilities, with the targets τ^µ in terms of their cross entropy [Hop87,BW88,SLF88]. Note that it is bounded from below by the entropy

$$E_o = -\sum_{\mu=1}^{P} \left[ \tau^{\mu} \ln \tau^{\mu} + (1 - \tau^{\mu}) \ln(1 - \tau^{\mu}) \right],$$

which can only be achieved if all σ^µ = τ^µ. Since the bound E_o does not depend on W, we can subtract it from the cross entropy and consider the equivalent objective function

$$E(W) - E_o = -\sum_{\mu=1}^{P} \left[ \tau^{\mu} \ln \frac{\sigma^{\mu}}{\tau^{\mu}} + (1 - \tau^{\mu}) \ln \frac{1 - \sigma^{\mu}}{1 - \tau^{\mu}} \right] \equiv D_{KL}(\tau\|\sigma) \ge 0. \qquad (5.23)$$

This is the so-called relative entropy or Kullback-Leibler divergence D_{KL}(τ||σ) between the specific probability distributions σ and τ, see for instance [Bis95a] for a discussion in the machine learning context.⁷

The cross entropy E(W), Eq. (5.22), or equivalently the Kullback-Leibler divergence D_{KL}, constitute differentiable objective functions of the adaptive parameters W and can be minimized by gradient-descent based or other optimization methods for a given data set D = {ξ^µ, τ^µ}_{µ=1}^P. This way we achieve a network with outputs σ(ξ) that can be interpreted as class membership probabilities.

Multi-class problems: The formalism can be extended to multi-class problems with targets τ_k^µ ∈ [0, 1], k ∈ {1, 2, . . . , C}, which satisfy $\sum_{k=1}^{C} \tau_k^{\mu} = 1$. In the simple case of crisp training labels we have that for each example exactly one τ_j^µ = 1, which indicates that ξ^µ is assigned to class j in the training data.
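The binary cross entropy (5.22) and its entropy bound E_o can be sketched directly; the numerical clipping inside the bound is an added safeguard implementing the convention 0 · ln 0 = 0 for crisp labels, not part of the text:

```python
import numpy as np

def cross_entropy(sigma, tau):
    """Binary cross entropy, Eq. (5.22):
    E = -sum_mu [ tau ln(sigma) + (1 - tau) ln(1 - sigma) ]."""
    return float(-np.sum(tau * np.log(sigma) + (1 - tau) * np.log(1 - sigma)))

def entropy_bound(tau):
    """Lower bound E_o (entropy of the targets); the clipping is a
    numerical safeguard realizing 0 * ln(0) = 0 for crisp labels."""
    t = np.clip(tau, 1e-12, 1 - 1e-12)
    return float(-np.sum(tau * np.log(t) + (1 - tau) * np.log(1 - t)))

tau_crisp = np.array([1.0, 0.0])
sigma_out = np.array([0.9, 0.1])
```

For crisp labels E_o vanishes, so the cross entropy itself equals the Kullback-Leibler divergence (5.23); for σ^µ = τ^µ the bound is attained.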
Obviously, we have to consider a network architecture and activations which can represent C assignment probabilities σ_k ∈ [0, 1] with $\sum_{k=1}^{C} \sigma_k = 1$. This can be achieved, for instance, with a layer of C output units with a so-called soft-max or normalized exponential activation, see also the following Sec. 5.4.

⁷Note that the KL-divergence is in general non-symmetric: D_{KL}(τ||σ) ≠ D_{KL}(σ||τ).
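A minimal sketch of the soft-max activation just mentioned (defined as Eq. (5.30) in Sec. 5.4); shifting the pre-activations by their maximum is a standard numerical-stability trick, an addition not taken from the text:

```python
import numpy as np

def softmax(x, beta=1.0):
    """Soft-max / normalized exponential, cf. Eq. (5.30):
    sigma_k = exp(beta * x_k) / sum_j exp(beta * x_j)."""
    z = np.exp(beta * (x - np.max(x)))   # shifting x leaves the ratios unchanged
    return z / np.sum(z)

x = np.array([1.0, 2.0, 3.0])
sigma = softmax(x)
```

The outputs sum to one, as required for the probabilistic interpretation; β → 0 equalizes all activations to 1/C, while large β singles out the unit with maximal pre-activation.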
Now the equivalent of cost function (5.23) becomes

$$D_{KL}(\tau\|\sigma) = -\sum_{\mu=1}^{P} \sum_{k=1}^{C} \tau_k^{\mu} \ln \frac{\sigma_k^{\mu}}{\tau_k^{\mu}}, \qquad (5.24)$$

which reduces to (5.23) in the binary case with C = 2, where σ_1 = 1 − σ_2 and τ_1 = 1 − τ_2. Given a network structure with C outputs σ_k as described above, we can determine the adaptive parameters of the system by minimization of the above differentiable cost function. All gradient-based or more sophisticated techniques discussed here can be applied.

5.4 Activation functions

When designing a neural network for a given task, the key step is the choice of the network architecture and size. The choice of activation functions is equally important, as it should reflect properties of the problem and the data. Obviously, the output unit (or units) should realize the appropriate range of possible responses in a regression problem. In classification, properly defined outputs should encode the crisp or probabilistic class assignments. The choice of activations in intermediate, hidden layers influences the complexity of the network and can be crucial for the success of training, e.g. by gradient descent or other techniques.

So far we have mainly considered threshold or sigmoidal activation functions, with the notable exception of simple linear units when constructing a universal function approximator in 5.1.2. A large variety of activation functions have been suggested and investigated in the literature, see e.g. [HKP91, Bis95a, GBC16]. In this section, only a few important and/or popular choices are presented.

In the following we refrain from including gain parameters, local thresholds or similar parameters in the description; the corresponding extensions are straightforward. Similarly, the range of activations can be trivially shifted and scaled: for example, a sigmoidal function h(x) with 0 ≤ h(x) ≤ 1 can be transformed as g(x) = 2h(x) − 1 with −1 ≤ g(x) ≤ 1.
It is understood that all functions given as g(x) in the following can be modified to a g(γx) + b if needed.

5.4.1 Sigmoidal and related functions

In Chapter 1 we motivated the use of sigmoidal activation functions as a rough approximation of biological neuron responses in a firing rate picture. A number of functions satisfy the conditions (1.2) or (1.4), see Fig. 5.8 for a few prominent examples. The left panel displays erf[x] (chain line, green), tanh[x] (dashed, blue), and the logistic function 1/(1 + exp[−x]). The latter was shifted and scaled to realize the range g(x) ∈ [−1, 1]. In the right panel, the Heaviside step function of the McCulloch Pitts neuron and a piecewise linear activation, which resembles a sigmoidal function, are shown.
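The three differentiable sigmoidals of Fig. 5.8 (left panel) share the same qualitative behavior; the short sketch below checks their odd symmetry and saturation (the chosen test points are arbitrary):

```python
import math

def logistic_scaled(x):
    """Logistic function, shifted and scaled to the range [-1, 1]:
    g(x) = 2 / (1 + exp(-x)) - 1, which equals tanh(x / 2)."""
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

# tanh, erf and the rescaled logistic: all odd, g(0) = 0, saturating at +/- 1
sigmoids = [math.tanh, math.erf, logistic_scaled]
```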
Figure 5.8: Sigmoidal and related activation functions. Left panel: three differentiable sigmoidal functions. Right panel: The limiting case of the McCulloch Pitts activation (Heaviside step function) and a piecewise linear function.

5.4.2 One-sided and unbounded activation functions

The simplest of all activation functions, the trivial identity g(x) = x, is unbounded. In the context of biologically inspired firing rate models this does not make sense, as there are no limits to the frequency of spike generation. However, in artificial networks, linear neurons are very often employed and indeed useful for specific tasks, e.g. as trainable output units attached to an otherwise more complex network for regression. Examples will be presented in Sec. 5.5.

Rectified Linear Unit and variations thereof: The so-called Rectified Linear Unit or ReLU activation [NH10], see Fig. 5.9 (left panel), has gained significant popularity in the context of Deep Networks.⁸ In its original form it corresponds to

$$g(x) = \max\{0, x\} = \begin{cases} 0 & \text{if } x < 0 \\ x & \text{if } x \ge 0 \end{cases} \quad \text{with} \quad g'(x) = \begin{cases} 0 & \text{if } x < 0 \\ 1 & \text{if } x > 0. \end{cases} \qquad (5.25)$$

We ignore here the subtlety that the ReLU function is not continuously differentiable in x = 0. Note that the function (5.25) has been known and used for a long time in various mathematical, technical and engineering fields, ranging from signal processing and filtering to finance mathematics. Depending on the context, it is known as the ramp function, hockey stick, or hinge function.

In the literature, several advantages are associated with the ReLU activation when compared to, for instance, sigmoidal activations [GBC16]:

- Obviously, the ReLU is computationally cheap, and so is its derivative.

⁸As stated in [GBC16]: "In modern neural networks, the default recommendation is to use the rectified linear unit . . . ".
Figure 5.9: Unbounded and one-sided activation functions. Left panel: Simple linear activation g(x) = x (dotted), Rectified Linear Unit ReLU (solid), Eq. (5.25), and leaky ReLU, Eq. (5.26), with a = 0.25 (dashed). Right panel: Exponential linear unit ELU (dashed, black), cf. Eq. (5.28), Swish (solid, green), Eq. (5.29), and Softplus (dotted, blue), Eq. (5.27).

- The ReLU is one-sided, i.e. g(x) = 0 for x < 0. As a consequence, a considerable fraction of units will typically display zero activity in a given network. In this sense, a network of rectified linear units realizes sparse activity, which is considered advantageous in many cases.

- When computing derivatives via the chain rule in a network of many layers, the multiplication of many derivatives |g′| < 1 causes the so-called problem of vanishing gradients. Supposedly, the problem is absent in ReLU networks, where g′(x) = 1 at least for x > 0.

- Several empirical comparisons of networks with ReLU and other activations have been published, in which ReLU networks display favorable training behavior and performance. Recently, theoretical studies of model situations seem to support these claims [OSB20].

In the so-called leaky ReLU (LReLU) activation [HZRS15], the constant zero for x < 0 is replaced by a linear dependence, usually with a slope 0 < a < 1, see also the left panel of Fig. 5.9. Hence, it reads

$$g(x) = \begin{cases} a\,x & \text{if } x < 0 \\ x & \text{if } x \ge 0 \end{cases} \quad \text{with} \quad g'(x) = \begin{cases} a & \text{if } x < 0 \\ 1 & \text{if } x \ge 0. \end{cases} \qquad (5.26)$$

Differentiable, unbounded activations: Several differentiable functions which maintain or approximate the linear behavior g(x) = x for large positive arguments x > 0 have been suggested in the literature. Fig. 5.9 (right panel) displays three examples:

- the so-called Softplus function [GBB11]

$$g(x) = \ln(1 + \exp[x]), \qquad (5.27)$$
- the Exponential Linear Unit (ELU) [CUH16] with

$$g(x) = \begin{cases} \exp[x] - 1 & \text{for } x < 0 \\ x & \text{for } x \ge 0, \end{cases} \qquad (5.28)$$

- the Swish function [EYG92] with

$$g(x) = \frac{x}{1 + \exp[-x]}. \qquad (5.29)$$

Interestingly, the Swish activation is even non-monotonic and displays a minimum at a negative value of the argument. Several empirical studies seem to show favorable convergence behavior of gradient descent based training and improved performance of Swish-networks compared to other activation functions [EYG92,VRV+20].

5.4.3 Exponential and normalized activations

Frequently, units within a particular layer are coupled, e.g. through a normalization; this deviates from the concept of activation by purely local synaptic interaction.

Softmax function: The most prominent example is the representation of assignment probabilities in an output layer of C units {σ_k}_{k=1}^C. As outlined in Sec. 5.3.2, the activations should obey 0 ≤ σ_k ≤ 1 individually. In addition, $\sum_{k=1}^{C} \sigma_k = 1$ is required to justify their interpretation as probabilities. An obvious and popular choice is to set $\sigma_k = g_{\beta}(\{x_k\}_{k=1}^{C})$ with the C pre-activations x_k and the so-called soft-max or normalized exponential activation:

$$g_{\beta}\left(\{x_k\}_{k=1}^{C}\right) = \frac{\exp[\beta x_k]}{\sum_{j=1}^{C} \exp[\beta x_j]}. \qquad (5.30)$$

Note that the required normalization couples the units σ_k and their states cannot be interpreted as independently activated by synaptic interaction: each unit depends on all pre-activations in the layer. For β → 0 all activations will be equal (σ_k = 1/C), while for β → ∞ the unit with maximum x_k is singled out with σ_k = 1.

Radial Basis Functions (RBF): Another popular class of activations also deviates from the familiar concept of synaptic interactions. The RBF activation of a given unit σ with input from neurons {s_j}_{j=1}^L, which are concatenated in the vector s ∈ R^L, is computed as

$$\sigma = g\left( \| s - c \| \right) \quad \text{with } c \in \mathbb{R}^{L}. \qquad (5.31)$$

The activation depends on the Euclidean distance of the activation vector s from the adaptive center vector c.
The term Radial Basis Function refers to the fact that σ is isotropically centered around c. | The+Shallow+and+the+Deep_Page_144_Chunk4043 |
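Before turning to concrete RBF choices, the one-sided and unbounded activations of Sec. 5.4.2, Eqs. (5.25)-(5.29), can be summarized in one short sketch (the test points are arbitrary illustration values):

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit, Eq. (5.25): g(x) = max{0, x}."""
    return np.maximum(0.0, x)

def leaky_relu(x, a=0.25):
    """Leaky ReLU, Eq. (5.26): slope a (0 < a < 1) for negative arguments."""
    return np.where(x < 0, a * x, x)

def softplus(x):
    """Softplus, Eq. (5.27): g(x) = ln(1 + exp(x)), a smooth version of ReLU."""
    return np.log1p(np.exp(x))

def elu(x):
    """Exponential Linear Unit, Eq. (5.28): exp(x) - 1 for x < 0, x otherwise."""
    return np.where(x < 0, np.expm1(x), x)

def swish(x):
    """Swish, Eq. (5.29): g(x) = x / (1 + exp(-x)); non-monotonic for x < 0."""
    return x / (1.0 + np.exp(-x))
```

All of them approach g(x) = x for large positive arguments, while only the leaky ReLU and Swish remain non-zero for negative inputs.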
A most prominent example is the Gaussian RBF, which is frequently referred to as the RBF:

$$\sigma = \exp\left[ -\beta\,(s - c)^2 \right] \quad \text{with parameter } \beta > 0. \qquad (5.32)$$

Frequently, normalized Gaussian RBF are considered in a layer of hidden or output units σ_k (k = 1, 2, . . . , K):

$$\sigma_k(s) = \frac{\exp\left[ -\beta (s - c_k)^2 \right]}{\sum_{j=1}^{K} \exp\left[ -\beta (s - c_j)^2 \right]}. \qquad (5.33)$$

In the limit β → ∞, the normalized Gaussian RBF singles out the unit with smallest (s − c_k)², i.e. with the closest center vector for a given s.

A popular network architecture for regression comprises a potentially high-dimensional input layer, a single hidden layer with K units (5.33), and a linear output unit with adjustable weights. It is described briefly in Sec. 5.5.

5.4.4 Remark: universal function approximation

In Sec. 5.1.2 we constructed a piecewise constant function approximator using sigmoidal and linear units. It is interesting to note that it is often straightforward to extend these considerations to other activation functions. Note, for instance, that the combination of two ReLU units, equipped with suitable local thresholds and gain parameters, can replace a piecewise linear activation of the type displayed in Fig. 5.8 (right panel):

$$\max\left\{0, \frac{x-a}{b-a}\right\} - \max\left\{0, \frac{x-b}{b-a}\right\} = \begin{cases} 0 & \text{for } x < a \\ \dfrac{x-a}{b-a} & \text{for } a \le x < b \\ 1 & \text{for } x \ge b. \end{cases}$$

Using the resulting piecewise linear activation, we can implement the selection of ROI, cf. Sec. 5.1.2, in analogy to the sigmoidal activations assumed there. Hence, networks of ReLU and/or piecewise linear units also constitute universal approximators. Similar arguments can be provided for large families of activation functions. Similarly, units with normalized RBF activation, Eq. (5.33), can be readily used to define ROI in input space and facilitate universal function approximation when combined with piecewise constant representations as in Sec. 5.1.2.
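The two-ReLU construction above can be verified directly; the interval endpoints a = 1, b = 3 are arbitrary illustration values:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def ramp(x, a, b):
    """Difference of two scaled, shifted ReLUs, cf. Sec. 5.4.4: equals
    0 for x < a, (x - a)/(b - a) for a <= x < b, and 1 for x >= b."""
    return relu((x - a) / (b - a)) - relu((x - b) / (b - a))
```

For x ≥ b both ReLUs are active and their difference is exactly (b − a)/(b − a) = 1, reproducing the saturated branch of the piecewise linear activation.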
5.5 Specific architectures

In this section we consider a selection of network architectures which play a role in practical applications and can be handled with the algorithmic approaches that we have studied so far. In the following subsection, specific shallow networks are introduced. In Sec. 5.5.2 we briefly discuss the design and training of multilayered deep neural networks.
5.5.1 Popular shallow networks

So far, we have developed training prescriptions in terms of shallow, feed-forward architectures with only one or very few hidden layers. In particular, we have seen that a single hidden layer is sufficient to provide universal function approximation. An important example that we already discussed is the parity machine for classification, with hidden and output units of the McCulloch Pitts type.⁹

A soft version of the committee machine, cf. Sec. 4.2, with sigmoidal activation can be shown to provide universal function approximation in the context of regression, see Sec. 5.1.2. Analogous proofs exist for similar architectures with alternative hidden activations.

Radial Basis Function networks

Radial basis functions (RBF) as activation functions have been addressed in Sec. 5.4 already. Frequently, N − K − 1 architectures with K RBF hidden units and linear output units are referred to as RBF Networks [BL88,MD89b]. In the popular case of Gaussian RBF and a single linear output with bias w_o, we have the input-output relation

$$\sigma(\xi) = \sum_{j=1}^{M} w_j\, \phi_j(\xi) + w_o \quad \text{with} \quad \phi_j(\xi) = \exp\left[ -\frac{(\xi - c_j)^2}{2\sigma_j^2} \right]. \qquad (5.34)$$

This corresponds to Eq. (5.32) with unit-specific parameters β_j = 1/(2σ_j²). Each unit is equipped with an adaptive vector c_j ∈ R^N which defines the center of the receptive field. The response of the unit to a given input ξ ∈ R^N depends on its Euclidean distance from the center vector. Here, we also include an adaptive local bias w_o ∈ R in the activation. Instead, we could introduce an additional hidden unit with constant activation φ_o(ξ) = 1 for all ξ, similar to the clamped input employed in Eqs. (2.4) and (3.4).

RBF networks of this form are universal approximators, see the general discussion in [Bis95a] and, specifically, [GP90]. Hence, we can employ networks of the type (5.34) for general regression tasks.
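The forward pass (5.34) is straightforward to sketch; the single-unit toy setup below uses arbitrary illustration values:

```python
import numpy as np

def rbf_net(xi, centers, widths, w, w0):
    """RBF network output, Eq. (5.34):
    sigma(xi) = sum_j w_j * exp(-(xi - c_j)^2 / (2 * s_j^2)) + w0."""
    d2 = np.sum((centers - xi) ** 2, axis=1)   # squared distances to the c_j
    phi = np.exp(-d2 / (2.0 * widths ** 2))    # Gaussian hidden activations
    return float(w @ phi + w0)

# Toy setup (illustrative assumption): one Gaussian unit centered at the origin
centers = np.array([[0.0, 0.0]])
widths = np.array([1.0])
w, w0 = np.array([2.0]), 1.0
```

At the center the unit responds maximally (output w_1 + w_o); far away from all receptive fields the output approaches the bias w_o.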
The complexity of the RBF network can be increased by allowing for adaptive (inverse) covariance matrices in the Gaussian activations:

$$\phi_j(\xi) = \exp\left[ -(\xi - c_j)^{\top} \Sigma_j^{-1} (\xi - c_j) \right]. \qquad (5.35)$$

Nominally, this introduces N(N + 1)/2 additional adaptive parameters per symmetric matrix Σ_j ∈ R^{N×N}. In turn, fewer units might be required to achieve the same accuracy and performance as a larger network with hidden unit activations of the form (5.34). Modifications can be considered, such as the restriction to diagonal Σ_j or the pooling of covariances with a single adaptive Σ = Σ_1 = . . . = Σ_M.

⁹The parity machine is, strictly speaking, not an (N − K − 1) architecture, see Sec. 4.2.
Figure 5.10: Illustration of an Extreme Learning Machine (ELM). The N-dimensional input ξ ∈ R^N is connected to a hidden layer of M ≫ N units σ_m = g(w_m · ξ) by fixed (non-adaptive) random input-to-hidden weights w_m ∈ R^N (m = 1, 2, . . . , M). In the example, the single output unit is linear, $S(\xi) = \sum_{m=1}^{M} v_m \sigma_m$ with hidden-to-output weights v_m, e.g. for the purpose of regression. Extensions to a threshold unit or a full layer of outputs are straightforward.

The RBF architecture could be used for classification tasks by attaching a single or multiple output classifier to the hidden layer {φ_j(ξ)}_{j=1}^M. A more natural approach is to normalize the M activations in the hidden layer as in (5.33) and interpret them as probabilistic class assignments:

$$\tilde{\phi}_j(\xi) = \frac{\phi_j(\xi)}{\sum_{k=1}^{M} \phi_k(\xi)} \quad \text{with } \phi_j \text{ from (5.34) or (5.35).} \qquad (5.36)$$

These non-local activations satisfy $\tilde{\phi}_j \in [0, 1]$ and $\sum_j \tilde{\phi}_j = 1$, and hence we can train the system according to a classification specific cost function like (5.23) for binary problems and (5.24) in a multi-class setting.

Remark: RBF systems for classification display a striking similarity with prototype-based classifiers. In an LVQ system as presented in Chapter 6, the prototypes correspond to the center vectors c_j and the softmax scheme of the classifier would be replaced by a crisp Nearest Prototype Classification (NPC). Similarly, the matrix Σ_j^{−1} in (5.35) is the equivalent of a prototype-specific, local relevance matrix Λ_j in Eq. (6.13), see Sec. 6.2.2.

Extreme Learning Machines

Frank Rosenblatt already suggested randomized connections from an input layer to a so-called association layer, which then was classified by threshold units in the Mark I realization of the perceptron [HRM+60], see Section 3, Fig. 3.1.
More recently, random projections have become popular as a technique to achieve sparse, low-dimensional representations of high-dimensional data sets, see for instance [BM01,LHC06]. A specific feed-forward architecture, termed the Extreme Learning Machine (ELM), was introduced in 2004 by Huang et al. [HZS06]. It is schematically
134 5. NETWORKS FOR REGRESSION AND CLASSIFICATION

represented in Fig. 5.10. The random mapping of inputs to a high-dimensional hidden layer makes it, for instance, possible to separate classes or perform regression tasks with a single linear (threshold) unit, which would not be sufficient to realize the target in terms of the original data. This is similar in spirit to the basic idea of the Support Vector Machine, cf. Sec. 4.3. The relation of ELM and SVM was first discussed in [FV10].

Figure 5.11: An example of a shallow auto-encoder network: The encoder maps the original input ξ ∈ R^N via weights w_m ∈ R^N (m = 1, 2, ..., M) to latent variables y_m = g(w_m · ξ); the decoder with weights v_m ∈ R^N (m = 1, 2, ..., M) yields the reconstruction ξ_rec = ∑_{m=1}^{M} y_m v_m. The N-dim. inputs are represented in a hidden layer with M < N units. Here, N (linear) output units represent the target reconstruction ξ_rec ∈ R^N; the extension to non-linear output units is obviously possible.

Shallow autoencoders

A particular type of feed-forward network can be used to find a low-dimensional representation of high-dimensional feature vectors {ξ^µ}_{µ=1}^{P}. To this end, we can employ a so-called auto-encoder as illustrated in Figure 5.11. The encoder represents N-dim. input vectors ξ in a single hidden layer of M < N units. The output ξ_rec is again N-dimensional and the goal is to minimize the reconstruction error in the decoder:

E_rec = (1/2) ∑_{µ=1}^{P} ( ξ^µ_rec − ξ^µ )².   (5.37)

Training amounts to the (e.g. gradient based) minimization of E_rec with respect to the network weights w_m, v_m ∈ R^N. Obviously, the mathematical structure is the same as in function approximation, with the special target to approximate the identity function R^N → R^N. The resulting M-dimensional latent variables {y^µ}_{µ=1}^{P} serve as the low-dimensional representations of high-dimensional data. In Fig. 5.11 the output units are assumed to be linear; the possible generalization to non-linear reconstructions is straightforward.
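To make the training objective concrete, here is a minimal numpy sketch of a shallow linear auto-encoder trained by plain batch gradient descent on E_rec of Eq. (5.37); the dimensions, random data, learning rate and iteration count are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, P = 5, 2, 100                      # input dim, hidden dim (M < N), sample count
X = rng.standard_normal((P, N))          # feature vectors xi^mu (illustrative data)

W = 0.1 * rng.standard_normal((M, N))    # encoder weights w_m
V = 0.1 * rng.standard_normal((N, M))    # decoder weights v_m

def e_rec(X, W, V):
    """Reconstruction error of Eq. (5.37), for linear g and linear outputs."""
    R = (X @ W.T) @ V.T - X              # xi_rec - xi for all P examples
    return 0.5 * np.sum(R ** 2)

eta = 1e-3
e0 = e_rec(X, W, V)
for _ in range(300):                     # plain gradient descent on E_rec
    Y = X @ W.T                          # latent variables y^mu, shape (P, M)
    R = Y @ V.T - X                      # residuals, shape (P, N)
    gV = R.T @ Y                         # dE_rec / dV
    gW = (R @ V).T @ X                   # dE_rec / dW via the chain rule
    V -= eta * gV
    W -= eta * gW
e1 = e_rec(X, W, V)                      # the error decreases during training
```

With both activations linear, the trained weights span (approximately) the subspace of the leading principal components, as discussed in the following paragraph.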
One special case is particularly interesting: for linear activations g in the hidden layer and linear reconstruction ξ_rec, the minimization of the reconstruction error (5.37) is analogous
to the well-known Principal Component Analysis, e.g. [Bis95a,HKP91,HTF01]. More precisely, the weight vectors w_m which minimize E_rec span the same sub-space as the M leading principal components of the data set.

Using sigmoidal or other non-trivial hidden and/or output activations in the auto-encoder network generalizes the concept of PCA to non-linear low-dimensional representations and reconstructions. In the next section we will also briefly mention deep auto-encoders, where several hidden layers represent and process the data internally [GBC16].

Obviously we can exploit the latent variables of an auto-encoder immediately for the purpose of visualizing complex, high-dim. data if M = 2 or 3. Moreover, after having trained the auto-encoder to realize a faithful internal representation, one could attach a feed-forward classifier or regression network to the hidden layer and apply supervised learning to realize a target function, e.g. of the type R^M → R.

5.5.2 Deep and convolutional neural networks

At a glance, the term Deep Learning refers to the use of feed-forward neural networks with many hidden layers. While this over-simplified definition ignores several aspects of Deep Learning, e.g. the potential use of feedback and recurrent systems, we will focus here on deep feed-forward architectures. As phrased by Goodfellow, Bengio and Courville [GBC16]:

[There is no] consensus about how much depth a model requires to qualify as 'deep'. However, deep learning can safely be regarded as the study of models that either involve a greater amount of composition of learned functions or learned concepts than traditional machine learning does.

The enormous success of Deep Learning and its popularity after, say, 2010, can be attributed to a number of developments, including the following:

◦ The availability of large amounts of data, e.g.
from image data bases or large collections of commercial data in e-commerce

◦ The pre-training of deep networks or sub-networks on unspecific data bases, followed by task-specific fine tuning (transfer learning)

◦ The ever-increasing computational power, made available through supercomputers or local solutions (e.g. GPU)

◦ The refinement of (mostly) gradient-based training techniques, e.g. w.r.t. the automatic adaptation of learning rates or efficient regularization techniques such as dropout

◦ The exploitation and combination of concepts that had been developed earlier for shallow networks, e.g. the ideas of weight-sharing or momentum.
Figure 5.12: A 3 × 3 'convolutional' filter kernel is applied to a 3 × 3 image, here zero-padded. The 3 × 3 kernel with weights denoted in the illustration is centered on every pixel of the image to obtain the pixel values in the 3 × 3 convolved image. Note that the operation does not reduce the dimension of the data.

◦ The use of activation functions that (supposedly) improve the efficiency and performance of networks in training and working phase, for example the Rectified Linear Unit (ReLU), cf. Sec. 5.4.2

◦ The consideration of particular architectures, e.g. Convolutional Neural Networks (CNN), designed for the analysis of specific types of data, such as images, language, time series or other data with a low-dimensional spatial, temporal or functional structure.

Quite a few of these concepts had been known well before the rise of Deep Learning. Ultimately, their combination made the great success of Deep Networks possible, see e.g. [Sch15,LBH18].

Convolutional and pooling layers

All convolutional neural networks (CNN) share some characteristic design features which facilitate the processing of structured data. For instance, in images or time series data, we expect localized information: Time series like the €/$ exchange rate will display short-time correlations that decrease over time. Likewise, pixels in a photographic image are expected to be similar in intensity and color if they belong to the same object, while far away pixels may be totally unrelated. In the following we will discuss CNN in the context of images; the transfer to other structured data is straightforward.

The localization is accounted for by connecting units in a first layer to limited neighborhoods or patches of the input data, see Fig. 5.12 for an illustration. By choice of the weights in such a filter kernel, nodes can implement a particular local operation or convolution.
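The filter operation of Fig. 5.12 can be sketched directly. The unnormalized box kernel below is an arbitrary illustrative choice; note also that what Deep Learning calls a 'convolution' is implemented here, as is common, as a cross-correlation (the kernel is not flipped).

```python
import numpy as np

def conv2d_same(img, kernel):
    """Apply a (2r+1) x (2r+1) filter kernel to a zero-padded image.

    As in Fig. 5.12, the kernel is centered on every pixel, so the output
    has the same shape as the input (the dimension is not reduced)."""
    r = kernel.shape[0] // 2
    padded = np.pad(img, r)                       # zero padding
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + 2 * r + 1, j:j + 2 * r + 1]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.arange(9.0).reshape(3, 3)                # a 3 x 3 toy 'image'
box = np.ones((3, 3))                             # unnormalized local averaging filter
out = conv2d_same(img, box)                       # again 3 x 3, cf. Fig. 5.12
```

Choosing other kernel weights (e.g. a difference filter) turns the same loop into an edge detector or another local operation.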
The weights can be adapted in the training process, e.g. by using gradient-based Backpropagation of Error for the entire network. (Footnote 10: The term is used somewhat loosely in the context of Deep Learning.)

Figure 5.13: The 3 × 3 convolved image from Fig. 5.12 is reduced to 2 × 2 pixels by applying a max-pooling (upper) or average pooling (lower) to all 2 × 2 patches in the 3 × 3 filtered image.

A number K of different filters is applied to all patches of the same size in the input data. As each filter applies the same operation, the number of adaptive weights depends only on K and on the dimension and type of the kernels, while it is independent of the dimension of the input. This basic concept of weight-sharing was already present in very early network models, see below for Fukushima's Neocognitron as an example.

The first convolutional layer in a CNN therefore represents the input data in terms of many versions of the image obtained through a potentially large set of adaptive filters. These may include, but are not limited to, (approximations of) intuitive operations like the detection of edges or other local patterns.

Most frequently, after convolution, a pooling operation is applied in a subsequent layer. Pooling reduces the dimensionality by combining, usually small, patches of nodes into a super-pixel. Popular examples replace plaquettes of 2×2 or 3×3 nodes by the average activation (average pooling) or by the maximum activity in the patch, see Fig. 5.13. Typically, the pooling nodes are hard-wired, i.e. not trainable, although one could for instance consider adaptive weighted averages for pooling.

After the first convolution and pooling, the input image is represented by a number of filtered and dimension-reduced versions. Frequently, convolutional and pooling layers are stacked in alternating fashion, yielding increasingly abstract representations of decreasing dimension.
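Both pooling variants can be sketched in a few lines. Note a deliberate simplification: Fig. 5.13 pools over all 2 × 2 patches (stride 1), whereas the sketch below implements the equally common non-overlapping variant in which the stride equals the patch size; the toy image is our own example.

```python
import numpy as np

def pool2d(img, size=2, mode="max"):
    """Reduce an image by combining non-overlapping size x size patches
    into one super-pixel (max pooling or average pooling)."""
    h, w = img.shape[0] // size, img.shape[1] // size
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = img[i * size:(i + 1) * size, j * size:(j + 1) * size]
            out[i, j] = patch.max() if mode == "max" else patch.mean()
    return out

img = np.array([[1., 2., 5., 6.],
                [3., 4., 7., 8.],
                [0., 0., 1., 1.],
                [0., 4., 1., 1.]])
mx = pool2d(img, 2, "max")       # maximum activity per 2 x 2 patch
av = pool2d(img, 2, "mean")      # average activation per 2 x 2 patch
```

The `if/else` inside the loop is where an adaptive weighted average would be substituted if one wanted trainable pooling nodes.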
Ultimately, one or several trainable dense layers, fully connected as in conventional feed-forward architectures, are employed to represent the target classification or regression.

A large number of similar network architectures have been developed and are made available in the public domain, see https://modelzoo.co for an example repository. Many of these networks have been pre-trained on more or less generic data sets and can be fine-tuned by the user for the specific task at hand. Despite the simplifications through weight-sharing and similar regularization techniques, Deep Networks often comprise a huge number of adaptive weights and consequently can be very data-hungry. In fact, it often appears surprising that heavily over-parameterized networks can be trained successfully at all and suffer less drastically from over-fitting effects than one might expect on theoretical grounds.

Figure 5.14: An early deep architecture (schematic) known as the Neocognitron, first introduced by K. Fukushima in 1980 (1979 in Japanese) [Fuk80]. Besides input and output (recognition), several layers of so-called S-cells and C-cells are stacked, see the text for details. Illustration redrawn after [Fuk19] with kind permission from the author.

Early examples: Neocognitron and LeNet

Inspired by an early model of human vision by Hubel and Wiesel [HW59], Kunihiko Fukushima introduced the so-called Neocognitron network architecture as early as 1979 (in Japanese) and 1980 [Fuk80,Fuk88]; see also [Fuk19] for a more recent presentation and discussion of different versions of the basic architecture.

The Neocognitron already comprises many elements of modern Convolutional Neural Networks. As illustrated in Figure 5.14, the network consists of an input and output layer, with stacked hidden layers of alternating types. Units are typically connected to patches of nodes in the preceding layer. Feature extraction layers (shaded blue in Fig. 5.14) apply local filters to these patches. Their units correspond to the simple neurons or S-cells suggested by Hubel and Wiesel, which are activated, for instance, by characteristic patterns like straight lines of a particular orientation. Nodes in a subsequent layer of complex or C-cells perform pooling operations in the sense that they are activated independent of the precise location of the stimulation within their receptive field in the preceding S-layer. Thanks to the averaging or pooling C-cells, the Neocognitron is, to a certain degree, insensitive to shifts and distortions of input patterns.
Figure 5.15: A deep architecture (schematic) known as LeNet, specifically LeNet-5, introduced by LeCun et al. in [LBD+89]. Image available under license CC BY 3.0 at https://www.researchgate.net/publication/319905492_Image_retrieval_method_based_on_metric_learning_for_convolutional_neural_network/figures.

The sequence of S and C layers represents the input in decreasing detail and, ultimately, the network response (e.g. a classification) is provided in the output layer. In contrast to more recent CNN architectures, Fukushima did not train the Neocognitron end-to-end by gradient descent techniques. Instead, the filters realized by S-cells were either pre-wired or adapted by means of unsupervised learning techniques. The Neocognitron has been studied and used in the context of brain-inspired pattern recognition, including handwritten digit recognition and similar tasks. It constitutes a groundbreaking work that inspired many, if not all, modern Deep Networks for visual pattern recognition and similar tasks.

Another groundbreaking architecture, known as LeNet, is due to LeCun and collaborators [LBD+89]. It is an early example of a Convolutional Neural Network (CNN) for pattern recognition and image processing and can be considered the starting point for this popular type of architecture. The structure is similar to the above-discussed Neocognitron. Alternating convolutional and pooling layers process an input with increasing abstraction towards a fully connected output layer. The LeNet can be trained end-to-end by gradient-based Backpropagation. It was initially introduced for the task of handwritten digit recognition (ZIP-code reading). Groups of nodes perform the same task on different patches of the input. Consequently, many nodes can share the same weight values, which reduces the effective number of adaptive quantities drastically.
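The effect of weight-sharing on the parameter count can be made explicit with simple arithmetic. The layer sizes below are illustrative assumptions, not the actual LeNet-5 configuration.

```python
def conv_layer_params(k, c_in, c_out):
    """Adaptive weights of a convolutional layer: c_out shared k x k filters
    over c_in input channels, plus one bias per filter."""
    return c_out * (k * k * c_in + 1)

def dense_layer_params(n_in, n_out):
    """A fully connected layer couples every input to every output unit."""
    return n_out * (n_in + 1)

# 16 shared 5x5 filters on a single-channel image: 16 * (25 + 1) = 416 weights,
# independent of whether the image has 28x28 or 1024x1024 pixels.
shared = conv_layer_params(5, 1, 16)

# A dense layer producing one unit per pixel of a 28x28 image for each of the
# 16 feature maps would instead need millions of weights.
dense = dense_layer_params(28 * 28, 16 * 28 * 28)
```

The ratio of the two counts is the drastic reduction of "adaptive quantities" referred to in the text.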
LeNet constitutes a very early example of weight-sharing, which to date plays an important role in the training of deep networks, see also Sec. 7.2.5. Further improvements of the network performance were achieved by developing and applying a specific pruning technique named Optimal Brain Damage [LDS90], which we introduce in Sec. 7.2.4.

Appraisal and critique of deep learning

The success of Deep Learning, mainly in the context of image processing, has triggered a lot of excitement in both academia and the general public. This includes exaggerated claims with respect to general intelligence or applications
in critical areas like clinical medicine. Recently, several scholars have expressed criticism of Deep Learning and the hype surrounding it, with [Mar18,Pre21,Zad19,AN20] being just a few examples. In a sense, the situation is highly reminiscent of the strong expectations and the later disappointments in previous waves of machine learning popularity. In the biased opinion of the author of this text, several deplorable trends can be observed in academia and in the general public:

◦ Deep Learning is often presented as fundamentally new and totally different from so-called conventional, supposedly old-school machine learning, ignoring decades of research that facilitated the recent developments.

◦ The mis-identification of machine learning with image analysis only, in particular in the non-scientific media. While image classification, scene analysis, face recognition etc. are certainly relevant and particularly appealing problems, machine learning should be seen from a much broader perspective.

◦ A widespread tendency to use overly complex DNN to tackle even relatively simple, specific applications without investing a thoughtful analysis and often without a proper critical evaluation of the performance or comparison with baseline techniques.

◦ The exclusive use of ready-made programming environments while having a limited understanding of the basic underlying principles (see footnote 11). Frequently such systems offer only limited user control or the options are not exploited properly.

◦ A lack of theoretical insight into Deep Learning, or, much worse, a lack of interest in a better understanding of the relevant phenomena.

The last points are particularly unfortunate in view of the many interesting challenges and open questions posed by Deep Learning, some of which are summarized in [Sej20]. Despite these and other points of criticism, Deep Learning will play an important role in forthcoming years.
It will certainly facilitate the exploration of new and exciting application areas. At the same time, Deep Learning will continue to provide highly interesting theoretical challenges that deserve significant attention.

Footnote 11: Even worse, this is often combined with displaying code in illegible small but colorful fonts on a black background.
Chapter 6

Distance-based classifiers

One can state, without exaggeration, that the observation of and the search for similarities and differences are the basis of all human knowledge. — Alfred Nobel

The use of distances or dissimilarities for the comparison of observations with a set of labeled reference data points provides a simple yet powerful tool for classification. In particular, the use of prototypes or exemplars, derived from a given data set, is the basis for a very successful family of machine learning approaches.

Prototype-based classifiers are appealing for a number of reasons. The extraction of information from previously observed data in terms of typical representatives, the prototypes, is particularly transparent and intuitive, in contrast to many, more black-box-like systems. The same is true for the working phase, in which novel data are compared with the prototypes by use of a suitable (dis-)similarity or distance measure.

Prototype systems are frequently employed for the unsupervised analysis of complex data sets, aiming at the detection of underlying structures, such as clusters or hierarchical relations, see also Chapter 8 and, for instance, [HTF01, Bis95a, DHS00]. Competitive Vector Quantization, the well-known K-means algorithm or Self-Organizing Maps are prominent examples for the use of prototypes in the context of unsupervised learning [Koh97,HTF01,DHS00].

In the following the emphasis is on supervised learning in prototype-based systems. In particular, we focus on the framework of Learning Vector Quantization (LVQ) for classification. Besides the most basic concepts and training prescriptions we present extensions of the framework to unconventional distances and to the use of adaptive measures in so-called relevance learning schemes [BHV16].
The aim of this chapter is far from giving a complete review of the ongoing fundamental and application oriented research in the context of prototype-based
learning. It provides, at best, first insights into supervised schemes and can serve as a starting point for the interested reader. The emphasis will be on Teuvo Kohonen's Learning Vector Quantization and its extensions [Koh97]. Examples for training prescriptions are given and the use of unconventional distance measures is discussed. As an important conceptual extension of LVQ, Relevance Learning is introduced, with Matrix Relevance LVQ serving as an example.

Figure 6.1: Left panel: Illustration of the Nearest Neighbor (NN) Classifier for an artificial data set containing three different classes. Right panel: A corresponding NPC scheme for the same data. Prototypes are represented by larger symbols. Both schemes are based on Euclidean distance and yield piecewise linear decision boundaries.

In a sense, the philosophies behind LVQ and the SVM are diametrically opposed to each other: while support vectors represent the difficult cases in the data set, which are closest to the decision boundary, cf. Sec. 4.3, LVQ represents the classes by - supposedly - typical exemplars relatively far from the class borders.

Note that LVQ systems could be formulated and interpreted as layered neural networks with specific, distance-based activations and a crisp output reflecting the Winner-Takes-All principle. In fact, after years of the relation being denied in the literature, it has become popular again to point out the conceptual vicinity to neural networks. Recent publications also discuss the embedding of LVQ modules in deep learning approaches [VMC16,VBVS17].
6.1 Prototype-based classifiers

Among the many frameworks developed for supervised machine learning, prototype-based systems are particularly intuitive, flexible, and easy to implement. Although we restrict the discussion to classification problems, many of the concepts carry over to regression or, to a certain extent, also to unsupervised learning, see [BHV16].

Several prototype-based classifiers have been considered in the literature. Some of them can be derived from well-known unsupervised schemes like the Self-Organizing-Map or the Neural Gas [Koh97,RMS92,HSV05], which can be extended in terms of a posterior labelling of the prototypes. Here, the focus is on the so-called Learning Vector Quantization (LVQ), a framework which was originally suggested by Teuvo Kohonen [Koh97]. As a starting point for the discussion, we briefly revisit the well-known k-Nearest-Neighbor (kNN) approach to classification, see [DHS00,CH67].

6.1.1 Nearest Neighbor and Nearest Prototype Classifiers

Nearest Neighbor classifiers [DHS00, CH67] constitute one of the simplest and most popular classification schemes. In this classical approach, a number of labeled feature vectors is stored in a reference set:

D = { ξ^µ, y^µ = y(ξ^µ) }_{µ=1}^{P}   with ξ^µ ∈ R^N.

In contrast to the discussion of the perceptron and similar systems, here we do not have to restrict the presentation to binary labels. We therefore denote the (possibly multi-class) labels by y^µ ∈ {1, 2, ..., C}, where C is the number of classes.

An arbitrary novel feature vector or query ξ ∈ R^N can be classified according to its (dis-)similarities to the samples stored in the reference data. To this end, its distance from all reference vectors ξ^µ ∈ D has to be computed. Most frequently, the simple (squared) Euclidean distance is used in this context: d(ξ, ξ^µ) = (ξ − ξ^µ)². The query ξ is then assigned to the class of its Nearest Neighbor exemplar in D.
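The Nearest Neighbor assignment can be sketched in a few lines; the small reference set below is an illustrative toy example of our own.

```python
import numpy as np

def nn_classify(xi, X_ref, y_ref):
    """Assign a query xi to the class of its Nearest Neighbor in the
    reference set, using the squared Euclidean distance d = (xi - xi_mu)^2."""
    d = np.sum((X_ref - xi) ** 2, axis=1)
    return y_ref[np.argmin(d)]

X_ref = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])   # stored exemplars
y_ref = np.array([1, 1, 2])                              # their class labels
label = nn_classify(np.array([4.2, 4.9]), X_ref, y_ref)  # closest to [5, 5]
```

Extending `nn_classify` to the kNN voting scheme discussed next amounts to replacing `argmin` by the k smallest distances and a majority vote over their labels.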
In the more general kNN classifier, the assignment is determined by means of a voting scheme that considers the k closest reference vectors [CH67]. The NN or kNN classifier is obviously very easy to implement as it does not even require a training phase. Nevertheless one can show that the kNN approach bears the potential to realize Bayes optimal performance if the number k of neighbors is chosen carefully [HTF01, DHS00, CH67]. Consequently, the method serves, to date, as an important baseline algorithm and is frequently used as a benchmark to compare performances with. Fig. 6.1 (left panel) illustrates the NN classifier and displays how the system implements piecewise linear class borders. Several difficulties are evident already in this simple illustration. Class borders can be overly complex, for instance if single data points in the set have been classified incorrectly. Furthermore, the | The+Shallow+and+the+Deep_Page_157_Chunk4056 |
fact that every data point contributes with equal weight can lead to overfitting effects because the classifier over-rates the importance of individual examples. As a consequence, it might not perform well when presented with novel, unseen data.

Straightforward implementations of kNN compute and sort the distances of ξ from all available examples in D. While methods for efficient sorting can reduce the computational costs to a certain degree, the problem persists and is definitely relevant for very large data sets.

Both drawbacks could be attenuated by reducing the number of reference data in an intelligent way while keeping the most relevant properties of the data. Indeed, the selection of a suitable subset of reference vectors by thinning out D was already suggested in [Har68]. An alternative, essentially bottom-to-top approach is considered in the following sections.

6.1.2 Learning Vector Quantization

This successful and particularly intuitive approach to classification was introduced and put forward by Teuvo Kohonen [Koh97,RMS92,BHV16,Koh95,SK99,NE14]. The basic idea is to replace the potentially large set of labeled example data by relatively few, representative prototype vectors.

LVQ was originally motivated as a simplifying approximation of a Bayes classifier under the assumption that the underlying density of data corresponds to a superposition of Gaussians [Koh97]. LVQ replaces the actual density estimation by a simple and robust method of supervised Vector Quantization. Each of the C classes is to be represented by (at least) one representative. Formally, we consider the set of prototype vectors

{ w_j, c_j }_{j=1}^{M}   with w_j ∈ R^N and c_j ∈ {1, 2, ..., C}.   (6.1)

Here, the prototype labels c_j = c(w_j) indicate which class the corresponding prototype is supposed to represent. The so-called Nearest Prototype classifier (NPC) assigns an arbitrary, e.g.
novel, feature vector ξ to the class c* = c(w*) of the closest prototype w*(ξ) with

d( w*(ξ), ξ ) = min_{j=1,...,M} d( w_j, ξ ),   (6.2)

where ties can be broken arbitrarily. In the following, the closest prototype w*(ξ) of a given input vector will be referred to as the winner. For brevity we will frequently omit the argument of w*(ξ) and use the shorthand w* when it is obvious which input vector it refers to.

Figure 6.1 (right panel) illustrates the NPC concept: class borders corresponding to relatively few prototypes are smoother than the corresponding NN decision boundaries shown in the left panel. Consequently, an NPC classifier
can be expected to be more robust and less prone to overfitting effects.

The performance of LVQ systems has proven to be competitive in a variety of practical classification problems [Neu02]. In addition, their flexibility and interpretability constitute important advantages of prototype-based classifiers, since prototypes are obtained and can be interpreted within the space of observed data, directly. This feature facilitates the discussion with domain experts and stands in contrast to many other, less transparent machine learning frameworks.

An LVQ system for nearest prototype classification can be interpreted as a neural network with a single hidden layer. The prototypes correspond to hidden units with a distance-based activation which feed into a winner-takes-all output unit. This appealing analogy is elaborated in, for example, [RKSV20]. However, it is not essential for the following.

6.1.3 LVQ training algorithms

So far we have not addressed the question of where and how to place the prototypes for a given data set. A variety of LVQ training algorithms have been suggested in the literature [Koh97,SK99,Koh90,NE14,BGH07,Gho21,Wit10]. The first, original scheme suggested by Kohonen [Koh97] is known as LVQ1. Essentially, it already includes all aspects of the many modifications that were suggested later. The algorithm can be summarized in terms of the following steps:

LVQ1 algorithm, random sequential presentation of data

– at time step t, select a single feature vector ξ^µ with class label y^µ randomly from the data set D with uniform probability 1/P.

– identify the winning prototype, i.e. the currently closest prototype w*_µ = w*(ξ^µ), given by

d( w*_µ, ξ^µ ) = min_{j=1,...,M} d( w_j, ξ^µ )   (6.3)

with class label c*_µ = c(w*_µ).

– perform a Winner-Takes-All (WTA) update:

w*_µ(t+1) = w*_µ(t) + η_w Ψ(c*_µ, y^µ) ( ξ^µ − w*_µ )   with Ψ(c, y) = +1 if c = y, −1 else.   (6.4)

The magnitude of the update is controlled by the learning rate η_w.
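The LVQ1 steps (6.3)–(6.4) translate directly into a short numpy sketch; the two-cluster toy data, learning rate and number of epochs are illustrative assumptions of ours.

```python
import numpy as np

def lvq1_epoch(W, c, X, y, eta=0.05, rng=None):
    """One sweep of LVQ1 over the data set: find the winner by squared
    Euclidean distance, Eq. (6.3), and apply the WTA update, Eq. (6.4)."""
    if rng is None:
        rng = np.random.default_rng()
    for mu in rng.permutation(len(X)):
        d = np.sum((W - X[mu]) ** 2, axis=1)
        j = np.argmin(d)                          # winning prototype w*_mu
        psi = 1.0 if c[j] == y[mu] else -1.0      # Psi(c*_mu, y^mu)
        W[j] += eta * psi * (X[mu] - W[j])        # attract or repel the winner
    return W

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),     # class 1 around (0, 0)
               rng.normal(3.0, 0.5, (50, 2))])    # class 2 around (3, 3)
y = np.array([1] * 50 + [2] * 50)
W = np.array([[1.0, 1.0], [2.0, 2.0]])            # one prototype per class
c = np.array([1, 2])
for _ in range(20):
    lvq1_epoch(W, c, X, y, rng=rng)
# after training, each prototype sits in a class-typical region
```

For well-separated clusters like these, the prototypes end up near the respective class-conditional means.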
The actual update step (6.4) moves the winning prototype even closer to the presented feature vector if w*_µ and the example carry the same label, as indicated by
Ψ(c*_µ, y^µ) = +1. On the contrary, w*_µ is moved farther away from ξ^µ if the winning prototype represents a class different from y^µ, i.e. Ψ(c*_µ, y^µ) = −1.

A popular initialization strategy is to place prototypes in the class-conditional mean vectors of the data set, i.e.

w_j(0) = [ ∑_{µ=1}^{P} δ[y^µ, c_j] ξ^µ ] / [ ∑_{µ=1}^{P} δ[y^µ, c_j] ]   with the Kronecker-delta δ[i, j].

If several prototypes are employed per class, independent random variations could be added in order to avoid initially coinciding prototypes. More sophisticated initialization procedures can be realized, for instance by applying a K-means procedure in each class separately. After repeated presentations of the entire training set, the prototypes should ideally represent their respective class by assuming class-typical positions in feature space.

Numerous modifications of this basic LVQ scheme have been considered in the literature, see for instance [Koh90, NE14, BGH07, Gho21, Wit10] and references therein. In particular, several approaches based on differentiable cost functions have been suggested. They allow for training in terms of gradient descent or other optimization schemes. Note that LVQ1 and many other heuristic schemes cannot be interpreted as descent algorithms in a straightforward fashion. One particular cost function based algorithm is the so-called Robust Soft LVQ (RSLVQ), which has been motivated in the context of statistical modelling [SO03]. The popular Generalized LVQ (GLVQ) [SY95, SY98] is guided by an objective function that relates to the concept of large margin classification [CGBNT03]:

E_GLVQ = ∑_{µ=1}^{P} Φ(e_µ)   with   e_µ = [ d(w^J_µ, ξ^µ) − d(w^K_µ, ξ^µ) ] / [ d(w^J_µ, ξ^µ) + d(w^K_µ, ξ^µ) ].   (6.5)

Here, the vector w^J_µ denotes the closest of all prototypes which carry the same label as the example ξ^µ, i.e. c^J_µ = y^µ. Similarly, w^K_µ denotes the closest prototype with a label different from y^µ.
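The relative distance difference e_µ of Eq. (6.5) is easily computed for a given example; the prototype configuration below is an illustrative assumption of ours.

```python
import numpy as np

def glvq_e(xi, y, W, c):
    """Relative distance difference e_mu of Eq. (6.5), using squared
    Euclidean distances; negative iff xi is correctly NPC-classified."""
    d = np.sum((W - xi) ** 2, axis=1)
    dJ = d[c == y].min()        # closest prototype carrying the correct label
    dK = d[c != y].min()        # closest prototype with any other label
    return (dJ - dK) / (dJ + dK)

W = np.array([[0.0, 0.0], [3.0, 3.0]])
c = np.array([1, 2])
e_correct = glvq_e(np.array([0.5, 0.5]), 1, W, c)    # negative: correct NPC label
e_wrong = glvq_e(np.array([0.5, 0.5]), 2, W, c)      # positive: misclassified
```

By construction the denominator bounds the quantity to −1 ≤ e_µ ≤ 1, which is the margin-like property exploited in the following discussion.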
For short, we will frequently refer to these vectors as the correct winner w^J_µ and the incorrect winner w^K_µ, respectively.

The cost function (6.5) in general comprises a non-linear and monotonically increasing function Φ(e). A particularly simple choice is the identity Φ(e) = e, while the authors of [SY95,SY98] suggest the use of a sigmoidal Φ(e) = 1/[1 + exp(−γ e)], where γ > 0 controls its steepness.

Negative values e_µ < 0 indicate that the corresponding training example is correctly classified in the NPC scheme, since then d(w^J_µ, ξ^µ) < d(w^K_µ, ξ^µ). (Footnote 1: Note that the argument of Φ obeys −1 ≤ e_µ ≤ 1.) For large values of the steepness γ the costs approximate the number of misclassified
training data, while for small γ the minimization of E_GLVQ corresponds to maximizing the margin-like quantities e_µ.

A popular and conceptually simple strategy to optimize E_GLVQ is stochastic gradient descent, in which single examples are presented in randomized order [RM51,Bot91,FP96]. In contrast to LVQ1, two prototypes are updated in each step of the GLVQ procedure:

Generalized LVQ (GLVQ), stochastic gradient descent

– at time step t, select a single feature vector ξ^µ with class label y^µ randomly from the data set D with uniform probability 1/P.

– identify the correct and incorrect winners, i.e. the prototypes

w^J_µ with d(w^J_µ, ξ^µ) = min_j { d(w_j, ξ^µ) | c_j = y^µ },
w^K_µ with d(w^K_µ, ξ^µ) = min_j { d(w_j, ξ^µ) | c_j ≠ y^µ }   (6.6)

with class labels c^J_µ = y^µ and c^K_µ ≠ y^µ, respectively.

– update both winning prototypes according to:

w^J_µ(t+1) = w^J_µ(t) − η_w ∂Φ(e_µ)/∂w^J_µ,
w^K_µ(t+1) = w^K_µ(t) − η_w ∂Φ(e_µ)/∂w^K_µ,   (6.7)

where the gradients are evaluated in w^L_µ(t) for L = J, K.

For the full form of the gradient terms we refer the reader to [SY95,SY98]. Note that, if Euclidean distance is used, the chain rule implies that the updates are along the gradients

∂d(w^L_µ, ξ^µ)/∂w^L_µ ∝ ( w^L_µ − ξ^µ )   for L = J, K.   (6.8)

Moreover, the signs of the pre-factors in (6.7) are given by Ψ(c^L_µ, y^µ) = ±1 as in (6.4). In essence, GLVQ performs updates which move the correct (incorrect) prototype towards (away from) the feature vector, respectively. Hence, the basic concept of the intuitive LVQ1 is preserved in GLVQ.

In both LVQ1 and GLVQ, very often a decreasing learning rate η_w is used to ensure convergence of the prototype positions [RM51]. Alternatively, schemes for automated learning rate adaptation or more sophisticated optimization methods can be applied, see e.g. [SNW11], which we will not discuss here.
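A single GLVQ step can be sketched for the particularly simple instance Φ(e) = e and squared Euclidean distance; for this choice the gradients in (6.7) follow from the quotient rule, ∂e_µ/∂d_J = 2d_K/(d_J + d_K)² and ∂e_µ/∂d_K = −2d_J/(d_J + d_K)², combined with (6.8). The toy configuration is ours; for the general gradient terms see [SY95,SY98].

```python
import numpy as np

def glvq_update(xi, y, W, c, eta=0.1):
    """One stochastic gradient step of GLVQ for Phi(e) = e and squared
    Euclidean distance, following Eqs. (6.6)-(6.8)."""
    d = np.sum((W - xi) ** 2, axis=1)
    J = np.where(c == y)[0][np.argmin(d[c == y])]   # correct winner, Eq. (6.6)
    K = np.where(c != y)[0][np.argmin(d[c != y])]   # incorrect winner
    dJ, dK = d[J], d[K]
    denom = (dJ + dK) ** 2
    # chain rule: the distance gradient is proportional to (w_L - xi), Eq. (6.8)
    W[J] -= eta * (2 * dK / denom) * 2 * (W[J] - xi)   # attract correct winner
    W[K] -= eta * (-2 * dJ / denom) * 2 * (W[K] - xi)  # repel incorrect winner
    return W

W = np.array([[1.0, 1.0], [2.0, 2.0]])
c = np.array([1, 2])
xi, y = np.array([0.0, 0.0]), 1
d_before = np.sum((W - xi) ** 2, axis=1)
glvq_update(xi, y, W, c)
d_after = np.sum((W - xi) ** 2, axis=1)
# the correct winner moved closer to xi, the incorrect winner moved away
```

The opposite signs of the two pre-factors realize exactly the Ψ = ±1 behavior of LVQ1 noted in the text.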
148 6. DISTANCE-BASED CLASSIFIERS

6.2 Distance measures and relevance learning

So far, the discussion focussed on Euclidean distance as a standard measure for the comparison of data points and prototypes. This choice appears natural and it is arguably the most popular one. One has to be aware, however, that other choices may be more suitable for real world data. Depending on the application at hand, unconventional measures might outperform Euclidean distance by far. Hence, the selection of a specific distance constitutes a key step in the design of prototype-based models. In turn, the possibility to choose a distance based on prior information and insight into the problem contributes to the flexibility of the approach.

6.2.1 LVQ beyond Euclidean distance

As discussed above, training prescriptions based on Euclidean metrics generically yield prototype displacements along the vector (ξ^μ − w) as in Eq. (6.8). Replacing the Euclidean distance by a more general, differentiable measure δ(ξ^μ, w) allows for the analogous derivation of LVQ training schemes. This is conveniently done in cost function based schemes like GLVQ, cf. Eq. (6.5), but it is also possible for the more heuristic LVQ1, which will serve as an example here. As a generalization of Eq. (6.4) we obtain the analogous WTA update from example μ at time t:

w*_μ(t + 1) = w*_μ(t) − η_w Ψ(c*_μ, y^μ) (1/2) ∂δ(w*_μ, ξ^μ)/∂w*_μ.   (6.9)

Obviously, the winner w*_μ has to be determined by use of the same measure δ, for the sake of consistency. Along these lines, LVQ update rules can be derived for quite general dissimilarities, provided the distance δ is differentiable with respect to the prototype positions. Note that the formalism does not require metric properties of δ. As a minimal condition, non-negativity δ(w, ξ) ≥ 0 should be satisfied for w ≠ ξ, together with δ(ξ, ξ) = 0.
Note that cost function based approaches can also employ non-differentiable measures if one resorts to alternative optimization strategies which do not re- quire the use of gradients [SNW11]. Alternatively, differentiable approximations of non-differentiable δ can be used, see [HV05] for a discussion thereof. In the following, we mention just a few prominent alternatives to the stan- dard Euclidean metrics that have been used in the context of LVQ classifiers. We refer to, e.g., [BHV16,BHV14,HV05] for more detailed discussions and further references. Statistical properties of a given data set can be taken into account explicitly by employing the well-known Mahalanobis distance [Mah36]. This classical measure is a popular tool in the analysis of data sets. Duda et al. present a detailed discussion and several application examples [DHS00]. Standard Minkowski distances satisfy metric properties for values of p ≥1 | The+Shallow+and+the+Deep_Page_162_Chunk4061 |
6.2. DISTANCE MEASURES AND RELEVANCE LEARNING 149

in

d_p(ξ, ξ̃) = ( Σ_{j=1}^{N} |ξ_j − ξ̃_j|^p )^{1/p} for ξ, ξ̃ ∈ R^N,   (6.10)

which includes Euclidean distance as a special case for p = 2. Larger (smaller) values of p put emphasis on the components ξ_j and ξ̃_j with larger (smaller) deviations |ξ_j − ξ̃_j|, respectively. For instance, in the limit p → ∞ we have d_∞(ξ, ξ̃) = max_{j=1,...,N} |ξ_j − ξ̃_j|. Setting p ≠ 2 has been shown to improve performance in several practical applications, see [BBL07, GW10] for specific examples.

The squared Euclidean distance can be rewritten in terms of scalar products:

d(w, ξ)² = w·w − 2 w·ξ + ξ·ξ.   (6.11)

So-called kernelized distances [Sch01] replace all inner products in (6.11) by a kernel function κ:

d_κ(w, ξ)² = κ(w, w) − 2 κ(w, ξ) + κ(ξ, ξ).   (6.12)

As in the SVM formalism, the function κ can be associated with a non-linear transformation from R^N to a potentially higher-dimensional feature space. In SVM training one takes advantage of the fact that data can become linearly separable due to the transformation, as discussed in Sec. 4.3. Similarly, kernel distances can be employed in the context of LVQ in order to achieve better classification performance, see [VKNR12] for a particular application.

As a last example, statistical divergences can be used to quantify the dissimilarity of densities or histogram data. For instance, image data is frequently characterized by color or other histograms. Similarly, text can be represented by frequency counts in a bag of words approach. In the corresponding classification problems, the task would be to discriminate between class-characteristic histograms. Euclidean distance is frequently insensitive to the relevant discriminative properties of histograms. Hence, the classification performance can benefit from using specific measures, such as statistical divergences. The well-known Kullback-Leibler divergence is just one example of many measures that have been suggested in the literature.
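The measures mentioned in this subsection can be sketched in a few lines; the kernel κ and the histogram inputs below are illustrative choices:

```python
import numpy as np

def minkowski(a, b, p):
    """Minkowski distance d_p, Eq. (6.10); p = 2 recovers Euclidean distance."""
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

def kernel_dist_sq(a, b, kappa):
    """Kernelized squared distance, Eq. (6.12)."""
    return kappa(a, a) - 2.0 * kappa(a, b) + kappa(b, b)

# an RBF kernel as one illustrative choice of kappa
rbf = lambda u, v, s=1.0: np.exp(-np.sum((u - v) ** 2) / (2.0 * s ** 2))

def kl_divergence(p, q):
    """Kullback-Leibler divergence between histograms (note: non-symmetric)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))
```

With the linear kernel κ(u, v) = u·v, the kernelized distance reduces to the squared Euclidean distance of Eq. (6.11).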
For further references and an example application in the context of LVQ see [MSS+11,Mwe14]. There, it is also demon- strated that even non-symmetric divergences can be employed properly in the context of LVQ, as long as the measures are used in a consistent way. 6.2.2 Adaptive distances in relevance learning In the previous subsection, a few alternative distance measures have been dis- cussed. In practice, a particular one could be selected based on prior insights or according to an empirical comparison in a validation procedure. | The+Shallow+and+the+Deep_Page_163_Chunk4062 |
150 6. DISTANCE-BASED CLASSIFIERS

The elegant framework of relevance learning allows for a significant conceptual extension of distance-based classification. It is particularly suitable for prototype systems and was introduced and put forward in the context of LVQ in [HV02, SBH09, Sch10, Bun11, SBS+10, BSH+12], for instance. Relevance learning has proven useful in a variety of applications, including biomedical problems and image processing tasks, see for instance [Bie17].

In this very elegant approach, only the parametric form of the distance measure is fixed in advance. Its parameters are considered adaptive quantities which can be adjusted or optimized in the data-driven training phase. The basic idea is very versatile and can be employed in a variety of learning tasks. We present here only one particularly clear-cut and successful example in the context of supervised learning: the so-called Matrix Relevance LVQ for classification [SBH09].

Similar to several other schemes (e.g. [WS09, BAP+12, BCLC15]), Matrix Relevance LVQ employs a generalized quadratic distance of the form

δ_Λ(w, ξ) = (w − ξ)⊤ Λ (w − ξ) = Σ_{i,j=1}^{N} (w_i − ξ_i) Λ_ij (w_j − ξ_j).   (6.13)

Heuristically, diagonal entries of Λ quantify the importance of single feature dimensions in the distance and can also account for potentially different magnitudes of the features. Pairs of features are weighted by off-diagonal elements, which reflect the interplay of the different dimensions. Note that for Λ = I_N/N, Eq. (6.13) recovers the simple squared Euclidean distance, up to the constant factor 1/N.

In order to fulfill the minimal requirement of non-negativity, δ_Λ ≥ 0, a convenient re-parameterization is introduced in terms of an auxiliary, unrestricted matrix Ω ∈ R^{N×N}:

Λ = Ω⊤Ω, i.e. δ_Λ(w, ξ) = [Ω(w − ξ)]².   (6.14)

Hence, δ_Λ can be interpreted as the conventional squared Euclidean distance, evaluated after a linear transformation of feature space. Note that Eqs.
(6.13, 6.14) define only a pseudo-metric in R^N, since Λ can be singular with rank(Λ) < N, implying that δ_Λ(w, ξ) = 0 is possible even if w ≠ ξ.

Obviously, we could employ a fixed distance of the form (6.13) in GLVQ or LVQ1 as outlined in the previous sections. The key idea of relevance learning, however, is to consider the elements of the relevance matrix Λ ∈ R^{N×N} as adaptive quantities which can be optimized in the data-driven training process.

Numerous simplifications or extensions of the basic idea have been suggested in the literature. The restriction to diagonal matrices Λ corresponds to the original formulation of Relevance LVQ in [HV02], which assigns a single, non-negative weighting factor to each dimension in feature space. Rectangular (M × N)-matrices Ω with M < N can be used to parameterize a low-rank relevance matrix [BSH+12]. The corresponding low-dimensional intrinsic representation of data facilitates, for instance, the class-discriminative visualization of complex data [BSH+12]. The flexibility of the LVQ classifier is enhanced
6.2. DISTANCE MEASURES AND RELEVANCE LEARNING 151

significantly when local distances are used, i.e. when separate relevance matrices are employed per class or even per prototype [SBH09, BSH+12]. Here we restrict the discussion to the simplest case of a single N × N matrix Ω corresponding to a global distance measure. The heuristic extension of the LVQ1 prescription by means of relevance matrices is briefly discussed in [BHV16] and its convergence behavior is analysed in [BHS+16].

Gradient based updates for the simultaneous adaptation of prototypes and relevance matrix can be derived from a suitable cost function. We observe that

∂δ_Λ(w, ξ)/∂w = 2 Ω⊤Ω (w − ξ) and ∂δ_Λ(w, ξ)/∂Ω = 2 Ω (w − ξ)(w − ξ)⊤.   (6.15)

The full forms of the gradients with respect to the terms e^μ in the GLVQ cost function are presented in [SBH09], for instance. They yield the so-called Generalized Matrix Relevance LVQ (GMLVQ) scheme, which can be formulated as a stochastic gradient descent procedure:

Generalized Matrix LVQ (GMLVQ), stochastic gradient descent
– at time step t, select a single feature vector ξ^μ with class label y^μ randomly from the data set D with uniform probability 1/P.
– with respect to the distance δ_Λ (6.13) with Λ = Ω(t)⊤Ω(t), identify the correct and incorrect winners, i.e. the prototypes
  w_J^μ with δ_Λ(w_J^μ, ξ^μ) = min { δ_Λ(w_j, ξ^μ) | c_j = y^μ }_{j=1}^{M}
  w_K^μ with δ_Λ(w_K^μ, ξ^μ) = min { δ_Λ(w_j, ξ^μ) | c_j ≠ y^μ }_{j=1}^{M}   (6.16)
  with class labels c_J = y^μ and c_K ≠ y^μ, respectively.
– update both winning prototypes and the matrix Ω according to:
  w_J^μ(t + 1) = w_J^μ(t) − η_w ∂Φ(e^μ)/∂w_J^μ
  w_K^μ(t + 1) = w_K^μ(t) − η_w ∂Φ(e^μ)/∂w_K^μ
  Ω(t + 1) = Ω(t) − η_Ω ∂Φ(e^μ)/∂Ω,   (6.17)
  where the gradients are evaluated in Ω(t) and w_L^μ(t) for L = J, K.

In both GMLVQ and Matrix LVQ1, the relevance matrix is updated in order to decrease or increase δ_Λ(w_L^μ, ξ^μ) for the winning prototype(s), depending on the class labels in the, by now, familiar way.
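As a minimal numerical sketch (not the toolbox implementation of [SBH09]), the generalized distance of Eqs. (6.13)/(6.14) and its gradient with respect to Ω can be checked against a finite-difference approximation; note that constant factors such as the 2 from the square are often absorbed into the learning rate:

```python
import numpy as np

def delta_lambda(w, xi, Omega):
    """Generalized distance delta_Lambda = |Omega (w - xi)|^2, Eqs. (6.13)/(6.14)."""
    d = Omega @ (w - xi)
    return d @ d

def grad_omega(w, xi, Omega):
    """Analytic gradient wrt Omega: 2 * Omega (w - xi)(w - xi)^T."""
    v = (w - xi)[:, None]
    return 2.0 * Omega @ (v @ v.T)

# finite-difference check of a single matrix element
rng = np.random.default_rng(0)
w, xi = rng.normal(size=3), rng.normal(size=3)
Omega = rng.normal(size=(3, 3))
eps = 1e-6
Op = Omega.copy(); Op[1, 2] += eps
numeric = (delta_lambda(w, xi, Op) - delta_lambda(w, xi, Omega)) / eps
# numeric should agree with grad_omega(w, xi, Omega)[1, 2]
```

A rank-deficient Ω (e.g. a single row) makes δ_Λ a pseudo-metric: distinct vectors can have zero distance, as noted above.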
152 6. DISTANCE-BASED CLASSIFIERS

Figure 6.2: Visualization of the Generalized Matrix Relevance LVQ system as obtained from the z-score transformed Iris flower data set, see Sec. 6.2.2 for details. Left panel: Class prototypes are shown as bar plots with respect to the four feature space components in the left column. The right column shows the eigenvalue spectrum of Λ, the diagonal elements of Λ, and the off-diagonal elements in a gray-scale representation (top to bottom). Right panel: Projection of the P = 150 feature vectors onto the two leading eigenvectors of the relevance matrix Λ.

Frequently, the learning rate of the matrix updates is chosen to be relatively small, η_Ω ≪ η_w, in the stochastic gradient descent procedure. This follows the intuition that the prototypes should be enabled to follow changes in the distance measure. The relative scaling can be different in batch gradient realizations of GMLVQ, as for instance in [VWB21]. The matrix Ω can be initialized as the N-dimensional identity or in terms of independent random elements. In order to avoid numerical difficulties, a normalization of the form Σ_i Λ_ii = Σ_{i,j} Ω_ij² = 1 is frequently imposed [SBH09].

In the following we illustrate Matrix Relevance LVQ in terms of a classical benchmark data set. In the famous Iris flower data set [Fis36], four numerical features are used to characterize 150 samples from three different classes which correspond to particular species of Iris flowers. We obtained the data set as provided at [Lic13] and used one prototype per class and a global relevance matrix Λ ∈ R^{4×4}. For the training, we employed the freely available beginner's toolbox for GMLVQ with default parameter settings [VWB21]. An additional z-score transformation was applied, resulting in re-scaled features with zero mean and unit variance in the data set, see Sec. 8.1.1.
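The z-score transformation mentioned above simply rescales each feature column to zero mean and unit variance; a minimal sketch:

```python
import numpy as np

def zscore(X):
    """Column-wise z-score: each feature gets zero mean and unit variance."""
    return (X - X.mean(axis=0)) / X.std(axis=0)
```

After this rescaling, all features share the same scale, so the relevance values in Λ become directly comparable across feature dimensions.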
This allows for an immediate interpretation of the relevances, without having to take into account the potentially different magnitudes of the features. Figure 6.2 visualizes the obtained classifier. The resulting LVQ system achieves almost perfect, error-free classification of the training data. It also | The+Shallow+and+the+Deep_Page_166_Chunk4065 |
6.3. CONCLUDING REMARKS 153 displays very good generalization behavior with respect to validation or test set performance not presented here. In the left panel, the prototypes after training and the resulting relevance matrix and its eigenvalues are displayed. As discussed above, the diagonal ele- ments Λii can be interpreted as the relevance of features i in the classification. Apparently, features 3 and 4 are the dominant ones in the Iris classification prob- lem. The off-diagonal elements represent the contribution of pairs of different features. Here, also the interplay of features 3 and 4 appears to be important. In more realistic and challenging data sets, Relevance Matrix LVQ can pro- vide valuable insights into the problem. GMLVQ has been exploited to identify the most relevant or irrelevant features, e.g. in the context of medical diag- nosis problems, see [Bie17] for a variety of applications. A recent application in the context of galaxy classification based on astronomical catalogue data is presented in [NWB18,NWB+19]. In an N-dimensional feature space, the GMLVQ relevance matrix intro- duces O(N 2) additional adaptive quantities. As a consequence, one might expect strong overfitting effects due to the large number of free model parame- ters. However, as observed empirically and analysed theoretically, the relevance matrix displays a strong tendency to become singular and displays very low rank(Λ) = O(1) ≪N after training [BHS+16]. This effect can be interpreted as an implicit, intrinsic mechanism of regularization, which limits the complexity of the distance measure, effectively. In addition, the low rank relevance matrix allows for the discriminative vi- sualization of the data by projecting feature vectors (and prototypes) onto its leading eigenvectors. As an illustrative example, Fig. 6.2 displays the Iris flower data set. 
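The discriminative visualization described above amounts to projecting the data onto the leading eigenvectors of the relevance matrix Λ; a small sketch (numpy's `eigh` returns eigenvalues of a symmetric matrix in ascending order):

```python
import numpy as np

def project_leading(X, Lam, k=2):
    """Project the rows of X onto the k leading eigenvectors
    of the symmetric relevance matrix Lam."""
    vals, vecs = np.linalg.eigh(Lam)                # ascending eigenvalues
    lead = vecs[:, np.argsort(vals)[::-1][:k]]      # columns: leading eigenvectors
    return X @ lead
```

For a low-rank Λ, this projection retains essentially all of the class-discriminative information encoded in the learned distance.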
6.3 Concluding remarks Prototype-based models continue to play a highly significant role in putting forward advanced machine learning techniques. We encourage the reader to explore recent developments in the literature. Challenging problems, such as the analysis of functional data, non-vectorial data or relational data, to name only very few, are currently being addressed, see [BHV16, NE14] for further references. At the same time, exciting application areas are being explored in a large variety of domains. Most recently, prototype-based systems are also re-considered in the context of Deep Learning [GBC16,Sch15,LBH18,Hue19]. The combination of multilayer network architectures with prototype- and distance-based modules appears very promising and is the subject of on-going research, see, for instance, [SHRV18, VMC16] and references therein. | The+Shallow+and+the+Deep_Page_167_Chunk4066 |
Chapter 7 Model evaluation and regularization Accuracy is not enough. — Paulo Lisboa In supervised learning the aim is to infer relevant information from given data, to parameterize it in terms of a model, and to apply it to novel data successfully. It is obviously essential to know or at least have some estimate of the performance that can be expected in the working phase. In this chapter we discuss several aspects related to the evaluation and val- idation of supervised learning. In Sec. 7.1, we take a rather general perspective on overfitting and underfitting effects without necessarily addressing a partic- ular classifier or regression framework. We present methods for controlling the complexity of neural networks in Sec. 7.2. Cross-validation and related methods are in the focus of Sec. 7.3. Specific quality measures beyond the simple overall accuracy for the evaluation of classifiers and regression systems are presented in 7.4.3. Finally, we address the importance of interpretable models in machine learning in Sec. 7.5. 7.1 Bias and variance, over- and underfitting Different sources of error can influence the performance of supervised learning systems. Here we decompose the expected prediction error into two main con- tributions, see e.g. [HTF01,Bis06]: the so-called bias corresponds to systematic deviations of trained models from the true target, while the term variance refers to variations of the model performance when trained from different realizations 155 | The+Shallow+and+the+Deep_Page_169_Chunk4068 |
156 7. MODEL EVALUATION AND REGULARIZATION

of the training data.¹ Frequently, a so-called irreproducible error is considered as a third, independent contribution [HTF01]. It could, for instance, stem from intrinsic noise in the test data which cannot be predicted even with perfect knowledge of the target rule. As the irreproducible error is beyond our control anyway, we refrain from including it in the discussion.

Figure 7.1: Illustration of the bias-variance dilemma in regression. In each row of graphs, a particular set of noisy data points {x^μ, y^μ} is approximated by least squares linear regression (K = 1), by a cubic fit (K = 3), and by fitting a polynomial of degree seven (K = 7). Rows correspond to three randomized, independently generated data sets D_A, D_B, D_C.

7.1.1 Decomposition of the error

For the purpose of illustration, we consider a simple one-dimensional regression problem. The obtained insights, however, carry over to much more complex systems. In our example, least squares fits are based on data sets of the form D = {x^μ, y^μ}_{μ=1}^{P}. They contain real-valued arguments x^μ ∈ R, e.g. equidistant values in [−1, 1], and their corresponding target labels y^μ ∈ R. The data sets

¹Note that the terms bias and variance are used in many different scientific contexts with area-specific meanings.
7.1. BIAS AND VARIANCE, OVER- AND UNDERFITTING 157

represent a function f(x) which is of course unknown to the learning system. We assume that the training labels are noisy versions of the true targets:

y^μ = f(x^μ) + r^μ with ⟨r^μ⟩ = 0 and ⟨r^μ r^ν⟩ = ρ² δ_μν   (7.1)

with the Kronecker-delta δ_μν. Hence, the deviation of the training labels from the underlying target function is given by uncorrelated, zero mean random quantities r^μ in each data point. Further details are irrelevant for the argument, but we could, for instance, consider independent Gaussian random numbers with variance ρ², i.e. r^μ ∼ N(0, ρ).

We perform polynomial fits of the form

f_K(x) = Σ_{j=0}^{K} a_j x^j with coefficients a_j ∈ R   (7.2)

for powers x^j with maximum degree K. Given a data set D = {x^μ, y^μ}_{μ=1}^{P}, the a_j can be determined by minimizing the familiar quadratic deviation

E_SSE = (1/2) Σ_{μ=1}^{P} ( f_K(x^μ) − y^μ )².   (7.3)

Fig. 7.1 displays three randomized data sets D_A,B,C with P = 11 equidistant x^μ and noisy y^μ representing the underlying target function f(x) = x³. For each of the three slightly different data sets, polynomial least squares fits were performed with K = 1 (linear), K = 3 (cubic) and with degree K = 7. Hence, the same data sets were analysed by using models of different complexity.

In order to obtain some insight into the interplay of model complexity and expected performance, we consider the thought experiment of performing the same training/fitting processes for a very large number of slightly different data sets of the same size, which all represent the target. We denote by ⟨. . .⟩_D an average over many randomized realizations of the data set or – more formally – over the probability density of the training data in D. In this sense, the expected total quadratic deviation of a hypothesis function f_H from the true target f in an arbitrary point x ∈ R is given by

⟨ ( f_H(x) − f(x) )² ⟩_D ,   (7.4)

where the randomness of D is reflected in the outcome f_H of the training.
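The fitting experiment of Fig. 7.1 is easy to reproduce with numpy's polynomial least-squares routines; the noise level and random seed below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 11)                    # P = 11 equidistant points
y = x ** 3 + rng.normal(0.0, 0.1, size=x.size)    # noisy targets, f(x) = x^3

# least-squares polynomial fits of degree K = 1, 3, 7, as in Fig. 7.1
fits = {K: np.polyfit(x, y, K) for K in (1, 3, 7)}
# training error E_SSE of Eq. (7.3) for each degree
sse = {K: 0.5 * np.sum((np.polyval(c, x) - y) ** 2) for K, c in fits.items()}
```

Since the model classes are nested, the training error can only decrease with the degree K; the generalization behavior, discussed next, is a different matter.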
We could also consider the integrated deviation over a range of x-values, which would relate the SSE to the familiar generalization error. However, the following argument would proceed in complete analogy, due to the linearity of both integration and averaging. Performing the square in Eq. (7.4) we obtain

⟨f_H²(x)⟩_D − 2 ⟨f_H(x)⟩_D f(x) + f²(x).   (7.5)

Note that the true target f(x) obviously does not depend on the data and can be taken out of the averages.
158 7. MODEL EVALUATION AND REGULARIZATION

For the sake of brevity, we omit the argument x ∈ R of the functions f_H and f in the following. Including redundant terms (*), which add up to zero, we can rewrite (7.5) as

⟨f_H⟩_D²^(*) − 2 ⟨f_H⟩_D f + f² + ⟨f_H²⟩_D − 2 ⟨f_H⟩_D² + ⟨f_H⟩_D²^(*)   (7.6)

and obtain a decomposition of the expected quadratic deviation in x:

⟨( f_H − f )²⟩_D = ( f − ⟨f_H⟩_D )² [bias²] + ⟨( f_H − ⟨f_H⟩_D )²⟩_D [variance].   (7.7)

The equality with (7.6) is straightforward to show by expanding the squares and exploiting that ⟨f_H ⟨f_H⟩_D⟩_D = ⟨f_H⟩_D² = ⟨⟨f_H⟩_D²⟩_D. Hence we can identify two contributions to the total expected error:

◦ Bias (squared): ( f − ⟨f_H⟩_D )²
The bias term quantifies the deviation of the mean prediction from the true target, where the average is over many randomized data sets and corresponding training processes. A small bias indicates that there is very little systematic deviation of the hypotheses from the unknown target rule.

◦ Variance: ⟨( f_H − ⟨f_H⟩_D )²⟩_D
The variance measures how much the individual predictions, obtained after training on a given D, typically differ from the mean prediction. The observation of a small variance implies that the outcome of the learning is robust with respect to details of the training data.

Similar considerations apply to more general learning problems, including classification schemes [Dom00].

7.1.2 The bias-variance dilemma

Ideally, we would like to achieve low variance and low bias at the same time, i.e. a robust and faithful approximation of the target rule. Both goals are clearly legitimate, but very often they constitute conflicting aims in practice, as further illustrated in the following. This is often referred to as the bias-variance dilemma or trade-off [HTF01, Bis95a, Bis06, Dom00]. It is closely related to the problem of overfitting in unnecessarily complex systems and its counterpart, the so-called underfitting in simplistic models. The concept of bias and variance is illustrated in Fig. 7.2 (left panel).
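The decomposition (7.7) can be verified empirically at a single point x by averaging over many simulated data sets; a sketch for the cubic target of Sec. 7.1.1, with a linear fit (K = 1), so that a large bias is expected:

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: x ** 3                       # true target
x_train = np.linspace(-1.0, 1.0, 11)
x0, K, rho = 0.5, 1, 0.1                   # evaluation point, degree, noise level

preds = []
for _ in range(2000):                      # many randomized data sets D
    y = f(x_train) + rng.normal(0.0, rho, x_train.size)
    preds.append(np.polyval(np.polyfit(x_train, y, K), x0))
preds = np.asarray(preds)

total = np.mean((preds - f(x0)) ** 2)      # expected quadratic deviation (7.4)
bias2 = (f(x0) - preds.mean()) ** 2        # squared bias
var = preds.var()                          # variance
# decomposition (7.7): total = bias2 + var (exact up to floating-point error)
```

For the simplistic linear model, the bias term dominates the variance, in line with the underfitting scenario discussed below.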
In the illustration, the fits of two different models are displayed in the space of adaptive quantities, e.g. weights in a neural network, while bias and variance are defined in terms of the prediction error. However, we can assume that the deviation in weight space from the target is in general associated with the error. | The+Shallow+and+the+Deep_Page_172_Chunk4071 |
7.1. BIAS AND VARIANCE, OVER- AND UNDERFITTING 159

Figure 7.2: Left panel: Illustration of the bias-variance dilemma. The true target is represented by the filled circle in the center. Fits obtained from different data sets D_i in model A (open circles) show low bias and large variance, while model B (filled circles) displays large systematic bias with smaller variance. Right panel: Schematic illustration of underfitting and overfitting (after [HTF01]): expected error with respect to a test set (generalization error) and training set performance as a function of the model complexity, e.g. K in the polynomial fits of Fig. 7.1.

Overfitting: low bias – high variance

In terms of our polynomial regression example, we can achieve low bias by employing powerful models with large degree K. In the extreme case of K = P, for instance, we can generate models which perfectly reproduce the data points, f_K(x^μ) = y^μ for all μ, in each individual training process. Because the training labels themselves are assumed to be unbiased with ⟨y^μ⟩_D = f(x^μ), cf. Eq. (7.1), the averaged fit result will also be in exact agreement with the target in the arguments x^μ. In fact, since the objective function (7.3) of the training treats positive and negative deviations symmetrically as well, there is no reason to expect systematic deviations with f_K(x) > f(x) or f_K(x) < f(x) for all fits in some arbitrary value of x. However, using a very flexible model with large K will result in fits which are very specific to the individual data set. As can be seen in Fig. 7.1 (right column), already for a moderate degree of K = 7, very different models emerge from the individual training processes.
For the sample points themselves, the variance of the nearly perfect fit would be essentially determined by the statistical variance ρ² of the training labels in Eq. (7.1). However, when interpolating or even extrapolating to x ∉ {x^μ}_{μ=1}^{P}, the different fits will vary a lot in their prediction f_K(x). Consider, for instance, the extrapolation to x = ±1.1 in Fig. 7.1 as a pronounced example of the effect.

Underfitting: low variance – high bias

If emphasis is put on the robustness of the model, i.e. low variance, we would prefer simple models with low degree K in (7.2). This should prevent the fits
160 7. MODEL EVALUATION AND REGULARIZATION

from being overly specific to the individual data sets. As illustrated in the left column of Fig. 7.1, we achieve nearly identical linear models from the different data sets. However, a price is paid for the robustness: systematic deviations occur in each training procedure. We observe, for instance, that the linear fits (left column) obtained from D_A,B,C are virtually identical. However, they display quite large deviations from the sample points – which represent a non-linear function, after all. These deviations are systematic in the sense that they are reproduced qualitatively in each data set. For instance, for the first sample point x¹ = −1 we can see that always f_{K=1}(x¹) > y¹. As a consequence, interpolation and extrapolation will also be subject to systematic errors.

Matched model complexity

In our example, fits of degree K = 3 seem to constitute an ideal compromise. In the sample points, they achieve small deviations with no systematic tendency and, consequently, have relatively low bias. At the same time, the fits f_{K=3}(x) appear also robust against variations of the data set, corresponding to a relatively small variance. This is of course not surprising, since polynomials of degree K = 3 perfectly match the complexity of the underlying, true target function.

It is important to realize that this kind of information is rarely available in practical situations. In fact, in the absence of knowledge about the complexity of the target rule, it is one of the key challenges in supervised learning to select an appropriate model that achieves a good compromise with respect to bias and variance. In the following sections we will consider a variety of ways to control the complexity of a learning system, with emphasis on feed-forward neural networks.

The trade-off

The above considerations suggest that there is a trade-off between the goals of small variance and bias [HTF01, NMB+18, Dom00].
Indeed, in many machine learning scenarios one observes such a trade-off, which is illustrated in Fig. 7.2 (right panel). It shows schematically the possible dependence of the prediction performance in the training set and the generalization error (test set performance) as a function of the model complexity. In our simple example, we could use the polynomial degree K as a measure of the latter. It could also be interpreted as, for instance, the degree of a polynomial kernel in the SVM, the number of hidden units in a two-layer neural network, or the number of prototypes per class in an LVQ system. Similarly, the x-axis could correspond to a continuous parameter that controls the flexibility of the training algorithm, e.g. a weight decay parameter or the training time itself [HKP91, Bis95a, Bis06]. Generically, we expect the training error to be lower than the generalization error for any model. After all, the actual optimization process is based on the available training examples. Simplistic models that cannot cope with the complexity of the task display both poor training set and poor test set performance due to large systematic bias. Increasing the
7.1. BIAS AND VARIANCE, OVER- AND UNDERFITTING 161

model's flexibility will reduce the bias and, consequently, training and test set error decrease with K in Fig. 7.2 (right panel). However, overly training set specific models display overfitting: while the training error typically decreases further with increasing K, the test set error displays a U-shaped dependence which reflects the increase of the model variance.

It is important to realize that the extent to which the actual behavior follows this scenario in a practical situation depends on the detailed properties of the data and the problem at hand. While one should be aware of the possible implications of the bias-variance dilemma, the plausibility of the above discussed trade-off must not be over-interpreted as a mathematical proof. Note that the decomposition (7.7) itself does not imply the existence of a trade-off, strictly speaking. As argued and demonstrated in e.g. [Sch93] and [NMB+18], a given practical problem does not necessarily display the U-shaped dependence of the generalization error shown in Fig. 7.2 (right panel). There is also no general guarantee that measures which reduce the variance in complex models will really improve the performance of the system [Sch93]. In many practical problems, however, the assumed bias-variance trade-off can indeed be controlled to a certain degree and may serve as a guiding principle for the model selection process.

According to the above considerations, a reliable estimate of the expected generalization ability would be highly desirable in any given supervised learning scenario. It would be very useful to be able to compare and evaluate the use of different approaches, e.g. SVM and LVQ, in a given practical problem. Similarly, the expected performance should guide the selection of model parameters like the number of hidden units in a neural network. The aim is to select the most suitable student complexity in a situation as sketched in Fig. 7.2.
The same applies to selecting a training procedure and suitable parameter values, e.g. the learning rate. In Sec. 7.3 we present the basic idea of how to obtain estimates of the generalization performance by means of cross-validation and related schemes. 7.1.3 Beyond the classical bias-variance trade-off(?) The unreasonable effectiveness of Deep Learning in Artificial Intelligence [Sej20] seems to raise some doubts about the validity of the bias-variance trade-off. Deep Learning systems are frequently heavily over-parameterized with very large numbers of layers, units and weights. For instance, the currently very popular Large Language Models can comprise billions of adaptive parameters, see Table 2.1 in the preprint version of [BMR+20]. According to the reasoning of the previous sections, one would expect serious overfitting effects in such extremely powerful systems. In practice, however, over-parameterized Deep Learning sys- tems are trained and applied with great success. In this context, a publication by Belkin et al. [BHMM19] has attracted a lot of attention. The authors discuss the so-called double descent or peaking phenomenon which is illustrated in Fig. 7.3. Beyond the classical under- and overfitting scenario of Fig. 7.2, the test error frequently undergoes a second | The+Shallow+and+the+Deep_Page_175_Chunk4074 |
descent as a function of the number of adaptive parameters. This descent occurs in the over-parameterized regime, while the test error displays a peak at the so-called interpolation threshold.

Figure 7.3: Illustration of the double descent phenomenon, after [BHMM19]. Prediction error (training and test error) as a function of the number of parameters (model complexity); the interpolation threshold separates the "classical" from the "modern" regime.

The illustration in Fig. 7.3 refers to what is sometimes called model-wise double descent: for a given problem or data set, models of increasing complexity are considered. Similar peaking effects can be observed in models of fixed complexity with varying data set size in the so-called sample-wise double descent [You21, LVM+20, Vie23].

Double descent is the subject of ongoing discussions and apparently has led several researchers and practitioners to the somewhat hasty conclusion that the bias-variance trade-off is wrong [You21]. Moreover, it is often assumed that double descent is a relatively novel phenomenon that was discovered specifically in Deep Learning, motivating the terms classical and modern regime in Fig. 7.3. However, as already mentioned in [BHMM19], double descent occurs also in much simpler settings, including shallow networks and elementary regression systems. In [LVM+20], the authors present a brief prehistory of double descent, pointing out that it had been observed already in basic learning problems like linear regression or perceptron training, e.g. [OKKN90].

Plausible explanations for the occurrence of double descent have been provided by several authors. It is important to note that, depending on the details of problem and method, it is incorrect to naively identify the number of parameters with the capacity or complexity of the model, as we suggestively did in Fig. 7.3.
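That the number of parameters need not equal effective capacity can be seen in a minimal least-squares sketch: beyond the interpolation threshold, `np.linalg.lstsq` returns the minimum-norm coefficient vector, so a heavily over-parameterized fit still interpolates the data, but with a coefficient norm no larger than that of the threshold solution (all sizes below are hypothetical).

```python
import numpy as np

# Hypothetical polynomial least-squares fit: P data points, K coefficients.
# For K > P the linear system is under-determined and lstsq returns the
# minimum-norm solution, which limits the effective flexibility of the fit.
rng = np.random.default_rng(0)
P = 8
x = np.linspace(0.1, 1.0, P)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(P)

def min_norm_fit(K):
    A = np.vander(x, N=K, increasing=True)         # P x K design matrix
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # min-norm solution for K > P
    return A, coef

A_thr, c_thr = min_norm_fit(P)       # interpolation threshold: unique solution
A_big, c_big = min_norm_fit(8 * P)   # heavily over-parameterized

print(np.abs(A_thr @ c_thr - y).max())   # both fits interpolate the data ...
print(np.abs(A_big @ c_big - y).max())
# ... but the over-parameterized one has no larger coefficient norm
print(np.linalg.norm(c_big) <= np.linalg.norm(c_thr) + 1e-8)
```

The threshold solution padded with zeros is feasible for the larger model, so the minimum-norm solution can only have a smaller or equal norm.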
As one example, Daniela Witten presents an insightful discussion in terms of fitting cubic splines to a number of data points in [Wit20] (a Twitter thread). There, the interpolation threshold corresponds to the situation in which the number of parameters exactly matches the number of data points. This is analogous to other regression or interpolation schemes, such as the polynomial fits discussed in Sec. 7.1. Right at the interpolation threshold, there is only one possible solution for a given data set and the resulting model is very sensitive to small variations or noise. Above the interpolation threshold, many fits are possible. The specific selection of the solution with the minimum norm of coefficients restricts the flexibility of the system drastically, resulting in the observed peaking and double descent. On the contrary, fitting under other types of regularization, cf. Sec. 7.2, would not necessarily display the peaking and double descent [Wit20]. The influence of implicit and explicit regularization on the emergence of double descent is also discussed in the context of ordinary least squares regression in [KLS20].

In general, explicit regularization as well as details of the training prescription can play an important role in determining the effective flexibility of a system. The important conclusion is that, if the model complexity is taken into account correctly, the bias-variance trade-off is still valid.

7.2 Controlling the network complexity

So far we have discussed the control of the student complexity in terms of the actual model design, i.e. by choosing the degree of a polynomial fit, the number of prototypes in LVQ, or the size of a hidden layer in a neural network. These choices are made prior to the actual training and can be evaluated by comparing different settings after training.

In a variety of approaches the (effective) complexity of a learning system is controlled by imposing constraints on the training process in a given architecture. Appropriate restrictions can prevent the training algorithm from fully exploring the space of adaptive quantities. We discuss two basic methods: the so-called early stopping strategy in Sec. 7.2.1 and the concept of weight decay in Sec. 7.2.2, respectively. The latter is an important example of regularization by introducing a penalty term into the objective function that guides the training. Here, we will use the term regularization more generally for all methods of implicit or explicit complexity control. Constructive algorithms which incorporate the addition of units or layers into the training process are discussed in Sec. 7.2.3. In Sec. 7.2.4 we present so-called pruning procedures that remove unnecessary weights or units during or after training.
Eventually, two techniques that are particularly relevant in the context of Deep Learning, weight-sharing and Dropout, are presented in Sec. 7.2.5 and 7.2.6, respectively.

In practice, all these methods require or benefit from reliable estimates of the performance with respect to the prediction on novel data. For now we assume that such estimates are available, for instance by computing suitable error measures on a large representative test set. Practical methods like the well-known n-fold cross-validation will be discussed in Sec. 7.3.

7.2.1 Early stopping

A conceptually very simple idea is to end the training process before the system becomes overly specific to the data set. For example, if we manually stop gradient descent updates after a suitable number tmax of epochs, the resulting weight configuration may display a relatively low value of the objective function, yet without representing one particular local minimum too faithfully. Fig. 7.4 (left
panel) shows an illustration of the effect of early stopping on the optimization process.

Figure 7.4: Schematic illustration of early stopping and weight decay. Ellipses correspond to contour lines of the objective function. The blue solid lines represent the unrestricted hypothetical updates by, for instance, gradient descent. Left panel: after tmax epochs, the training is stopped, which hinders the weight configuration from reaching the local minimum W∗ (red dot). Right panel: weights are initialized tabula rasa (W(0) = 0) and restricted to small norms. Circles correspond to lines of equal norm |W|.

Being one of the most intuitive concepts of regularization, early stopping has been discussed from early on in the context of neural networks, see for example [BC91, SL92]. A thorough discussion and further references can be found in text books as well, e.g. in [Bis06, GBC16]. The early stopping parameter tmax plays a role that is comparable to the degree K in the example of polynomial fits, cf. Sec. 7.1. Note that tmax and other parameters that control the training process are often referred to as hyper-parameters in order to distinguish them from the actual adaptive quantities, e.g. weights and thresholds in a neural network.

In order to set discrete parameters like the number of hidden units in the network, we have to train different systems separately and compare their training and test set performance retrospectively. In early stopping, by contrast, we can monitor the system on the fly and stop as soon as overtraining effects set in. The proper choice of tmax based on heuristic criteria and the use of cross-validation, cf. Sec. 7.3.1, has been addressed in the literature, see e.g. [Pre97, Pre98, STW11] for examples and [GBC16, Bis06] for general discussions.

7.2.2 Weight decay and related concepts

We have encountered weight decay as a regularization technique already in the discussion of simple linear regression in Sec. 2.2.2.
There, a penalty term was added to the objective function that prevents the L2-norm of the weight vector from growing arbitrarily large. Depending on the perspective, this can be motivated heuristically in a pragmatic machine learning approach or on the
basis of assuming prior knowledge in the context of statistical learning theory [HTF01, HKP91, Bis95a, Bis06, DHS00]. In the linear regression problem, weight decay facilitates the construction or computation of a meaningful solution of the regression problem. Here we extend the concept to the implementation of non-linear functions in non-linear networks. Modifications of weight decay, e.g. by considering general Minkowski norms or heuristically motivated penalty terms, are discussed at the end of this subsection.

Weight decay in non-linear neural networks

The concept of weight decay generalizes to a variety of classification and regression problems, including the training of non-linear layered neural networks, see [Hin86, KSV88] for early works. The influence of weight decay has also been investigated in model scenarios from the statistical physics of learning perspective, see [Bös96, ABS99, SR98] for examples.

Compared to the case of linear regression, the formulation of weight decay based on statistical learning theory is less obvious for non-linear neural networks. However, the heuristic interpretation remains valid: restricting the magnitude of weights prevents the system from exploring the search space exhaustively and thus limits the effective complexity of the network. Figure 7.4 (right panel) illustrates the effect of limiting the Euclidean L2-norm of the weights W.

The effect of weight decay can be motivated in terms of a single non-linear unit. Assume that the activation of the unit is given by the non-linear function

g(x) ∈ R with x = w · ξ, w, ξ ∈ R^N. (7.8)

For weight vectors with small norm |w| ≈ 0 we have x ≈ 0 and a Taylor expansion implies that the activation is approximately

g(x) ≈ g(0) + g′(0) x + (1/2) g′′(0) x² + . . . (7.9)

Thus, the activation is effectively linearized.
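The linearization argument (7.9) can be checked numerically. The following sketch uses g = tanh (so that g(0) = 0 and g′(0) = 1); the layer size and weight scales are arbitrary, hypothetical choices.

```python
import numpy as np

# Relative deviation of tanh(x) from its linearization x, where x = w · ξ,
# for weight vectors of increasing norm (sizes and scales are hypothetical).
rng = np.random.default_rng(2)
N = 100
xi = rng.standard_normal(N)                   # one input vector
w = rng.standard_normal(N) / np.sqrt(N)       # base weight vector, |x| = O(1)

rel_dev = []
for scale in (1e-3, 1e-1, 1e1):               # shrink or inflate the weight norm
    x = (scale * w) @ xi                      # pre-activation
    rel_dev.append(abs(np.tanh(x) - x) / abs(x))
print(rel_dev)  # grows monotonically with the weight scale
```

For small weight norms the deviation from the linear response is of order x², confirming that such a unit acts essentially linearly.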
By the same argument, the output of a layered network of in principle non-linear units will become nearly linear if the magnitude of the weights is very small.

If we represent the set of all weights in a network by W and consider gradient descent with respect to a cost function E(W), we can limit the norm of W heuristically by reducing the magnitude of weights in or after each update step:

W(t + 1) = W(t) (1 − γ) − η ∇W E|W(t) (7.10)

with the (small) weight decay parameter γ > 0. The update can also be interpreted as gradient descent with respect to a modified objective function:

Ẽ = E + (1/2) (γ/η) |W|² with ∇W Ẽ = ∇W E + (γ/η) W (7.11)
which also leads to (7.10).²

As an alternative to the use of a penalty term, sometimes a constraint of the type |W|² ≤ c with constant c > 0 is imposed. This can be done explicitly by projecting back onto the sphere in weight space |W|² = c whenever the constraint is violated by the updates. This so-called max-norm regularization is considered in [SHK+14] in combination with Dropout, see Sec. 7.2.6. Weight decay as given by Eqs. (7.10, 7.11) can be interpreted as a soft implementation with Lagrange parameter γ/η.

Variants of weight decay

Following an argument presented in e.g. [HKP91] and [NH92b], the penalty term ∝ Σj Wj² in Eq. (7.11) favors several non-zero weights of similar magnitude over a combination of zero weights with a few larger Wj. This can be seen by comparing the penalty of a pair of weights {w/2, w/2} to that of {w, 0}:

(w/2)² + (w/2)² = w²/2 < w² + 0².

In the context of sparse classifiers or regression systems the aim is a system with a significant fraction of zero or very small weights, which could be removed. This can be achieved by employing modified penalty terms and update rules, for instance the one discussed in [HKP91]:

Ẽ = E + (1/2) (γ/η) Σj Wj²/(1 + Wj²) with ∂Ẽ/∂Wk = ∂E/∂Wk + (γ/η) Wk/(1 + Wk²)². (7.12)

Compared to (7.11) this leads to a Wk-dependent decay term in the gradient descent updates which favors the further decrease of small weights. Consequently, the modified weight decay, together with a removal of weights with |Wj| ≈ 0 after training, can serve as a method for pruning the neural network. Further methods for the removal of unnecessary weights in a trained network are briefly presented in Sec. 7.2.4.

A family of systematic variations of weight decay can be based on Lp-regularization, where the penalty is given by the more general term

||W||p = ( Σj |Wj|^p )^(1/p). (7.13)

Eq. (7.13) constitutes a proper norm only for p ≥ 1. Formally, extensions to 0 < p < 1 are possible but involve mathematical subtleties.
The familiar Euclidean norm is recovered for p = 2. Another case of particular interest is p = 1, corresponding to the so-called Manhattan norm. In linear regression, a (non-differentiable) penalty term proportional to Σj |Wj| appears in the Lagrangian form of the so-called Least Absolute Shrinkage and Selection Operator (LASSO), see [HTF01] for a thorough discussion and comparison with L2-based Ridge Regression. Similar to the above discussed heuristic penalty (7.12), L1 regularization can also be used to enforce some weights to become exactly zero. It thus also relates to feature selection and pruning, cf. Sec. 7.2.4.

² Here, the scaling of γ with η merely guarantees formal equivalence with Eq. (7.10).

7.2.3 Constructive algorithms

In the context of classification we have discussed constructive algorithms such as the so-called tiling algorithms, cf. Sec. 4.2.2. In these schemes, units or layers are added to the network until the given, labeled data set can be implemented. Similar ideas have been applied in layered networks for regression. Reviews of suggested methods and corresponding references can be found in [KY97, SC10]. A key issue of all constructive algorithms is the need to avoid overfitting due to the addition of too many units. Suitable stopping criteria are discussed in [KY97].

A popular constructive algorithm for regression is the so-called Cascade-Correlation algorithm suggested by Fahlman and Lebiere in 1990 [FL90]. Very similar to the tiling-like algorithm (4.9), the original Cascade-Correlation scheme adds hidden units one at a time. However, in contrast to the construction in (4.9), the added node receives input from all previous hidden units. The new unit is trained by maximizing the correlation of its output with the residual error achieved so far, see [FL90] for details. The specific algorithm can lead to an architecture with many single unit hidden layers, i.e. a deep and narrow network as opposed to the shallow and wide architectures considered in Sec. 4.2.2 or 5.1.2.

7.2.4 Pruning

The concept of pruning or trimming a neural network is diametrically opposed to that of constructive algorithms. The idea of pruning is to first train a relatively complex system which is capable of realizing the desired task to a satisfactory extent with respect to the given training data.
In order to avoid or reduce overfitting, the network is then simplified by removing weights and/or nodes from the system without deteriorating its performance too much. The dilution of trained networks has been considered from early on, see [HKP91] for references. Pruning is usually done after training, and the system may be retrained thereafter. Likewise, it can be realized in intermediate steps of the training procedure.

Pruning is considered an important ingredient of neural network training also in recent applications of machine learning. However, as Hugo Tessier puts it on https://towardsdatascience.com [Tes21]: "Unfortunately, the dozens, if not hundreds of papers published each year are revealing the hidden complexity of a supposedly straightforward idea." In the literature, many recently suggested pruning procedures appear to be closely related to early works like [LDS90, HSW93], see [Ree93] for a survey. According to reviews like [BGFG20], many authors fail to relate and compare their work properly to early publications in the area. Consequently, we restrict
ourselves to the discussion of some early works that represent the basic ideas and inspired later, more specific schemes. We also limit the discussion to strategies for the removal of weights rather than entire units or layers. The latter is frequently referred to as structural pruning, see for instance [AK13] for a review and references.

Modified weight decay procedures can be employed for the selection of unimportant weights as outlined in Sec. 7.2.2. In the following we present two classical methods which are not based on weight decay, but remove weights explicitly according to their importance for the minimization of the cost function that guides the training process.

Optimal Brain Damage (OBD) and Optimal Brain Surgeon (OBS)

Two classical pruning algorithms with slightly macabre names have been suggested in the 1990s already: LeCun, Denker and Solla's Optimal Brain Damage (OBD) [LDS90] and the Optimal Brain Surgeon (OBS) by Hassibi, Stork and Wolff [HSW93]. Both schemes are based on a ranking of individual weights Wj according to their saliency, i.e. the sensitivity of the system with respect to their removal (setting Wj = 0).

Assume a network has been trained and the weight configuration is sufficiently close to a local minimum W∗ of the cost function E(W) with E(W∗) = E∗. A Taylor expansion for W = W∗ + U yields

E(W) ≈ E∗ + (1/2) Σi,j Ui H∗ij Uj = E∗ + (1/2) Σi H∗ii Ui² + (1/2) Σi,j (i≠j) Ui H∗ij Uj. (7.14)

Here, the linear term vanishes in the minimum and H∗ij = (∂²E/∂Wi∂Wj)∗ is an element of the Hesse matrix computed in W∗. Moreover, we have simply separated diagonal and off-diagonal contributions of the quadratic term.

In [LDS90], the authors suggest avoiding the computation of the full, potentially very high-dimensional Hesse matrix and focus on the diagonal terms. Assuming that H∗ is approximately diagonal, i.e. dominated by the H∗jj, Eq. (7.14) reduces to

E(W) ≈ E∗ + (1/2) Σj H∗jj Uj²
(7.15)

An efficient computation of the diagonal elements of H∗ is also outlined in [LDS90]. The so-called saliencies

sk = (1/2) H∗kk Wk² (7.16)

can be used as a guideline for the selection of weights that could be removed from the system without increasing E significantly. The OBD procedure can be summarized as follows (after [LDS90]):
Optimal Brain Damage (OBD) (7.17)

1. Train a network until a local minimum W∗ of E is reached or sufficiently well approximated.
2. Compute the diagonal second derivatives H∗kk.
3. Compute the saliencies sk = (1/2) H∗kk Wk².
4. Sort the weights by saliency and set some low-saliency weights to zero.
5. Potentially retrain the remaining weights.
6. Go to step 1.

The alternative scheme of the Optimal Brain Surgeon (OBS) as suggested in [HSW93] follows a similar line of thought. However, there the saliencies are defined as

sk = (1/2) Wk² / [H∗⁻¹]kk. (7.18)

Compared to Eq. (7.16), the factor H∗kk is replaced by 1/[H∗⁻¹]kk, with H∗⁻¹ denoting the inverse of the Hessian H∗. Note that for diagonal matrices H∗ the definitions (7.16) and (7.18) are identical. Further differences between OBD and OBS concern details of the procedure, see [HSW93].

Numerous modifications and extensions of OBD and OBS have been proposed in the literature. For instance, the authors of [PHL96] suggest a scheme which computes the saliencies with respect to the estimated generalization error rather than based on the cost function or training error. Hence, their pruning procedures, termed γOBS and γOBD, are more closely related to the actual goal of training.

7.2.5 Weight-sharing

An elementary and intuitive method to reduce the flexibility of a neural network is to consider subsets of weights that assume the same value. This so-called weight-sharing usually relies on insights into the problem and data. Assume that we want to apply a set of (adaptive) filters to patches of a given input image. It appears natural to train and use only one shared set of weights per type of filter. This reduces the effective number of weights drastically and consequently simplifies the training considerably. The strategy is ubiquitous in the context of, for instance, Convolutional Neural Networks for image classification.
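A quick count illustrates the savings; the image and filter sizes below are hypothetical. One small filter slid over an image reuses the same few weights at every patch position.

```python
import numpy as np

# Parameter counting for weight-sharing (hypothetical sizes): one 3x3 filter
# slid over a 28x28 image uses 9 shared weights, while giving each patch
# position its own weights would require 9 weights per position.
H = W = 28
k = 3
n_pos = (H - k + 1) * (W - k + 1)      # number of patch positions: 26 * 26
print(k * k, n_pos * k * k)            # 9 shared vs. 6084 unshared weights

# Applying the shared filter ("valid" cross-correlation), a minimal sketch:
rng = np.random.default_rng(3)
img = rng.standard_normal((H, W))
filt = rng.standard_normal((k, k))
resp = np.array([[np.sum(img[i:i + k, j:j + k] * filt)
                  for j in range(W - k + 1)] for i in range(H - k + 1)])
print(resp.shape)                      # (26, 26)
```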
For defining the sets of shared weights in advance, prior knowledge is required which might not always be available. Nowlan and Hinton suggested a soft version of weight-sharing in [NH92b, NH92a]. They propose a penalty term
that enforces the distribution of weights to follow a mixture of Gaussians, the parameters of which are also subject to updates in the training process. Eventually, weights can be grouped according to their membership to the contributing Gaussians. Within these clusters, weights can display very similar values but are not necessarily identical.

Further discussions of hard or soft weight-sharing and references can be found in e.g. [UMW17]. More recently, the authors of [OLLB20] conclude that weight-sharing "is a pragmatic optimization approach" but "it is not a necessity in computer vision applications." They also argue that approximate weight-sharing emerges in a self-organized way in unrestricted CNNs, for example when trained from images that display translational invariance.

7.2.6 Dropout

Figure 7.5: Illustration of regularization by Dropout (redrawn after [SHK+14]). Left panel: the full network with all units and connections. Right panel: the network after dilution; four randomly selected units (red circles) and their incoming and outgoing weights are temporarily removed.

We have already discussed methods for the dilution of networks by removing weights from a given network, either by weight decay and related methods or by applying pruning techniques after training, see [HKP91] as well as sections 7.2.2 and 7.2.4. In the so-called Dropout regularization [HSK+12, SHK+14, GBC16], a random dilution is applied in the training process. More concretely: in every individual update step, e.g. in stochastic gradient descent, individual input and hidden units are excluded from the network. Only the remaining subnetwork is trained as usual. DropConnect has been suggested as a variation of the basic idea that excludes randomly selected weights instead of units [WZZ+13].

In Dropout, at each update step a randomly determined subnetwork of limited complexity is considered.
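A single Dropout step can be sketched as follows; the layer size and keep probability are hypothetical placeholders, and no actual training is performed.

```python
import numpy as np

# Minimal sketch of one Dropout update step: a binary mask temporarily
# removes units, and only the surviving subnetwork would be trained.
rng = np.random.default_rng(4)
p = 0.5                               # probability of keeping a hidden unit
h = rng.standard_normal(1000)         # activations of a hidden layer

mask = rng.random(h.shape) < p        # True = unit present in the subnetwork
h_sub = h * mask                      # removed units contribute nothing
print(mask.mean())                    # close to p for a large layer
```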
Therefore, Dropout reduces the flexibility of the system and restricts its adaptation to the details of the training data. The removal of nodes occurs independently with probability (1 − p); the hyperparameter p determines the fraction of units present in the subnetwork. According
to [SHK+14, GBC16], a value of p = 1/2 is typically used for hidden units, while p is close to 1 for input units (e.g. p = 0.8). Note that an individual Dropout dilution could by chance remove an entire layer or in some other way cut all connections from input to output. Such configurations have to be excluded explicitly, but are very unlikely to occur in large networks.

In the working phase and for testing purposes, the full network is used. Updating with Dropout will yield larger individual weights than conventional training. This is compensated for by multiplying all weights that were subject to Dropout by a factor p in the full network. Dropout can be interpreted as simulating the training of an ensemble of simpler systems (the subnetworks). In the working phase, the complete network yields an estimate of the corresponding ensemble average. Moreover, due to this analogy, the potential usefulness of Dropout goes beyond the purpose of regularization: in the working phase it can be used for uncertainty estimation, see [GG16].

While Dropout and DropConnect were introduced and are mostly used in the context of deep neural networks, the concept can be transferred to other machine learning systems, see [RKSV20] for the consideration of Dropout in Learning Vector Quantization.

7.3 Cross-validation and related methods

In supervised learning, the availability of well-defined performance measures allows us to formulate the training process as the optimization of a suitable objective function. However, one has to be aware that this does not necessarily reflect the ultimate goal of the learning. The cost function can only be defined with respect to the training set, while the generic goal of machine learning is to apply the inferred hypothesis to novel, unseen data. Objective functions serve, at best, as proxies for the actual aim of training.
As a consequence, the strict minimization of the training error, for example, can even be counter-productive as it may lead to overtraining or overfitting effects. From a more positive perspective, this also implies that we should not take optimization too seriously in the machine learning context. For instance, the existence of local minima in gradient descent based learning frequently turns out to be much less problematic than expected. In fact, the very success of simple-minded techniques like stochastic gradient descent relates to the fact that strict minimization of the cost function is usually not the primary goal of machine learning and could even be harmful. In this sense, the use of SGD can be interpreted as an implicit regularization which helps to avoid overfitting. Similarly, several of the explicit regularization techniques discussed in the previous sections hinder the strict minimization of the cost function in order to achieve better generalization behavior.

On the downside, it becomes necessary to acquire reliable information about the expected performance on novel data, if we do not want to face unpleasant surprises in the working phase. Clearly, the training itself and the performance
on the actual training data does not provide us with such insights. Validation procedures can be employed which allow us to at least estimate the expected generalization performance [HTF01, Bis06]. The key idea is rather obvious: split the available data into a training set and a disjoint test set of examples. The former is then used for the adaptation of the model, which is eventually applied to the test set. This way, we can simulate working phase behavior while using only the available data. Obviously, the data is assumed to be representative of the task at hand, see Fig. 7.6 for a tongue-in-cheek illustration. This crucial assumption was already discussed in Sec. 2.1.2 in the context of the generic workflow of supervised learning.

Figure 7.6: The requirement that the training data should be representative of the actual task at hand seems obvious, but is not always met in practice. © Jonathan van Engelenhoven, see https://www.instagram.com/banjoofjustice for more of his work. Cartoon reproduced from [Vie23] with kind permission of the artist. Remark: MNIST (see e.g. https://paperswithcode.com/dataset/mnist) is a popular benchmark database of handwritten digits.

7.3.1 n-fold cross-validation and related schemes

The simple idea of splitting the available data randomly into one training and one test set has several problems:

◦ If only relatively few examples are available, as is very often the case in practical problems,³ we cannot afford to disregard the information contained in a subset of these in the training.

◦ The composition of the subsets could be lucky or unlucky in the sense that the test set might contain only very difficult or very easy cases. As a consequence, the test set performance might be overly pessimistic or optimistic, respectively.

◦ Performing a single test only cannot give valid insight into the robustness of the system with respect to details of the training set, i.e. the variance in the sense of Sec. 7.1.1.

³ Image classification tasks have become one of the few prominent exceptions due to the availability of very large databases.

All these issues are addressed in a very popular standard approach known as n-fold cross-validation, see e.g. [HTF01, Bis06, Ras18]. The idea is to split the available data randomly into a number n of disjoint subsets of (nearly) equal size, train on n − 1 of the subsets and use the remaining subset for the validation:

n-fold cross-validation (7.19)

• Generate subsets of D of (approximately) equal size: D = D1 ∪ D2 ∪ . . . ∪ Dn with Di ∩ Dj = ∅ for i ≠ j and cardinalities |Di| = P/n.

• For i = 1 : n
◦ Train the system on Dtrain_i = D \ Di.
◦ Determine a suitable performance measure on Di.

• Compute average and variance of the performance measures over the n training processes.

For each split we train the classifier or regression system on (1 − 1/n) P examples and evaluate its performance with respect to the P/n left-out samples. Eventually, we have obtained n systems trained on slightly different data sets with n estimates of the performance, for instance in terms of the accuracies of a classifier or the MSE in regression problems. While the n validation sets are disjoint, we have to be aware that the training sets strongly overlap. The obtained estimates of the generalization error and, even more so, of the training error are definitely not statistically independent. Hence, the mean or variance obtained over the n-fold training process should not be over-interpreted.
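The procedure (7.19) can be sketched in a few lines; the ridge-regression "model" and all sizes below are hypothetical stand-ins for an arbitrary learning system and performance measure.

```python
import numpy as np

# Sketch of n-fold cross-validation: train on D \ D_i, evaluate on D_i,
# then average over the n folds (model and sizes are hypothetical).
rng = np.random.default_rng(5)
P, N, n = 60, 5, 10
X = rng.standard_normal((P, N))
y = X @ rng.standard_normal(N) + 0.3 * rng.standard_normal(P)

folds = np.array_split(rng.permutation(P), n)   # disjoint subsets D_1, ..., D_n

scores = []
for i in range(n):
    val = folds[i]
    tr = np.concatenate([folds[j] for j in range(n) if j != i])
    A = X[tr]                                   # train on D \ D_i
    w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(N), A.T @ y[tr])
    scores.append(np.mean((X[val] @ w - y[val]) ** 2))  # MSE on D_i

print(np.mean(scores), np.var(scores))  # average and variance over the folds
```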
Nevertheless, the procedure will provide us with some insight into the expected generalization performance and the robustness of the system with respect to small changes in the training set.

Obviously, the parameter n in n-fold cross-validation will influence the workload and the quality of the results:

◦ For large n, many training processes have to be performed, each one based on a large fraction of the available data. In turn, the individual validation sets will be relatively small.
◦ For small values of n we obtain fewer, but more reliable individual estimates from larger validation sets. At the same time, the computational workload is reduced in comparison with the use of larger n. However, averages are performed over only a few individual results. Moreover, each training process makes use of a relatively small subset of the data and cannot take full advantage of the available information.

In a practical situation, the choice of n will depend primarily on the number P of available examples to begin with. For a more detailed discussion and corresponding references see, for instance, [HTF01, Ras18]. In the literature, a canonical value of n = 10 has apparently become a standard choice.

Variants

Many variations of the basic idea of cross-validation have been considered in the literature [Ras18]. The split into n disjoint subsets of data may also suffer from lucky/unlucky set composition. Therefore, one often resorts to a few repetitions of the n-fold scheme, performed with randomized splits and an additional average over the realizations.

In principle one could aim at realizing all possible splits of the data into (P − p) training and p validation examples. Their number, the binomial coefficient P!/[p! (P − p)!], grows rapidly with P and the computational costs can become unrealistically high. Alternatively, in repeated random sub-sampling validation, also known as Monte Carlo cross-validation, one generates a number of independent random splits into P − p and p examples and computes averages and variances accordingly. This is closely related to so-called bootstrap methods [HTF01], which differ from cross-validation by resampling subsets of data with replacement. We refrain from a thorough comparison and discussion of the advantages and disadvantages of these variations of cross-validation. We refer the reader to, for example, [Efr83, ER97, Bur89, BHT23] and to textbooks like [HTF01, Bis95a, Bis06].
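The two resampling schemes just mentioned differ only in how the subsets are drawn; a minimal sketch with hypothetical sizes (no model is trained here, the splits themselves are the point):

```python
import numpy as np

# Repeated random sub-sampling ("Monte Carlo cross-validation") draws
# independent splits WITHOUT replacement; a bootstrap sample resamples
# WITH replacement (all sizes hypothetical).
rng = np.random.default_rng(6)
P, p, repeats = 100, 20, 50

splits = []
for _ in range(repeats):
    perm = rng.permutation(P)               # an independent random split
    splits.append((perm[p:], perm[:p]))     # P - p training, p validation

boot = rng.choice(P, size=P, replace=True)  # bootstrap sample of size P
print(len(np.unique(boot)))   # < P: duplicates occur, some examples are unused
```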
Leave-one-out cross-validation

As a popular extreme case, one often resorts to so-called Leave-One-Out validation [HTF01, Bis95a, Bis06, Ras18], in particular for very small data sets. It follows the idea of cross-validation, but selects just one example as the smallest possible validation set in each training run. Hence, we set n = P and run P training processes to obtain an average of the performance measure of interest.

It is important to note that the Leave-One-Out estimate can be unreliable. It even bears the risk of systematically yielding misleading results: In very small data sets, leaving out one sample from a specific class can bias the training set towards the other class(es), which may result in overly pessimistic estimates of the generalization performance [Ras18, APW+09]. A modification that mitigates the effect is known by the self-explanatory name Leave-one-out from each class, generating validation sets that represent all classes [APW+09] with equal weight.
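In code, leave-one-out is simply the n = P special case of the cross-validation loop. A minimal sketch; the scorer interface (a callable receiving the training indices and the held-out index) is our own illustrative assumption.

```python
def leave_one_out(P, train_and_score):
    """Run P training processes; example i alone forms the validation
    set of run i. Returns the average of the individual scores."""
    scores = [train_and_score([j for j in range(P) if j != i], i)
              for i in range(P)]
    return sum(scores) / P
```

Each individual score is based on a single example, which is exactly why the averaged estimate can fluctuate strongly for small P.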
7.3.2 Model and parameter selection

The above discussed validation schemes can be employed in the context of model selection and, similarly, for the setting of parameters or hyper-parameters [Ras18]. We can use, for instance, n-fold cross-validation to compare the expected performance of different classifiers or regression systems. We can also employ it to determine the size of a hidden layer in a feed-forward neural network, to set algorithm parameters like the learning rate in gradient descent, or to select a particular kernel in an SVM, to name just a few examples. In the illustration of Fig. 7.2, for instance, we would select the model complexity that corresponds to the minimum of the U-shaped generalization error curve. Similarly, (hyper-)parameters of the regularization schemes discussed in Sec. 7.2 can be tuned according to the performance of the system w.r.t. the validation sets.

However, one has to be aware of the risk of over-interpreting or even misusing the results of cross-validation. As an example, consider gradient-based training of a network with one hidden layer on a given data set D. Assume that, on the basis of n-fold cross-validation, we conclude that systems with, say, K = 3 hidden units yield the best generalization ability.4 Is it justified to expect the observed, averaged performance for K = 3 when applying the system to novel data?

The problem is that we have used all of D to determine the supposedly best parameter setting. This constitutes a data-driven learning process and could be subject to overfitting effects in itself: The supposedly best choice of K may be very specific to D and could fail in the working phase. In order to obtain a more reliable estimate of the expected performance we would have to perform an extended validation procedure. We could split D into a training set Dtrain and a test set Dtest once, then apply n-fold cross-validation on Dtrain in order to determine a suitable value of K.
Eventually, a system with the supposedly best setting can be re-trained on Dtrain and validated on Dtest to obtain a more realistic estimate. Now, of course, we face the problem of lucky/unlucky set compositions again, which suggests performing a full loop along the lines of n-fold cross-validation (or one of the discussed variants).

Strictly speaking, this has to be done separately for every independent parameter or hyperparameter in an additional layer of validation. Obviously, practical limitations apply, in particular when only small data sets are available.

7.4 Performance measures for regression and classification

Despite the conceptual clarity of supervised learning, even the choice of an appropriate measure of success can constitute a non-trivial issue in practice.

4For the difficulty of even defining "the best" see the next sections.
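The extended procedure described above (a single split into Dtrain and Dtest, inner n-fold cross-validation on Dtrain to select the hyperparameter K, and a final evaluation on the held-out Dtest) can be sketched as follows. All names and the scorer interface are illustrative assumptions of ours.

```python
import random

def nested_validation(P, n, candidates, train_and_score, seed=0):
    """Hold out a test split once, select the hyperparameter K by
    n-fold cross-validation on the remaining training part, then
    evaluate a system with the chosen K on the held-out test set."""
    idx = list(range(P))
    random.Random(seed).shuffle(idx)
    test, train = idx[:P // n], idx[P // n:]

    def inner_cv(K):
        folds = [train[i::n] for i in range(n)]
        scores = []
        for i in range(n):
            val = folds[i]
            tr = [j for j in train if j not in set(val)]
            scores.append(train_and_score(K, tr, val))
        return sum(scores) / n

    best_K = max(candidates, key=inner_cv)
    # final retraining on all of Dtrain, evaluated once on Dtest
    return best_K, train_and_score(best_K, train, test)
```

A full nested loop would additionally rotate the outer test split along the lines of n-fold cross-validation, as discussed in the text.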
7.4.1 Measures for regression

In regression, a differentiable objective or loss function typically guides the training process. It appears natural to consider the same function also for the evaluation of a system in the validation or working phase. Most frequently, the familiar mean squared error (MSE) is used in both contexts. Depending on the application domain, alternative measures different from the loss function can be employed for the evaluation of regression models. Two popular measures that are essentially different from the MSE are:

◦ Mean Absolute Error [WM05]

\mathrm{MAE} = \frac{1}{Q} \sum_{\mu=1}^{Q} |\sigma^\mu - \tau^\mu|   (7.20)

which weights deviations from the target differently than the MSE in a set of Q test or validation samples. The MAE satisfies 0 ≤ MAE < ∞, where lower values correspond to better quality of the regression.

◦ Coefficient of Determination (CoD) [DS98]5

\mathrm{CoD} = 1 - \frac{\sum_\mu (\sigma^\mu - \tau^\mu)^2}{\sum_\mu (\tau^\mu - \langle \tau \rangle)^2}   (7.21)

where ⟨τ⟩ is the mean target value in the data set. The CoD compares the mean squared deviation of the predictions from the targets, scaled by the variance of the target values in the data set. The measure satisfies −∞ < CoD ≤ 1: a value of CoD = 0 is obtained, for instance, if all predictions equal the mean, σ^µ = ⟨τ⟩, while CoD = 1 corresponds to perfect regression with all σ^µ = τ^µ.

A variety of further evaluation criteria, e.g. based on distance metrics or correlations, is available in the literature. A typology of measures is provided in [Bot19].

7.4.2 Measures for classification

The actual goal of machine learning for classification problems is to achieve a model that assigns data to the correct class with high probability in the working phase. Very often, the actual objective functions used in training provide at best a proxy of this ultimate goal. Loss functions for probabilistic classifiers based on cross-entropy like (5.23) and (5.24) could be used for training and evaluation.
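Both measures are straightforward to compute from predictions σ^µ and targets τ^µ; a minimal sketch (the function names are our own):

```python
def mae(sigma, tau):
    """Mean Absolute Error, Eq. (7.20)."""
    return sum(abs(s - t) for s, t in zip(sigma, tau)) / len(tau)

def cod(sigma, tau):
    """Coefficient of Determination, Eq. (7.21); often termed R^2."""
    mean_tau = sum(tau) / len(tau)
    ss_res = sum((s - t) ** 2 for s, t in zip(sigma, tau))
    ss_tot = sum((t - mean_tau) ** 2 for t in tau)
    return 1.0 - ss_res / ss_tot
```

Note that predicting the mean target everywhere yields CoD = 0, while predictions that are worse than this trivial baseline yield negative CoD values.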
Most frequently, the evaluation of crisp classifiers is guided by criteria that directly relate to the accuracy with respect to the validation or test data. An overview of a variety of performance measures for classification is given in [SL09].

Assume we are comparing the performances of two different classifiers, A and B, which have been trained to perform a given binary classification. By means

5In the literature, the CoD is most frequently termed R2. The author refuses to denote a quantity that can become negative by a square.
of cross-validation we can obtain estimates for the generalization ability in terms of the overall error as, say, $\epsilon_g^A = 0.05$ and $\epsilon_g^B = 0.30$. Apparently, we could conclude that A is the better classifier and should be used in the working phase.

A closer look into the available data D, however, might reveal that it consists of 95% class 1 samples, while only 5% of the data represent class 2. We might furthermore find that classifier A trivially assigns all feature vectors to class 1, resulting in 95% accuracy in D. On the other hand, model B might have learned from the data and provides 70% correct responses in both classes of the data set. Clearly, this insight might make us reconsider our previous evaluation of classifier A as the better one. If we are just after good overall accuracy and have reason to believe that the true prevalence of class 1 data is also about 95% in the real world, we can, of course, settle for the trivial model. If our main goal is to detect and identify the relatively rare occurrences of class 2, classifier B is obviously to be preferred.

This somewhat extreme example illustrates two major questions that arise in the practical approach to classification problems:

◦ How can we cope with strongly biased data sets when evaluating the performance of a classifier?

◦ Can we evaluate classifiers beyond their overall accuracies in order to obtain better insight into the performance?

Here we do not address the question of how to take class bias into account in the training process. Some strategies for training from imbalanced data sets will be discussed in Section 8.7.

7.4.3 Receiver Operating Characteristics

For two-class problems, both of the above mentioned questions can be addressed in the framework of the so-called Receiver Operating Characteristics (ROC) [HTF01, Bis95a, Bis06, DHS00, Faw06].
The concept and terminology originally go back to signal processing tasks, but have become popular in the machine learning community. Most classifiers we have discussed obtain a binary assignment by applying a threshold operation to a so-called discriminative function g. In terms of the simple perceptron, for instance, we assign an input ξ ∈ R^N to class S = ±1 according to

S = \mathrm{sign}\left[g(\xi)\right] \quad \text{with} \quad g(\xi) = \sum_{j=1}^{N} w_j \xi_j,   (7.22)

as discussed in Chapter 3 in great detail. Having trained the perceptron so as to implement the homogeneous lin. sep. function (7.22), we can introduce a
threshold Θ after training and consider the modified classification

S_\Theta = \mathrm{sign}\left[g(\xi) - \Theta\right].   (7.23)

While this is formally identical with the consideration of an inhomogeneously lin. sep. function, see Sec. 3.3, the perspective here is different: We assume the threshold is introduced and varied manually after training. Furthermore, the concept can be applied to any discriminative function for binary classification.

Quite generally, for a large family of classifiers it is possible to realize and control a class bias by tuning Θ in Eq. (7.23). For very large positive Θ → +∞, all inputs will be assigned to class SΘ = −1, while Θ → −∞ results in SΘ = +1 exclusively. In a similar way, probabilistic models can be used for crisp classification by thresholding the class membership probability, which serves as the discriminative function g in Eq. (7.23). Employing a probability threshold 0 < Θ < 1 different from 1/2 imposes a bias towards one of the two classes.

Very often it is important to distinguish the two class-specific errors that can occur: If a feature vector which is truly from class −1 is misclassified as S = +1, we count this as a false positive or false alarm type of error. The terminology reflects the idea that class +1 is to be detected, for instance in a medical test which discriminates diseased (positive test result) from healthy control patients (negative outcome). Analogously, the term false negative error is used when the classifier fails to detect a truly positive case.6 Similarly, the complementary true positive or true negative rates correspond to the class-wise accuracies in the two-class problem.

The introduction of a controlled bias can be achieved in other classification frameworks as well and is by no means limited to linear classifiers. For instance, we can modify the Nearest Prototype Classification (NPC) in LVQ, Eq. (6.2).
Identifying the closest one among all prototypes representing class −1, denoted $w^*_{(-1)}$, and the closest class-(+1) prototype $w^*_{(+1)}$, we can assign an arbitrary feature vector ξ to class +1 if

d\big(w^*_{(+1)}, \xi\big) \; < \; d\big(w^*_{(-1)}, \xi\big) - \Theta   (7.24)

and to class −1 otherwise, thus introducing a margin Θ in the comparison of distances. Similarly, we could consider the output unit activation in a multilayered feed-forward neural network as the discriminative function and perform a biased thresholding along the same lines in order to obtain a crisp class assignment.

For a given value of the threshold Θ we can obtain, e.g. from a validation or test set, the observed absolute numbers of false positive classifications FP, false negatives FN, true positives TP and true negatives TN. The corresponding rates are defined as

\mathrm{fpr} = \frac{FP}{FP + TN}, \quad \mathrm{tpr} = \frac{TP}{FN + TP}, \quad \mathrm{fnr} = \frac{FN}{FN + TP}, \quad \mathrm{tnr} = \frac{TN}{FP + TN}.   (7.25)

6In the literature, other terms like type I/II errors are used frequently, but these are avoided here for the sake of clarity.
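The four rates of Eq. (7.25) translate directly into code; a minimal sketch with our own function name:

```python
def rates(TP, TN, FP, FN):
    """fpr, tpr, fnr, tnr from absolute counts, following Eq. (7.25)."""
    fpr = FP / (FP + TN)
    tpr = TP / (FN + TP)
    fnr = FN / (FN + TP)
    tnr = TN / (FP + TN)
    return fpr, tpr, fnr, tnr
```

The complementarity relations tpr + fnr = 1 and tnr + fpr = 1, discussed below, hold by construction.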
Figure 7.7: Left panel: Schematic illustration of Receiver Operating Characteristics (tpr vs. fpr). The extreme working points with Θ → ±∞ are marked by empty circles. A filled circle corresponds to an unbiased classifier with Θ = 0, while the dashed line represents random, biased guesses. Right panel: Illustration of a two-class data set with discriminative function g(ξ). Feature vectors from the negative (positive) class are displayed as green (light) and red (dark) filled circles, respectively. A randomly selected negative example ξ− is marked by the large filled circle and corresponds to g(ξ−) = Θ−. The variation of the threshold by δΘ is referred to in the arguments employed in Sec. 7.4.4 to obtain the statistical interpretation of the AUC.

Different names are used for the same quantities in the literature, depending on the actual context and discipline. In medicine, for instance, the term sensitivity (SENS) is frequently used for the tpr, while specificity (SPEC) refers to the tnr. An overview of the many quantities that can be derived from the four basic counts TP, TN, FP, FN and the rates (7.25) can be found in [Wik22], which also provides further relevant references.

The quantities in Eq. (7.25) are not independent: Obviously, they satisfy tpr + fnr = 1 and tnr + fpr = 1. Consequently, two of the four rates can be selected to fully characterize the classification SΘ(ξ).

In the framework of Receiver Operating Characteristics (ROC) one determines tpr(Θ) and fpr(Θ) for a meaningful range of thresholds Θ and displays the true positive rate as a function of the false positive rate by eliminating the threshold parameter.7

Figure 7.7 (left panel) displays an example ROC curve for illustration. The lower left corner, as marked by an empty circle, would correspond to the extreme

7For efficient implementation ideas see [DHS00, Faw06].
setting Θ → +∞ with all inputs assigned to the negative class. Obviously, the false positive rate is zero for this setting; the classifier does not give any false alarms. On the other hand, no positive cases are detected and tpr = 0 as well. The upper right corner at tpr = fpr = 1, also marked by an open circle, corresponds to Θ → −∞ in (7.23) or (7.24): The classifier simply assigns every feature vector to the positive class, thus maximizing the true positive rate at the expense of having fpr = 1. An ideal, error-free classifier would obtain fpr = 0, tpr = 1 in the upper left corner of the ROC graph.

The performance of an unmodified classifier with Θ = 0 is marked by a filled circle in the illustration of Fig. 7.7 (left panel). It could correspond, for instance, to the NPC in LVQ or the homogeneous, unbiased perceptron, Eq. (7.22).

By selecting a particular threshold −∞ < Θ < +∞, the user can realize any combination of {fpr, tpr} that is available along the ROC curve. This way, the domain expert can adjust the actual classifier according to the specific needs of the problem at hand. In medical diagnosis systems, for instance, high sensitivity (tpr) might be more important than specificity (1 − fpr) or vice versa.

To a certain extent, we can also compensate for the effects of unbalanced training data: In the illustrative example shown in Fig. 7.7 (left panel), the classifier with Θ = 0 realizes a very low fpr, which might be a consequence of an over-representation of negative cases in the data set D. An objective function related to the number of misclassifications will favor classifiers with small fpr over those with higher tpr. In retrospect, this can be compensated for by biasing the classifier towards the detection of positive cases, moving the working point closer to the upper left corner of the ROC.
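An ROC curve can be traced by sweeping Θ over the observed values of the discriminative function. The simple O(P²) sketch below is our own illustration; see [DHS00, Faw06] for efficient implementations.

```python
def roc_points(g_values, labels):
    """(fpr, tpr) working points obtained by sweeping the threshold
    Theta over the observed values of the discriminative function g.
    Labels are +/-1; an input is assigned to class +1 if g > Theta."""
    n_pos = sum(1 for y in labels if y == +1)
    n_neg = len(labels) - n_pos
    thresholds = sorted(set(g_values))
    # prepend a threshold below all g-values so the (1, 1) endpoint appears
    points = []
    for theta in [thresholds[0] - 1.0] + thresholds:
        tp = sum(1 for g, y in zip(g_values, labels) if y == +1 and g > theta)
        fp = sum(1 for g, y in zip(g_values, labels) if y == -1 and g > theta)
        points.append((fp / n_neg, tp / n_pos))
    return points
```

For perfectly separable data the resulting point set contains the ideal corner (fpr, tpr) = (0, 1) together with the two extreme working points.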
The hypothetical best possible ROC is obviously given by the step function including the ideal working point fpr = 0, tpr = 1, i.e. the complete square in Fig. 7.7. On the other hand, a completely random guess with biased probability tpr = fpr for assignments to class +1 would correspond to the diagonal, i.e. the dashed line in the left panel of the illustration.

7.4.4 The area under the ROC curve

When evaluating different classifiers (or frameworks, rather) one often resorts to the comparison of the area under the ROC curve, the so-called AUROC or, less precisely, AUC [Faw06]. Intuitively, the AUC with 0 ≤ AUC ≤ 1 provides information about the degree to which the ROC deviates from the diagonal with AUC = 1/2. Clearly, an AUC > 1/2 indicates better-than-random classification, and the AUC is often used as a single numerical quality measure for the evaluation of classifiers. In principle, the precise shape of the ROC should be taken into account as well, as individual ROC curves can differ significantly from the idealized shape displayed in Fig. 7.7.

The AUC with respect to novel data can be estimated, for instance, in the course of cross-validation along the lines of Sec. 7.3. It provides better insight into the performance of the trained system than a single specific working point. Therefore, it can serve as the basis for model selection or the setting of parameters.
Moreover, the AUC can be associated with a well-defined statistical interpretation. Fig. 7.7 (right panel) illustrates a two-class data set which can be classified according to a discriminative function which, in the illustration, is assumed to increase monotonically along the g(ξ)-axis. Note that here it is convenient, but not necessary, to argue in terms of a linear classifier like the perceptron, in which the weight vector w defines the discriminative direction.

In the illustration, a particular, e.g. randomly selected, negative example ξ− is marked by a filled circle with the value g(ξ−) of the discriminative function. In other words, in a classifier with Θ = g(ξ−) the considered example would be located precisely at the decision boundary.

Now assume that we select a random example ξ+ from the positive class, i.e. one of the feature vectors marked as red circles in the illustration. The probability for such an example to satisfy g(ξ+) > Θ− = g(ξ−) is given precisely by tpr(Θ−), which is the fraction of positive examples located on the correct side of the decision boundary defined by g(ξ) = Θ−. On the other hand, the local density of negative examples is given by the derivative d fpr/dΘ at Θ−: Shifting the threshold by δΘ, as marked by the gray shaded area, will change the output for δfpr = (d fpr/dΘ) δΘ samples from the negative class.

In summary, this implies that for a pair of feature vectors comprising one randomly selected ξ− from the negative class and one randomly selected ξ+ from the positive class, the probability that g(ξ+) > g(ξ−) is given by the integral

\int_{-\infty}^{+\infty} \mathrm{tpr}(\vartheta)\, \frac{d\,\mathrm{fpr}}{d\vartheta}\, d\vartheta \;=\; \int_{0}^{1} \mathrm{tpr}\; d\,\mathrm{fpr} \;=\; \mathrm{AUC}.

Hence, the AUC quantifies the probability with which a randomly selected pair {ξ−, ξ+} is ordered according to class membership in terms of the discriminative function g(· · ·).
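This statistical interpretation can be checked directly by counting ordered pairs. A sketch with our own names; ties are counted as 1/2, a common convention not discussed in the text.

```python
def auc_pairwise(g_values, labels):
    """Fraction of (negative, positive) pairs with g(xi+) > g(xi-),
    i.e. the pairwise-ordering estimate of the AUC."""
    pos = [g for g, y in zip(g_values, labels) if y == +1]
    neg = [g for g, y in zip(g_values, labels) if y == -1]
    score = 0.0
    for gp in pos:
        for gn in neg:
            if gp > gn:
                score += 1.0
            elif gp == gn:
                score += 0.5  # ties count as half, by convention
    return score / (len(pos) * len(neg))
```

For perfectly separable data every pair is ordered correctly and the estimate is 1.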
This corresponds to the probability that a threshold value Θ exists at which the classifier would separate such a pair of inputs correctly. This intuitive interpretation of the AUC in the ROC analysis also makes it possible to perform the training of a classifier in such a way that the expected AUC is maximized; for details see [HR04, VKHB16].

7.4.5 Alternative measures for two-class problems

A variety of evaluation criteria for binary classification schemes has been suggested in the literature. The Precision-Recall (PR) formalism [DG06] can be considered as an alternative to the ROC. The PR evaluation is also based on the four quantities TP, TN, FP, and FN. Precision (Prec) and Recall (Rec) are defined as

\mathrm{Prec} = \frac{TP}{TP + FP}, \qquad \mathrm{Rec} = \frac{TP}{TP + FN} = \mathrm{tpr}.   (7.26)

Similar to the ROC, a PR curve displaying Prec vs. Rec can be generated by varying a bias parameter. Also, the area under the PR curve is often provided as a single quality parameter. However, unlike the AUROC, it lacks an
appealing statistical interpretation. For a discussion of supposed disadvantages or advantages of the PR formalism over the ROC see [DG06] and references therein.

Like other quantities, Prec and Rec can also be computed at a single, specific working point of the classifier. Various application-domain-specific measures have been defined, see e.g. [Wik22, KEP17, KHV14] for overviews and references. In fact, the large number of related quality measures can be a source of considerable confusion. Here we present only two popular examples:

While the overall accuracy for a test set of in total ntot samples is computed as (TP + TN)/ntot, the so-called balanced accuracy corresponds to an equal-weight average over classes:

\mathrm{BAC} = \frac{\mathrm{tpr} + \mathrm{tnr}}{2} = \frac{1}{2}\left(\frac{TP}{TP + FN} + \frac{TN}{TN + FP}\right).   (7.27)

It is supposedly more suitable in the presence of class imbalance in training and validation sets. Here, TP, FP and FN are the absolute numbers of true positives, false positives and false negatives observed in the data set. Similar claims have been made for the popular F1-measure, which is given by the harmonic mean of Prec and Rec:

F_1 = \frac{TP}{TP + (FP + FN)/2} = \frac{2\,\mathrm{Prec}\cdot\mathrm{Rec}}{\mathrm{Prec} + \mathrm{Rec}}.   (7.28)

7.4.6 Multi-class problems

The evaluation of multi-class systems is more subtle. Several single-valued measures can be computed, see below, but a multi-dimensional generalization of the ROC formalism or the PR scheme is far from obvious.

Confusion matrix

Most commonly, the so-called confusion matrix is provided in order to summarize the class-specific performance of a multi-class system with respect to a given data set. Illustration 7.8 displays the confusion matrix of a hypothetical 4-class classification problem. In the left panel, each element corresponds to the number of feature vectors which belong to class i and are assigned to class j by the classifier. In the right panel of Fig. 7.8, percentages (rounded) are displayed.
Here, diagonal elements correspond to the class-wise accuracies. Off-diagonal elements provide insight into which classes are relatively easy or difficult to separate. Note that although the confusion matrix provides detailed information about class-wise performances, it still corresponds to a single working point of the classifier. A multi-dimensional ROC- or PR-like analysis for multi-class problems is non-trivial [Faw06,HT01]. However, many of the above mentioned single quantities for a given working point can be extended to multi-class problems in a straightforward fashion, see [KEP17]. An obvious example is the multi-class BAC which weights every class equally by 1/C in a C-class problem.
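For a single working point, the balanced accuracy (7.27) and the F1-measure (7.28), as well as the multi-class BAC as an equal-weight average of class-wise accuracies, can be computed as follows. A minimal sketch; the function names and the toy counts in the usage note are ours.

```python
def balanced_accuracy(TP, TN, FP, FN):
    """Two-class balanced accuracy, Eq. (7.27)."""
    return 0.5 * (TP / (TP + FN) + TN / (TN + FP))

def f1_score(TP, FP, FN):
    """F1-measure, Eq. (7.28): harmonic mean of precision and recall."""
    return TP / (TP + (FP + FN) / 2.0)

def multiclass_bac(c):
    """Multi-class BAC: each class weighted equally by 1/C;
    c[i][j] counts samples of true class i predicted as class j."""
    C = len(c)
    return sum(c[i][i] / sum(c[i]) for i in range(C)) / C
```

For the trivial majority classifier of the 95%/5% example discussed earlier, the overall accuracy is 0.95 but the BAC is only 0.5, since one class-wise accuracy is 1 and the other is 0.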
Figure 7.8: Confusion matrix of a hypothetical imbalanced 4-class problem. Left panel: matrix elements correspond to the absolute number of samples from class i which are assigned to class j. Right panel: the corresponding percentages. Diagonal entries represent the class-wise accuracies.

7.4.7 Averages of class-wise quality measures

A set of class-wise quantities can be derived from the confusion matrix or by considering sub-classification schemes of the type class i vs. all others. This way, all measures suitable for two-class problems, like AUROC or F1, can be considered in C-class problems as well. Averages of class-wise quantities can be computed in different ways, e.g. by weighting each class equally or by taking the number of examples per class into account. In the following, this subtlety is illustrated in terms of Precision (Prec) and Recall (Rec) as defined in Eq. (7.26) for two-class problems, and the F1-measure (7.28).

From the confusion matrix c(i, j) in terms of sample counts of a C-class problem we can determine the class-wise quantities

TP_i = c(i, i), \quad FN_i = \sum_{j (\neq i)} c(i, j), \quad FP_i = \sum_{j (\neq i)} c(j, i) \quad \text{for } i = 1, 2, \ldots, C.   (7.29)

For the matrix displayed in Fig. 7.8 (left panel) we would obtain TP1 = 72, FN1 = 6 + 11 + 4 = 21, and FP1 = 3 + 13 + 7 = 23 with respect to the first class.

Macro-averages

Based on Eq. (7.29) we can also compute the class-wise Precision and Recall values in analogy with Eq. (7.26):

\mathrm{Prec}_i = \frac{TP_i}{TP_i + FP_i}, \qquad \mathrm{Rec}_i = \frac{TP_i}{TP_i + FN_i}.   (7.30)
The so-called macro-averages

\mathrm{Prec}^{mac} = \frac{1}{C} \sum_{i=1}^{C} \mathrm{Prec}_i \quad \text{and} \quad \mathrm{Rec}^{mac} = \frac{1}{C} \sum_{i=1}^{C} \mathrm{Rec}_i   (7.31)

are obtained with equal weight assigned to the C classes. Alternatively, a weighted macro-average of the form

\mathrm{Prec}^{w\text{-}mac} = \frac{1}{n_{tot}} \sum_{i=1}^{C} n_i\, \mathrm{Prec}_i \quad \text{and} \quad \mathrm{Rec}^{w\text{-}mac} = \frac{1}{n_{tot}} \sum_{i=1}^{C} n_i\, \mathrm{Rec}_i   (7.32)

is frequently considered. Here, n_i denotes the number of samples in class i and the total number is n_{tot} = \sum_{i=1}^{C} n_i.

Now we have at least two options for defining a macro-F1 measure:

a) as an arithmetic mean of the class-wise F_1^i given by

F_1^i = \frac{2\, \mathrm{Prec}_i\, \mathrm{Rec}_i}{\mathrm{Prec}_i + \mathrm{Rec}_i}, \qquad F_1^{mac\text{-}a} = \frac{1}{C} \sum_{i=1}^{C} F_1^i   (7.33)

b) or as a harmonic mean of macro-Recall and macro-Precision, i.e.

F_1^{mac\text{-}b} = \frac{2\, \mathrm{Prec}^{mac}\, \mathrm{Rec}^{mac}}{\mathrm{Prec}^{mac} + \mathrm{Rec}^{mac}}.   (7.34)

Unfortunately, both measures appear in the literature under the same name, often without clarifying which version was used. A recently published note [OB21] compares F_1^{mac-a} and F_1^{mac-b}. The authors show that the two quantities can differ significantly, with F_1^{mac-b} ≥ F_1^{mac-a}. They demonstrate that F_1^{mac-b} can yield overly large scores in class-imbalanced problems, while F_1^{mac-a} appears to be more robust in that respect. The authors conclude that "at the very least, researchers should indicate which formula they are using."

Micro-averages

In contrast to the above discussed macro-averages, micro-averaging considers all data points without taking class-specifics into account. It is obtained by considering the averages

\overline{TP} = \frac{1}{C} \sum_{i=1}^{C} TP_i, \quad \overline{TN} = \frac{1}{C} \sum_{i=1}^{C} TN_i, \quad \overline{FP} = \frac{1}{C} \sum_{i=1}^{C} FP_i, \quad \overline{FN} = \frac{1}{C} \sum_{i=1}^{C} FN_i   (7.35)

and computing

\mathrm{Prec}^{mic} = \frac{\overline{TP}}{\overline{TP} + \overline{FP}} \quad \text{and} \quad \mathrm{Rec}^{mic} = \frac{\overline{TP}}{\overline{TP} + \overline{FN}}.   (7.36)
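The difference between the two macro-F1 definitions, and the micro-average, can be made concrete in code. The confusion matrix used in the test below is our own hypothetical example, constructed only to reproduce the counts TP1 = 72, FN1 = 21, FP1 = 23 quoted for Fig. 7.8; the remaining entries are invented.

```python
def classwise_counts(c):
    """TP_i, FN_i, FP_i from a confusion matrix c[i][j]
    (true class i, predicted class j), following Eq. (7.29)."""
    C = len(c)
    TP = [c[i][i] for i in range(C)]
    FN = [sum(c[i][j] for j in range(C) if j != i) for i in range(C)]
    FP = [sum(c[j][i] for j in range(C) if j != i) for i in range(C)]
    return TP, FN, FP

def macro_f1_variants(c):
    """Return (F1_mac_a, F1_mac_b) following Eqs. (7.33) and (7.34)."""
    TP, FN, FP = classwise_counts(c)
    C = len(c)
    prec = [TP[i] / (TP[i] + FP[i]) for i in range(C)]
    rec = [TP[i] / (TP[i] + FN[i]) for i in range(C)]
    f1_mac_a = sum(2 * p * r / (p + r) for p, r in zip(prec, rec)) / C
    p_mac, r_mac = sum(prec) / C, sum(rec) / C
    f1_mac_b = 2 * p_mac * r_mac / (p_mac + r_mac)
    return f1_mac_a, f1_mac_b

def micro_f1(c):
    """Micro-average, Eq. (7.37): coincides with the overall accuracy."""
    n_tot = sum(sum(row) for row in c)
    return sum(c[i][i] for i in range(len(c))) / n_tot
```

The relation F1_mac_b ≥ F1_mac_a shown in [OB21] can be observed numerically, and micro_f1 simply reproduces the overall accuracy.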
We note that

\overline{TP} + \overline{FP} = \frac{1}{C} \sum_{i=1}^{C} (TP_i + FP_i) = \frac{1}{C} \sum_{i=1}^{C} (TP_i + FN_i) = \overline{TP} + \overline{FN} = \frac{n_{tot}}{C},

as both sums add up all elements of the confusion matrix. As a consequence,

\mathrm{Prec}^{mic} = \mathrm{Rec}^{mic} = \frac{1}{n_{tot}} \sum_{i=1}^{C} TP_i = F_1^{mic}.   (7.37)

Since Prec^{mic} = Rec^{mic}, their harmonic mean F_1^{mic} also equals the overall accuracy.

In general, macro-averages put the same emphasis on each class, which makes sense if the aim is an approximately class-independent performance. The micro-average targets the overall quality, while the weighted macro-average constitutes a compromise between the micro- and macro-approach.

Which averaging procedure should be used to evaluate a multi-class system depends on the actual target and the preferences of the user. The class composition in the training data set compared to the one expected in the working phase plays an important role. Assume, for instance, that a particular class is represented by very few samples in the training set, but can be expected to be more frequent in the real-world data. Then, micro-averaging bears the risk of disregarding the class in the evaluation, with potentially poor performance in the working phase as a consequence.

7.5 Interpretable systems

The above discussed quality measures and validation procedures focus on the performance of a trained system in terms of classification or regression accuracies. This is also true for the more sophisticated measures considered beyond overall or class-wise accuracies. Accuracy appears to be a natural evaluation criterion if the goal is to, say, distinguish cats from dogs in images, or perhaps to discriminate diseased patients from healthy controls in a diagnosis problem.

However, machine learning systems should be evaluated and compared to each other also according to complementary criteria. Potentially, some of these cannot even be expressed in terms of simple quantitative measures.
As an illustration, we discuss an entertaining and frequently quoted example [Nik17]. It illustrates and summarizes an important issue in machine learning along the lines of the opening quote of this chapter: "Accuracy is not enough." The story is that a classifier was trained to distinguish dogs from wolves based on a labeled set of still images. It seemed to work perfectly in training and validation, but failed completely on novel photos in the working phase. Eventually, a check of the database showed that all wolves had been photographed in the snow, while dogs were shown on green grass. The classifier had "learned" to distinguish the image backgrounds, not the actual animals.
As usual with stories like that, it is told in many versions; see [gwe09] for an interesting account of similar examples of supposedly misled classifiers. Apparently, the origin of the wolves vs. dogs problem is [RSG16], a publication in which the authors purposefully trained a classifier on the misleading data. The aim was to illustrate the problem and to test a method for explaining the inner workings of the classifier.

However, the moral of the story is definitely relevant: unnoticed biases in data sets can result in seemingly good or even excellent performances. The effect is frequently much more subtle and more difficult to detect than in the wolves vs. dogs problem. As a particularly important example, medical data sets are frequently prone to selection biases that facilitate a seemingly successful discrimination of diseased from healthy subjects. For example, the age or gender distribution in the classes could be different, while being essentially unrelated to the actual target diagnosis. Even the more frequent occurrence of missing values in one of the groups could be exploited by the machine learning system, resulting in seemingly good yet useless performance. It is the nature of machine learning systems that they are excellent artefact detectors.

Hence, the evaluation and comparison of supervised learning models in terms of accuracy only (or similar performance-oriented criteria) can be misleading and even dangerous. Responsible use of machine learning techniques requires at least a certain degree of insight into what forms the basis of the system's response. Which features, for instance, appear most relevant in a classification scheme? In the example of wolves vs. dogs: is it the properties of the animals or the color of the background that the assignment relies upon?

In this sense, machine learning systems should be transparent and interpretable.
At the very least, an effort should be made to understand how a given classifier or regression system works. Intuitive prototype-based systems and relevance learning constitute just two examples of approaches that can be useful in this context. A qualitative or quantitative study of the importance of given features can be performed in a variety of learning frameworks.

The motivation for favoring white-box approaches is not limited to the detection of potential biases. Interpretable models also facilitate the discussion with the domain expert and increase the user acceptance of machine learning based support systems. Consequently, the topic of improved interpretability has attracted considerable interest within the machine learning community and continues to do so. Partly, these efforts aim at closing the gap (if any) between the goals of statistical modelling and machine learning discussed in Sec. 2.2.

For recent reviews and research articles we refer the reader to several special sessions which have been organized at the European Symposium on Artificial Neural Networks (ESANN) [VMGL12, BL13, BBVZ17]. The overview articles and session contributions should provide a useful starting point for further explorations of the topic. A comprehensive discussion of explainability and interpretability with many references can also be found in [Gho21].
Chapter 8

Preprocessing and unsupervised learning

I will let the data speak for itself when it cleans itself. — Unknown

In most of these lecture notes we have implicitly assumed that feature vectors and labels are provided ready-to-use for training. In practical situations, this is rarely the case. In general, real-world data sets have to be thoroughly checked for inconsistencies, missing values or other issues. A list of useful rules for initial data analysis is provided in [BCS+22].

Quite often, outliers are removed from the data, e.g. single data points that appear implausible because they are very different from the bulk of the data or display unrealistic values of individual features. Such a cleaning of the data has to be performed with utmost care and in a controlled, reproducible way. The criteria and goals of data set curation and cleaning are usually very specific to the application domain. Therefore, it should rely on insights from the domain experts, and it should never be guided by the idea of making the data consistent with a given research goal or hypothesis. For instance, removing suspected outliers may seriously affect the overall quality of the data set and introduce unwanted biases.1

Here, we focus on broadly applicable and popular preprocessing steps. The design of a classifier or regression system usually begins with the choice of how to represent the data to the system. This can amount to the extraction of engineered features, e.g. derived from images. But also seemingly simple data sets of directly observed numerical feature vectors often require careful preprocessing.

1(or in the worst case even wanted biases that support the desired result)